DeviceHive on Kubernetes

We provide two variants of deployment: Minikube and Google Container Engine (GKE). They differ in how Zookeeper and Kafka are deployed -- single-node or clustered.

Minikube installation

Requirements:

  • Kubernetes version 1.5 (due to issues in later versions, described below).
  • Minikube version 0.18.0
  • 4GB of RAM in the Minikube VM for the full DeviceHive stack.

Kubernetes 1.6.4 in Minikube has an issue where a pod cannot access itself via its Service IP, so we stick to Kubernetes 1.5.3 in Minikube.

Also, the latest version of Minikube (0.19.1 at the moment) uses newer syntax for deploying the DNS addon, which is not supported by Kubernetes 1.5. For details see the related section of the Kubernetes changelog and PR #39981. This new functionality has been used for the kube-dns deployment since Minikube 0.19. Minikube 0.18.0 works fine with Kubernetes 1.5.3.

  1. Install Minikube 0.18.0 and Kubectl.

  2. Configure Minikube:

minikube config set kubernetes-version v1.5.3
minikube config set memory 4096
  3. Start Minikube:
minikube start

DeviceHive Installation in Minikube

  1. Install PostgreSQL:
kubectl apply -f postgresql-ephemeral.yaml
  2. Install Zookeeper and Kafka:
kubectl apply -f kafka-zk-ephemeral.yaml
  3. Install DeviceHive services:
kubectl apply -f devicehive.yaml
  4. Create a service for the Admin Console. In Minikube a service can be exposed to the outside world only via NodePort:
kubectl apply -f admin-console-svc-nodeport.yaml
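The actual manifest ships with the repository; for orientation, a minimal sketch of what such a NodePort Service might look like (the selector labels, ports, and nodePort value here are illustrative assumptions, not the contents of the repository's file):

```yaml
# Hypothetical sketch of a NodePort Service for the Admin Console.
# The real definition is in admin-console-svc-nodeport.yaml; the labels
# and port numbers below are assumptions for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: dh-admin
spec:
  type: NodePort
  selector:
    app: dh-admin        # assumed pod label
  ports:
    - port: 80           # service port
      targetPort: 80     # assumed container port
      nodePort: 30080    # must fall in the default 30000-32767 range
```

With type NodePort, Kubernetes opens the same port on every node of the cluster, which is why `minikube service --url` (used below) can build a reachable URL from the VM's IP.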

Accessing DeviceHive

  1. Get the Minikube address for the dh-admin service:
$ minikube service dh-admin --url
  2. Open the Admin Console in a browser.

Google Container Engine (GKE)

Requirements:

  • Kubernetes version >=1.6
  • Helm package manager
  • Zookeeper (from the incubator/kafka Helm chart) requests at least 4Gi of memory per cluster node, so create a cluster with at least n1-standard-2 machines.

DeviceHive Installation in GKE

  1. Create kubernetes cluster:
gcloud container clusters create "devicehive-cluster-1" \
  --zone "us-central1-a" \
  --machine-type "n1-standard-2" \
  --image-type "COS" \
  --num-nodes "3" \
  --network "default"
  2. Install Kafka to the cluster:
helm init
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm install \
  --name dh-bus \
  --version 0.1.4 \
  --set Cpu=500m \
  --set MaxCpu=4 \
  --set Storage=20Gi \
  --set Memory=1024Mi \
  --set MaxMemory=1536Mi \
  incubator/kafka
  3. Install PostgreSQL to the cluster:
helm install \
  --name dh-db \
  --version 0.8.0 \
  --set imageTag=9.6 \
  --set postgresUser=devicehive \
  --set postgresPassword=devicehivepassword \
  --set postgresDatabase=devicehivedb \
  stable/postgresql
  4. Deploy DeviceHive:
kubectl apply -f devicehive.yaml
  5. Create a service for the Admin Console. In GKE we expose the service using a load balancer:
kubectl apply -f admin-console-svc-loadbalancer.yaml
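As with the NodePort variant, the real manifest is in the repository; a minimal sketch of what such a LoadBalancer Service might contain (selector labels and ports are illustrative assumptions):

```yaml
# Hypothetical sketch of a LoadBalancer Service for the Admin Console.
# The real definition is in admin-console-svc-loadbalancer.yaml; labels
# and ports below are assumptions for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: dh-admin
spec:
  type: LoadBalancer
  selector:
    app: dh-admin        # assumed pod label
  ports:
    - port: 80           # port exposed by the cloud load balancer
      targetPort: 80     # assumed container port
```

On GKE, type LoadBalancer provisions a Google Cloud network load balancer and writes its address into the service's EXTERNAL-IP column, which is what the "Accessing DeviceHive" step below reads.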

Verifying installation

To verify that all services have started, check that the current number of replicas in each deployment equals the desired number:

kubectl get deploy
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
dh-admin              1         1         1            1           37m
dh-backend            1         1         1            1           37m
dh-frontend           1         1         1            1           37m
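This check can also be scripted. The snippet below compares the DESIRED and CURRENT columns of `kubectl get deploy` output; it runs against a captured sample so it is self-contained, and the sample variable is the only assumption (on a live cluster, pipe in `kubectl get deploy --no-headers` instead):

```shell
# Readiness check: each deployment's DESIRED count (field 2) must equal
# its CURRENT count (field 3). Sample output is used for illustration;
# on a live cluster replace it with: kubectl get deploy --no-headers
sample='dh-admin 1 1 1 1 37m
dh-backend 1 1 1 1 37m
dh-frontend 1 1 1 1 37m'

echo "$sample" | awk '
  $2 != $3 { print $1 " is not ready"; bad = 1 }
  END { if (bad) exit 1; print "all deployments ready" }'
```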

Accessing DeviceHive

  1. Get the external IP for the dh-admin service:
kubectl get svc dh-admin
NAME       CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
dh-admin   <cluster-ip>    <external-ip>    80:32211/TCP   32m
  2. Open it in a browser.