DeviceHive on Kubernetes
We provide two variants of deployment: Minikube and Google Container Engine (GKE). They differ in how ZooKeeper and Kafka are deployed: as a single node or as a cluster.
Minikube installation
Requirements
- Kubernetes version 1.5 (due to issues in later versions, described below).
- Minikube version 0.18.0
- 4GB RAM in Minikube VM for full DeviceHive stack.
Kubernetes 1.6.4 in Minikube has an issue with pods trying to access themselves via their Service IP, so we stick to Kubernetes 1.5.3 in Minikube.
Also, the latest version of Minikube (0.19.1 at the moment) uses an advanced syntax for deploying the DNS addon that is not supported by Kubernetes 1.5. For details, see the related section of the Kubernetes changelog and PR #39981. This new functionality has been used for the kube-dns deployment since Minikube 0.19. Minikube 0.18.0 works fine with Kubernetes 1.5.3.
- Install Minikube 0.18.0 and Kubectl.
- Configure Minikube:
minikube config set kubernetes-version v1.5.3
minikube config set memory 4096
- Start Minikube:
minikube start
DeviceHive Installation in Minikube
- Install PostgreSQL:
kubectl apply -f postgresql-ephemeral.yaml
- Install Zookeeper and Kafka:
kubectl apply -f kafka-zk-ephemeral.yaml
- Install DeviceHive services:
kubectl apply -f devicehive.yaml
- Create a service for the Admin Console. In Minikube a service can be exposed to the outside world only via NodePort:
kubectl apply -f admin-console-svc-nodeport.yaml
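The file applied above is expected to define a NodePort Service. A minimal sketch of what such a manifest could look like follows; the selector labels and ports here are assumptions, only the service name `dh-admin` is taken from the commands in this guide:

```yaml
# Hypothetical sketch of admin-console-svc-nodeport.yaml -- selector and
# ports are assumptions, not the actual file contents.
apiVersion: v1
kind: Service
metadata:
  name: dh-admin
spec:
  type: NodePort            # exposes the service on a port of the Minikube node
  selector:
    app: dh-admin           # assumed pod label of the Admin Console deployment
  ports:
    - port: 80              # service port
      targetPort: 80        # assumed container port serving the console
```

With `type: NodePort`, Kubernetes allocates a port on the node (e.g. 31360 in the next section) that forwards to the service.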
Accessing DeviceHive
- Get the Minikube address of the dh-admin service:
$ minikube service dh-admin --url
http://192.168.99.106:31360
- Open the Admin Console in a browser:
http://192.168.99.106:31360/admin
Google Container Engine (GKE)
Requirements
- Kubernetes version >=1.6
- Helm package manager
- ZooKeeper (from the incubator/kafka Helm chart) requests at least 4Gi of memory per cluster node, so you should create the cluster with at least n1-standard-2 machines.
DeviceHive Installation in GKE
- Create a Kubernetes cluster:
gcloud container clusters create "devicehive-cluster-1" \
--zone "us-central1-a" \
--machine-type "n1-standard-2" \
--image-type "COS" \
--num-nodes "3" \
--network "default"
- Install Kafka into the cluster:
helm init
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install \
--name dh-bus \
--version 0.1.4 \
--set Cpu=500m \
--set MaxCpu=4 \
--set Storage=20Gi \
--set Memory=1024Mi \
--set MaxMemory=1536Mi \
incubator/kafka
- Install PostgreSQL into the cluster:
helm install \
--name dh-db \
--version 0.8.0 \
--set imageTag=9.6 \
--set postgresUser=devicehive \
--set postgresPassword=devicehivepassword \
--set postgresDatabase=devicehivedb \
stable/postgresql
- Deploy DeviceHive:
kubectl apply -f devicehive.yaml
- Create a service for the Admin Console. In GKE we expose the service using a load balancer:
kubectl apply -f admin-console-svc-loadbalancer.yaml
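As with the NodePort variant, the manifest applied above is expected to define a Service, this time of type LoadBalancer. A minimal sketch follows; the selector and ports are assumptions, while the service name `dh-admin` matches the "Accessing DeviceHive" section below:

```yaml
# Hypothetical sketch of admin-console-svc-loadbalancer.yaml -- selector and
# ports are assumptions, not the actual file contents.
apiVersion: v1
kind: Service
metadata:
  name: dh-admin
spec:
  type: LoadBalancer        # GKE provisions an external IP automatically
  selector:
    app: dh-admin           # assumed pod label of the Admin Console deployment
  ports:
    - port: 80              # external port
      targetPort: 80        # assumed container port serving the console
```

On GKE, `type: LoadBalancer` creates a Google Cloud load balancer and assigns the external IP shown by `kubectl get svc dh-admin`.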
Verifying the installation
To verify that all services have started, check that the current number of replicas in each deployment equals the desired number:
kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dh-admin 1 1 1 1 37m
dh-backend 1 1 1 1 37m
dh-frontend 1 1 1 1 37m
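This check can also be scripted. The sketch below compares the DESIRED and CURRENT columns of the output above; it runs against a hard-coded sample so it is self-contained, but on a live cluster you would pipe in `kubectl get deploy --no-headers` instead:

```shell
# Sample of `kubectl get deploy --no-headers` output (NAME DESIRED CURRENT ...).
sample='dh-admin 1 1 1 1 37m
dh-backend 1 1 1 1 37m
dh-frontend 1 1 1 1 37m'

# Print the name of every deployment whose CURRENT count ($3)
# differs from its DESIRED count ($2).
not_ready=$(printf '%s\n' "$sample" | awk '$2 != $3 { print $1 }')

if [ -z "$not_ready" ]; then
  echo "all deployments ready"     # prints this for the sample above
else
  echo "not ready: $not_ready"
fi
```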
Accessing DeviceHive
- Get the external IP of the dh-admin service:
kubectl get svc dh-admin
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dh-admin 10.79.242.246 104.155.147.92 80:32211/TCP 32m
- Open it in a browser:
http://104.155.147.92/admin