
Learn how to use Java and Kubernetes in the cloud

In the ‘learn how to create services using the Kubernetes API’ tutorial, we explained the key concepts in developing microservices. This tutorial focuses on how microservices are deployed. To simplify your understanding of the deployment process, the tutorial begins by demonstrating deployment in a local minikube cluster and then proceeds to Amazon Web Services (AWS). To complete this tutorial you need minikube, Docker and kubectl. Previous tutorials have demonstrated how to set up these tools, so we won’t repeat that here. Later on you will also need an AWS account to complete deploying services in the cloud.
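Before starting, it is worth confirming that the required tools are installed and on the PATH (a quick sketch; the version numbers printed will vary by installation):

```shell
# Each command prints version information if the tool is installed
minikube version
kubectl version --client
docker --version
```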

To start the minikube cluster, issue this command:

minikube start

The command will use VirtualBox as the default VM driver, but you have the option of using other drivers. Alternative drivers are vmwarefusion, hyperv, xhyve and kvm. For example, to use kvm as your driver of choice, the command below is used:

minikube start --vm-driver=kvm

After the cluster has been started with the selected VM driver, you can access the dashboard using this command:

minikube dashboard

After issuing the command, the dashboard opens in the default web browser. There is not much of importance on the dashboard at this stage. Although the dashboard provides a way of managing a cluster, kubectl is a better tool for cluster management. In a default Kubernetes cluster setup, the only way to access a pod is through its internal IP address. To enable external access to a pod, we need to use a Kubernetes service. An example of a service configuration is shown below:
apiVersion: v1
kind: Service
metadata:
  name: connect-sample
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: connect-sample
    tier: backend

In the service definition, there is no reference to a Docker image because a service only provides connectivity to one or more pods. A service is assigned an IP address and a port which never change. To discover the pods it routes traffic to, a service uses labels: its selector matches pods carrying the same labels.
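Assuming the service definition above has been saved to a file (the filename service.yaml is an assumption; the service name connect-sample comes from the example), it can be created and inspected with commands like these:

```shell
# Create the service from its configuration file
kubectl create -f service.yaml

# List services; a NodePort service also shows the node port assigned to it
kubectl get services

# Describe the service to see its selector, cluster IP and endpoints
kubectl describe service connect-sample

# On minikube, print a URL that reaches the service from the host machine
minikube service connect-sample --url
```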

In a Kubernetes cluster, kube-proxy plays the very important role of exposing a virtual IP for each service. The available proxy modes are iptables and userspace. In the userspace mode, kube-proxy listens for connections on a local port, to which iptables rules forward traffic. kube-proxy is then responsible for terminating each connection, opening a new connection to a back end, and copying data back and forth between the back end and the local process. The userspace mode has the advantage of allowing a connection attempt to be retried against a different back end when the first attempt is refused.

In the iptables mode, packet forwarding to the back end happens directly in the kernel. Because there is no need to move packets back and forth between the kernel and kube-proxy, this mode is more efficient, giving better throughput and lower latency. The three service types that can be specified are NodePort, LoadBalancer and ClusterIP.
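As a sketch (the exact pod name and log wording depend on the cluster setup), the mode kube-proxy is running in can usually be confirmed from its logs in the kube-system namespace:

```shell
# Find the kube-proxy pod
kubectl get pods -n kube-system | grep kube-proxy

# Its startup logs report which proxier is in use, e.g. "Using iptables Proxier"
# (substitute the pod name found above)
kubectl logs -n kube-system <kube-proxy-pod-name>
```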

When we use a NodePort service, the service is exposed externally to the cluster. The master assigns a port from a preset range and every cluster node accepts traffic on that port. A LoadBalancer service creates a load balancer in the cloud, so it is not available when using minikube. ClusterIP is the default type, and it only exposes a service inside the cluster.
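For a NodePort service like the connect-sample example above, the assigned node port can be read back and the service reached through the node's IP (the jsonpath query is one way to do this; the default preset range is 30000-32767):

```shell
# Print the node port the master assigned to the service
kubectl get service connect-sample -o jsonpath='{.spec.ports[0].nodePort}'

# Reach the service through the minikube node's IP and that port
curl "$(minikube ip):$(kubectl get service connect-sample -o jsonpath='{.spec.ports[0].nodePort}')"
```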

Before any deployment can happen, a Docker image needs to be on a public or private registry. The default behavior of Kubernetes is to pull images from the specified registry, but this can be altered using an imagePullPolicy. Three options can be specified: with Always, the image is pulled from the registry every time a pod starts; with IfNotPresent, the image is pulled only when it is not available locally; with Never, only local images are used.

The syntax for image names is the same as Docker's. Tags are important in the image name to ensure the intended image is deployed, because when no tag is given, Kubernetes assumes the latest tag when picking an image.
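One common pattern with minikube (a sketch, using the image name from the deployment example later in this tutorial) is to build images directly against the VM's Docker daemon, so that IfNotPresent or Never policies find them locally without any registry:

```shell
# Point the local docker client at minikube's Docker daemon
eval "$(minikube docker-env)"

# Build the image inside the minikube VM; with imagePullPolicy: IfNotPresent
# or Never, Kubernetes will use this local image instead of pulling one
docker build -t jotka/rest-example:1.0 .
```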

An example of a deployment file is shown below

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rest-example
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: rest-example
        tier: backend
    spec:
      containers:
        - name: rest-example
          image: jotka/rest-example
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 8080

A deployment is created by passing the path of the deployment file to kubectl:

kubectl create -f <path-to-deployment-file>

Using configuration files is not the only way to create services and deployments. An alternative is to use the kubectl command line tool directly.
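As a sketch of the command-line alternative (using the image name and port from the deployment example above), the same objects can be created without any YAML files:

```shell
# Create a deployment with two replicas directly from an image
kubectl run rest-example --image=jotka/rest-example --port=8080 --replicas=2

# Expose the deployment externally through a NodePort service
kubectl expose deployment rest-example --type=NodePort --port=8080
```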

With every deployment, a ReplicaSet is created to ensure that the number of pod clones specified in the replicas field is running. When there is an excess number of pods, some are terminated, and when there are too few, more are created. This means manually deleting pods will not reduce their number, because the ReplicaSet immediately replaces them; to change the count, the deployment itself must be scaled. For example, in the deployment presented earlier the number of replicas was set to 2, and the command shown below will increase the replicas to 3:

kubectl scale deployment rest-example --replicas=3

Kubernetes can also scale a deployment automatically with a horizontal pod autoscaler, for example:

kubectl autoscale deployment rest-example --min=4 --max=12 --cpu-percent=30

In this autoscaler, the target CPU use is 30% and the number of replicas will stay between 4 and 12.

This tutorial explained the use of the minikube dashboard and the kubectl cluster management tool. Creating services and exposing them within and beyond the cluster was discussed. Finally, creating and scaling deployments was demonstrated.

The post Learn how to use Java and Kubernetes in the cloud appeared first on Eduonix Blog.
