The idea here is that we can take our container images and deploy them as pods into our Kubernetes cluster, taking advantage of Kubernetes as a container orchestrator.
There are several ways in which we can deploy our applications into our Kubernetes cluster; we will cover two of the most common approaches: YAML files and Helm charts.
We will be using our minikube cluster for these application deployments. We will be walking through some of the previously mentioned components or building blocks of Kubernetes.
Throughout this section and the Containers section, we have discussed images, the benefits of Kubernetes, and how easily we can handle scale on this platform.
In this first step, we are simply going to create a stateless application within our minikube cluster. We will be using the de facto standard stateless application for our first demonstration: `nginx`. We will configure a Deployment, which will provide us with our pods, and then we will also create a service, which will allow us to navigate to the simple web server hosted by the nginx pod. All of this will be contained in a namespace.
In this first demo, we want to define everything we do with YAML. We could have a whole section on YAML, but I am going to skim over it and leave some resources at the end that cover YAML in more detail.
We could create the following as one YAML file, or we could break it down into separate files for each aspect of our application, i.e. separate files for namespace, deployment and service creation. In the file below, we keep everything together and separate the sections with `---`. You can find this file located [here](Kubernetes) (File name: nginx-stateless-demo.yaml)
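A minimal sketch of what such a file might contain. The deployment name and namespace match the commands used later in this section; the service name, labels and image tag are assumptions for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # illustrative name; a new service is created separately later
  namespace: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```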
Before we deploy anything, we should make sure that we have no existing namespace called `nginx`. We can check by running the `kubectl get namespace` command, and as you can see below, we do not have a namespace called `nginx`.
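On a fresh minikube cluster, that check will typically show only the built-in namespaces (your AGE values will differ):

```shell
kubectl get namespace
NAME              STATUS   AGE
default           Active   10m
kube-node-lease   Active   10m
kube-public       Active   10m
kube-system       Active   10m
```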
We need to navigate to our YAML file location, and then we can run `kubectl create -f nginx-stateless-demo.yaml`. You will then see that 3 objects have been created: a namespace, a deployment and a service.
Checking on all of the above takes a few commands that are worth knowing, but you can also use `kubectl get all -n nginx` to see everything we deployed with that one YAML file.
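For reference, those individual commands look like this, one resource type at a time:

```shell
kubectl get namespace
kubectl get deployment -n nginx
kubectl get replicaset -n nginx
kubectl get pods -n nginx
kubectl get service -n nginx
```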
You will notice in the above that we also have a replicaset. In our deployment, we define how many replicas of our image we would like to deploy. This was set to 1 initially, but if we want to quickly scale our application, we can do so in several ways.
We can edit our deployment using `kubectl edit deployment nginx-deployment -n nginx`, which will open a text editor within your terminal and enable you to modify the live deployment.
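Inside the editor, the field we are after is `spec.replicas`. A fragment of what the live deployment looks like at that point, assuming the manifest sketched earlier:

```yaml
spec:
  replicas: 3   # was 1; raise to scale out, lower to scale back in
```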
Upon saving the above in your text editor within the terminal, if there were no issues and the correct formatting was used, then you should see additional pods deployed in your namespace.
We can equally scale our application back down to 1 again using either method. I used the edit option, but you can also use the scale command above.
Hopefully, you can see the use case here: not only are things super fast to spin up and down, but we also have the ability to quickly scale our applications up and down. If this was a web server, we could scale up during busy times and down when the load is quiet.
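If you would rather not open an editor at all, `kubectl scale` does the same thing in one line (the replica count here is arbitrary):

```shell
kubectl scale deployment nginx-deployment --replicas=3 -n nginx
```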
If you look above at our service, you will see there is no External IP available, so we cannot just open a web browser and expect it to be there magically. For access, we have a few options.
**LoadBalancer** - Creates an external load balancer in the current cloud. We are using minikube, but if you have built your own Kubernetes cluster, i.e. what we did in VirtualBox, you would need to deploy a LoadBalancer such as MetalLB into your cluster to provide this functionality.
**Port-Forward** - We also have the ability to port forward, which allows you to access and interact with internal Kubernetes cluster processes from your localhost. This option is only for testing and fault finding; a sketch follows below.
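A port-forward for this demo might look like the following; while the command runs, the nginx welcome page is reachable at `http://localhost:8080` (the local port is arbitrary):

```shell
kubectl port-forward deployment/nginx-deployment -n nginx 8080:80
```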
We are now going to run through specifically how we can expose our application with minikube. We can also use minikube to create a URL to connect to a service. [More details](https://minikube.sigs.k8s.io/docs/commands/service/)
Next, we are going to create a new service using `kubectl expose deployment nginx-deployment --name nginx-service --namespace nginx --port=80 --type=NodePort`. Notice here that we are using expose and changing the type to NodePort.
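With the NodePort service created, minikube can hand us a URL to reach it (the IP and port in the output will differ on your system):

```shell
minikube service nginx-service -n nginx --url
```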
Helm is another way in which we can deploy our applications. It is known as "The package manager for Kubernetes". You can find out more [here](https://helm.sh/)
Helm is a package manager for Kubernetes; it could be considered the Kubernetes equivalent of yum or apt. Helm deploys charts, which you can think of as packaged applications: a chart is a blueprint for your pre-configured application resources, which can be deployed as one easy-to-use package. You can then deploy another version of the chart with a different set of configurations.
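As a quick taste of the workflow, using the publicly available Bitnami repository purely as an example (any chart repository works the same way):

```shell
# Add a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Deploy a release of the nginx chart, then list what is installed
helm install my-nginx bitnami/nginx
helm list
```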
They have a site where you can browse all the Helm charts available, and of course, you can create your own. The documentation is also clear and concise and not as daunting as when I first started hearing the term Helm amongst all of the other new words in this space.
It is super simple to get Helm up and running and installed. You can find the binaries and download links here for pretty much all distributions, including your Raspberry Pi arm64 devices.
Or you can use an installer script; the benefit here is that the latest version of Helm will be downloaded and installed.
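At the time of writing, the Helm project publishes that installer script on GitHub; fetching and running it looks like this:

```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```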
Finally, there is also the option to use a package manager to install Helm: Homebrew for macOS, Chocolatey for Windows, apt with Ubuntu/Debian, plus snap and pkg.
A good resource to link here would be [ArtifactHUB](https://artifacthub.io/), a place to find, install and publish Kubernetes packages. I will also give a shout-out to [KubeApps](https://kubeapps.com/), which is a UI to display Helm charts.
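Helm can also search ArtifactHUB directly from the command line:

```shell
helm search hub nginx
```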
We have started covering some of the topics mentioned above, but we are going to get more hands-on tomorrow with our second cluster deployment, and then we can start deploying applications into our clusters.