Day 46, 47, 48

Michael Cade 2022-02-14 16:43:48 +00:00
parent a12b000b69
commit 1a3ef199e5
41 changed files with 8716 additions and 5 deletions

View File

@@ -2,5 +2,7 @@
 FROM ubuntu:18.04
 # Install nginx and curl
 RUN apt-get update && apt-get upgrade -y
-RUN apt-get install -y nginx curl
-RUN rm -rf /var/lib/apt/lists/*
+#RUN apt-get install -y nginx curl
+#RUN rm -rf /var/lib/apt/lists/*
+RUN groupadd -g 1000 basicuser && useradd -r -u 1000 -g basicuser basicuser
+USER basicuser

View File

@@ -0,0 +1,58 @@
## Compose sample application
### Elasticsearch, Logstash, and Kibana (ELK) in single-node
Project structure:
```
.
└── docker-compose.yml
```
[_docker-compose.yml_](docker-compose.yml)
```
services:
  elasticsearch:
    image: elasticsearch:7.8.0
    ...
  logstash:
    image: logstash:7.8.0
    ...
  kibana:
    image: kibana:7.8.0
    ...
```
## Deploy with docker-compose
```
$ docker-compose up -d
Creating network "elasticsearch-logstash-kibana_elastic" with driver "bridge"
Creating es ... done
Creating log ... done
Creating kib ... done
```
## Expected result
Listing containers must show three containers running and the port mapping as below:
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
173f0634ed33 logstash:7.8.0 "/usr/local/bin/dock…" 43 seconds ago Up 41 seconds 0.0.0.0:5000->5000/tcp, 0.0.0.0:5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp log
b448fd3e9b30 kibana:7.8.0 "/usr/local/bin/dumb…" 43 seconds ago Up 42 seconds 0.0.0.0:5601->5601/tcp kib
366d358fb03d elasticsearch:7.8.0 "/tini -- /usr/local…" 43 seconds ago Up 42 seconds (healthy) 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp es
```
After the application starts, navigate to the links below in your web browser:
* Elasticsearch: [`http://localhost:9200`](http://localhost:9200)
* Logstash: [`http://localhost:9600`](http://localhost:9600)
* Kibana: [`http://localhost:5601/api/status`](http://localhost:5601/api/status)
Stop and remove the containers
```
$ docker-compose down
```
## Attribution
The [example Nginx logs](https://github.com/docker/awesome-compose/tree/master/elasticsearch-logstash-kibana/logstash/nginx.log) are copied from [here](https://github.com/elastic/examples/blob/master/Common%20Data%20Formats/nginx_json_logs/nginx_json_logs).

View File

@@ -0,0 +1,48 @@
services:
  elasticsearch:
    image: elasticsearch:7.16.1
    container_name: es
    environment:
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 3
    networks:
      - elastic
  logstash:
    image: logstash:7.16.1
    container_name: log
    environment:
      discovery.seed_hosts: logstash
      LS_JAVA_OPTS: "-Xms512m -Xmx512m"
    volumes:
      - ./logstash/pipeline/logstash-nginx.config:/usr/share/logstash/pipeline/logstash-nginx.config
      - ./logstash/nginx.log:/home/nginx.log
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "5044:5044"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - elastic
    command: logstash -f /usr/share/logstash/pipeline/logstash-nginx.config
  kibana:
    image: kibana:7.16.1
    container_name: kib
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elastic
networks:
  elastic:
    driver: bridge

File diff suppressed because it is too large

View File

@@ -0,0 +1,30 @@
input {
  file {
    path => "/home/nginx.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  json {
    source => "message"
  }
  geoip {
    source => "remote_ip"
  }
  useragent {
    source => "agent"
    target => "useragent"
  }
}
output {
  elasticsearch {
    hosts => ["http://es:9200"]
    index => "nginx"
  }
  stdout {
    codec => rubydebug
  }
}

View File

@@ -0,0 +1,31 @@
version: "3.9"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}

31 binary image files added, not shown (the screenshots Day46_Containers1-17, Day47_Containers1-11 and Day48_Containers1-3 referenced below).
View File

@@ -0,0 +1,172 @@
## Docker Compose
Running a single container is great when you have a self-contained image with everything you need for a single use case. Things get interesting when you want to build an application across multiple container images. For example, if I had a website front end that required a backend database, I could put everything in one container, but it would be better and more efficient to give the database its own container.
This is where Docker Compose comes in: a tool that allows you to run more complex applications across multiple containers, with the benefit of a single file and a single command to spin up your application. The example I am going to walk through in this post is from the [Docker QuickStart sample apps (Quickstart: Compose and WordPress)](https://docs.docker.com/samples/wordpress/).
In this first example we are going to:
- Use Docker Compose to bring up WordPress and a separate MySQL instance.
- Use a YAML file which will be called `docker-compose.yml`
- Build the project
- Configure WordPress via a Browser
- Shutdown and Clean up
### Install Docker Compose
As mentioned, Docker Compose is a tool. If you are on macOS or Windows, Compose is included in your Docker Desktop installation. However, you might want to run your containers on a Windows Server host or a Linux server, in which case you can install it using these instructions: [Install Docker Compose](https://docs.docker.com/compose/install/)
To confirm we have `docker-compose` installed on our system, we can open a terminal and simply type `docker-compose`.
![](Images/Day46_Containers1.png)
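As a quick sanity check, either of the following should print a version string (the plugin form assumes a newer Docker Desktop release):
```
# Standalone binary / classic Compose
docker-compose --version
# Compose v2 plugin, bundled with newer Docker Desktop releases
docker compose version
```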
### Docker-Compose.yml (YAML)
The next thing to talk about is the docker-compose.yml file, which you can find in the container folder of the repository. But more importantly, we need to discuss YAML in general a little.
YAML could almost have its own session, as you are going to find it in so many different places. But for the most part:
"YAML is a human-friendly data serialization language for all programming languages."
It is commonly used for configuration files, and in some applications where data is being stored or transmitted. You have no doubt come across XML files that tend to serve the same configuration purpose. YAML provides a minimal syntax aimed at those same use cases.
YAML Ain't Markup Language (YAML) is a serialisation language that has steadily increased in popularity over the last few years. Its object serialisation abilities make it a viable replacement for languages like JSON.
The YAML acronym was originally shorthand for Yet Another Markup Language, but the maintainers renamed it to YAML Ain't Markup Language to place more emphasis on its data-oriented features.
Anyway, back to the docker-compose.yml file. This is a configuration file describing what we want to happen when multiple containers are deployed on our single system.
Straight from the tutorial linked above, you can see the contents of the file look like this:
```
version: "3.9"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}
```
We declare a version, and then a large part of this docker-compose.yml file is made up of our services: we have a db service and a wordpress service. You can see each of those has an image defined with a version tag associated. Unlike our first walkthroughs, we are now also introducing state into our configuration: we create volumes so we can store our databases there.
We then have some environment variables such as passwords and usernames. These files can get very complicated, but the YAML configuration format keeps them readable overall.
### Build the project
Next up, we can head back to our terminal and use some commands with our docker-compose tool. Navigate to the directory where your docker-compose.yml file is located.
From the terminal we can simply run `docker-compose up -d`. This will start pulling those images and standing up your multi-container application.
The `-d` in this command means detached mode: the containers will run in the background, leaving your terminal free.
![](Images/Day46_Containers2.png)
If we now run the `docker ps` command, you can see we have two containers running, one being WordPress and the other being MySQL.
![](Images/Day46_Containers3.png)
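Compose also gives you a project-scoped view; a minimal sketch of two handy commands, run from the same directory as the compose file:
```
# List only the containers that belong to this compose project
docker-compose ps
# Follow the logs of a single service (Ctrl+C to stop following)
docker-compose logs -f wordpress
```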
Next, we can validate that WordPress is up and running by opening a browser and going to `http://localhost:8000`, where you should see the WordPress setup page.
![](Images/Day46_Containers4.png)
We can run through the setup of WordPress, and then start building our website as we see fit in the console below.
![](Images/Day46_Containers5.png)
If we then open a new tab and navigate to that same address as before, `http://localhost:8000`, we will now see a simple default theme with our site title "90DaysOfDevOps" and a sample post.
![](Images/Day46_Containers6.png)
Before we make any changes, open Docker Desktop and navigate to the Volumes tab, where you will see two volumes associated with our containers: one for wordpress and one for db.
![](Images/Day46_Containers7.png)
My current WordPress theme is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty". Back in the dashboard, we can make those changes.
![](Images/Day46_Containers8.png)
I am also going to add a new post to my site, and below you can see the latest version of our new site.
![](Images/Day46_Containers9.png)
### Clean Up or not
If we were now to use the command `docker-compose down`, this would bring down our containers but would leave our volumes in place.
![](Images/Day46_Containers10.png)
We can just confirm in Docker Desktop that our volumes are still there though.
![](Images/Day46_Containers11.png)
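You can confirm the same from the terminal. Note that Compose prefixes volume names with the project (directory) name, so the exact names below assume the compose file lives in a folder called wordpress:
```
# List volumes - the two named volumes survive a plain `docker-compose down`
docker volume ls
# Inspect one of them (name assumes a project folder called "wordpress")
docker volume inspect wordpress_db_data
```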
If we then want to bring things back up, we can issue the `docker-compose up -d` command from within the same directory, and we have our application back up and running.
![](Images/Day46_Containers12.png)
We can then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change are still in place.
![](Images/Day46_Containers13.png)
If we want to get rid of the containers and those volumes, then issuing `docker-compose down --volumes` will also destroy the volumes.
![](Images/Day46_Containers14.png)
Now when we use `docker-compose up -d` again we will be starting from scratch; however, the images will still be local on our system, so you won't need to re-pull them from the DockerHub repository.
I know that when I started diving into docker-compose and its capabilities, I was confused about where it sits alongside container orchestration tools such as Kubernetes. Everything we have done in this short demo is focused on one host: we have wordpress and db running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, and we also don't have the ability to easily scale the requirements of our application up and down.
Our next section is going to cover Kubernetes but we have a few more days of Containers in general first.
This is also a great resource for samples of docker compose applications with multiple integrations. [Awesome-Compose](https://github.com/docker/awesome-compose)
In the above repository there is a great example that deploys Elasticsearch, Logstash, and Kibana (ELK) as a single-node stack.
I have uploaded the files to the [Containers folder](/Days/Containers/elasticsearch-logstash-kibana/). When you have this folder locally, navigate there and you can simply use `docker-compose up -d`.
![](Images/Day46_Containers15.png)
We can then check that those containers are running with `docker ps`.
![](Images/Day46_Containers16.png)
Now we can open a browser for each of the containers:
![](Images/Day46_Containers17.png)
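If you prefer the terminal to a browser, a quick `curl` against each published port performs the same check (assuming the default ports from the compose file):
```
# Elasticsearch cluster info
curl http://localhost:9200
# Logstash node info
curl "http://localhost:9600/?pretty"
# Kibana status API
curl http://localhost:5601/api/status
```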
To remove everything we can use the `docker-compose down` command.
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minutes](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)
See you on [Day 47](day47.md)

View File

@@ -0,0 +1,130 @@
## Docker Networking & Security
So far in this container section we have made things happen, but we have not really looked at how things work behind the scenes, either from a networking point of view or from a security one. That is the plan for this session.
### Docker Networking Basics
Open a terminal and type the command `docker network`. This is the main command for configuring and managing container networks.
From the output below you can see how we can use the command and all of the subcommands available. We can create new networks, list existing ones, and inspect and remove networks.
![](Images/Day47_Containers1.png)
Let's take a look at the networks that exist out of the box after our installation, using the `docker network list` command.
Each network gets a unique ID and NAME. Each network is also associated with a single driver. Notice that the "bridge" network and the "host" network have the same name as their respective drivers.
![](Images/Day47_Containers2.png)
Next we can take a deeper look into our networks with the `docker network inspect` command.
Running `docker network inspect bridge` gives me all the configuration details of that specific network. This includes name, ID, driver, connected containers and, as you can see, quite a lot more.
![](Images/Day47_Containers3.png)
### Docker: Bridge Networking
As you have seen above, a standard installation of Docker Desktop gives us a pre-built network called `bridge`. If you look back at the `docker network list` output, you will see that the network called bridge is associated with the `bridge` driver. Just because they have the same name doesn't mean they are the same thing: connected, but not the same thing.
The output above also shows that the bridge network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the bridge driver - the bridge driver provides single-host networking.
All networks created with the bridge driver are based on a Linux bridge (a.k.a. a virtual switch).
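To see this in action, you can create a user-defined bridge network of your own; a minimal sketch (the names my_bridge and testbox are just examples):
```
# Create a user-defined bridge network
docker network create -d bridge my_bridge
# Attach a container to it instead of the default bridge
docker run -dt --network my_bridge --name testbox ubuntu sleep infinity
# Confirm the container shows up under the new network
docker network inspect my_bridge
# Clean up
docker rm -f testbox && docker network rm my_bridge
```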
### Connect a Container
By default the bridge network is assigned to new containers, meaning that unless you specify a network, all containers will be connected to the bridge network.
Let's create a new container with the command `docker run -dt ubuntu sleep infinity`
The sleep command above is just going to keep the container running in the background so we can mess around with it.
![](Images/Day47_Containers4.png)
If we then check our bridge network with `docker network inspect bridge`, you will see that we have a container matching what we have just deployed, because we did not specify a network.
![](Images/Day47_Containers5.png)
We can also dive into the container using `docker exec -it 3a99af449ca2 bash`; you will have to use `docker ps` to get your own container ID.
From here our image doesn't have anything to ping with, so we need to run `apt-get update && apt-get install -y iputils-ping` and then ping an external address: `ping -c5 www.90daysofdevops.com`
![](Images/Day47_Containers6.png)
To clean this up we can run `docker stop 3a99af449ca2` (again, use `docker ps` to find your container ID); this will stop our container.
### Configure NAT for external connectivity
In this step we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside of the container. This means that traffic that hits the Docker host on port 8080 will be passed on to port 80 inside the container.
Start a new container based on the official NGINX image by running `docker run --name web1 -d -p 8080:80 nginx`
![](Images/Day47_Containers7.png)
Review the container status and port mappings by running `docker ps`
![](Images/Day47_Containers8.png)
The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping: `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker host's IP address on port 8080).
Now we need the IP address of our actual host; we can get this by going into our WSL terminal and using the `ip addr` command.
![](Images/Day47_Containers9.png)
Then we can take this IP, open a browser and head to `http://172.25.218.154:8080/`. Your IP might be different. This confirms that NGINX is accessible.
![](Images/Day47_Containers10.png)
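You can make the same checks from the terminal; `docker port` lists a container's published mappings and `curl` exercises one:
```
# Show the port mappings for the web1 container
docker port web1
# Fetch just the response headers through the mapped port
curl -I http://localhost:8080
```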
I have taken these instructions from a lab from way back at DockerCon 2017, but they are still relevant today. However, the rest of that walkthrough goes into Docker Swarm, and I am not going to look into that here. [Docker Networking - DockerCon 2017](https://github.com/docker/labs/tree/master/dockercon-us-2017/docker-networking)
### Securing your containers
Containers provide a more secure environment for your workloads than a full server configuration. They offer the ability to break your applications up into much smaller, loosely coupled components, each isolated from one another, which helps reduce the attack surface overall.
But they are not immune from hackers looking to exploit systems. We still need to understand the security pitfalls of the technology and maintain best practices.
### Move away from root permission
All of the containers we have deployed so far have been running their processes as root within the container, which means they have full administrative access to your container and host environments. For the purposes of these walkthroughs we knew those systems were not going to be up and running for long, but you saw how easy it was to get up and running.
We can add a few steps to our process to make non-root users our preferred best practice: when creating our Dockerfile, we can create user accounts. You can also find this example in the containers folder of the repository.
```
# Use the official Ubuntu 18.04 as base
FROM ubuntu:18.04
# Bring the base packages up to date
RUN apt-get update && apt-get upgrade -y
# Create a dedicated group and unprivileged user with a fixed UID/GID
RUN groupadd -g 1000 basicuser && useradd -r -u 1000 -g basicuser basicuser
# All subsequent instructions and the container process run as this user
USER basicuser
```
We can also use `docker run --user 1009 ubuntu`; the `--user` flag on the Docker run command overrides any user specified in your Dockerfile. With this approach, your container will always run with the least privilege, provided user identifier 1009 also has the lowest permission level.
However, this method doesn't address the underlying security flaw of the image itself. Therefore it's better to specify a non-root user in your Dockerfile so your containers always run securely.
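A quick way to confirm which user a container process runs as, using the stock ubuntu image:
```
# Without --user the process runs as root (uid=0)
docker run --rm ubuntu id
# Overriding at run time drops to an unprivileged uid
docker run --rm --user 1009 ubuntu id
```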
### Private Registry
Another area we have leaned on heavily is the public registry, DockerHub. With a private registry of container images set up by your organisation, you can host it wherever you wish, or there are managed services for this as well; all in all, this gives you complete control over the images available to you and your team.
DockerHub is great for giving you a baseline, but it's only providing a basic service where you have to put a lot of trust in the image publisher.
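If you want to experiment, the open-source registry image lets you stand up a minimal private registry locally; a sketch only, with no TLS or authentication configured:
```
# Run a local registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# Tag an existing local image for the local registry and push it
docker tag ubuntu:latest localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu
# Pull it back to prove the round trip
docker pull localhost:5000/my-ubuntu
```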
### Lean & Clean
I have mentioned this throughout, and although it is not purely about security, the size of your container can also affect security in terms of attack surface: if you have resources you do not use in your application, then you do not need them in your container.
This is also my major concern with pulling `latest` images, because that can bring a lot of bloat into your images as well. DockerHub does show the compressed size for each of the images in a repository.
`docker images` is a great command to see the size of your images.
![](Images/Day47_Containers11.png)
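You can also trim that output down to just the columns that matter here:
```
# Show just repository, tag and size for each local image
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}"
```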
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minutes](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)
See you on [Day 48](day48.md)

View File

@@ -0,0 +1,110 @@
## Alternatives to Docker
I did say at the very beginning of this section that we were going to be using Docker, simply because resource-wise there is so much available and the community is very big, but also because Docker is really where the push that made containers popular came from. I would encourage you to go and watch some of the history of Docker and how it came to be; I found it very useful.
But as I have alluded to, there are alternatives to Docker. If we think about what Docker is and what we have covered: it is a platform for developing, testing, deploying, and managing applications.
I want to highlight a few alternatives to Docker that you might see, now or in the future, out in the wild.
### Podman
What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.
I am going to be looking at this from a Windows point of view, but know that, like Docker, on Linux there is no requirement for virtualisation, as Podman will use the underlying OS, which it cannot do in the Windows world.
Podman can be run under WSL2, although the experience is not as sleek as with Docker Desktop. There is also a Windows remote client with which you can connect to a Linux VM where your containers will run.
My Ubuntu on WSL2 is the 20.04 release. Following the next steps will enable you to install Podman on your WSL instance.
```
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" |
sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
```
Add the GPG Key
```
curl -L "https://download.opensuse.org/repositories/devel:/kubic:\
/libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add -
```
Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally, we can install Podman using `sudo apt install podman`.
We can now use a lot of the same commands we have been using for Docker; note, though, that we do not have that nice Docker Desktop UI. You can see below that I used `podman images` and had nothing after install; then I used `podman pull ubuntu` to pull down the ubuntu container image.
![](Images/Day48_Containers1.png)
We can then run our Ubuntu image using `podman run -dit ubuntu` and `podman ps` to see our running image.
![](Images/Day48_Containers2.png)
To then get into that container we can run `podman attach dazzling_darwin`; your container name will most likely be different.
![](Images/Day48_Containers3.png)
If you are moving from Docker to Podman, it is also common to add `alias docker=podman` to your shell config file; that way, any command you run with docker will in fact use Podman.
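A minimal sketch of adding that alias, assuming a bash shell:
```
# Make `docker` invoke podman in this and future shells
echo "alias docker=podman" >> ~/.bashrc
source ~/.bashrc
```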
### LXC
LXC is a containerisation engine that enables users to create multiple isolated Linux container environments. Unlike Docker, LXC acts more like a hypervisor, creating multiple Linux machines with separate system files and networking features. It was around before Docker and then made a short comeback due to Docker's shortcomings.
LXC is lightweight, though, and easily deployed.
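A minimal sketch using the classic lxc-* commands, assuming the LXC userspace tools are installed on a Linux host (the container name mycontainer is just an example):
```
# Create a container from the download template (Ubuntu 20.04, amd64)
sudo lxc-create -t download -n mycontainer -- -d ubuntu -r focal -a amd64
# Start it in the background, then attach a shell
sudo lxc-start -n mycontainer
sudo lxc-attach -n mycontainer
```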
### Containerd
containerd is a standalone container runtime that brings simplicity and robustness, as well as, of course, portability. containerd was formerly a tool that ran as part of the Docker container services, until Docker decided to graduate its components into standalone projects.
It is a project in the Cloud Native Computing Foundation, placing it in the same class as popular container tools like Kubernetes, Prometheus, and CoreDNS.
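containerd ships with a low-level client called ctr; a sketch assuming containerd is installed and running on a Linux host:
```
# Pull an image directly into containerd's image store
sudo ctr images pull docker.io/library/hello-world:latest
# Run it as a container named demo, removing it on exit
sudo ctr run --rm docker.io/library/hello-world:latest demo
```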
### Other Docker tooling
We could also mention tooling and options around Rancher and VirtualBox, but we can cover those in more detail another time.
[**Gradle**](https://gradle.org/)
- Build scans allow teams to collaboratively debug their scripts and track the history of all builds.
- Execution options give teams the ability to continuously build so that whenever changes are inputted, the task is automatically executed.
- The custom repository layout gives teams the ability to treat any file directory structure as an artifact repository.
[**Packer**](https://packer.io/)
- Ability to create multiple machine images in parallel to save developer time and increase efficiency.
- Teams can easily debug builds using Packer's debugger, which inspects failures and allows teams to try out solutions before restarting builds.
- Support with many platforms via plugins so teams can customize their builds.
[**Logspout**](https://github.com/gliderlabs/logspout)
- Logging tool - the tool's customisability allows teams to ship the same logs to multiple destinations.
- Teams can easily manage their files because the tool only requires access to the Docker socket.
- Completely open-sourced and easy to deploy.
[**Logstash**](https://www.elastic.co/products/logstash)
- Customize your pipeline using Logstash's pluggable framework.
- Easily parse and transform your data for analysis and to deliver business value.
- Logstash's variety of outputs lets you route your data where you want.
[**Portainer**](https://www.portainer.io/)
- Utilise pre-made templates or create your own to deploy applications.
- Create teams and assign roles and permissions to team members.
- Know what is running in each environment using the tool's dashboard.
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minutes](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)
- [Podman | Daemonless Docker | Getting Started with Podman](https://www.youtube.com/watch?v=Za2BqzeZjBk)
- [LXC - Guide to building a LXC Lab](https://www.youtube.com/watch?v=cqOtksmsxfg)
See you on [Day 49](day49.md)

View File

@@ -76,9 +76,9 @@ This will not cover all things DevOps but it will cover the areas that I feel wi
 - [✔️] 🏗️ 43 > [What is Docker & Getting installed](Days/day43.md)
 - [✔️] 🏗️ 44 > [Docker Images & Hands-On with Docker Desktop](Days/day44.md)
 - [✔️] 🏗️ 45 > [The anatomy of a Docker Image](Days/day45.md)
-- [🚧] 🏗️ 46 > [](Days/day46.md)
-- [] 🏗️ 47 > [](Days/day47.md)
-- [] 🏗️ 48 > [](Days/day48.md)
+- [✔️] 🏗️ 46 > [Docker Compose](Days/day46.md)
+- [✔️] 🏗️ 47 > [Docker Networking & Security](Days/day47.md)
+- [🚧] 🏗️ 48 > [Alternatives to Docker](Days/day48.md)
### Kubernetes