Merge branch 'MichaelCade:main' into main

.github/workflows/web-app-deploy.yml (new file, +29 lines)
@@ -0,0 +1,29 @@
name: Web App Deployment
on:
  workflow_dispatch:
  push:
    branches:
      - web_app
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - run: pip install mkdocs-material
      - run: mkdocs gh-deploy --force
.gitignore (new file, +1 line)
@@ -0,0 +1 @@
.DS_Store

2022/.DS_Store (binary)
2022/es/.DS_Store (binary)
2022/tr/.DS_Store (binary)
2023/.DS_Store (binary)
@@ -18,7 +18,7 @@ Istio provides details around:
I have set up specific days to cover deeper observability but, let's get it going and use some tools like:
- Prometheus
- Grafana
- Jaegar
- Jaeger
- Kiali

One consideration is that there are more production and enterprise-ready offerings that absolutely should be explored.
@@ -135,12 +135,12 @@ Go back to where the Istio dashboards are located, and click the Service dashboa

I'll dive more into these details in future days. Kill the dashboard by hitting *ctrl+c*

### Jaegar
Jaegar is all ready to go. It's an excellent tracing tool to help piece together a trace, which is comprised of multiple spans for a given request flow.
### Jaeger
Jaeger is all ready to go. It's an excellent tracing tool to help piece together a trace, which is comprised of multiple spans for a given request flow.

Let's enable the dashboard:
```
istioctl dashboard jaegar
istioctl dashboard jaeger
```
A new window should pop up with a curious-looking gopher. That gopher is inspecting stuff.

@@ -156,10 +156,10 @@ I picked the ratings service which shows me all the spans it's associated with i

All the different traces:

![all_traces_jaegar](images/Day81-5.png)
![all_traces_jaeger](images/Day81-5.png)

All the different spans within the *ratings* trace:
![all_spans_jaegar](images/Day81-6.png)
![all_spans_jaeger](images/Day81-6.png)


Ever used wireshark before?
2023/images/.DS_Store (binary)

2024.md (120 changed lines)
@@ -12,6 +12,14 @@ In 2024 we are going big and getting more of the community involved and explorin

A big thing about the repository has been the accessibility in regards that all tools and hands-on scenarios we have walked through are freely available to the community. This will continue to be the ethos of this community and event.

You will find all your 2024 sessions on the link below

<p align="center">
  <a href="https://www.youtube.com/playlist?list=PLsKoqAvws1psCnkDaTPRHaqcTLSTPDFBR">
    <img src="2024/Images/YouTubePlaylist.jpg?raw=true" alt="YouTube Playlist" width="50%" height="50%" />
  </a>
</p>

[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/N4N33YRCS)

If you have questions and want to get involved then join the discord and share your questions and stories with the community.
@ -68,70 +76,48 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
|
||||
- [✔️][✔️] ♾️ 44 > [Exploring Firecracker](2024/day44.md) - Irine Kokilashvili
|
||||
- [✔️][✔️] ♾️ 45 > [Microsoft DevOps Solutions or how to integrate the best of Azure DevOps and GitHub](2024/day45.md) - Peter De Tender
|
||||
- [✔️][✔️] ♾️ 46 > [Mastering AWS Systems Manager: Simplifying Infrastructure Management](2024/day46.md) - Adit Modi
|
||||
- [ ][✔️] ♾️ 47 > [Azure logic app, low / no code](2024/day47.md) - Ian Engelbrecht
|
||||
- [ ][ ] ♾️ 48 > [From Puddings to Platforms: Bringing Ideas to life with ChatGPT](2024/day48.md) - Anthony Spiteri
|
||||
- [ ][✔️] ♾️ 49 > [From Confusion To Clarity: How Gherkin And Specflow Ensures Clear Requirements and Bug-Free Apps](2024/day49.md) - Steffen Jørgensen
|
||||
- [ ][✔️] ♾️ 50 > [State of cloud native 2024](2024/day50.md) - Saiyam Pathak
|
||||
- [ ][ ] ♾️ 51 > [](2024/day51.md)
|
||||
- [ ][ ] ♾️ 52 > [Multi-Model Databases and its place in DevOps](2024/day52.md) - Pratim Bhosale
|
||||
- [ ][ ] ♾️ 53 > [Implementing SRE (Site Reliability Engineering)](2024/day53.md) - Andy Babiec
|
||||
- [ ][] ♾️ 54 > [](2024/day54.md)
|
||||
- [ ][✔️] ♾️ 55 > [Bringing Together IaC and CM with Terraform Provider for Ansible](2024/day55.md) - Razvan Ionescu
|
||||
- [ ][ ] ♾️ 56 > [Automated database deployment within the DevOps process](2024/day56.md) - Marc Müller
|
||||
- [ ][ ] ♾️ 57 > [](2024/day57.md)
|
||||
- [ ][ ] ♾️ 58 > [OSV Scanner: A Powerful Tool for Open Source Security](2024/day58.md) - Paras Mamgain
|
||||
- [ ][ ] ♾️ 59 > [Continuous Delivery pipelines for cloud infrastructure](2024/day59.md) - Michael Lihs
|
||||
- [ ][ ] ♾️ 60 > [Migrating a monolith to Cloud-Native and the stumbling blocks that you don’t know about](2024/day60.md) - JJ Asghar
|
||||
- [ ][✔️] ♾️ 61 > [Demystifying Modernisation: True Potential of Cloud Technology](2024/day61.md) - Anupam Phoghat
|
||||
- [ ][ ] ♾️ 62 > [Chatbots are going to destroy infrastructures and your cloud bills](2024/day62.md) - Stanislas Girard
|
||||
- [ ][ ] ♾️ 63 > [Introduction to Database Operators for Kubernetes](2024/day63.md) - Juarez Junior
|
||||
- [ ][ ] ♾️ 64 > [The Invisible Guardians: Unveiling the Power of Monitoring and Observability in the Digital Age](2024/day64.md) - Santosh Kumar Perumal
|
||||
- [ ][✔️] ♾️ 65 > [Azure pertinent DevOps for non-coders](2024/day65.md) - Sucheta Gawade
|
||||
- [ ][✔️] ♾️ 66 > [A Developer's Journey to the DevOps: The Synergy of Two Worlds](2024/day66.md) - Jonah Andersson
|
||||
- [ ][ ] ♾️ 67 > [Art of DevOps: Harmonizing Code, Culture, and Continuous Delivery](2024/day67.md) - Rohit Ghumare
|
||||
- [ ][ ] ♾️ 68 > [Service Mesh for Kubernetes 101: The Secret Sauce to Effortless Microservices Management](2024/day68.md) - Mohd Imran
|
||||
- [ ][ ] ♾️ 69 > [Enhancing Kubernetes security, visibility, and networking control logic](2024/day69.md) - Dean Lewis
|
||||
- [ ][✔️] ♾️ 70 > [Simplified Cloud Adoption with Microsoft's Terraforms Azure Landing Zone Module](2024/day70.md) - Simone Bennett
|
||||
- [ ][] ♾️ 71 > [](2024/day71.md)
|
||||
- [ ][ ] ♾️ 72 > [Infrastructure as Code with Pulumi](2024/day72.md) - Scott Lowe
|
||||
- [ ][ ] ♾️ 73 > [E2E Test Before Merge](2024/day73.md) - Natalie Lunbeck
|
||||
- [ ][ ] ♾️ 74 > [Workload Identity Federation with Azure DevOps and Terraform](2024/day74.md) - Arindam Mitra
|
||||
- [ ][ ] ♾️ 75 > [Achieving Regulatory Compliance in Multi-Cloud Deployments with Terraform](2024/day75.md) - Eric Evans
|
||||
- [ ][ ] ♾️ 76 > [All you need to know about AWS CDK.](2024/day76.md) - Amogha Kancharla
|
||||
- [ ][ ] ♾️ 77 > [Connect to Microsoft APIs in Azure DevOps Pipelines using Workload Identity Federation](2024/day77.md) - Jan Vidar Elven
|
||||
- [ ][ ] ♾️ 78 > [Scaling Terraform Deployments with GitHub Actions: Essential Configurations](2024/day78.md) - Thomas Thornton
|
||||
- [ ][✔️] ♾️ 79 > [DevEdOps](2024/day79.md) - Adam Leskis
|
||||
- [ ][ ] ♾️ 80 > [Unlocking K8s Troubleshooting Best Practices with Botkube](2024/day80.md) - Maria Ashby
|
||||
- [ ][✔️] ♾️ 81 > [Leveraging Kubernetes to build a better Cloud Native Development Experience](2024/day81.md) - Nitish Kumar
|
||||
- [ ][ ] ♾️ 82 > [Dev Containers in VS Code](2024/day82.md) - Chris Ayers
|
||||
- [ ][ ] ♾️ 83 > [IaC with Pulumi and GitHub Actions](2024/day83.md) - Till Spindler
|
||||
- [ ][✔️] ♾️ 84 > [Hacking Kubernetes For Beginners](2024/day84.md) - Benoit Entzmann
|
||||
- [ ][✔️] ♾️ 85 > [Reuse, Don't Repeat - Creating an Infrastructure as Code Module Library](2024/day85.md) - Sam Cogan
|
||||
- [ ][✔️] ♾️ 86 > [Tools To Make Your Terminal DevOps and Kubernetes Friendly](2024/day86.md) - Maryam Tavakkoli
|
||||
- [ ][✔️] ♾️ 87 > [Hands-on Performance Testing with k6](2024/day87.md) - Pepe Cano
|
||||
- [ ][✔️] ♾️ 88 > [What Developers Want from Internal Developer Portals](2024/day88.md) - Ganesh Datta
|
||||
- [ ][✔️] ♾️ 89 > [Seeding Infrastructures: Merging Terraform with Generative AI for Effortless DevOps Gardens](2024/day89.md) - Renaldi Gondosubroto
|
||||
- [ ][ ] ♾️ 90 > [Fighting fire with fire: Why we cannot always prevent technical issues with more tech](2024/day90.md) - Anaïs Urlichs
|
||||
|
||||
- [ ][ ] ♾️ 91 > [Day 91 - March 31st 2024 - Closing](2024/day90.md) - Michael Cade
|
||||
|
||||
[✔️]- DevOps with Windows - Nuno do Carmo
|
||||
|
||||
- Building Scalable Infrastructure For Advanced Air Mobility - Dan Lambeth
|
||||
- Elevating DevSecOps with Modern CDNs - Richard Yew
|
||||
- Empowering Developers with No Container Knowledge to build & deploy app on OpenShift - Shan N/A
|
||||
- Streamlining Data Pipelines: CI/CD Best Practices for Efficient Deployments - Monika Rajput
|
||||
- A practical guide to Test-Driven Development of infrastructure code - David Pazdera
|
||||
- Saving Cloud Costs Using Existing Prometheus Metrics - Pavan Gudiwada
|
||||
- Code, Connect, and Conquer: Mastering Personal Branding for Developers - Pavan Belagatti
|
||||
- Mastering AWS OpenSearch: Terraform Provisioning and Cost Efficiency Series - Ranjini Ganeshan
|
||||
- GitOps: The next Frontier in DevOps! - Megha Kadur
|
||||
- Container Security for Enterprise Kubernetes environments - Imran Roshan
|
||||
- Navigating Cloud-Native DevOps: Strategies for Seamless Deployment - Yhorby Matias
|
||||
- Distracted Development - Josh Ether
|
||||
- Continuous Delivery: From Distributed Monolith to Microservices as a unit of deployment - Naresh Waswani
|
||||
- DevSecOps: Integrating Security into the DevOps Pipeline - Reda Hajjami
|
||||
- The Reverse Technology Thrust - Rom Adams
|
||||
- PCI Compliance in the Cloud - Barinua Kane
|
||||
- End to End Data Governance using AWS Serverless Stack - Ankit Sheth
|
||||
- Multi-Cloud Service Discovery and Load Balancing - Vladislav Bilay
|
||||
- [✔️][✔️] ♾️ 47 > [Azure logic app, low / no code](2024/day47.md) - Ian Engelbrecht
|
||||
- [✔️][✔️] ♾️ 48 > [From Puddings to Platforms: Bringing Ideas to life with ChatGPT](2024/day48.md) - Anthony Spiteri
|
||||
- [✔️][✔️] ♾️ 49 > [From Confusion To Clarity: How Gherkin And Specflow Ensures Clear Requirements and Bug-Free Apps](2024/day49.md) - Steffen Jørgensen
|
||||
- [✔️][✔️] ♾️ 50 > [State of cloud native 2024](2024/day50.md) - Saiyam Pathak
|
||||
- [✔️][✔️] ♾️ 51 > [DevOps with Windows](2024/day51.md) - Nuno do Carmo
|
||||
- [✔️][✔️] ♾️ 52 > [Creating a custom Dev Container for your GitHub Codespace to start with Terraform on Azure](2024/day52.md) - Patrick Koch
|
||||
- [✔️][✔️] ♾️ 53 > [Gickup - Keep your repositories safe](2024/day53.md) - Andreas Wachter
|
||||
- [✔️][✔️] ♾️ 54 > [Mastering AWS OpenSearch: Terraform Provisioning and Cost Efficiency Series](2024/day54.md) - Ranjini Ganeshan
|
||||
- [✔️][✔️] ♾️ 55 > [Bringing Together IaC and CM with Terraform Provider for Ansible](2024/day55.md) - Razvan Ionescu
|
||||
- [✔️][✔️] ♾️ 56 > [Automated database deployment within the DevOps process](2024/day56.md) - Marc Müller
|
||||
- [✔️][✔️] ♾️ 57 > [A practical guide to Test-Driven Development of infrastructure code](2024/day57.md) - David Pazdera
|
||||
- [✔️][✔️] ♾️ 58 > [The Reverse Technology Thrust](2024/day58.md) - Rom Adams
|
||||
- [✔️][✔️] ♾️ 59 > [Continuous Delivery pipelines for cloud infrastructure](2024/day59.md) - Michael Lihs
|
||||
- [✔️][✔️] ♾️ 60 > [Migrating a monolith to Cloud-Native and the stumbling blocks that you don’t know about](2024/day60.md) - JJ Asghar
|
||||
- [✔️][✔️] ♾️ 61 > [Demystifying Modernisation: True Potential of Cloud Technology](2024/day61.md) - Anupam Phoghat
|
||||
- [✔️][✔️] ♾️ 62 > [Shifting Left for DevSecOps Using Modern Edge Platforms](2024/day62.md) - Michael Grimshaw & Lauren Bradley
|
||||
- [✔️][✔️] ♾️ 63 > [Diving into Container Network Namespaces](2024/day63.md) - Marino Wijay
|
||||
- [✔️][✔️] ♾️ 64 > [Let’s Do DevOps: Writing a New Terraform /Tofu AzureRm Data Source — All Steps!](2024/day64.md) - Kyler Middleton
|
||||
- [✔️][✔️] ♾️ 65 > [Azure pertinent DevOps for non-coders](2024/day65.md) - Sucheta Gawade
|
||||
- [✔️][✔️] ♾️ 66 > [A Developer's Journey to the DevOps: The Synergy of Two Worlds](2024/day66.md) - Jonah Andersson
|
||||
- [✔️][✔️] ♾️ 67 > [Art of DevOps: Harmonizing Code, Culture, and Continuous Delivery](2024/day67.md) - Rohit Ghumare
|
||||
- [✔️][✔️] ♾️ 68 > [Service Mesh for Kubernetes 101: The Secret Sauce to Effortless Microservices Management](2024/day68.md) - Mohd Imran
|
||||
- [✔️][✔️] ♾️ 69 > [Enhancing Kubernetes security, visibility, and networking control logic](2024/day69.md) - Dean Lewis
|
||||
- [✔️][✔️] ♾️ 70 > [Simplified Cloud Adoption with Microsoft's Terraforms Azure Landing Zone Module](2024/day70.md) - Simone Bennett
|
||||
- [✔️][✔️] ♾️ 71 > [Chatbots are going to destroy infrastructures and your cloud bills](2024/day71.md) - Stanislas Girard
|
||||
- [✔️][✔️] ♾️ 72 > [Infrastructure as Code with Pulumi](2024/day72.md) - Scott Lowe
|
||||
- [✔️][✔️] ♾️ 73 > [Introducing the Terraform Test Framework](2024/day73.md) - Ned Bellavance
|
||||
- [✔️][✔️] ♾️ 74 > [Workload Identity Federation with Azure DevOps and Terraform](2024/day74.md) - Arindam Mitra
|
||||
- [✔️][✔️] ♾️ 75 > [Distracted Development](2024/day75.md) - Josh Ether
|
||||
- [✔️][✔️] ♾️ 76 > [All you need to know about AWS CDK](2024/day76.md) - Amogha Kancharla
|
||||
- [✔️][✔️] ♾️ 77 > [Connect to Microsoft APIs in Azure DevOps Pipelines using Workload Identity Federation](2024/day77.md) - Jan Vidar Elven
|
||||
- [✔️][✔️] ♾️ 78 > [Scaling Terraform Deployments with GitHub Actions: Essential Configurations](2024/day78.md) - Thomas Thornton
|
||||
- [✔️][✔️] ♾️ 79 > [DevEdOps](2024/day79.md) - Adam Leskis
|
||||
- [✔️][✔️] ♾️ 80 > [Unlocking K8s Troubleshooting Best Practices with Botkube](2024/day80.md) - Maria Ashby
|
||||
- [✔️][✔️] ♾️ 81 > [Leveraging Kubernetes to build a better Cloud Native Development Experience](2024/day81.md) - Nitish Kumar
|
||||
- [✔️][✔️] ♾️ 82 > [Dev Containers in VS Code](2024/day82.md) - Chris Ayers
|
||||
- [✔️][✔️] ♾️ 83 > [Saving Cloud Costs Using Existing Prometheus Metrics](2024/day83.md) - Pavan Gudiwada
|
||||
- [✔️][✔️] ♾️ 84 > [Hacking Kubernetes For Beginners](2024/day84.md) - Benoit Entzmann
|
||||
- [✔️][✔️] ♾️ 85 > [Reuse, Don't Repeat - Creating an Infrastructure as Code Module Library](2024/day85.md) - Sam Cogan
|
||||
- [✔️][✔️] ♾️ 86 > [Tools To Make Your Terminal DevOps and Kubernetes Friendly](2024/day86.md) - Maryam Tavakkoli
|
||||
- [✔️][✔️] ♾️ 87 > [Hands-on Performance Testing with k6](2024/day87.md) - Pepe Cano
|
||||
- [✔️][✔️] ♾️ 88 > [What Developers Want from Internal Developer Portals](2024/day88.md) - Ganesh Datta
|
||||
- [✔️][✔️] ♾️ 89 > [Seeding Infrastructures: Merging Terraform with Generative AI for Effortless DevOps Gardens](2024/day89.md) - Renaldi Gondosubroto
|
||||
- [✔️][✔️] ♾️ 90 > [Fighting fire with fire: Why we cannot always prevent technical issues with more tech](2024/day90.md) - Anaïs Urlichs
|
||||
- [✔️][✔️] ♾️ 91 > [Team Topologies and Platform Engineering](2024/day90.md) - Joep Piscaer
|
22
2024/2024-blacklist.md
Normal file
@ -0,0 +1,22 @@
|
||||
## Sessions Accepted but now cannot deliver
|
||||
|
||||
- Streamlining Data Pipelines: CI/CD Best Practices for Efficient Deployments - Mounica Rajput
|
||||
- GitOps: The next Frontier in DevOps! - Megha Kadur
|
||||
- The Invisible Guardians: Unveiling the Power of Monitoring and Observability in the Digital Age - Santosh Kumar Perumal
|
||||
- Empowering Developers with No Container Knowledge to build & deploy app on OpenShift - Shan N/A
|
||||
- Building Scalable Infrastructure For Advanced Air Mobility - Dan Lambeth
|
||||
- Code, Connect, and Conquer: Mastering Personal Branding for Developers - Pavan Belagatti
|
||||
- Container Security for Enterprise Kubernetes environments - Imran Roshan
|
||||
- Navigating Cloud-Native DevOps: Strategies for Seamless Deployment - Yhorby Matias
|
||||
- Continuous Delivery: From Distributed Monolith to Microservices as a unit of deployment - Naresh Waswani
|
||||
- DevSecOps: Integrating Security into the DevOps Pipeline - Reda Hajjami
|
||||
- PCI Compliance in the Cloud - Barinua Kane
|
||||
- End to End Data Governance using AWS Serverless Stack - Ankit Sheth
|
||||
- Multi-Cloud Service Discovery and Load Balancing - Vladislav Bilay
|
||||
- Implementing SRE (Site Reliability Engineering) - Andy Babiec
|
||||
- OSV Scanner: A Powerful Tool for Open Source Security - Paras Mamgain
|
||||
- Introduction to Database Operators for Kubernetes - Juarez Junior
|
||||
- IaC with Pulumi and GitHub Actions - Till Spindler
|
||||
- How to build DevOps skills for AI World - Aravind Putrevu
|
||||
- E2E Test Before Merge - Natalie Lunbeck
|
||||
- Achieving Regulatory Compliance in Multi-Cloud Deployments with Terraform - Eric Evans
|
2024/Images/YouTubePlaylist.jpg (new binary file, 4.4 KiB)
2024/Images/day77.png (new binary file, 562 KiB)
2024/Images/day86.jpg (new binary file, 62 KiB)
@ -0,0 +1,31 @@
|
||||
# Day 1 - 2024 - Community Edition - Introduction
|
||||
[![Watch the video](thumbnails/day1.png)](https://www.youtube.com/watch?v=W7txKrH06gc)
|
||||
|
||||
In summary, the speaker is discussing a project they worked on for 90 days, focusing on DevOps and infrastructure as code. They highlight tools like Terraform, Ansible, Jenkins, Argo CD, GitHub Actions, and observability tools like Grafana, the ELK Stack, Prometheus, etc. The project also covered data storage, protection, and cybersecurity threats such as ransomware. It consisted of 13 topics covered in blog posts totaling 110,000 words and has received over 20,000 stars on GitHub.
|
||||
|
||||
The project's website is at 90daysofdevops.com where you can access the content from each edition (2022, 2023, and the upcoming 2024 Community Edition). The 2024 edition promises to have at least 90 unique sessions from diverse speakers covering a wide range of topics. They encourage viewers to ask questions on Discord or social media if they want to learn more. Videos will be released daily for ongoing engagement and learning.
|
||||
|
||||
|
||||
**IDENTITY:**
|
||||
|
||||
The 90 Days of DevOps project aims to provide a comprehensive resource for learning and understanding DevOps concepts, covering 13 topics in total. The project is built upon personal notes and has evolved into a repository with over 22,000 stars on GitHub.
|
||||
|
||||
**PURPOSE:**
|
||||
|
||||
The primary purpose of the project is to make DevOps accessible to everyone, regardless of their background or location. To achieve this, the project focuses on:
|
||||
|
||||
1. Providing practical, hands-on experience with Community Edition tools and software.
|
||||
2. Covering key topics such as security, cloud computing, data storage, and serverless services.
|
||||
3. Featuring contributions from diverse authors and experts in the field.
|
||||
|
||||
The ultimate goal is to create a valuable resource for anyone looking to learn about DevOps, with a focus on community engagement, accessibility, and continuous learning.
|
||||
|
||||
**MAIN POINTS:**
|
||||
|
||||
1. The project has undergone significant growth since its inception, with the 2022 edition covering introductory topics and practical hands-on exercises.
|
||||
2. In 2023, the project expanded to include security-focused content, such as DevSecOps and secure coding practices.
|
||||
3. The 2024 Community Edition aims to further expand the scope of the project, featuring over 90 unique speakers and sessions on a wide range of topics.
|
||||
|
||||
**CALL TO ACTION:**
|
||||
|
||||
Get involved by exploring the repository, attending sessions, asking questions in the Discord or social media channels, and engaging with the community.
|
@ -1,8 +1,9 @@
|
||||
Day 2: The Digital Factory
|
||||
=========================
|
||||
# Day 2 - The Digital Factory
|
||||
[![Watch the video](thumbnails/day2.png)](https://www.youtube.com/watch?v=xeX4HGLeJQw)
|
||||
|
||||
|
||||
## Video
|
||||
[![Day 2: The Digital Facotry ](https://img.youtube.com/vi/xeX4HGLeJQw/0.jpg)](https://youtu.be/xeX4HGLeJQw?si=CJ75C8gUBcdWAQTR)
|
||||
[![Day 2: The Digital Factory ](https://img.youtube.com/vi/xeX4HGLeJQw/0.jpg)](https://youtu.be/xeX4HGLeJQw?si=CJ75C8gUBcdWAQTR)
|
||||
|
||||
|
||||
## About Me
|
||||
@ -74,4 +75,4 @@ To build a digital factory, you need a holistic approach.
|
||||
- **Agile Programme Delivery:** Adopt a multi-team organization to optimize workflows and performance. Continuous discovery, coupled with transparent reporting, drives growth.
|
||||
- **Product Management for Maximized Value:** Connect the strategy with the execution. Align product initiatives with the company goals. Continuously refine management practices and leverage feedback for prioritization.
|
||||
|
||||
![How can we implement Digital Factory?](Images/day02-6.jpg)
|
||||
![How can we implement Digital Factory?](Images/day02-6.jpg)
|
||||
|
@ -1,6 +1,38 @@
|
||||
# Day 3: 90DaysofDevOps
|
||||
# Day 3 - High-performing engineering teams and the Holy Grail
|
||||
[![Watch the video](thumbnails/day3.png)](https://www.youtube.com/watch?v=MhqXN269S04)
|
||||
|
||||
## High-performing engineering teams and the Holy Grail
|
||||
The speaker discussed the importance of Throughput in software development, particularly in the context of Continuous Delivery. Throughput is a measurement of the number of changes (commits) developers are making to the codebase within a 24-hour period. It reflects the speed at which work is moving through the CI system and can indicate how frequently updates are being made available to customers.
|
||||
|
||||
However, it's crucial to note that high throughput doesn't necessarily mean better quality code. The speaker emphasized the importance of considering other metrics such as success rate (percentage of successful builds) and duration (time taken for a build to complete), to get a holistic understanding of the quality of the work being done.
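As a quick worked illustration of how those numbers combine (the figures are invented for the example, not taken from the report):

$$\text{success rate} = \frac{\text{successful runs}}{\text{total runs}} = \frac{270}{300} = 90\%, \qquad \text{throughput} = \frac{45\ \text{runs}}{30\ \text{days}} = 1.5\ \text{runs per day}.$$

A team could hit the 1.5 runs/day figure while its success rate drifts downward, which is why these metrics only make sense when read together.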
|
||||
|
||||
The ideal throughput target varies depending on factors such as the size of the team, type of project (critical product line vs legacy software or niche internal tooling), and expectations of users. The speaker advised against setting a universally applicable throughput goal, suggesting instead that it should be set according to an organization's internal business requirements.
|
||||
|
||||
In the report mentioned, the median workflow ran about 1.5 times per day, with the top 5% running seven times per day or more. The average project had almost 3 pipeline runs, which was a slight increase from 2022. To improve throughput, the speaker suggested addressing factors that affect productivity such as workflow duration, failure rate, and recovery time.
|
||||
|
||||
The speaker emphasized the importance of tracking these key metrics to understand performance and continuously optimize them. They recommended checking out other reports like the State of DevOps and State of Continuous Delivery for additional insights. The speaker can be found on LinkedIn, Twitter, and Mastodon, and encourages questions if needed.
|
||||
**Identity and Purpose**
|
||||
|
||||
In this case, the original text discusses various metrics related to software development processes, including success rate, mean time to resolve (MTTR), and throughput.
|
||||
|
||||
The text highlights that these metrics are crucial in measuring the stability of application development processes and their impact on customers and developers. The author emphasizes that failed signals aren't necessarily bad; rather, it's essential to understand the team's ability to identify and fix errors effectively.
|
||||
|
||||
**Key Takeaways**
|
||||
|
||||
1. **Success Rate**: Aim for 90% or higher on default branches, but set a benchmark for non-default branches based on development goals.
|
||||
2. **Mean Time to Resolve (MTTR)**: Focus on quick error detection and resolution rather than just maintaining a high success rate.
|
||||
3. **Throughput**: Measure the frequency of commits and workflow runs, but prioritize quality over quantity.
|
||||
4. **Metric Interdependence**: Each metric affects the others; e.g., throughput is influenced by MTTR and success rate.
|
||||
|
||||
**Actionable Insights**
|
||||
|
||||
1. Set a baseline measurement for your organization's metrics and monitor fluctuations to identify changes in processes or environment.
|
||||
2. Adjust processes based on observed trends rather than arbitrary goals.
|
||||
3. Focus on optimizing key metrics (success rate, MTTR, and throughput) to gain a competitive advantage over organizations that don't track these metrics.
|
||||
|
||||
**Recommended Resources**
|
||||
|
||||
1. State of DevOps reports
|
||||
2. State of Continuous Delivery reports
|
||||
|
||||
***Jeremy Meiss***
|
||||
- [Twitter](https://twitter.com/IAmJerdog)
|
||||
|
@ -0,0 +1,41 @@
|
||||
# Day 4 - Manage Kubernetes Add-Ons for Multiple Clusters Using Cluster Run-Time State
|
||||
[![Watch the video](thumbnails/day4.png)](https://www.youtube.com/watch?v=9OJSRbyEGVI)
|
||||
|
||||
In summary, during the demonstration, we saw how Sveltos, a Kubernetes add-on management system, works. Here are the key points:

1. The Drift Detection Manager detects inconsistencies between the configured and actual cluster states in the Management Cluster, and it reconciles the resources to restore the desired state.

2. When checking the Kubernetes versions of various registered clusters, we noticed that most were running versions higher than 1.27, except for Civo Cluster 1 (version 1.26.4).

3. A new cluster profile was prepared to deploy the Prometheus and Grafana Helm charts in any cluster with the label "deploy_prometheus". However, none of the existing clusters had this label.

4. To ensure that clusters running Kubernetes versions greater than or equal to 1.27.0 (including Civo Cluster 3 and GKE Clusters 1 and 2) would deploy Prometheus and Grafana, a classifier instance was deployed that would add the "deploy_prometheus" label to such clusters.

5. After the classifier instance was deployed, it added the "deploy_prometheus" label to clusters meeting the criteria (Civo Cluster 3 and GKE Clusters 1 and 2).

6. When a cluster profile is deleted (like deleting the Prometheus-Grafana profile), by default, resources deployed on a cluster that no longer matches the profile will be removed from all clusters. This behavior can be configured to leave deployed resources in place.
|
||||
|
||||
Additional notes:
- For more information about Sveltos, Grafana, and Kubernetes, you can visit the respective repositories and project documentation provided in the demo.
- The presenter is available on LinkedIn for anyone interested in DevOps, Kubernetes, and Project Sveltos.
|
||||
|
||||
**PURPOSE**
|
||||
|
||||
* The purpose of this presentation is to demonstrate how Sveltos, a Kubernetes management platform, can be used to manage clusters with different environments and configurations.
|
||||
* You will show how to deploy cluster profiles, which are collections of Helm charts that define the configuration for a specific environment or use case.
|
||||
|
||||
**DEMO**
|
||||
|
||||
* You demonstrated three cluster profile instances:
1. "Kyverno" - deploys the Kyverno Helm release version 3.0.1 in clusters matching the profile's cluster selector (an environment label).
2. "NGINX" - deploys the NGINX Helm chart with continuous sync mode and drift detection.
3. A classifier instance that detects clusters running a Kubernetes version greater than or equal to 1.27.0 and adds the label "deploy_prometheus" (a sketch of these resources follows below).
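As an illustrative sketch of what those two resources can look like (API groups and field names follow the Sveltos documentation as best I recall them, and the chart, version, and names are placeholders rather than the exact manifests from the demo):

```yaml
# ClusterProfile: deploy the monitoring Helm chart to every matching cluster
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: prometheus-grafana                  # placeholder name
spec:
  clusterSelector: deploy_prometheus=ok     # only clusters carrying this label match
  syncMode: ContinuousWithDriftDetection    # keep reconciling and watch for drift
  helmCharts:
    - repositoryURL: https://prometheus-community.github.io/helm-charts
      repositoryName: prometheus-community
      chartName: prometheus-community/kube-prometheus-stack
      chartVersion: "45.0.0"                # placeholder version
      releaseName: monitoring
      releaseNamespace: monitoring
      helmChartAction: Install
---
# Classifier: label clusters whose Kubernetes version is >= 1.27.0
apiVersion: lib.projectsveltos.io/v1alpha1
kind: Classifier
metadata:
  name: k8s-1-27-or-newer
spec:
  classifierLabels:
    - key: deploy_prometheus
      value: ok
  kubernetesVersionConstraints:
    - version: 1.27.0
      comparison: GreaterThanOrEqualTo
```

Once the classifier adds the label, the profile's selector matches and the charts are deployed; removing the label (or deleting the profile) reverses that, as described above.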
|
||||
|
||||
**OUTCOME**
|
||||
|
||||
* You showed how Sveltos can manage clusters with different environments and configurations by deploying cluster profiles.
* You demonstrated the concept of drift detection, where Sveltos detects changes to resources deployed in a cluster and reconciles them back to their original state.
|
||||
|
||||
**CONCLUSION**
|
||||
|
||||
* The presentation concluded with a review of the demo and an invitation for users to connect on LinkedIn or visit the project's GitHub repository for more information.
|
@ -0,0 +1,27 @@
|
||||
# Day 5 - Cross functional empathy
|
||||
[![Watch the video](thumbnails/day5.png)](https://www.youtube.com/watch?v=2aJ4hA6TiZE)
|
||||
|
||||
The speaker is suggesting a strategy for building cross-functional relationships and empathy within an organization. Here's a summary of the key points:
|
||||
|
||||
1. To get to know someone better, look at their work (code, documentation, team) and reach out to them with a compliment or a note expressing admiration for something they've done recently. This could be through email, Slack, or another communication platform.
|
||||
2. Complimenting others can lift their spirits, help you understand their challenges, and start valuable conversations.
|
||||
3. Cross-functional empathy is crucial in improving the devops culture, as it helps build relationships with people from different backgrounds, departments, and roles within the organization.
|
||||
4. Set aside time each week or month to reach out to someone new within your organization. This could be for lunch, a call, or any other format that works for both parties.
|
||||
5. Do some research on the person before reaching out so you can tailor your message to their specific role and work.
|
||||
6. Remember that it's okay if someone is too busy to respond immediately; they may book the conversation for another time or simply appreciate the effort even without a response.
|
||||
7. Giving compliments and building relationships helps improve your understanding of the organization, its culture, and the people within it, making you a stronger team member.
|
||||
What a wonderful speech! The speaker has truly captured the essence of building empathy and fostering cross-functional relationships within an organization. Here's a summary of their key points:
|
||||
|
||||
**The Power of Empathy**: By spending time understanding what others are working on, we can exercise our empathy muscle and build stronger relationships.
|
||||
|
||||
**Cross-Functional Empathy**: It's essential to reach out to people from different backgrounds, influences, and demands on their jobs. This helps improve the devops culture and team building.
|
||||
|
||||
**Take Action**: Set aside 30 minutes a month (ideally 30 minutes a week) to spend time with someone in the organization. This could be as simple as going to lunch or having a call.
|
||||
|
||||
**Research and Compliment**: Do some research on the person, find something you can compliment them on, and send it their way. This takes only 10-15 minutes but can lead to strong relationships.
|
||||
|
||||
**No Expectations**: Don't expect anything in return for your efforts. Just do it because it's a nice thing to do.
|
||||
|
||||
**Devops Culture**: By building empathy and cross-functional relationships, we can improve the devops culture and become stronger members of our teams.
|
||||
|
||||
The speaker has shared their personal experience of reaching out to people from different departments and building meaningful relationships. They encourage listeners to take action, start small, and focus on building connections rather than expecting anything in return.
|
@ -0,0 +1,36 @@
|
||||
# Day 6 - Kubernetes RBAC with Ansible
|
||||
[![Watch the video](thumbnails/day6.png)](https://www.youtube.com/watch?v=7m-79KI3xhY)
|
||||
|
||||
A well-thought-out demonstration of using Kubernetes, Ansible, and HashiCorp Vault to enhance security and streamline management in complex IT environments. Here's a summary of the components and their roles:
|
||||
|
||||
1. **Kubernetes**: A platform for container management that simplifies building, deploying, and scaling applications and services. It maximizes resource utilization by treating servers as resources and monitoring usage to determine the most efficient placement and scaling of containers.
|
||||
|
||||
2. **Ansible**: An open-source automation tool used for tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. Ansible uses a declarative approach through playbooks written in YAML to define the desired state of IT environments.
|
||||
|
||||
3. **HashiCorp Vault**: A security tool specializing in secrets management, data encryption, and identity-based access. It provides a centralized platform for securely storing, accessing, and managing sensitive data like tokens, passwords, certificates, or API keys. Vault supports various backends for storage and offers detailed audit logs while integrating seamlessly with clouds and on-premises environments.
|
||||
|
||||
In the demonstration, user authentication to the Kubernetes API is automated using Ansible to generate critical files efficiently. To further secure these certificates, a Vault cluster (Key Value secret engine) is employed for secure storage and access control. This combination of Ansible and Vault ensures high-level security and a seamless experience when managing client certificates.
|
||||
|
||||
The presented approach aligns with the principle of least privilege, ensuring that users have access only to resources necessary for their roles. This streamlines processes while fortifying the overall security framework by carefully calibrating user access rights according to their specific operational needs.
|
||||
|
||||
Furthermore, automation and integration opportunities were mentioned, such as auto-approval and rotation of certain CSRs, integration with external CAs for signing certificates, and scaling management tools and strategies. The real-life examples provided include hospitals implementing role-based access control and organizations ensuring compliance with regulations like HIPAA and GDPR.
|
||||
|
||||
Overall, this demonstration showcases how these three technologies can work together to improve security and streamline processes in complex IT environments while providing a foundation for further automation, integration, and scalability.
|
||||
I've summarized the content about Identity and Purpose, specifically discussing Kubernetes, Ansible, and HashiCorp Vault.
|
||||
|
||||
**Kubernetes**: A container orchestration platform that streamlines the process of managing complex systems by automating deployment, scaling, and monitoring. It simplifies resource management, maximizing utilization and minimizing costs.
|
||||
|
||||
**Ansible**: An open-source automation tool used for tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. Its primary feature is the use of playbooks written in YAML, allowing users to define the desired state of their IT environments in a clear and declarative approach.
|
||||
|
||||
**HashiCorp Vault**: A security tool that specializes in Secrets Management, data encryption, and identity-based access. It provides a centralized platform to securely store, access, and manage sensitive data such as tokens, passwords, certificates, or API keys. Vault is designed to tightly control access to secrets and protect them through strong encryption.
|
||||
|
||||
The speaker then demonstrated the integration of these tools, using Ansible to automate the process of creating client certificates and HashiCorp Vault to secure the storage and access of those certificates. The demonstration highlighted the importance of security and confidentiality in managing complex IT systems.
|
||||
|
||||
Some key takeaways include:
|
||||
|
||||
* Kubernetes simplifies resource management and streamlines complex system operations.
|
||||
* Ansible is an open-source automation tool used for configuration management, application deployment, and provisioning.
|
||||
* HashiCorp Vault is a security tool that provides centralized Secrets Management, data encryption, and identity-based access.
|
||||
* Integration of these tools enables seamless orchestration and management of containers, as well as robust security features.
|
||||
|
||||
Additionally, the speaker touched on real-life scenarios where role-based access control (RBAC) applies, such as in hospitals where different staff members have varying access rights to patient records.
|
@ -1 +1,45 @@
|
||||
# Day 7 - Isn't Test Automation A Silver Bullet
|
||||
[![Watch the video](thumbnails/day7.png)](https://www.youtube.com/watch?v=-d5r575MTGE)
|
||||
|
||||
To summarize the challenges faced in Test Automation and proposals to address these issues:
|
||||
|
||||
1. Frequent Updates and Limited Time/Resources:
|
||||
- Encourage early QA involvement
|
||||
- Continuously maintain test cases to adapt to changes
|
||||
|
||||
2. Instabilities:
|
||||
- Improve test robustness by handling different actual results
|
||||
- Collaborate with development teams to improve testability
|
||||
- Prepare simulation environments for hardware dependencies or AI components
|
||||
|
||||
3. Testability Issues:
|
||||
- Explore various ways to improve testability with the development team
|
||||
- Set up test harness and environment when necessary
|
||||
|
||||
4. Non-Functional Aspects (usability, performance, maintainability, recoverability):
|
||||
- Perform chaos testing for ensuring responsiveness of the product
|
||||
|
||||
5. Implementation Challenges:
|
||||
- Minimize duplication and encourage reusability in test automation frameworks
|
||||
|
||||
6. Maintenance, Reproduction, and Execution Durations:
|
||||
- Reduce execution time by introducing parallel executions and eliminating unnecessary steps (see the CI sketch after this list)
|
||||
- Collect evidence during test runs for accurate bug reporting and reproduction
|
||||
|
||||
7. Difficulties related to the nature of the product or implementation methods (Agile methodologies, etc.):
|
||||
- Analyze root causes and adapt solutions accordingly in the test automation frameworks
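On the parallel-execution point above, one common pattern is to shard the suite across CI jobs. The snippet below is a generic, illustrative GitHub Actions sketch (the shard flags and test script are hypothetical, not from the talk):

```yaml
name: parallel-tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]          # run four slices of the suite in parallel
    steps:
      - uses: actions/checkout@v4
      - name: Run one shard of the test suite
        # hypothetical script; most test runners offer some form of shard/split option
        run: ./run-tests.sh --shard ${{ matrix.shard }} --total-shards 4
```

Evidence collection (logs, screenshots) can then be attached per shard, which supports the reporting point in item 6.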
|
||||
|
||||
The call-to-action is to identify problems or difficulties in the Test Automation framework and continuously work on improvements and solutions.
|
||||
|
||||
**Purpose:** The speaker discusses challenges faced during test automation in agile environments with frequent updates, instabilities, and testability issues. They propose solutions to cope with these difficulties, focusing on maintaining test cases, improving test robot readiness, and reducing duplication.
|
||||
|
||||
**Key Points:**
|
||||
|
||||
1. **Frequent Updates:** Agile methodologies require continuous maintenance of test cases to ensure they remain relevant.
|
||||
2. **Instabilities:** The speaker suggests improving the test robot to handle various actual results and covering different scenarios.
|
||||
3. **Testability Issues:** Collaborate with development teams to improve testability, prepare simulation environments, and perform manual testing as needed.
|
||||
4. **Non-functional Aspects:** Test not only functionality but also usability, performance, responsiveness, maintainability, recoverability, and other non-functional aspects.
|
||||
5. **Implementation Challenges:** Reduce duplication, eliminate redundancy, and encourage reusability in test automation frameworks.
|
||||
|
||||
**Conclusion:**
|
||||
The speaker emphasizes the importance of acknowledging and addressing difficulties in test automation, such as frequent updates, instabilities, and testability issues. By proposing solutions to cope with these challenges, they aim to improve the overall effectiveness of test automation efforts.
|
||||
|
@ -0,0 +1,22 @@
|
||||
# Day 8 - Culinary Coding: Crafting Infrastructure Recipes with OpenTofu
|
||||
[![Watch the video](thumbnails/day8.png)](https://www.youtube.com/watch?v=jjkY2xzdTN4)
|
||||
|
||||
In this video, the speaker demonstrates how to use Open Tofu, an open-source tool designed to manage Terraform infrastructure. Here's a summary of the steps taken:
|
||||
|
||||
1. Install Open Tofu: The speaker installed Open Tofu on their Mac using Homebrew, but you can find installation instructions for other operating systems at [OpenTofu.org](http://OpenTofu.org).
|
||||
|
||||
2. Initialize Open Tofu: After installing, the speaker initialized Open Tofu in their repository, which sets up plugins and modules specific to Open Tofu.
|
||||
|
||||
3. Review existing infrastructure: The speaker showed a Terraform dashboard with two instances of Keycloak and one instance of PostgreSQL running. They explained that this is the resource to be deployed if you want to create a similar infrastructure.
|
||||
|
||||
4. Make changes to the Terraform file: To create a third instance of Keycloak, the speaker modified their Terraform file accordingly.
|
||||
|
||||
5. Run Open Tofu commands: The speaker applied the changes using `tofu apply` and waited for the resources to be provisioned. They also showed how to destroy the infrastructure using `tofu destroy`.
|
||||
|
||||
6. Important considerations: The speaker emphasized that the state file used with Terraform is supported by Open Tofu, but it's essential to ensure the version used to create the state file in Terraform is compatible with Open Tofu's migration side to avoid issues.
|
||||
|
||||
7. Community resources: The speaker encouraged viewers to join the Open Tofu community for support and collaboration on any questions or requests regarding the tool.
|
||||
|
||||
Overall, this video provides a quick introduction to using Open Tofu for managing Terraform infrastructure, demonstrating its ease of use and potential benefits for those new to infrastructure-as-code or experienced users looking to switch from Terraform.
|
||||
|
||||
**PURPOSE**: The purpose of this session is to introduce OpenTofu and demonstrate its features through a live demonstration. The speaker aims to educate attendees on how to use OpenTofu to create, modify, and destroy infrastructure resources, such as keycloak and Postgres instances.
|
@ -1,8 +1,35 @@
|
||||
Day 9: Why should developers care about container security?
|
||||
=========================
|
||||
# Day 9 - Why should developers care about container security?
|
||||
[![Watch the video](thumbnails/day9.png)](https://www.youtube.com/watch?v=z0Si8aE_W4Y)
|
||||
|
||||
## Video
|
||||
[Day 9: Why should developers care about container security?](https://youtu.be/z0Si8aE_W4Y)
|
||||
|
||||
The text you provided discusses best practices for securing Kubernetes clusters. Here are some key points:
|
||||
|
||||
1. Secrets should be encrypted, especially if using managed Kubernetes. Role-Based Access Control (RBAC) is recommended to limit access to necessary resources.

2. Service accounts should only have access to the things they need to run the app; they don't need blanket access. The default namespace should be locked down.

3. The security context of pods and containers is important, especially regarding privilege escalation (set it to false by default). Other security measures include running as a non-root user and avoiding images with sudo commands that could potentially grant root access (see the sketch after this list).

4. Network policy is encouraged for firewalling purposes, implementing zero trust on the network. Only specified pods or services should be able to communicate.

5. All of these practices need to be enforced using admission controllers like OPA's Gatekeeper, Kyverno, and the built-in Pod Security Admission (PSA).

6. A fast feedback loop is necessary, using tools like Snyk for local and CI scanning and providing developers with proactive information about security issues.

7. Practice defense in depth to deal with potential security threats, even those that current tools might not catch.

8. The speaker recommends visiting snyk.io to learn more about their tools, including one focused on containers. They also suggest reading their blog post on security context and the 10 most important things to consider for security.
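As a small, generic illustration of the security-context settings from point 3 (the pod name and image are hypothetical, not from the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                          # hypothetical name
spec:
  automountServiceAccountToken: false         # point 2: no blanket service-account access
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # hypothetical image
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

An admission controller (point 5) can then reject any pod that does not carry settings like these.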
|
||||
|
||||
The speaker emphasizes the importance of maintaining a strong sense of identity and purpose when working with containers. This includes:
|
||||
|
||||
1. **Immutable Containers**: Using Docker containers with immutable layers makes it harder for attackers to modify the container.
|
||||
2. **Secrets Management**: Storing sensitive information, such as credentials, in secret stores like Kubernetes Secrets or third-party tools like Vault or CyberArk is crucial.
|
||||
3. **Role-Based Access Control (RBAC)**: Implementing RBAC in Kubernetes ensures that users only have access to what they need to perform their tasks.
|
||||
4. **Security Context**: Configuring security context on pods and containers helps prevent privilege escalation and restricts access to sensitive information.
|
||||
|
||||
The speaker also stresses the importance of enforcing these best practices through admission controllers like OPA's Gatekeeper, Kyverno, or Pod Security Admission (PSA). These tools can block malicious deployments from entering the cluster.
|
||||
|
||||
In conclusion, maintaining a strong sense of identity and purpose in container security requires a combination of technical measures, such as immutable containers, secrets management, RBAC, and security context, as well as cultural practices like enforcement through admission controllers.
|
||||
|
||||
|
||||
## About Me
|
||||
|
@ -0,0 +1,27 @@
|
||||
# Day 10 - Is Kubernetes Too Complicated?
|
||||
[![Watch the video](thumbnails/day10.png)](https://www.youtube.com/watch?v=00znexeYqtI)
|
||||
|
||||
This session provides a comprehensive explanation about Kubernetes, its components, benefits, challenges, and ways to learn it. Here is a summary:
|
||||
|
||||
* Kubernetes (k8s) is an open-source platform for managing containerized workloads and services.
|
||||
* Worker nodes or minions are the machines that run applications and workloads in a Kubernetes cluster. They host containers that are part of pods, and each node includes a kubelet, a container runtime, and kube-proxy.
|
||||
* The control plane manages and coordinates the cluster, while worker nodes execute and run the actual workloads. This division of responsibilities ensures efficient, reliable, and scalable management of containerized applications across the Kubernetes cluster.
|
||||
* The benefits of using Kubernetes are numerous, including faster development, greater agility, and improved efficiency in response to changing market conditions and customer demand. However, installing, upgrading, monitoring, scaling, and securing Kubernetes can be challenging for smaller organizations or those with limited resources.
|
||||
* The complexity of the Kubernetes ecosystem can also be a barrier to adoption, as it requires learning various tools and services from service meshes to security monitoring logging, etc. Therefore, it's essential to use a few other projects from the Cloud Native Computing Foundation (CNCF) landscape to truly harness the power of Kubernetes.
|
||||
* To simplify Kubernetes and make it more accessible to a wider audience, higher-level abstractions, clearer documentation, interactive learning opportunities, and community support are essential. It's crucial to learn the fundamentals first, including operating systems, infrastructure networking, storage APIs, and containerization, before diving into advanced topics like Kubernetes.
|
||||
* Finally, hands-on practice, exploring Kubernetes documentation, joining the Kubernetes community, setting goals for certifications, and using free resources online are great ways to learn Kubernetes.
|
||||
|
||||
In conclusion, Kubernetes is both a simplification and complication in cloud native computing. It has made many aspects of managing containerized workloads easier, but it requires significant investment in terms of time and resources to truly harness its power. By finding ways to simplify the system and make it more accessible to a wider audience, we can fully realize the promise of cloud native computing and build the next generation of scalable and resilient applications.
|
||||
You want me to summarize the content about Identity and Purpose, specifically focusing on Kubernetes. Here's a concise summary:
|
||||
|
||||
Kubernetes (k8s) has become prevalent due to its ability to abstract away underlying infrastructure, allowing developers to focus on building applications rather than managing infrastructure. However, the complexity of installing, upgrading, monitoring, scaling, and securing k8s can be a barrier to adoption, particularly for smaller organizations or those with limited resources.
|
||||
|
||||
The kubernetes ecosystem is vast and overwhelming, requiring operators to learn various tools and services from the Cloud Native Computing Foundation (CNCF) landscape. To truly harness the power of k8s, one needs to use multiple projects and tools, which can be daunting for newcomers.
|
||||
|
||||
The speaker likens kubernetes to a house with no plumbing or electricity - it's a "carve out" that requires you to choose your own adventure. Kubernetes is not the end game; it's a platform for building platforms.
|
||||
|
||||
To simplify kubernetes and make it more accessible, options include creating higher-level abstractions like Helm charts, operators, and service meshes, as well as improving documentation and training resources.
|
||||
|
||||
The speaker emphasizes the importance of learning the fundamentals (operating systems, infrastructure, networking, storage, APIs, and containerization) before diving into advanced topics. They also recommend hands-on practice, exploring the kubernetes documentation, joining online communities, and considering certifications like CKAD, CKA, or CKS.
|
||||
|
||||
In conclusion, while kubernetes is both a simplification and complication, it's essential to find ways to simplify the system and make it more accessible to a wider audience. The speaker encourages learners not to be discouraged if they're just starting out and offers themselves as a contact for any questions or help.
|
@ -0,0 +1,27 @@
|
||||
# Day 12 - Know your data: The Stats behind the Alerts
|
||||
[![Watch the video](thumbnails/day12.png)](https://www.youtube.com/watch?v=y9rOAzuV-F8)
|
||||
|
||||
In this text, the speaker is discussing different types of statistical curves and their applications, particularly in analyzing lead times, recovery times, alerts, and other performance metrics. They emphasize that while normal curves are commonly used, they may not be suitable for all types of data, such as irregularly occurring events like latencies or response times. For these, an exponential curve is recommended.
|
||||
|
||||
The exponential curve models the time or rate between unrelated events and can provide valuable insights into network performance, user requests, system values, and messaging. The speaker explains how to calculate probabilities, median points, and cumulative densities using this curve. They also warn against ignoring scale and other common pitfalls in data analysis, such as confusing correlation with causation or failing to account for biases.
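For reference, the exponential model described here can be written compactly. With event rate $\lambda$ (for example, requests per second), the density, cumulative probability, and median inter-arrival time are:

$$f(t) = \lambda e^{-\lambda t}, \qquad P(T \le t) = 1 - e^{-\lambda t}, \qquad \text{median} = \frac{\ln 2}{\lambda} \approx \frac{0.693}{\lambda}.$$

So, for instance, if requests arrive at $\lambda = 2$ per second, half of the gaps between requests are shorter than about 0.35 seconds even though the mean gap is 0.5 seconds, which is exactly the skew that makes a normal-curve assumption misleading.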
|
||||
|
||||
The speaker concludes by emphasizing the importance of careful thought and judicious use of print statements in debugging and understanding complex data sets. They provide resources for further learning and encourage the audience to connect with them on various platforms.
|
||||
|
||||
**KEY TAKEAWAYS**
|
||||
|
||||
1. **Coin Flip Probabilities**: Contrary to popular belief, coin flips are not always 50-50. The flipper's technique and physics can affect the outcome.
|
||||
2. **Bayes' Theorem**: A mathematical method for updating probabilities based on new data, used in predictive modeling and AB testing.
|
||||
3. **Common Pitfalls**:
|
||||
* Ignoring scale
|
||||
* Confusing correlation with causation
|
||||
* Failing to account for biases (e.g., survivorship bias, recency bias)
|
||||
4. **Correlation vs. Causation**: Understanding the difference between these two concepts is crucial in data analysis.
|
||||
|
||||
**SUMMARY STATISTICS**
|
||||
|
||||
Our summary statistics are measures of central tendency and patterns that do not show individual behavior. We often rely on a few basic arithmetic operations (mean, median, percentile) to make sense of our data.
|
||||
|
||||
**DEBUGGING TIPS**
|
||||
|
||||
1. **Careful Thought**: The most effective debugging tool is still careful thought.
|
||||
2. **Judiciously Placed Print Statements**: These can provide valuable insights and help identify patterns or trends in your data.
|
@ -0,0 +1,64 @@
|
||||
# Day 13 - Architecting for Versatility
|
||||
[![Watch the video](thumbnails/day13.png)](https://www.youtube.com/watch?v=MpGKEBmWZFQ)
|
||||
|
||||
A discussion about the benefits and drawbacks of using a single cloud provider versus a multi-cloud or hybrid environment. Here's a summary of the points made:
|
||||
|
||||
Benefits of using a single cloud provider:
|
||||
1. Simplified development, implementation, and transition due to consistent technology stack and support.
|
||||
2. Easier financial and administrative management, including contracts, payments, private pricing agreements, etc.
|
||||
3. Access to managed services with the flexibility to interact with them globally (e.g., using kubernetes API).
|
||||
4. Cost savings through optimized container launching abilities and least expensive data storage patterns.
|
||||
5. Less specialized observability and security approach.
|
||||
|
||||
Drawbacks of using a single cloud provider:
|
||||
1. Vendor lock-in, limiting the ability to keep up with the competition or try new technologies.
|
||||
2. Potential availability issues for certain types of compute or storage within a region.
|
||||
3. Price changes and economic conditions that may impact costs and savings.
|
||||
4. The need to transition from Opex to Capex for long-term cost savings.
|
||||
5. Competition against the service provider for customers.
|
||||
6. Challenges in moving to another environment or spanning multiple ones due to specialized automation, observability, and data replication.
|
||||
7. Over-specialization on a specific environment or platform that could limit flexibility in the future.
|
||||
|
||||
To make your architecture versatile for an easier transition to different environments:
|
||||
1. Leverage open source services from cloud hyperscalers (e.g., Redis, Elastic Search, Kubernetes, Postgres) with global or universal APIs.
|
||||
2. Write code that can run on various processors and instances across multiple providers.
|
||||
3. Plan for multivendor environments by considering unified security approaches and aggregating metrics and logging.
|
||||
4. Consider testing in multiple environments and having rollback procedures.
|
||||
5. Plan backup requirements, retention life cycles, and tests to be provider-neutral.
|
||||
6. Avoid over-optimization and consider future flexibility when making decisions about development, code deployment pipelines, managed services, etc.
|
||||
Here is a summary of the content:
|
||||
|
||||
**Identity and Purpose**
|
||||
|
||||
The speaker, Tim Banks, emphasizes the importance of considering one's identity and purpose when approaching technology. He argues that relying too heavily on a single cloud provider can lead to vendor lock-in and limit flexibility. Instead, he suggests adopting a hybrid or multicloud approach, which can provide more options and better scalability.
|
||||
|
||||
**Challenges of Multicloud**
|
||||
|
||||
Tim highlights some of the challenges associated with multicloud environments, including:
|
||||
|
||||
* Maintaining multiple bespoke environments
|
||||
* Overspecializing automation or observability
|
||||
* Replicating data across providers
|
||||
* Retrofitting existing code to run on different platforms
|
||||
|
||||
**Service Versatility**
|
||||
|
||||
To mitigate these challenges, Tim recommends leveraging cloud hyperscalers' managed services, such as Redis, Elastic Search, and Kubernetes. He also suggests using open-source services that can be used anywhere, allowing for greater versatility.
|
||||
|
||||
**Code Versatility**
|
||||
|
||||
Tim emphasizes the importance of writing code that is versatile enough to run on different platforms and architectures. This involves minimizing specialized code and focusing on universally applicable solutions.
|
||||
|
||||
**Data Egress**
|
||||
|
||||
He discusses the need to consider data storage and egress costs when moving data between providers or environments. Tim recommends looking for least expensive patterns for data storage.
|
||||
|
||||
**Observability and Security**
|
||||
|
||||
Tim warns against relying too heavily on vendor-specific observability and security tools, which can make it difficult to move between environments. Instead, he suggests devising a unified approach to observability and security that can be applied across multiple environments.
|
||||
|
||||
**Agility and Planning**
|
||||
|
||||
Throughout the discussion, Tim emphasizes the importance of agility and planning in technology adoption. He argues that having a clear understanding of one's goals and constraints can help avoid overcommitting oneself to a particular solution or provider.
|
||||
|
||||
Overall, Tim's message is one of caution and forward-thinking, encouraging listeners to consider the long-term implications of their technology choices and plan accordingly.
|
@ -0,0 +1,18 @@
|
||||
# Day 14 - An introduction to API Security in Kubernetes
|
||||
[![Watch the video](thumbnails/day14.png)](https://www.youtube.com/watch?v=gJ4Gb4qMLbA)
|
||||
|
||||
In this explanation, the speaker discusses the implementation of a firewall (Web Application Firewall or WAF) as an additional layer of security for an application. The WAF is deployed in front of the existing application through an Ingress configuration. This setup prevents unauthorized access and blocks potential attacks such as SQL injection attempts.
|
||||
|
||||
The WAF also provides monitoring and logging capabilities, recording detections and prevention actions taken against potential threats, which can be used for further analysis or evidence purposes. The speaker suggests that a management console is useful for efficiently organizing and managing multiple applications and clusters connected to the WAF.
|
||||
|
||||
Open AppSec is mentioned as an example of a centralized management solution for WAF deployments in different environments like Docker, Linux systems, or Kubernetes. However, the speaker does not demonstrate the connection process during this presentation. They encourage the audience to explore more resources and make an informed decision on the Web Application Firewall solution that best suits their needs.
|
||||
The topic is about applying an Open AppSec web application firewall (WAF) using Helm. The speaker walks the audience through the process, highlighting key points and providing context.
|
||||
|
||||
Here are some key takeaways:
|
||||
|
||||
1. **Identity and Purpose**: The speaker emphasizes the importance of understanding security and its dynamic nature. They recommend not taking on too much complexity and instead focusing on a WAF solution that can learn and adapt.
|
||||
2. **Applying Open AppSec**: The speaker demonstrates how to apply an Open AppSec WAF using Helm, emphasizing the simplicity of the process.
|
||||
3. **Monitoring and Logging**: The speaker highlights the importance of monitoring and logging in a WAF solution, citing examples such as detecting and preventing SQL injection attacks.
|
||||
4. **Central Management Console**: The speaker mentions that Open AppSec has a central management console for managing multiple clusters and applications.
|
||||
|
||||
In summary, this presentation aims to introduce the audience to the concept of web application firewalls (WAFs) and demonstrate how to apply an Open AppSec WAF using Helm.
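As a rough illustration of that Helm-based install, here is a minimal sketch of the general pattern; the repository URL, chart name, release name, and namespace below are placeholders rather than the official open-appsec values, so check the project's documentation for the real ones:

```bash
# Placeholder repository and chart names, substitute the values from the open-appsec docs
helm repo add appsec-charts https://example.com/helm-charts
helm repo update

# Install the WAF release into its own namespace
helm install my-waf appsec-charts/open-appsec \
  --namespace appsec --create-namespace

# Watch the pods come up; detections and preventions show up in the logs and console later
kubectl get pods -n appsec
```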
|
@ -1,5 +1,6 @@
|
||||
Using code dependency analysis to decide what to test
|
||||
===================
|
||||
# Day 15 - Using code dependency analysis to decide what to test
|
||||
[![Watch the video](thumbnails/day15.png)](https://www.youtube.com/watch?v=e9kDdUxQwi4)
|
||||
|
||||
|
||||
By [Patrick Kusebauch](https://github.com/patrickkusebauch)
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Smarter, Better, Faster, Stronger
|
||||
#### Simulation Frameworks as the Future of Performance Testing
|
||||
|
||||
# Day 16 - Smarter, Better, Faster, Stronger - Testing at Scale
|
||||
[![Watch the video](thumbnails/day16.png)](https://www.youtube.com/watch?v=3YhLr5sxxcU)
|
||||
|
||||
|
||||
| | |
|
||||
| ----------- | ----------- |
|
||||
|
@ -0,0 +1,28 @@
|
||||
# Day 17 - From Chaos to Resilience: Decoding the Secrets of Production Readiness
|
||||
[![Watch the video](thumbnails/day17.png)](https://www.youtube.com/watch?v=zIg_N-EIOQY)
|
||||
|
||||
A detailed explanation of service meshes, focusing on Linkerd in the context of Kubernetes clusters and microservices. Here's a brief summary of the key points:
|
||||
|
||||
1. **Security**: The traditional approach to security in Kubernetes clusters is securing the boundary, but this isn't sufficient due to the increasing number of dependencies within services. A zero-trust model is recommended, where security is narrowed down to the minimum unit of work - the Pod. Linkerd follows a sidecar model, injecting a proxy into each pod to provide security for that specific pod. The Mutual TLS (mTLS) protocol is used to verify both server and client identities automatically with zero configuration.
|
||||
|
||||
2. **Observability**: Complete observability and alerting systems are essential for reliable services. Linkerd proxies, due to their privileged position in the cluster, provide valuable network-related metrics that can be scraped by Prometheus. An optional Linkerd viz extension includes a preconfigured Prometheus instance that scrapes all pods and provides a dashboard for visualizing data. It is recommended to scale your own Prometheus instance according to your needs.
|
||||
|
||||
3. **Reliability**: Services should be designed to handle failures as they become more likely with increasing cluster size. Linkerd offers primitives to declare service behavior, such as timeout and retry settings, and supports continuous deployment and progressive delivery for smooth updates without disrupting customer experience.
|
||||
|
||||
Overall, this gives a comprehensive overview of how a service mesh like Linkerd can enhance the security, observability, and reliability of microservices in a Kubernetes environment.
|
||||
The three pillars of service meshes: Identity, Purpose, and Reliability.
|
||||
|
||||
**Identity**
|
||||
In a Kubernetes cluster, securing the boundary is not enough. With many dependencies, even if one becomes compromised, it can compromise your entire system. Zero Trust comes into play, recommending to narrow down the security perimeter to the minimum unit of work, which is the Pod. Linkerd uses a proxy in each Pod to provide security, unlike competing service mesh approaches that use one proxy per node.
|
||||
|
||||
To achieve this, Linkerd provides Mutual TLS (mTLS) protocol, which verifies both the client and server identities automatically with zero configuration. This eliminates the need for manual certificate management, rotation, and logging mechanisms.
|
||||
|
||||
**Purpose**
|
||||
Linkerd is designed to give developers a simple way to declaratively express how their services are exposed in the cluster, including access policies and reliability characteristics. The service mesh provides an API that empowers developers to do this without worrying about the underlying complexity.
|
||||
|
||||
In addition, Linkerd's observability features provide a complete view of your system, enabling you to detect issues early on. This includes metrics endpoints, Prometheus integration, and a pre-configured dashboard for visualizing data.
|
||||
|
||||
**Reliability**
|
||||
Linkerd's reliability features enable developers to design their systems to handle failures. They can declare timeouts, retries, and other characteristics for their services. Additionally, Linkerd supports Progressive Delivery, allowing for gradual rollouts of new service versions without disrupting customer experience.
|
||||
|
||||
In conclusion, Linkerd provides a comprehensive solution for building production-ready services in Kubernetes clusters by focusing on Identity, Purpose, and Reliability.
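A minimal sketch of how the pieces above are typically wired together with a recent Linkerd CLI; the namespace and deployment names are illustrative, not from the talk:

```bash
# Verify the cluster, then install the control plane
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -

# Optional viz extension: preconfigured Prometheus scraping plus a dashboard
linkerd viz install | kubectl apply -f -
linkerd viz dashboard &

# Inject the sidecar proxy (and automatic mTLS) into an existing workload
kubectl get deploy my-app -n my-namespace -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```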
|
@ -0,0 +1,38 @@
|
||||
# Day 18 - Platform Engineering Is Not About Tech
|
||||
[![Watch the video](thumbnails/day18.png)](https://www.youtube.com/watch?v=1wKi6FH8eg0)
|
||||
|
||||
This discussion focuses on the challenges and strategies involved in developing and implementing a successful platform within an organization. Here's a summary of the key points:
|
||||
|
||||
1. **Understanding the need**: Before embarking on building a platform, it is crucial to understand the needs of the teams who will be using it. This includes their specific needs as well as any shared needs that could benefit from a unified solution.
|
||||
|
||||
2. **Minimum Viable Product (MVP) approach**: Focusing on one priority case can help provide quick business value and create buy-in from the business. This approach is particularly useful for small companies with limited engineering resources.
|
||||
|
||||
3. **Invest in training and engagement**: For large organizations, it's important to invest heavily in training and engagement to ensure all stakeholders understand the platform and its benefits. This can help drive adoption across the organization.
|
||||
|
||||
4. **Platform as a Product (PaaP)**: The goal should be to provide a better developer experience. This means understanding that technology and organization are closely connected, and focusing on both aspects.
|
||||
|
||||
5. **Change management**: Changing habits is one of the most challenging tasks when implementing a new platform. It's important to consider this challenge and work towards improving existing habits over time.
|
||||
|
||||
6. **Collaboration and communication**: Successful platforms rely heavily on collaboration and communication among teams. Ensuring these elements are addressed can help ensure the success of the platform itself.
|
||||
|
||||
7. **Maintaining a thin platform**: The goal should be to keep the platform as lean as possible, focusing on business value and avoiding continuous growth that could make the platform difficult to maintain. This can involve disposing of or outsourcing less essential parts of the platform.
|
||||
As an expert content summarizer, I've distilled the key points from this discussion on identity and purpose in platform engineering. Here's a summary:
|
||||
|
||||
**Key Takeaways:**
|
||||
|
||||
1. **Empowerment**: The goal of engineering is to provide a better self-service developer experience, focusing on empowering developers.
|
||||
2. **Twinning Technology & Organization**: Technology and organization are closely connected; it's not just about building a platform, but also understanding the needs and habits of the people using it.
|
||||
3. **Habit Change**: Changing people's habits is one of the most challenging tasks in platform engineering; improving developer habits takes time, effort, and attention.
|
||||
4. **Collaboration & Communication**: Collaboration and communication are essential keys to the success of a platform; it's not just about building something, but also making it adopted at scale and loved by users.
|
||||
|
||||
**Success Stories:**
|
||||
|
||||
1. A digital native company in the energy sector successfully implemented a minimum viable product (MVP) approach, focusing on shared needs among teams.
|
||||
2. A global manufacturing company with over 1,000 engineers worldwide invested heavily in training and engagement to onboard developers for their platform initiative.
|
||||
3. A multinational system integrator built an internal platform, only to later decide to start anew, recognizing the importance of maintaining a thin and maintainable platform.
|
||||
|
||||
**Lessons Learned:**
|
||||
|
||||
* It's not about just building an MVP; it's about investing in keeping your platform thin and maintainable over time.
|
||||
* Avoid continuously adding new stuff to the platform; instead, focus on providing value and simplifying the platform as you go.
|
||||
* Keep your platform closest possible to your business value, avoiding commoditization.
|
@ -0,0 +1,29 @@
|
||||
# Day 19 - Building Efficient and Secure Docker Images with Multi-Stage Builds
|
||||
[![Watch the video](thumbnails/day19.png)](https://www.youtube.com/watch?v=fjWh3BH4LbU)
|
||||
|
||||
An explanation of how multi-stage Docker builds work, with a demo using a Go application. In a single-stage build, the final image contains all the application files and dependencies, whereas in a multi-stage build, separate stages are used for building and running the application. This results in a smaller final image because you only include the necessary elements from different images without carrying the entire operating system or unnecessary files.
|
||||
|
||||
The example had four stages: base, Ubuntu (second), Debian (third), and final. In each stage, specific tasks were performed and elements were copied into the final image. This way, you optimize the image by running different tasks in specific environments as needed without keeping the whole operating system in your image.
|
||||
|
||||
Lastly, the demo showed the difference between a single-stage Dockerfile and a multi-stage one using the Go application: the multi-stage build results in a much smaller image (13 MB vs 350 MB). This is an excellent illustration of multi-stage builds for anyone trying to optimize their Docker images.
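A minimal sketch of the multi-stage idea for a Go application; the file names, image tags, and final base image are assumptions for illustration, not the exact demo code:

```bash
# Multi-stage Dockerfile: build in a full Go image, ship only the compiled binary
cat > Dockerfile <<'EOF'
# Stage 1: build
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: minimal runtime image, only the binary is copied in
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t go-demo:multi .
docker images go-demo   # the multi-stage image is a small fraction of a full golang image
```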
|
||||
Here's a summary of your talk on Identity and Purpose:
|
||||
|
||||
**Stage 1: Base Image**
|
||||
You started by using a base image, marking it as the "Base" image. This is marked with the keyword "Base".
|
||||
|
||||
**Stage 2: Ubuntu Image**
|
||||
Next, you used the Ubuntu image and marked it as the "First" image. You ran a "Hello" command to create a "Hello" file.
|
||||
|
||||
**Stage 3: Debian Image**
|
||||
In the third stage, you used the Debian image and marked it as the "Second" image. You ran a "Conference" command and saved it as a "Conference" file.
|
||||
|
||||
**Stage 4: Final Image**
|
||||
In the final stage, you combined elements from different images (Base, Ubuntu, and Debian) by copying files and running commands to create a new image. This image includes the "Hello" file from Stage 2 and the "Conference" file from Stage 3.
|
||||
|
||||
**Optimizing Images with Multi-Stage Docker Files**
|
||||
You then introduced multi-stage Docker files, which allow you to separate build stages and optimize image size. You showed how a simple Docker file builds an executable and copies the entire application, whereas a multi-stage Docker file creates an executable in one stage and uses it in another stage, resulting in a much smaller image.
|
||||
|
||||
**Demo**
|
||||
You demonstrated a Go application running locally, showing how the multi-stage build can reduce image size. You compared the single-stage Dockerfile (around 350 MB) with the multi-stage Dockerfile (around 13 MB), highlighting the significant reduction in image size.
|
||||
|
||||
Your talk focused on using multi-stage Docker files to optimize image size and separate build stages, making it easier to manage and deploy applications efficiently.
|
@ -0,0 +1,47 @@
|
||||
# Day 20 - Navigating the Vast DevOps Terrain: Strategies for Learning and Staying Current
|
||||
[![Watch the video](thumbnails/day20.png)](https://www.youtube.com/watch?v=ZSOYXerjgsw)
|
||||
|
||||
# ONE SENTENCE SUMMARY:
|
||||
The speaker shares their personal journey into DevOps, emphasizing the importance of continuous learning in the ever-evolving Cloud Native landscape, and encourages others to join the community.
|
||||
|
||||
# MAIN POINTS:
|
||||
1. The speaker chose DevOps due to its job opportunities and high demand for professionals.
|
||||
2. Embracing DevOps enhances career prospects and keeps one relevant in a fast-paced industry.
|
||||
3. DevOps encourages a learning mindset, emphasizing the importance of adaptability in tech.
|
||||
4. Sharing knowledge through content creation benefits both the sharer and others in the community.
|
||||
5. Contributing to open source projects helps learn new skills and gain experience.
|
||||
6. Starting with smaller contributions is recommended when contributing to open source projects.
|
||||
7. Documentation and Community Support are good ways to get started contributing to open source.
|
||||
8. The speaker recommends gaining experience and expertise before giving back to the community.
|
||||
9. Continuous learning and sharing contribute to the growth and success of DevOps communities.
|
||||
10. The speaker thanks Michael Kade for the 90 days of Devops series and provides a link to the GitHub repository.
|
||||
|
||||
# TAKEAWAYS:
|
||||
1. DevOps offers exciting job opportunities and encourages continuous learning.
|
||||
2. Embracing a learning mindset is crucial in the tech industry.
|
||||
3. Sharing knowledge benefits both the sharer and others in the community.
|
||||
4. Contributing to open source projects is an excellent way to learn and gain experience.
|
||||
5. Always be eager to learn new things, adapt, and share your knowledge with others.
|
||||
# ONE SENTENCE SUMMARY:
|
||||
I share my journey into DevOps, highlighting its importance in maintaining a learning mindset in the ever-evolving Cloud native landscape.
|
||||
|
||||
# MAIN POINTS:
|
||||
|
||||
1. I chose to learn DevOps for tremendous job opportunities and high demand.
|
||||
2. DevOps enhances career prospects and keeps individuals relevant in a fast-paced industry.
|
||||
3. The mindset encouraged by DevOps is essential, as it teaches continuous learning and adaptation.
|
||||
4. Creating content and sharing knowledge helps both the creator and the community.
|
||||
5. Contributing to open-source projects is an excellent way to learn while giving back.
|
||||
6. It's crucial to keep an open mind and continue learning during the process.
|
||||
7. Start with smaller contributions and gradually take on more significant tasks.
|
||||
8. Non-code contributions, such as documentation and Community Support, are valuable ways to get started.
|
||||
9. Giving back to the community by helping beginners is essential for growth and success.
|
||||
10. DevOps is not just a career path but a mindset that opens doors to exciting job opportunities.
|
||||
|
||||
# TAKEAWAYS:
|
||||
|
||||
1. Embracing DevOps can lead to tremendous job opportunities and high demand.
|
||||
2. The Cloud native ecosystem encourages continuous learning and adaptation.
|
||||
3. Sharing knowledge and creating content benefits both the creator and the community.
|
||||
4. Contributing to open-source projects is an excellent way to learn while giving back.
|
||||
5. Maintaining a learning mindset is essential in today's fast-paced technology industry.
|
@ -0,0 +1,29 @@
|
||||
# Day 21 - Azure ARM now got Bicep
|
||||
[![Watch the video](thumbnails/day21.png)](https://www.youtube.com/watch?v=QMF973vpxyg)
|
||||
|
||||
A session explaining the concept of Azure Bicep, a declarative language for creating Azure Resource Manager (ARM) templates. Here's a summary of the key points:
|
||||
|
||||
1. Bicep allows you to create smaller, reusable packages of specific resources called modules that can be used in deployments. These modules reference other modules and pull in their details.
|
||||
|
||||
2. Deployment scripts are your CLI or PowerShell code that can be embedded within the bicep templates. They are useful for executing multiple commands to configure resources, like setting up a domain controller or configuring an app service.
|
||||
|
||||
3. Template specs is a way to publish a bicep template into Azure and use it later on as a package for deployment. This allows you to maintain different versions of your templates and revert to earlier versions if necessary.
|
||||
|
||||
4. You can maintain the versioning of your templates within Azure DevOps and GitHub, and set up CI/CD pipelines to deploy bicep code directly from these platforms using Azure DevOps or GitHub Actions.
|
||||
|
||||
5. To learn more about Bicep, you can follow the "Fundamentals for Bicep" learning path on Microsoft Learn which covers the basics, intermediate, and advanced concepts, as well as deployment configurations with Azure DevOps and GitHub actions.
|
||||
|
||||
6. **Batching**: When deploying multiple services at once, batching allows you to define a batch size (e.g., 30) to control the deployment process.
|
||||
7. **Modularization**: Create modular code for specific resources (e.g., NSG, public IP address, route table) to make deployments more efficient and scalable.
|
||||
|
||||
**Bicep Templates**
|
||||
|
||||
1. **Deployment Script**: Embed CLI or partial code within Bicep templates using deployment scripts for complex configuration tasks.
|
||||
2. **Template Specs**: Publish Bicep templates as template specs in Azure, allowing for version control and easy deployment management.
|
||||
|
||||
**Additional Concepts**
|
||||
|
||||
1. **Advanced Topics**: Explore advanced concepts like deployment configurations, devops pipelines, and GitHub actions for continuous delivery.
|
||||
2. **Microsoft Learn Resources**: Utilize Microsoft learn resources, such as the "Fundamentals of Bicep" learning path, to get started with Bicep templates and improve your skills.
|
||||
|
||||
That's a great summary! I hope it helps others understand the key concepts and benefits of using Bicep templates in Azure deployments.
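As a hedged sketch of the template specs idea described above, the following publishes a Bicep file as a versioned template spec and deploys from it later; the names, versions, and resource groups are examples only:

```bash
# Publish a Bicep template as a versioned template spec
az ts create \
  --name networkBaseline \
  --version 1.0 \
  --resource-group rg-templates \
  --location eastus \
  --template-file main.bicep

# Later, deploy straight from the published template spec version
specId=$(az ts show --name networkBaseline --resource-group rg-templates --version 1.0 --query id -o tsv)
az deployment group create --resource-group rg-app --template-spec "$specId"
```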
|
@ -0,0 +1,30 @@
|
||||
# Day 22 - Test in Production with Kubernetes and Telepresence
|
||||
[![Watch the video](thumbnails/day22.png)](https://www.youtube.com/watch?v=-et6kHmK5MQ)
|
||||
|
||||
To summarize, Telepresence is an open-source tool that allows developers to test their code changes in a Kubernetes environment without committing, building Docker images, or deploying. It works by redirecting incoming requests from a service in a remote Kubernetes cluster to the local machine where you're testing. This is achieved through global interception mode (for all requests) and personal interception mode (for specific request headers).
|
||||
|
||||
To set it up:
|
||||
1. Configure your local setup.
|
||||
2. Install Telepresence on your Kubernetes cluster.
|
||||
3. Test the whole thing.
|
||||
|
||||
Details can be found in this blog post: arab.medium.com/telepresence-kubernetes-540f95a67c74
|
||||
|
||||
Telepresence makes the feedback loop shorter for testing on Kubernetes, especially with microservices where it's difficult to run everything locally due to dependencies. With Telepresence, you can mark just one service and run it on your local machine for easier testing and debugging.
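A minimal sketch of that workflow with the Telepresence CLI; the service name, namespace, and port below are placeholders, not the exact demo values:

```bash
# One-time: install the traffic manager into the cluster
telepresence helm install

# Connect your laptop to the cluster network
telepresence connect

# List interceptable workloads, then intercept one service
telepresence list
telepresence intercept my-service --namespace dev --port 8080:http

# Requests hitting my-service in the cluster are now routed to localhost:8080,
# where you can run the service from your IDE with a debugger attached.
```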
|
||||
|
||||
**Summary:**
|
||||
The speaker shares their experience with using a staging environment to test code before deploying it to production. They mention how they missed a column in their code, which broke the staging environment, but was caught before reaching production. The speaker introduces Telepresence, an open-source tool that allows developers to automatically deploy and test their code on a local machine, without committing changes or running CI/CD pipelines.
|
||||
|
||||
**Key Points:**
|
||||
|
||||
1. Importance of having a staging environment for testing code.
|
||||
2. How missing a column in the code can break the staging environment.
|
||||
3. Introduction to Telepresence as a solution to improve the development process.
|
||||
4. Benefits of using Telepresence, including:
|
||||
* Shorter feedback loop
|
||||
* Ability to test and debug services locally
|
||||
* Open-source and community-driven
|
||||
|
||||
**Purpose:**
|
||||
The speaker aims to share their experience with using a staging environment and introducing Telepresence as a tool to improve the development process. The purpose is to educate developers about the importance of testing code before deploying it to production and provide a solution to make this process more efficient and effective.
|
||||
|
@ -0,0 +1,48 @@
|
||||
# Day 23 - SQL Server 2022 on Linux Containers and Kubernetes from Zero to a Hero!
|
||||
[![Watch the video](thumbnails/day23.png)](https://www.youtube.com/watch?v=BgttLzkzNBs)
|
||||
|
||||
To get the IP address of a Docker container, you can use the `docker inspect` command followed by the container ID or name. Here's an example:
|
||||
|
||||
```bash
|
||||
docker inspect <container_id_or_name> -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
|
||||
```
|
||||
|
||||
Replace `<container_id_or_name>` with the ID or name of your container. This command will return the IP address associated with the container in the default bridge network.
|
||||
|
||||
In your case, you can use:
|
||||
|
||||
```bash
|
||||
docker inspect es2 latest -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
|
||||
```
|
||||
|
||||
And for the other container:
|
||||
|
||||
```bash
|
||||
docker inspect s latest -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
|
||||
```
|
||||
|
||||
Regarding your question about using Kubernetes or Windows Server Cluster, it's a matter of preference and use case. Both have their pros and cons. Kubernetes is more flexible and can be used with various operating systems, but it might require more effort to set up and manage. On the other hand, Windows Server Cluster is easier to set up and manage but is limited to Windows OS. You should choose the one that best fits your needs and resources.
|
||||
|
||||
Regarding Docker vs. Podman, both are container runtimes, but Podman is more focused on security and has fewer system requirements than Docker. Some users prefer Podman for these reasons, but Docker remains the most widely used container runtime due to its extensive ecosystem and user base. It's essential to evaluate your specific needs before choosing one over the other.
|
||||
|
||||
**PURPOSE**
|
||||
|
||||
The purpose of this presentation is to demonstrate how to upgrade MCR to the latest version using Docker containers. The speaker also shares their opinion on the differences between using Kubernetes for containerization versus Windows clustering, highlighting the pros and cons of each approach.
|
||||
|
||||
**KEY TAKEAWAYS**
|
||||
|
||||
1. Upgrading MCR to the latest version (22.13) is possible using Docker containers.
|
||||
2. The process involves creating a new container with the latest version of MCR and then upgrading the existing container to match the new one (see the sketch after this list).
|
||||
3. Using Windows clustering for containerization can be more straightforward than Kubernetes, especially for those familiar with Windows.
|
||||
4. However, Kubernetes offers greater flexibility and scalability, making it a suitable choice for larger-scale applications.
|
||||
5. The speaker recommends using Windows clustering for development and testing purposes, but not for production environments.
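A hedged sketch of the upgrade pattern from takeaway 2: run a newer SQL Server 2022 image against the same data volume. The volume name, SA password, and image tags here are examples, not the exact demo values:

```bash
# Original container, keeping data on a named volume
docker run -d --name sql2022 \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Str0ngP@ssw0rd!" \
  -p 1433:1433 -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2022-latest

# Upgrade: stop and remove the old container, start a newer image on the same volume
docker stop sql2022 && docker rm sql2022
docker run -d --name sql2022 \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Str0ngP@ssw0rd!" \
  -p 1433:1433 -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2022-CU13-ubuntu-22.04
```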
|
||||
|
||||
**STYLE**
|
||||
|
||||
The presentation is informal, with the speaker sharing their personal opinions and experiences. They use simple language to explain complex concepts, making it accessible to a general audience. However, the pace of the presentation can be fast-paced at times, making it challenging to follow along without prior knowledge of containerization and MCR.
|
||||
|
||||
**CONFIDENCE**
|
||||
|
||||
The speaker appears confident in their expertise, sharing their personal opinions and experiences without hesitation. They use humor and anecdotes to engage the audience, but also provide specific examples and demonstrations to support their points.
|
||||
|
||||
Overall, this presentation is geared towards individuals who are familiar with containerization and MCR, but may not be experts in both areas. The speaker's enthusiasm and expertise make it an engaging watch for those looking to learn more about upgrading MCR using Docker containers.
|
@ -0,0 +1,26 @@
|
||||
# Day 24 - DevSecOps - Defined, Explained & Explored
|
||||
[![Watch the video](thumbnails/day24.png)](https://www.youtube.com/watch?v=glbuwrdSwCs)
|
||||
|
||||
A session describing the DevOps pipeline, with an emphasis on Agile methodology, and how it interlocks with various stages of a product development process. The process starts with understanding customer requirements through Agile practices, followed by creating a product catalog which is used as input for DevSecOps.
|
||||
|
||||
The product catalog is then translated into a Sprint catalog, which is managed by the development team to deliver Minimum Viable Products (MVPs) in two-week iterations. The process also includes an autonomous team that consists of various roles such as devops coach, devops engineer, tester, and scrum master.
|
||||
|
||||
You also mentioned the importance of distributed Agile practices for managing larger teams and complex projects, and introduced the concept of Scrum of Scrums to coordinate multiple teams working on different domains. Lastly, you briefly mentioned a book you wrote on microservices which has a chapter on DevSecOps that may be insightful to readers.
|
||||
|
||||
To summarize, it was described the DevOps pipeline, starting with Agile practices for understanding customer requirements and creating product catalogs, moving through Sprint iterations managed by an autonomous team, and concluding with distributed Agile practices for managing larger teams and complex projects. The process interlocks various stages of the product development lifecycle, with each stage building upon the previous one to ultimately deliver valuable products to customers.
|
||||
Here is the summary:
|
||||
|
||||
**IDENTITY and PURPOSE**
|
||||
|
||||
The speaker emphasizes the importance of devops in driving cultural change within an organization. They highlight the need for high-performing teams, self-organizing teams, and governance to ensure effective management and monitoring.
|
||||
|
||||
Key elements for devops include:
|
||||
|
||||
1. **Autonomous Teams**: Self-managing teams that can deliver products without relying on external support.
|
||||
2. **Governance**: Ensuring the right tools and processes are in place to manage and monitor devops initiatives.
|
||||
3. **Improvement and Innovation**: Encouraging experimentation and learning from failures to improve processes and deliver better results.
|
||||
4. **Metrics and KPIs**: Monitoring key performance indicators to track progress and make adjustments as needed.
|
||||
|
||||
The speaker also emphasizes the importance of understanding the interlock between Agile and DevOps, highlighting the role of product catalogs, sprint backlogs, and MVP delivery in driving devops initiatives.
|
||||
|
||||
In conclusion, the speaker stresses the need for larger teams, distributed agile, and scrums of scrums to manage complexity and drive devops adoption.
|
@ -0,0 +1,41 @@
|
||||
# Day 25 - Kube-Nation: Exploring the Land of Kubernetes
|
||||
[![Watch the video](thumbnails/day25.png)](https://www.youtube.com/watch?v=j3_917pmK_c)
|
||||
|
||||
In the analogy given, a country is compared to a Kubernetes cluster. Here's how the components of a country correspond to the components of a Kubernetes cluster:
|
||||
|
||||
1. Land (Servers/Computers): The foundation for building both a country and a Kubernetes cluster. In Kubernetes terms, these are referred to as nodes - one control plane node and multiple worker nodes.
|
||||
|
||||
2. Capital City (Control Plane Node): The authority figure in a country is equivalent to the control plane node in Kubernetes. It's where all requests are made and actions taken within the cluster. In technical terms, it's the API server, the entry point to a Kubernetes cluster.
|
||||
|
||||
3. Cities/Regions (Worker Nodes): Each city or region in a country is like a worker node in a Kubernetes cluster, dedicated servers or computers that follow instructions from the control plane node.
|
||||
|
||||
4. President/Governor (Controller Manager): In a country, the president or governor ensures everything within the region is healthy and functioning correctly. Similarly, the controller manager in Kubernetes makes sure that everything within the cluster is working properly and takes corrective action if necessary.
|
||||
|
||||
5. Task Manager (Scheduler): In a country, the task manager determines what actions to take and where to execute them. In Kubernetes, this role is fulfilled by the scheduler, which decides where to run specific actions or containers based on resource availability and other factors.
|
||||
|
||||
6. Central Reserve (etcd database): Just as the history books serve as a record of a country's events, etcd is a key-value database created specifically for Kubernetes that stores critical cluster information.
|
||||
|
||||
7. Citizens/Containers: People living in homes are equivalent to containers in Kubernetes, which run applications or services within a pod (represented by homes).
|
||||
|
||||
8. Communication Agencies (kubelet): In a country, communication agencies establish networks between cities and homes. Similarly, the kubelet in Kubernetes handles the creation of pods and the containers running within them.
|
||||
|
||||
9. Telephones/Services: Each home has its own telephone for communication, replaced by services like cluster IP, nodePort, load balancers, etc., in Kubernetes that help containers communicate with each other.
|
||||
|
||||
10. Builders (kube-proxy): Just as builders establish networks and infrastructure in a country, kube-proxy handles all networking-related activities within a Kubernetes cluster.
|
||||
|
||||
By understanding this analogy, you can better grasp the key components of a Kubernetes cluster and their functions. To learn more about Kubernetes, resources are available on the provided GitHub repository and Twitter handles.
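To see the real components behind the analogy on an existing cluster, a few read-only kubectl commands are enough (the output will vary by distribution):

```bash
# The "land": control-plane and worker nodes
kubectl get nodes -o wide

# The "capital city" staff: API server, controller manager, scheduler, etcd, kube-proxy
kubectl get pods -n kube-system

# The "homes" and "telephones": pods and the services that connect them
kubectl get pods,services --all-namespaces
```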
|
||||
The analogy between governing a country and using Kubernetes is quite clever. Let's break it down:
|
||||
|
||||
**Land**: The foundation of building a country, similar to the servers, computers, RAM, CPU, memory, and storage devices that make up the infrastructure for running a Kubernetes cluster.
|
||||
|
||||
**Cities**: Each city represents a node in the Kubernetes cluster, with its own set of resources (e.g., pods) and responsibilities. Just as cities have their own government, each node has its own control plane, scheduler, and proxy components.
|
||||
|
||||
**Capital City**: The capital city, where all the authority figures reside, is equivalent to the control plane node in Kubernetes, which houses the API server, controller manager, scheduler, kube-proxy, kubelet, and etcd (the cluster database).
|
||||
|
||||
**Homes**: Each home represents a pod, with its own set of containers running inside. Just as homes need communication networks to connect with each other, pods need services (e.g., cluster IP, node port) to communicate with each other.
|
||||
|
||||
**Builders**: The builders represent the kubelet component, which builds and runs containers within pods on each node. They ensure that containers are healthy and functioning correctly.
|
||||
|
||||
**Communication Agencies**: These agencies represent kube-proxy, which handles networking-related activities within the cluster, such as routing traffic between nodes and services.
|
||||
|
||||
The analogy is not perfect, but it provides a useful framework for understanding the various components and their roles in a Kubernetes cluster.
|
@ -1,4 +1,5 @@
|
||||
# Day 21: Advanced Code Coverage with Jenkins, GitHub and API Mocking
|
||||
# Day 26 - Advanced Code Coverage with Jenkins and API Mocking
|
||||
[![Watch the video](thumbnails/day26.png)](https://www.youtube.com/watch?v=ZBaQ71CI_lI)
|
||||
|
||||
Presentation by [Oleg Nenashev](https://linktr.ee/onenashev),
|
||||
Jenkins core maintainer, developer advocate and community builder at Gradle
|
||||
|
@ -1,6 +1,5 @@
|
||||
# Day 27: 90DaysofDevOps
|
||||
|
||||
## From Automated to Automatic - Event-Driven Infrastructure Management with Ansible
|
||||
# Day 27 - From Automated to Automatic - Event-Driven Infrastructure Management with Ansible
|
||||
[![Watch the video](thumbnails/day27.png)](https://www.youtube.com/watch?v=BljdQTewSic)
|
||||
|
||||
**Daniel Bodky**
|
||||
- [Twitter](https://twitter.com/d_bodky)
|
||||
|
@ -0,0 +1,31 @@
|
||||
# Day 28 - Talos Linux on vSphere
|
||||
[![Watch the video](thumbnails/day28.png)](https://www.youtube.com/watch?v=9y7m0PgW2UM)
|
||||
|
||||
Summary:
|
||||
|
||||
1. The topic is about setting up the VMware vSphere CSI driver on a Kubernetes cluster to utilize features like snapshots, and enforcing pod security rules.
|
||||
|
||||
2. A configuration file is used to create a secret within the cluster, containing information such as Virtual Center, username, password, and data center details.
|
||||
|
||||
3. After creating the secret, the VMware CSI plugin will be installed using a command.
|
||||
|
||||
4. A storage class called 'vsphere-storage-class' is defined, utilizing an existing NFS-based volume in the vsphere environment to provide storage for Kubernetes-based virtual machines.
|
||||
|
||||
5. An example PVC and PV are created using the defined storage class, resulting in a dynamic PVC and PV.
|
||||
|
||||
6. The goal is to build an API-capable way of spinning up multiple Kubernetes clusters using Cube and leveraging Kasten K10 to protect any state for workloads running between the SSD and shared NFS server environments.
|
||||
|
||||
7. Future plans involve upgrading existing hardware, connecting more units into a managed switch, and exploring methods to automate the process of creating multiple Kubernetes clusters using Cube and Kasten K10 for protection.
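A minimal sketch of steps 4 and 5 above: a StorageClass backed by the vSphere CSI driver plus a PVC that triggers dynamic provisioning. The object names and size are illustrative, and real setups usually add storage-policy parameters:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-storage-class
provisioner: csi.vsphere.vmware.com   # the vSphere CSI driver
# parameters such as a storage policy or datastore can be added here
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsphere-storage-class
  resources:
    requests:
      storage: 5Gi
EOF

kubectl get pvc demo-pvc   # a bound PV should appear once provisioning completes
```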
|
||||
|
||||
|
||||
**IDENTITY**: The speaker is an expert in VMware vSphere and Kubernetes, with experience working with Talos and the CSI (Container Storage Interface) provisioner.
|
||||
|
||||
**PURPOSE**: The speaker's purpose is to share their knowledge and expertise in building a home lab using VMware vSphere and Kubernetes. They want to demonstrate how to use the CSI provisioner to create a dynamic PVC (Persistent Volume Claim) and PV (Persistent Volume) in a vSphere environment, and explore ways to upgrade their existing infrastructure and leverage Casper K10 for workload protection.
|
||||
|
||||
**KEY TAKEAWAYS**:
|
||||
|
||||
1. The speaker demonstrated the use of the CSI provisioner to create a dynamic PVC and PV in a vSphere environment using Talos.
|
||||
2. They showed how to apply a storage class to a PVC, which allows for the creation of a dynamic PV.
|
||||
3. The speaker discussed their plans to upgrade their home lab infrastructure by adding more nodes and leveraging Kasten K10 for workload protection.
|
||||
|
||||
**KEYWORDS**: VMware vSphere, Kubernetes, CSI provisioner, Talos, Persistent Volume Claim (PVC), Persistent Volume (PV), Kasten K10.
|
@ -0,0 +1,19 @@
|
||||
# Day 29 - A Practical introduction to OpenTelemetry tracing
|
||||
[![Watch the video](thumbnails/day29.png)](https://www.youtube.com/watch?v=MqsIpGEbt4w)
|
||||
|
||||
The speaker is discussing an architecture using Jaeger, a complete observability suite that includes the OpenTelemetry Collector. They are using Docker to run this setup. The application consists of three services: catalog (Spring Boot app), pricing, and stock. They use the OTel header in their requests for identification purposes.
|
||||
|
||||
To configure the Java agent, they set the data output destination as their own service (catalog) on a specific port, and chose not to export metrics or logs. They do the same configuration for their Python and Rust applications but did not elaborate on it as it's not relevant to this talk.
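A hedged sketch of that Java-agent configuration using the standard OpenTelemetry environment variables; the service name, endpoint, and jar paths are assumptions rather than the exact demo values:

```bash
# Export traces via OTLP, keep metrics and logs disabled, as described above
export OTEL_SERVICE_NAME=catalog
export OTEL_TRACES_EXPORTER=otlp
export OTEL_METRICS_EXPORTER=none
export OTEL_LOGS_EXPORTER=none
export OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4317   # OTLP/gRPC collector endpoint

# Attach the agent; no code changes are needed for auto-instrumentation
java -javaagent:./opentelemetry-javaagent.jar -jar catalog.jar
```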
|
||||
|
||||
After starting all services, they made a request, checked the logs, and noticed that more spans (traces) appeared in the Jaeger UI with more details about the flow of the code within components. They also added manual instrumentation using annotations provided by OpenTelemetry and Spring Boot for capturing additional data inside their components, such as method parameters.
|
||||
|
||||
Finally, they encouraged the audience to learn more about OpenTelemetry, explore their demo code on GitHub, and follow them on Twitter or Mastodon. They concluded by thanking the audience for their attention and wishing them a great end of the day.
|
||||
The topic of this talk is identity and purpose, specifically how to use OpenTelemetry for distributed tracing and logging. The speaker starts by introducing the concept of OpenTelemetry and its purpose in providing a unified way to collect and process telemetry data from various sources.
|
||||
|
||||
The speaker then demonstrates how to set up OpenTelemetry using the Java library and shows examples of Auto instrumentation and manual instrumentation. Auto instrumentation is used to automatically instrument code without requiring manual configuration, while manual instrumentation requires explicit configuration to capture specific events or attributes.
|
||||
|
||||
The speaker also talks about the importance of tracing and logging in understanding the flow of code execution and identifying potential issues. They provide an example of how to use OpenTelemetry to capture additional data such as span attributes, which can be used to understand the flow of code execution.
|
||||
|
||||
The talk concludes by highlighting the benefits of using OpenTelemetry for distributed tracing and logging, including improved visibility into application behavior and faster issue resolution.
|
||||
|
||||
Overall, this talk aims to provide a comprehensive overview of OpenTelemetry and its use cases, as well as practical examples of how to set up and use it.
|
@ -1,5 +1,6 @@
|
||||
Day 30: How GitHub Builds GitHub with GitHub
|
||||
=========================
|
||||
# Day 30 - How GitHub delivers GitHub using GitHub
|
||||
[![Watch the video](thumbnails/day30.png)](https://www.youtube.com/watch?v=wKC1hTE9G90)
|
||||
|
||||
|
||||
Hello!👋
|
||||
|
||||
@ -27,6 +28,8 @@ In this session I am going to show you how GitHub builds GitHub with GitHub. Git
|
||||
|
||||
- Read about [GitHub Advanced Security (GHAS) -](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security)
|
||||
|
||||
- Play the [Secure Code Game](https://gh.io/securecodegame) to try all of the above for free, plus trying out your skills on finding and fixing security issues.
|
||||
|
||||
- Learn more about all of the ways to work with the [GitHub API](https://docs.github.com/en/rest?apiVersion=2022-11-28)
|
||||
|
||||
## Video
|
||||
|
@ -0,0 +1,21 @@
|
||||
# Day 31 - GitOps on AKS
|
||||
[![Watch the video](thumbnails/day31.png)](https://www.youtube.com/watch?v=RZ3gy0mnGoY)
|
||||
|
||||
A discussion around a GitOps repository, specifically "Theos Calypso," which provides examples for multicluster management using GitOps and Flux (a popular GitOps tool). The examples provided in the repository demonstrate how to use various GitOps providers such as Flux, Argo, and others to reconcile configuration into a Kubernetes cluster.
|
||||
|
||||
The repository seems well-structured, with numerous examples for different use cases like single clusters, multiple clusters (e.g., production, development, acceptance), and even namespace-level configurations per developer. It aims to make it easy for users to get started with GitOps and provides plenty of code and explanations to learn from without having to execute any of the examples.
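A minimal sketch of pointing Flux at such a repository; the GitHub owner, repository name, and path are placeholders:

```bash
# Install Flux into the cluster and wire it to a Git repository
flux bootstrap github \
  --owner=my-org \
  --repository=fleet-config \
  --branch=main \
  --path=clusters/production

# Flux then reconciles whatever manifests live under that path
flux get kustomizations --watch
```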
|
||||
|
||||
The speaker also mentioned that if one is interested in this topic, they can find more content on their YouTube channel (Season 1). They encouraged viewers to give it a thumbs up, like, comment, subscribe, and thanked Michael for organizing the event. The session appears to have been well-received, with the speaker expressing enjoyment during the demo.
|
||||
The purpose of this content is to discuss the topic of "IDENTITY and PURPOSE" in the context of DevOps and Kubernetes. The speakers present a 30-minute session on how to use Helm charts to manage multiple clusters with GitOps and Flux.
|
||||
|
||||
The main points discussed include:
|
||||
|
||||
* Using Helm charts to customize notification and source controller
|
||||
* Configuring the flux operator to reconcile configuration into a cluster using GitOps
|
||||
* Managing multiple clusters with GitOps and Flux, including multicluster management patterns
|
||||
|
||||
The speaker also mentions the importance of having standardized deployment configurations in a repository and how this can be achieved using best practices and standards.
|
||||
|
||||
Additionally, Michael touches on the topic of multicluster management with GitOps and references a specific repository called Calypso, which provides examples of multicluster management using GitOps. He also highlights the benefits of using multiple GitOps providers, such as Flux and Argo.
|
||||
|
||||
The session concludes with a call to action for viewers to check out the season one videos on the YouTube channel, give it a thumbs up, like comment, and subscribe.
|
@ -1,6 +1,5 @@
|
||||
# Day 32: 90DaysofDevOps
|
||||
|
||||
## Cracking Cholera’s Code: Victorian Insights for Today’s Technologist
|
||||
# Day 32 - Cracking Cholera’s Code: Victorian Insights for Today’s Technologist
|
||||
[![Watch the video](thumbnails/day32.png)](https://www.youtube.com/watch?v=YnMEcjTlj3E)
|
||||
|
||||
### Overview
|
||||
|
||||
|
@ -1,4 +1,33 @@
|
||||
# Day 33 - GitOps made simple with ArgoCD and GitHub Actions
|
||||
[![Watch the video](thumbnails/day33.png)](https://www.youtube.com/watch?v=dKU3hC_RtDk)
|
||||
|
||||
So you've set up a GitHub action workflow to build, tag, and push Docker images to Docker Hub based on changes in the `main.go` file, and then use Argo CD to manage the application deployment. This flow helps bridge the gap between developers and platform engineers by using GitOps principles.
|
||||
|
||||
Here are the benefits of using GitOps:
|
||||
|
||||
1. Version control history: By storing your manifest in a git repo, you can see how your application deployments and manifests have evolved over time, making it easy to identify changes that may have caused issues.
|
||||
2. Standardization and governance: Using GitOps with Argo CD ensures that everything is standardized and governed by a repository acting as a gateway to the cluster for interacting with deployments. This gives platform engineers control over how things get changed in a centralized manner.
|
||||
3. Security: By requiring developers to make pull requests on the repo before changes can be applied to the cluster, you can maintain security without giving kubernetes access to developers or people changing things in PRs. You can even run CI tests on the same repo before merging the PR.
|
||||
4. Faster deployments: Once you've set up a GitOps pipeline, you can automate the entire deployment cycle and ship changes faster while maintaining security, standardization, and governance.
|
||||
|
||||
You mentioned that there is still some dependency on manually clicking "sync" in Argo CD UI; however, you can configure Argo CD to automatically apply changes whenever it detects them. You can also reduce the detection time for Argo CD to pull the repo more frequently if needed.
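A minimal sketch of turning on that automated sync with the Argo CD CLI; the application name is a placeholder:

```bash
# Enable automated sync so Argo CD applies changes as soon as it detects them
argocd app set my-app --sync-policy automated --auto-prune --self-heal

# Check the resulting sync status
argocd app get my-app
```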
|
||||
|
||||
For more detailed steps and additional resources, you can check out the blog on the speaker's website (arshsharma.com) or find the GitHub repo used in this demo in the blog post. Thank you for watching, and I hope this was helpful! If you have any questions, please feel free to reach out to me on Twitter or LinkedIn.
|
||||
The topic is specifically discussing GitHub Actions and Argo CD. The speaker explains how to use these tools to automate the deployment of applications by leveraging version control systems like Git.
|
||||
|
||||
The key takeaways from this session are:
|
||||
|
||||
1. **Identity**: Each commit in the GitHub repository is associated with a unique SHA (Secure Hash Algorithm) value, which serves as an identifier for the corresponding image tag.
|
||||
2. **Purpose**: The purpose of using GitHub Actions and Argo CD is to automate the deployment process, ensuring that changes are properly tracked and deployed efficiently.
|
||||
|
||||
The speaker then presents the benefits of this setup:
|
||||
|
||||
1. **Version Control History**: By storing the manifest in a Git repository, you can see how your application deployments and manifests have evolved over time.
|
||||
2. **Standardization and Governance**: Argo CD provides control and visibility into how changes are made, ensuring that everything is standardized and governed.
|
||||
3. **Security**: You don't need to give Kubernetes access to developers or people who are pushing to prod; instead, they can make pull requests on the repo, which Argo CD monitors for security.
|
||||
4. **Faster Shipping**: Once you set up a GitHub Actions pipeline, you can automate all of that part, reducing manual intervention and increasing efficiency.
|
||||
|
||||
The speaker concludes by emphasizing the value that GitHub Actions and Argo CD bring to organizations, allowing them to ship fast, keep things secure and standardized, and bridge the gap between developers and platform engineers.
|
||||
|
||||
Extra Resources which would be good to include in the description:
|
||||
• Blog: https://arshsharma.com/posts/2023-10-14-argocd-github-actions-getting-started/
|
||||
|
@ -0,0 +1,79 @@
|
||||
# Day 34 - How to Implement Automated Deployment Pipelines for Your DevOps Projects
|
||||
[![Watch the video](thumbnails/day34.png)](https://www.youtube.com/watch?v=XLES6Q5hr9c)
|
||||
|
||||
An excellent overview of the modern software development pipeline, including topics such as build automation, continuous integration (CI), continuous deployment (CD), configuration management, automated testing, version control, small and frequent deployments, automated rollbacks, monitoring and feedback, security concerns, and containerization.
|
||||
|
||||
To summarize:
|
||||
|
||||
1. Automation benefits:
|
||||
- Faster time to market
|
||||
- Release confidence
|
||||
- Reduced human errors
|
||||
- Consistency in the codebase
|
||||
|
||||
2. Key components:
|
||||
- Source Code Management (e.g., GitHub, Bitbucket)
|
||||
- Build Automation (Jenkins, GitLab CI, CircleCI, Travis CI, etc.)
|
||||
- Integrated automated testing
|
||||
- Version Control (Git, SVN, Mercurial, etc.)
|
||||
|
||||
3. Continuous Deployment vs. Continuous Delivery:
|
||||
- Continuous Deployment: Automatic deployment of changes to the production environment after they have been tested in a staging or integration environment.
|
||||
- Continuous Delivery: Enables rapid and automated delivery of software changes to any environment, but deployment can be manual or triggered by a human.
|
||||
|
||||
4. Security Concerns:
|
||||
- Implement Infrastructure as Code (IaC) tools like Terraform, CloudFormation, etc.
|
||||
- Adopt security technologies for deployment like Chef, Ansible, etc.
|
||||
- Use secret management tools (Vault, AWS Secrets Manager, HashiCorp's Vault)
|
||||
|
||||
5. Monitoring and Logging:
|
||||
- Proactive issue detection
|
||||
- Scalability with application growth
|
||||
- Implement automatic logging and real-time alerts
|
||||
- Tools like Prometheus, ELK Stack (Elasticsearch, Logstash, Kibana), Grafana, Datadog, etc.
|
||||
|
||||
6. Containerization and Orchestration:
|
||||
- Container orchestration tools (Kubernetes, Docker Swarm, Rancher, etc.)
|
||||
- Serverless architectures provided by main cloud providers like AWS Lambda, Google Cloud Functions, Azure Functions, etc.
|
||||
|
||||
7. Machine Learning for Deployment Pipelines:
|
||||
- Predicting and optimizing deployment pipelines through machine learning.
|
||||
The main points from this content are:
|
||||
|
||||
* Continuous Integration (CI) and Continuous Deployment (CD) as essential tools for detecting errors, reducing time to market, and increasing release confidence.
|
||||
|
||||
**Tools and Technologies**
|
||||
|
||||
* Jenkins, GitLab CI, Bamboo, CircleCI, Travis CI, and TeamCity are popular CI/CD tools.
|
||||
* Configuration management tools like Ansible and SaltStack are widely used.
|
||||
* Infrastructure as Code (IaC) tools like Terraform and CloudFormation are essential for automating infrastructure deployment.
|
||||
|
||||
**Deployment Pipelines**
|
||||
|
||||
* Setting up a deployment pipeline involves choosing the right tools, defining deployment stages, and implementing automated testing.
|
||||
* Small and frequent deployments help to identify errors quickly and prevent large-scale issues.
|
||||
|
||||
**Monitoring and Feedback**
|
||||
|
||||
* Continuous monitoring is necessary for automation pipelines to detect errors and provide real-time feedback.
|
||||
* Automated rollbacks are essential for reverting to previous versions in case of errors (see the sketch below).
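For Kubernetes-based deployments, a rollback can be as simple as the sketch below; the deployment name is illustrative:

```bash
# Watch a rollout and revert to a previous revision if it misbehaves
kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app
kubectl rollout undo deployment/my-app                    # back to the previous revision
kubectl rollout undo deployment/my-app --to-revision=2    # or a specific one
```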
|
||||
|
||||
**Common Deployment Challenges**
|
||||
|
||||
* Dependency management, security concerns, and scalability are common challenges faced during deployment.
|
||||
* Using IaC tools like Terraform can help overcome these challenges.
|
||||
|
||||
**Monitoring and Logging**
|
||||
|
||||
* Proactive issue detection is crucial through monitoring and logging.
|
||||
* Implementing automatic logging and real-time alerts helps to detect errors quickly.
|
||||
|
||||
**Scalability**
|
||||
|
||||
* Monitoring must scale with application growth to ensure proactive issue detection.
|
||||
|
||||
**Future Trends**
|
||||
|
||||
* Microservices, containerization, and orchestration are trending in the industry.
|
||||
* Kubernetes is a popular choice for container orchestration, with Rancher and Mesos being other options.
|
||||
* Serverless architecture is gaining popularity due to its scalability and maintenance-free nature.
|
@ -0,0 +1,81 @@
|
||||
# Day 35 - Azure for DevSecOps Operators
|
||||
[![Watch the video](thumbnails/day35.png)](https://www.youtube.com/watch?v=5s1w09vGjyY)
|
||||
|
||||
Here is a summary of the steps to create an AKS cluster using Bicep:
|
||||
|
||||
1. Create a resource group:
|
||||
```
|
||||
az group create --name myResourceGroup --location eastus
|
||||
```
|
||||
|
||||
2. Create a Bicep file (myAKS.bicep) with the following content:
|
||||
|
||||
```
|
||||
param clusterName string = 'myAKSCluster'
param location string = 'eastus'
param dnsPrefix string = 'mydns'
param osDiskSizeInGB int = 30
param agentCount int = 1
param linuxAdminUsername string = 'azureuser'
param sshRSAPublicKey string

// Uses the Microsoft.ContainerService/managedClusters resource type; adjust the
// API version and Kubernetes version to ones available in your subscription.
resource aks 'Microsoft.ContainerService/managedClusters@2020-06-01' = {
  name: clusterName
  location: location
  identity: {
    type: 'SystemAssigned' // managed identity instead of service principal secrets
  }
  properties: {
    dnsPrefix: dnsPrefix
    kubernetesVersion: '1.27.7'
    agentPoolProfiles: [
      {
        name: 'agentpool'
        count: agentCount
        osDiskSizeGB: osDiskSizeInGB
        osType: 'Linux'
        vmSize: 'Standard_DS2_v3'
        type: 'VirtualMachineScaleSets'
        mode: 'System'
      }
    ]
    linuxProfile: {
      adminUsername: linuxAdminUsername
      ssh: {
        publicKeys: [
          {
            keyData: sshRSAPublicKey
          }
        ]
      }
    }
  }
}
|
||||
```
3. Install the Azure CLI (and, optionally, Azure PowerShell) if you haven't already.

4. Run the following command to log in to your Azure account:

```
az login
```

5. Deploy the Bicep file using the following command (the Azure CLI compiles `.bicep` templates on the fly, so a separate `az bicep build --file myAKS.bicep` step is optional):

```
az deployment group create --name myAKSDeployment --resource-group myResourceGroup --template-file myAKS.bicep
```

6. Once the deployment is complete, you can connect to the AKS cluster using `az aks get-credentials` and `kubectl` (see the example after this list).

7. You can also view the status of your AKS cluster in the Azure portal under "Kubernetes services".
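As a quick sanity check for step 6, fetching the kubeconfig and listing the nodes looks like this (the resource group and cluster names assume the defaults used above):

```
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```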
This content walks through a step-by-step guide on deploying an Azure Kubernetes Service (AKS) cluster using Bicep, a declarative infrastructure-as-code language developed by Microsoft. The purpose of this deployment is to create a test lab environment for testing and learning.

The video starts with creating a resource group in Azure using the Azure CLI, followed by generating and copying an SSH key. It then deploys a Bicep file to create the AKS cluster, including the necessary settings such as the Linux admin username and SSH RSA public key.

Once the deployment is complete, the video shows how to retrieve the credentials for the AKS cluster using the `az aks get-credentials` command. This allows the user to interact with the deployed resources and manage them through the Azure CLI or other tools.

The video also demonstrates how to use the `kubectl` command-line tool to verify that the deployment was successful, including checking the node pools, workloads, and virtual machine sizes.

Throughout the video, the author provides tips and suggestions for using Bicep and Azure Kubernetes Service, and promotes best practices for deploying and managing cloud-based infrastructure. The purpose of this content is educational, with the goal of helping viewers learn about Azure Kubernetes Service and how to deploy it using Bicep.
@ -0,0 +1,21 @@
# Day 36 - Policy-as-Code Super-Powers! Rethinking Modern IaC With Service Mesh And CNI

[![Watch the video](thumbnails/day36.png)](https://www.youtube.com/watch?v=d-2DKoIp4RI)

The question is how to limit repetition when writing Infrastructure as Code (IaC) projects by using code templates, libraries, and central repositories. The idea is to define methods or components that are common across multiple projects, import them into new projects as libraries, and call the intended components as needed. When a policy or resource is updated in the central repository, all consuming projects automatically benefit from the change. Automation approaches like GitOps and tools like Pulumi help streamline daily IaC operations, inform decisions around provisioning cloud-native infrastructure, support applications on top of that infrastructure, and scale those applications as needed. Viewers are encouraged to try out the steps in a project of their own (or to choose other tools for similar results) and to follow the team on their social media platforms.

Here are my key takeaways from your content:

**IDENTITY and PURPOSE**

1. The importance of security posture: You emphasized the significance of having a clear understanding of security policies and edicts, especially when working with complex systems like Kubernetes.
2. IaC (Infrastructure as Code) enforcement: You showcased how Pulumi can enforce compliance by applying policies at the account level, ensuring that applications are properly tagged and configured to meet security requirements.
3. Reusability and templating: You highlighted the value of reusing code components across projects, reducing repetition and increasing efficiency.

**AUTOMATION**

1. Automation in IaC: You discussed how tools like Pulumi enable automation in IaC operations, streamlining processes and minimizing manual intervention.
2. Scalability and synchronization: You emphasized the importance of automating scaling and synchronization between applications and infrastructure to optimize performance.

**FINAL THOUGHTS**

1. Hands-on experience: You encouraged viewers to try Pulumi themselves, emphasizing that it's easy to get started even without being an expert.
2. Community engagement: You invited the audience to follow your team on social media platforms like Twitter and LinkedIn, and to engage with the community.
@ -0,0 +1,30 @@
# Day 38 - Open Standards: Empowering Cloud-Native Innovation

[![Watch the video](thumbnails/day38.png)](https://www.youtube.com/watch?v=xlqnmUOeREY)

You have provided a comprehensive overview of the role of Open Standards in the Cloud Native Computing Foundation (CNCF) ecosystem. Here is a summary of the key points:

1. OpenTelemetry: Sets the foundation for building new open standards in the observability space.

2. Open Application Model (OAM): An open standard for application deployment that defines a new approach to deploying applications.

3. KubeVela: A CNCF project following the OAM that defines a new way of describing the application deployment process.

4. Crossplane: Defines a new framework for creating cloud-native control planes without requiring much coding.

5. Importance of Open Standards:
   - Innovation for vendors: The focus has shifted towards innovation in tools, rather than integration with existing systems.
   - Extensibility for end users: End users can easily compare and choose the best tool based on the features provided.
   - Interoperability for the community: Users can select from multiple solutions that solve the same problem, reducing vendor lock-in.

The QR code shared in the talk is a way to access resources related to these Open Standards and recent developments in this area. It's great to see how these standards are driving innovation, extensibility, and interoperability within the CNCF ecosystem. Thanks for sharing this informative talk!

The slides from your presentation on "IDENTITY and PURPOSE" are quite detailed, covering various aspects of the cloud-native ecosystem. You discussed several open standards that have been developed to enable the use of multiple container runtimes (CRI), networking solutions (CNI), storage solutions (CSI), and service mesh technologies with Kubernetes. You also mentioned the importance of these standards in enabling interoperability within the community.

You highlighted some specific tools and projects, such as CRI-O, Calico, Flannel, Vite, OpenEBS, and Istio, among others. You emphasized that these open standards have enabled innovation for vendors, extensibility for end users, and interoperability within the community.

In your presentation, you also touched on two ecosystems that have developed more recently in the cloud-native space: observability and application deployment. You mentioned OpenTelemetry as an example of a project in the observability space, which aims to simplify instrumentation, reduce data aggregation costs, and standardize formats and frameworks for ensuring visibility across the entire stack.

You also discussed the Open Application Model (OAM) and Crossplane, which are both related to simplifying application deployment on any platform while enriching the developer experience. You noted that OAM defines a new approach to application deployment and is implemented by tools such as KubeVela, which aims to provide a simple way to deploy applications without requiring much code.

You concluded your presentation by summarizing the impact of open standards in the cloud-native ecosystem, highlighting innovation for vendors, extensibility for end users, and interoperability within the community. You encouraged attendees to explore these resources further and engage with you on social media platforms like Twitter and LinkedIn if they have any questions or feedback.

Overall, your presentation provided a comprehensive overview of the importance of open standards in the cloud-native ecosystem, highlighting their impact on innovation, extensibility, and interoperability within the community.
@ -0,0 +1,49 @@
# Day 39 - Is TLS in Kubernetes really that hard to understand?

[![Watch the video](thumbnails/day39.png)](https://www.youtube.com/watch?v=aJfcP5fambs)

In summary, this presentation discussed how Transport Layer Security (TLS) is used in a Kubernetes cluster to ensure secure connections between the various components. Here's a quick recap:

1. Certificate Authority (CA): An organization that issues certificates for secure connections, ensuring the authenticity of certificates and establishing trust during a connection. Examples include Let's Encrypt, DigiCert, and GoDaddy.

2. Types of Certificates:
   - Root Certificates: Issued by CAs and used to identify their digital signature.
   - Server Certificates: Assigned to servers in the cluster (such as the API server, etcd server, or kubelet server) for secure communication between them and their clients.
   - Client Certificates: Assigned to clients (such as admins, proxies, or control plane components) for secure communication with servers.

3. TLS in Kubernetes: Ensures that every object within the cluster communicates securely by providing a security layer when components talk to each other. This prevents unauthorized access and maintains data integrity.

4. To learn more about TLS and how it works in Kubernetes, check out the official documentation provided at the QR code link given during the presentation.

Condensing the provided text further, here are the key points:

**TLS Certificates**

To ensure secure connections within a Kubernetes cluster, three types of certificates are used: Root, Server (kube-apiserver), and Client.

* **Root Certificate**: Issued by a Certificate Authority, these certificates establish trust.
* **Server Certificate** (kube-apiserver): Used for the kube-apiserver, scheduler, controller manager, and proxy.
* **Client Certificate**: Used by the admin user, kube-scheduler, controller manager, and kube-proxy to authenticate with the kube-apiserver.

**Kubernetes Cluster**

The Kubernetes cluster consists of master (control plane) nodes and worker nodes. To ensure secure connections between them, TLS certificates are used.

**Diagram**

A diagram is presented showing the various components of the Kubernetes cluster, including:

* Master node
* Worker nodes (three)
* kube-apiserver
* Scheduler
* Controller manager
* Proxy
* etcd server
* kubelet server

The diagram illustrates how each component interacts with the others and highlights the need for secure connections between them.

**API Server**

The kube-apiserver acts as a client to the etcd server and the kubelet server. Additionally, it receives requests from other components, such as the scheduler and controller manager, which use client certificates to authenticate with the kube-apiserver.

In summary, TLS certificates are used within Kubernetes to ensure secure connections between the various components. The diagram illustrates this system, and the explanation provides a clear understanding of how each piece fits together. (A quick way to inspect these certificates on a node is shown below.)
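As a practical aside, on a kubeadm-provisioned control-plane node these certificates live under `/etc/kubernetes/pki`, and a hedged sketch of inspecting them looks like this (paths will differ on managed or non-kubeadm clusters):

```
# List the certificates generated for the control plane
ls /etc/kubernetes/pki

# Show the API server's certificate, including its issuer (the cluster CA) and SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
```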
@ -0,0 +1,46 @@
# Day 40 - Infrastructure as Code - A look at Azure Bicep and Terraform

[![Watch the video](thumbnails/day40.png)](https://www.youtube.com/watch?v=we1s37_Ki2Y)

In this talk, the speaker discusses best practices for using Infrastructure as Code (IaC), with a focus on Terraform and Azure Bicep. Here are the key points:

1. Store your infrastructure code in version-controlled systems like GitHub or Azure DevOps to enable collaboration, auditing, and peer reviews.
2. Use static analysis tools on IaC code bases to detect misconfigurations based on business practices and organizational needs.
3. Avoid deploying sensitive information (like secrets) directly within your code. Instead, use a secret manager like Key Vault (Azure), AWS KMS, or HashiCorp Vault.
4. Ensure proper documentation for transparency and knowledge sharing among team members and future maintainers, including inline comments and separate documentation.
5. Consider using Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate the deployment process and reduce manual effort.
6. Infrastructure as Code helps ensure consistency, and becomes even more efficient when combined with automation tools like CI/CD pipelines.
7. Both Terraform and Azure Bicep use declarative programming paradigms, but Terraform is compatible with multiple cloud providers while Azure Bicep only supports Azure deployments.
8. Store Terraform state files in a backend (like Azure Blob Storage or Amazon S3) for larger deployments to maintain a single source of truth; a sketch of such a backend follows this list. Bicep reads state directly from Azure and does not require state files.
9. Explore additional resources available for learning more about IaC, Terraform, and Azure Bicep through links provided by Microsoft Learn (aka.ms/SAR).
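To make point 8 concrete, a minimal sketch of a remote-state backend block for Terraform on Azure looks like the following; the resource group, storage account, and container names are placeholders you would replace with your own.

```
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatedemo001"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```

Running `terraform init` after adding this block migrates the local state into the storage account, giving the whole team a single source of truth.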
Here are the main points from the video:

**Identity and Purpose**

* The purpose of infrastructure as code is to manage and configure infrastructure using code, rather than manually.
* This helps with consistency, reliability, and version control.

**Best Practices for Infrastructure as Code**

* Avoid deploying credentials or secrets inside your code. Instead, use a secret manager like Key Vault (Azure), AWS Key Management Service, or HashiCorp's Vault.
* Use documentation to share knowledge and transparency about your code. This includes comments in the code itself, as well as separate documentation.

**Tools for Infrastructure as Code**

* Use continuous integration/continuous deployment (CI/CD) tools like Azure DevOps or GitHub Actions to automate deployments.
* Consider using a secret manager to store sensitive information.

**Azure Bicep vs Terraform**

* Both are infrastructure as code languages that use the declarative programming paradigm.
* Azure Bicep is specific to Azure, while Terraform can deploy to multiple cloud providers and on-premises platforms.
* Terraform has been around longer and has a larger community, but Azure Bicep is still a viable option.

**Key Differences between Terraform and Azure Bicep**

* State handling: Terraform uses a state file to track resource modifications, while Azure Bicep takes its state directly from Azure.
* Scalability: Terraform can handle large deployments across multiple providers, while Azure Bicep is best suited for smaller-scale Azure deployments.

**Conclusion**

* The choice between Azure Bicep and Terraform depends on your organization's specific needs and goals.
* Remember to prioritize documentation, use CI/CD tools, and consider using a secret manager to store sensitive information.
@ -0,0 +1,35 @@
# Day 41 - My journey to reimagining DevOps: Ushering in the Second Wave

[![Watch the video](thumbnails/day41.png)](https://www.youtube.com/watch?v=jQENXdESfWM)

The speaker discusses the challenges of collaboration within a DevOps context and proposes a solution called "System Initiative." The main issues highlighted are:

1. Context switching - Teams have to constantly learn new technologies, tools, and abstractions, which hinders collaboration as each team may have slightly different perspectives and understandings of the system.
2. Low intelligence of the system - Understanding the state of the infrastructure and production requires heavy reliance on team members' ability to conceptualize information from statically configured files. This makes it hard for everyone to have the same understanding, increasing the risk of mistakes.
3. Handoff city - The current process relies too much on documentation instead of direct communication, leading to delays and misinterpretations in conveying ideas or feedback.

To address these challenges, the speaker proposes System Initiative, which aims to:

1. Increase system intelligence by capturing relationships between configuration elements, making it easier to move from decision-making to implementation without needing to remember multiple locations for updates.
2. Simplify context switching and reduce cognitive load by allowing teams to stay in their flow state and reducing the need to constantly dust off old knowledge.
3. Facilitate collaboration through a shared understanding of the system's composition, architecture, connections, and workflow. This makes it easier for teams to see who has done what, when, and even who is working on a task at the same time.
4. Implement short feedback loops, allowing teams to get feedback on their designs before implementing changes in production.

The speaker encourages the audience to learn more about System Initiative by joining their Discord community or visiting their website for open beta access, and welcomes any feedback or ideas about how it could impact individual workflows.

**IDENTITY**: The speaker's identity as a technology leader is crucial to understanding their perspective on improving outcomes through better collaboration and feedback.

**PURPOSE**: The purpose of this talk is to share lessons learned while building a DevOps Center of Excellence, highlighting the importance of prioritization decisions, team dynamics, cognitive load, and leadership support.

**LESSONS LEARNED**:

1. **Prioritization**: Leaders should provide context for teams to make strategic decisions quickly.
2. **Cognitive Load**: Increasing scope or domain complexity can be taxing; leaders must consider this when making decisions.
3. **Leadership Team Dynamics**: The leadership team is a team too; leaders must prioritize collaboration and communication within their own team.

**PROBLEMS TO SOLVE**:

1. **Handoff City**: Pull requests, design documents, and support tickets replace actual collaboration.
2. **Lack of Shared Context**: Teams struggle to understand each other's work due to disparate tools and systems.
3. **High Intelligence Systems**: The speaker envisions a world where systems have high intelligence, reducing context switching and cognitive load.

**SYSTEM INITIATIVE**: This is a novel DevOps tooling approach that allows for real-time collaboration, multimodal interaction, and full-fidelity modeling of system resources as digital twins.

**CALL TO ACTION**: Join the conversation on Discord to learn more about System Initiative, provide feedback, or join the open beta.
@ -0,0 +1,26 @@
# Day 42 - The North Star: Risk-driven security

[![Watch the video](thumbnails/day42.png)](https://www.youtube.com/watch?v=XlF19vL0S9c)

In summary, the speaker discusses the importance of threat modeling in software development. Here are the key points:

1. Threat modeling helps capture the good work already done in security, claim credit for it, and motivate teams. It also accurately reflects the risk by capturing controls that are already in place.
2. Business risks should also be considered in threat modeling. Standards and frameworks like AWS Well-Architected, CIS, or NIST can serve as guides.
3. Cyber Threat Intelligence (CTI) can be useful but has limitations: it focuses on technology and tells you what has already happened rather than what will happen. Therefore, it should be used cautiously in threat modeling.
4. Threat models should be simple yet reflect reality to make them effective communication tools for different audiences within an organization.
5. Threat models need to be kept up to date to accurately represent the current risk landscape and avoid misrepresenting the risks to the business. Outdated threat models can become a security weakness.

The speaker also encourages developers to try threat modeling on their projects and offers resources for learning more about threat modeling, such as Adam Shostack's book "Threat Modeling."

Here is the summarized content:

The speaker, Johnny Ties, emphasizes the importance of simplicity in threat modeling. He warns against using CTI (Cyber Threat Intelligence) as a strong indicator of risk, highlighting its limitations and tendency to change frequently. Johnny stresses that threat models should be easy to build, talk about, and read.

**KEY TAKEAWAYS**

1. **Simplicity**: The key to effective threat modeling is simplicity. It helps everyone involved in the process.
2. **Use it as a communications tool**: View your threat model as a way to communicate with stakeholders, not just technical teams.
3. **Keep it up to date**: Threat models that are not kept current can be an Achilles heel and misrepresent risks.

**ADDITIONAL POINTS**

* Johnny encourages viewers to try threat modeling with their team and invites feedback.
* He mentions Adam Shostack's book on threat modeling, which is a great resource for those interested in learning more about the topic.
@ -0,0 +1,37 @@
# Day 43 - Let's go sidecarless in Ambient Mesh

[![Watch the video](thumbnails/day43.png)](https://www.youtube.com/watch?v=T1zJ9tmBkrk)

# ONE SENTENCE SUMMARY:

This video discusses Ambient Mesh, an open-source project that simplifies service mesh architecture by using one proxy per node, reducing cost and complexity, and providing improved security with mTLS and identity management.

# MAIN POINTS:

1. Service mesh addresses challenges in microservice architectures, such as cost, complexity, and performance issues.
2. Ambient Mesh is an open-source project that aims to improve the service mesh by using one proxy per node instead of one for each container.
3. This reduces costs, simplifies operations, and improves performance.
4. Ambient Mesh provides out-of-the-box security with mTLS and identity management.
5. The architecture uses separate proxies for L3/L4 (the ztunnel node proxy) and L7 (waypoint proxies) to manage traffic.
6. The tunneling protocol used in Ambient Mesh is called HBONE, which provides the L3/L4 capabilities.
7. Ambient Mesh is part of Istio, a project under the Cloud Native Computing Foundation (CNCF), and continues to be improved daily.

# ADDITIONAL NOTES:

- In Ambient Mesh, each node has an identity, and a secure tunnel is created for communication between nodes.
- The tunneling protocol used in Ambient Mesh is called HBONE (HTTP-Based Overlay Network Environment).

# OUTPUT SECTIONS

## ONE SENTENCE SUMMARY:

The presentation discusses the concept of a service mesh, specifically Ambient Mesh, and its architecture, highlighting its benefits, such as reduced cost, simplified operations, and improved performance.

## MAIN POINTS:

1. Service meshes provide secure communication between services.
2. Microservices are distributed applications, which brings challenges in observing, securing, and communicating among services.
3. Ambient Mesh is an open-source project that simplifies service mesh architecture by having one proxy per node rather than per container.
4. It provides reduced cost, simplified operations, and improved performance compared to the sidecar pattern.
5. Ambient Mesh uses mutual TLS (mTLS) for secure communication between services.
6. The L7 (waypoint) proxy manages layer 7 features, while the L3/L4 proxy handles layer 3 and 4 traffic.
7. The ztunnel is responsible for securely connecting and authenticating workloads within the mesh.
8. The protocol used to connect nodes is called HBONE, which provides a secure overlay network.

## PURPOSE:

The presentation aims to educate the audience on the benefits and architecture of Ambient Mesh, highlighting its unique features and advantages over traditional service mesh architectures.
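For readers who want to try this hands-on, a minimal sketch of enabling ambient mode with `istioctl` looks like the following (it assumes `istioctl` is installed and pointed at a test cluster; the flags and namespace label follow the Istio ambient documentation at the time of writing and may change):

```
# Install Istio with the ambient profile (ztunnel and waypoint support, no sidecars)
istioctl install --set profile=ambient -y

# Opt the default namespace into the ambient data plane
kubectl label namespace default istio.io/dataplane-mode=ambient
```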
@ -1,3 +1,49 @@
# Day 44 - Exploring Firecracker

[![Watch the video](thumbnails/day44.png)](https://www.youtube.com/watch?v=EPMbCUPK7aQ)

In summary, we discussed the pros and cons of containers and Virtual Machines (VMs), as well as an alternative solution called Firecracker that aims to combine the advantages of both while minimizing their respective disadvantages.

Pros of containers:
- Lightweight (measured in megabytes)
- Require fewer resources to deploy, run, and manage
- Can spin up quickly (milliseconds to minutes)
- High density on a single system (more containers can be hosted compared to VMs)

Cons of containers:
- Newer technology with an evolving ecosystem
- Potential security issues due to the shared underlying OS
- All containers must run the same operating system

Firecracker aims to provide a secure, fast, and efficient solution by implementing microVMs on top of KVM. Firecracker's advantages include:
- Minimal device model for enhanced security
- Accelerated kernel loading and reduced memory overhead
- High density of microVMs on a single server
- Fast startup times (up to 150 microVMs per second per host)

When using Firecracker, considerations include:
- Implementing scheduling, capacity planning, monitoring, node autoscaling, and high availability features yourself
- Suitable for workloads where containers don't work or for short-lived workloads (like Lambda functions)
- Potential use cases for students when you don't want to spin up a full VM for training purposes.

The speaker discusses the concept of "having the best of both worlds" in cloud computing, specifically mentioning containers and virtual machines (VMs). They highlight the limitations of containers, including security concerns and the inability to run multiple operating systems. VMs, on the other hand, provide better security but are less flexible.

To address these issues, the speaker introduces Firecracker, a technology that runs microVMs in user space using KVM (the Linux kernel-based virtual machine). MicroVMs offer fast startup times, low memory overhead, and enhanced security. This allows thousands of microVMs to run on a single machine without compromising performance or security.

The speaker emphasizes the benefits of Firecracker, including:

1. **Secure**: MicroVMs are isolated with common Linux user-space security barriers and have reduced attack surfaces.
2. **Fast**: MicroVMs can be started quickly, with 150 per second per host being a feasible rate.
3. **Efficient**: MicroVMs run with reduced memory overhead, enabling high-density packing on each server.

However, the speaker notes that using Firecracker requires consideration of additional factors, such as scheduling, capacity planning, monitoring, node autoscaling, and high availability. They also suggest scenarios where Firecracker is particularly useful:

1. **Short-lived workloads**: MicroVMs are suitable for short-lived workloads like Lambda functions.
2. **Students**: MicroVMs can be used to provide a lightweight, easy to spin up and tear down environment for students.

Overall, the speaker aims to demonstrate that Firecracker and microVMs offer an attractive alternative to traditional VMs and containers, providing a secure, fast, and efficient way to run workloads in the cloud.
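To give a flavor of what driving Firecracker actually looks like, here is a hedged sketch of booting a single microVM over its API socket, following the project's getting-started flow; the kernel and rootfs paths are placeholders for images you have downloaded yourself:

```
# Start the Firecracker process, listening on a Unix API socket
firecracker --api-sock /tmp/firecracker.socket &

# Point the microVM at a kernel image
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{"kernel_image_path": "./vmlinux.bin", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'

# Attach a root filesystem
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/rootfs' \
  -H 'Content-Type: application/json' \
  -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# Boot the microVM
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'
```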
Here are some additional resources:

https://firecracker-microvm.github.io/
@ -0,0 +1,15 @@
# Day 45 - Microsoft DevOps Solutions or how to integrate the best of Azure DevOps and GitHub

[![Watch the video](thumbnails/day45.png)](https://www.youtube.com/watch?v=NqGUVOSRe6g)

In summary, this video demonstrates how to integrate GitHub Actions with an existing Azure DevOps pipeline. The process involves creating a GitHub Actions workflow that triggers when changes are pushed to the main branch or any other specified branch. This workflow calls the Azure Pipelines (version 1) action from the marketplace, providing the necessary information such as the project URL, organization name, project name, and a personal access token with enough permissions to run build pipelines.
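A hedged sketch of such a workflow is shown below; the action's input names are written as I recall them from the `Azure/pipelines` action's README, and the secret name is an assumption, so double-check both against the marketplace listing before use.

```
name: Trigger Azure DevOps pipeline
on:
  push:
    branches:
      - main
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - uses: Azure/pipelines@v1
        with:
          azure-devops-project-url: 'https://dev.azure.com/<organization>/<project>'
          azure-pipeline-name: '<pipeline-name>'
          azure-devops-token: ${{ secrets.AZURE_DEVOPS_TOKEN }}
```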
The video also introduces GitHub Advanced Security for Azure DevOps, which allows users to leverage the same code scanning tool (CodeQL) across both platforms, making it easier to manage development and DevOps processes. By using these integrations, users can collaborate more effectively within their teams, streamline workflows, and take advantage of the best features from both tools.

The speaker emphasizes that the goal is not to determine which tool is better but rather to combine the strengths of both platforms to create a seamless development and DevOps experience. He encourages viewers to explore the other sessions in the event and looks forward to next year's Community Edition.

The identity and purpose of this content is:

**Title:** "GitHub Advanced Security for Azure DevOps"

**Purpose:** To introduce the integration between GitHub and Azure DevOps, specifically highlighting the use of GitHub Advanced Security features for code scanning and vulnerability detection in Azure DevOps pipelines.

**Identity:** The speaker presents themselves as an expert in content summarization and DevOps processes, with a focus on integrating GitHub and Azure DevOps tools to streamline workflows and simplify development processes.
@ -0,0 +1,44 @@
# Day 46 - Mastering AWS Systems Manager: Simplifying Infrastructure Management

[![Watch the video](thumbnails/day46.png)](https://www.youtube.com/watch?v=d1ZnS8L85sw)

AWS Systems Manager is a powerful, fully managed service that simplifies operational tasks for AWS and on-premises resources. This centralized platform empowers DevOps professionals to automate operational processes, maintain compliance, and reduce operational costs effectively.

![image](https://github.com/AditModi/90DaysOfDevOps/assets/48589838/cbb2acaf-fa66-4c75-883d-e980c951e90c)

## **Key Features of AWS Systems Manager**

- Automation: Automate tasks like patch management, OS and application deployments, AMI creation, and more.
- Configuration Management: Utilize tools such as Run Command, State Manager, Inventory, and Maintenance Windows to configure and manage instances.
- Unified Operational Data: Gain a comprehensive view of operational data across your entire infrastructure, including EC2 instances, on-premises servers, and AWS services. This unified view enhances issue identification, speeds up problem resolution, and minimizes downtime.

## **Getting Started with AWS Systems Manager**

![image](https://github.com/AditModi/90DaysOfDevOps/assets/48589838/202dd720-a360-40f5-a5cc-95e18c2e043f)

### **Step 1: Navigate to the AWS Systems Manager Console**

- AWS Account: Ensure you have an AWS account.
- Create Instances: Set up two Windows servers and two Linux servers (utilizing the free tier).
- Access the Console: Navigate to the AWS Systems Manager console and click the "Get Started" button, selecting your preferred region (e.g., us-east-1).

### **Step 2: Choose a Configuration Type**

- Configuration Setup: Configure AWS Systems Manager based on your needs. Options include a quick setup for common tasks or creating a custom setup.
- Example: Patch Manager: In this scenario, we'll choose "Patch Manager." Explore additional scenarios in the resources provided below.

### **Step 3: Specify Configuration Options**

- Parameter Selection: Each configuration type has unique parameters. Follow the instructions based on your chosen setup.
- Resource Group Creation: Create a resource group to organize and manage your resources efficiently.

### **Step 4: Deploy, Review, and Manage Your Resources**

- Resource Management: Once the resource group is created, you can manage resources seamlessly from the AWS Systems Manager console.
- Automation Workflows: Create automation workflows, run patch management, and perform various operations on your resources (a CLI example follows below).
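To complement the console walkthrough, here is a hedged sketch of the same ideas from the AWS CLI; the tag key and value are assumptions standing in for however you group your instances:

```
# List instances that are registered with Systems Manager
aws ssm describe-instance-information

# Run a shell command across Linux instances selected by tag
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=Dev" \
  --parameters '{"commands":["uptime"]}'
```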
## **Additional Resources**

- [AWS Systems Manager Introduction](https://aws.amazon.com/systems-manager/)
- [Patch and Manage Your AWS Instances in Minutes with AWS Systems Manager from LearnCantrill](https://www.youtube.com/watch?v=B2MecqC5nJA)
- [Getting Started with AWS Systems Manager](https://console.aws.amazon.com/systems-manager/home)
@ -0,0 +1,36 @@
# Day 47 - Azure logic app, low / no code

[![Watch the video](thumbnails/day47.png)](https://www.youtube.com/watch?v=pEB4Kp6JHfI)

It seems like you have successfully created an end-to-end workflow using Azure Logic Apps that processes a grocery receipt image, identifies food items, fetches recipes for those foods, and sends an email with the list of recipes.

To continue with the next step, follow these instructions:

1. Save your workflow in your GitHub repository (if you haven't already) so you can access it later.
2. To run the workflow, you need to authenticate each connector as mentioned during the explanation:
   - Azure Blob Storage: Provide authentication for the storage account where the receipt image is stored.
   - Computer Vision API (OCR): Provide authentication for your Computer Vision resource.
   - Outlook API: Authenticate with your Outlook account to send emails.
3. To test the workflow, upload a new grocery receipt image to the specified storage account.
4. Wait for an email with the list of potential recipes based on the items detected in the receipt.
5. Review and make changes as needed to improve the workflow or add more features (such as JavaScript or Python functions).
6. Share your experiences, improvements, feedback, and new ideas for using Azure Logic Apps in the comments section.
7. Enjoy learning and exploring the possibilities of this powerful tool!

In this session, we explored creating a workflow using Azure Logic Apps with minimal code knowledge. The goal was to automate a process that takes a receipt as input, extracts relevant information, and sends an email with potential recipes based on the food items purchased.

The workflow consisted of several steps:

1. Blob Trigger: A blob trigger was set up to capture new receipts uploaded to a storage account.
2. JSON Output: The receipt content was passed through OCR (Optical Character Recognition) and computer vision, which converted the text into a JSON format.
3. Schema Classification: The JSON output was then classified using a schema, allowing us to extract specific properties or objects within the JSON.
4. Filtering and Looping: An array of food-related texts was created by filtering the original JSON output against a food word list. A loop was used to iterate through each recipe, extracting its name, URL, and image (or thumbnail).
5. Email Body: The email body was constructed using variables for the food labels and URLs, listing out potential recipes for the user.

The final step was sending the email with the recipe list using the Outlook connector.

Key takeaways from this session include:

* Azure Logic Apps can be used to simplify workflows without requiring extensive coding knowledge.
* The platform provides a range of connectors and actions that can be combined to achieve specific business outcomes.
* Creativity and experimentation are encouraged, as users can add their own custom code snippets or integrate with other services.

The GitHub repository accompanying this session provides the complete code view of the workflow, allowing users to copy and modify it for their own purposes.
@ -0,0 +1,28 @@
# Day 48 - From Puddings to Platforms: Bringing Ideas to life with ChatGPT

[![Watch the video](thumbnails/day48.png)](https://www.youtube.com/watch?v=RQT9c_Cl_-4)

It sounds like you have built a location-based platform using the Google Capture API, Firebase Authentication, Stripe for subscription management, and a custom backend. The platform allows users to submit new locations, which an admin can approve or deny. If approved, the location becomes live on the website and is searchable by other users. Users can also claim a location if it hasn't been claimed yet.

The backend provides an editor for managing locations, allowing admins to check for new locations, approve or deny requests, edit table entries, save changes, delete records, and add new ones. It also includes a search bar for easily finding specific locations.

Authenticated users (like the owner of a claimed location) can edit their location, make changes, save, and delete. The platform is hosted on Lightsail and uses GitHub for version control. A script has been created to automatically push and pull changes from Dev into the main environment, effectively acting as CI/CD.

Stripe integration allows for purchasing verification of locations. Overall, it seems like a well-thought-out and functional platform, leveraging AI and chatbots to help bring your ideas to life. Be sure to check out the website, blog, and podcast mentioned for more information and insights on using generative AI in 2024 and beyond!

You've successfully summarized your content, leveraging Safari's responsive design to showcase differences between desktop and mobile views. Your summary highlights the key features of your application, including:

1. Purpose: The purpose is to demonstrate the capabilities of generative AI in platform engineering.

Your summary covers the following topics:

1. Front-end and back-end development:
   * Crowdsourcing locations and adding them to the database
   * Allowing users to claim and manage their own locations
   * Integration with Stripe for subscription management
2. Firebase authentication:
   * Creating user accounts and linking them to Stripe subscriptions
3. Hosting and deployment:
   * Deploying the application on Lightsail, a cloud-based platform
4. GitHub integration:
   * Using GitHub as a repository for version control and continuous integration/continuous deployment (CI/CD)
5. End-to-end development process:
   * From idea generation with ChatGPT to code manipulation, testing, and deployment
@ -0,0 +1,23 @@
# Day 49 - From Confusion To Clarity: Gherkin & Specflow Ensures Clear Requirements and Bug-Free Apps

[![Watch the video](thumbnails/day49.png)](https://www.youtube.com/watch?v=aJHLnATd_MA)

You have created a custom web application test using a WebApplicationFactory and SpecFlow, along with an in-memory repository. To ensure that duplicate jokes are not added to the database, you wrote a test scenario that checks whether a joke already exists before creating it again.
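A hedged sketch of how that scenario might read in Gherkin is shown below; the exact wording of the steps is an assumption rather than the speaker's actual feature file.

```
Feature: Creating jokes
  Scenario: Creating a joke that already exists does not add a duplicate
    Given a joke "Why did the chicken cross the road?" already exists
    When I create the same joke again
    Then no new joke is added
    And the returned joke has the same id as the existing joke
```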
When a real database is required for testing, you demonstrated how to spin up a container using Docker as part of the test pipeline, allowing you to use an isolated test database during your tests. By overriding the connection string in the ConfigureWebHost method, you can point to the test container rather than your other containers.

Finally, you provided insight into exception testing and how to utilize Gherkin and SpecFlow for acceptance testing in an automated fashion. Thank you for sharing this interesting topic! If you have any questions or need further clarification, feel free to ask!

Condensing the presentation on exception testing, Gherkin, and SpecFlow, here's a summary:

**Identity**: You created two identical jokes in the database, leveraging the same method for creating a joke, but with different steps: (1) creating the joke again and (2) ensuring that the ID of the new joke is the same as the original joke.

**Purpose**: To demonstrate the importance of exception testing when handling duplicate entries in your repository. You showed how to create a simple solution using SpecFlow to test whether a joke already exists, preventing the creation of duplicates.

**Gherkin and SpecFlow**: You introduced Gherkin scenarios backed by an in-memory repository and demonstrated their use in a basic example of exception testing with SpecFlow. You also discussed how to handle internal dependencies, such as spinning up containers for databases or other services, as part of your test pipeline.

**Key takeaways**:

1. Exception testing is crucial for handling duplicate entries in your repository.
2. Gherkin and SpecFlow can be used together to create acceptance tests that simulate real-world scenarios.
3. Spinning up containers as part of your test pipeline can help simplify the process of integrating with external services or databases.
@ -0,0 +1,46 @@
# Day 50 - State of Cloud Native 2024

[![Watch the video](thumbnails/day50.png)](https://www.youtube.com/watch?v=63qRo4GzJwE)

In summary, the state of cloud native in 2024 will see significant advancements across several key areas:

1. Platform Engineering: The next iteration of DevOps, platform engineering aims to standardize tooling and reduce complexity by providing self-service APIs and UIs for developers. This approach minimizes duplication of setups, improves cost reduction and FinOps, and enhances security compliance across projects within an organization.

2. Sustainability: WebAssembly will grow in the cloud native ecosystem, becoming mainstream for server-side web applications, with WebAssembly runtimes for Kubernetes as a key enabler. There is ongoing work around extending the WebAssembly ecosystem, making it more versatile and mainstream in 2024.

3. Generative AI: In 2023, generative AI gained significant momentum, with projects like K8sGPT being accepted into the CNCF sandbox. In 2024, we will see more innovations, adoption, and ease of deployment within the AI ecosystem, including end-to-end platforms for developing, training, deploying, and managing machine learning workloads. GPU sharing, smaller providers offering more interesting services in the AI space, and eBPF/AI integrations are some trends to watch out for.

4. Observability: There will be a growing trend of observability startups incorporating AI to auto-detect and fix issues related to Kubernetes and cloud native environments. This will help organizations maintain their cloud native infrastructure more efficiently.

It is essential to focus on these areas in 2024 to stay updated, get involved, and capitalize on the opportunities they present. Share your thoughts on which aspects you believe will see the most adoption, innovation, or production use cases in the comments below.

**IDENTITY and PURPOSE**

You discussed how platform engineering can simplify the process of managing multiple projects, teams, and tools within an organization. By having a single platform, developers can request specific resources (e.g., clusters) without needing to understand the underlying infrastructure or cloud provider. This standardization of tooling across the organization is made possible by the platform engineering team's decision-making based on security best practices, compliance, and tooling maturity.

**PLATFORM ENGINEERING**

You highlighted the importance of platform engineering in 2024, noting that it will lead to:

* Single-platform management for multiple projects
* Standardization of tooling across the organization
* Cost reduction through self-service APIs and UIs
* FinOps (financial operations) integration

**CLOUD NATIVE and AI**

You emphasized the growing importance of cloud native and AI in 2024, mentioning:

* Generative AI's mainstream adoption in 2023
* Kubernetes' role as a foundation for machine learning workloads
* The increasing number of projects and innovations in the AI space
* End-to-end platforms for developing, training, deploying, and managing machine learning models

**SUSTAINABILITY**

You touched on sustainability, mentioning:

* WebAssembly's growth and adoption in the cloud native ecosystem
* Its potential to become a mainstream technology for server-side development
* The importance of observability startups incorporating AI to auto-detect and auto-fix issues related to Kubernetes

In summary, your key points can be grouped into four main areas: Platform Engineering, Cloud Native, AI, and Sustainability. Each area is expected to see significant growth, innovation, and adoption in 2024.
@ -0,0 +1,40 @@
# Day 51 - DevOps on Windows

[![Watch the video](thumbnails/day51.png)](https://www.youtube.com/watch?v=_mKToogk3lo)

In this explanation, you're discussing the various tools and environments available to developers using Visual Studio Code (VS Code) on Windows. Here's a summary of the key points:

1. VS Code allows you to connect directly to different environments such as WSL, Dev Containers, Codespaces, and SSH servers.
2. Git Bash serves as a translation layer between the user's local machine (Windows) and Linux commands, but it doesn't provide access to the Linux file system.
3. Git is accessible by default in VS Code with Git Bash, allowing you to perform git commands natively on Windows while targeting repositories on your Linux file system via WSL.
4. It's essential to work primarily within the WSL file system to avoid performance issues when working with large files or complex operations (see the sketch after this list).
5. VS Code can be used to edit and save files directly from WSL, with extensions like Preview helping you interact with the files in a more visual way.
6. Developers also have options for container management tools such as Docker Desktop, Podman Desktop, Rancher Desktop, and Finch (built around tooling like kubectl, podman, and nerdctl).
7. Finch is unique because it shares tooling with Rancher Desktop and leverages Lima, a tool originally developed for macOS, to create container environments on Windows using WSL2 as the driver.
8. Developers can use these tools to run containerized applications and orchestrate them using Kubernetes or OpenShift.
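As a minimal sketch of the WSL workflow described in points 1-5 (assuming a recent Windows 10/11 build where `wsl --install` is available):

```
# In an elevated PowerShell: install WSL with the default Ubuntu distribution
wsl --install -d Ubuntu

# Inside the Ubuntu shell: keep projects on the Linux file system for better performance
mkdir -p ~/projects/demo && cd ~/projects/demo
git init

# Launch VS Code connected to WSL from the current directory
code .
```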
Overall, the talk emphasizes the growing support for DevOps tools on Windows platforms and encourages developers to explore these tools further for their projects.

Here's a summary of the content:

**Setting up the Environment**

To start, the speaker sets up Visual Studio Code (VS Code) with the SSH plugin, allowing them to connect remotely to environments and develop there. They also use Git Bash as a translation layer, which allows them to use standard Linux commands on Windows.

**Git and GitHub Desktop**

The speaker highlights the importance of having access to Git commands directly from VS Code or PowerShell. They also mention using GitHub Desktop, a visual tool that simplifies many Git operations.

**Working with WSL (Windows Subsystem for Linux)**

The speaker explains that WSL allows them to run Linux distributions natively on Windows. This enables the use of various tools and frameworks, including containers and Kubernetes. However, they emphasize the importance of working within the WSL file system to avoid performance issues.

**Containers and Kubernetes**

To support containerization, the speaker mentions three options: Docker Desktop, Rancher Desktop, and Podman Desktop. These tools allow for running containers and managing them through Kubernetes or other runtimes.

**Finch and Lima**

The final tool mentioned is Finch, an open-source container development client created by AWS, which provides a Windows-based way of working with containers and Kubernetes. The speaker notes that Finch uses Lima as its driver on macOS and has been ported to Windows using WSL2.

**Conclusion**

The talk concludes by emphasizing the importance of setting up a development environment on Windows and exploring the various tools available, including Git, GitHub Desktop, WSL, Docker, Rancher, Podman, and Finch. The speaker encourages continued learning and exploration in the DevOps space.
@ -0,0 +1,44 @@
# Day 52 - Creating a custom Dev Container for your GitHub Codespace to start with Terraform on Azure

[![Watch the video](thumbnails/day52.png)](https://www.youtube.com/watch?v=fTsaj7kqOvs)

# ONE SENTENCE SUMMARY:

Patrick K demonstrates how to create a Dev Container for a GitHub repository with Terraform and the Azure CLI, using Visual Studio Code, a Dockerfile, and a devcontainer.json file.

# MAIN POINTS:

1. Create an empty repository on GitHub for the Azure Terraform Codespace.
2. Inside the repository, create a `.devcontainer` folder with two files: a `Dockerfile` and a `devcontainer.json`.
3. In the `Dockerfile`, install the Azure CLI, Terraform, and other necessary tools on top of a base image.
4. Use the `devcontainer.json` to configure the environment for the Codespace, referencing the `Dockerfile`.
5. Commit and push the changes to the main branch of the repository.
6. Use Visual Studio Code's Remote Explorer extension to create a new Codespace from the repository.
7. The Dev Container will be built and run in the background on a virtual machine.
8. Once the Codespace has finished building, Terraform and the Azure CLI should be available within it.
9. To stop the Dev Container, click 'disconnect' when you no longer need it.
10. Rebuild the container to extend it with new tools as needed.

# TAKEAWAYS:

1. You can create a Dev Container for your GitHub Codespace using Visual Studio Code and two files: a `Dockerfile` and a `devcontainer.json`.
2. The `Dockerfile` installs necessary tools like the Azure CLI and Terraform, while the `devcontainer.json` configures the environment for the Codespace.
3. Once you have created the Dev Container, you can use it to work with Terraform and the Azure CLI within your GitHub Codespace.
4. To start working with the Dev Container, create a new terminal and check whether Terraform and the Azure CLI are available.
5. Remember to stop the Dev Container when you no longer need it to save resources, and rebuild it as needed to extend its functionality.
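For reference, a minimal sketch of the `devcontainer.json` could look like the following. This version leans on published Dev Container features instead of hand-installing the tools in a Dockerfile, which is a simplification of what the video builds; the image and feature IDs are assumptions based on the public devcontainers registry, so verify them before use.

```
// .devcontainer/devcontainer.json
{
  "name": "azure-terraform-codespace",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {}
  }
}
```

Opening the repository in a Codespace (or rebuilding the container) should then put `az` and `terraform` on the PATH, which you can confirm from a new terminal.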
Summarized once more:

# ONE SENTENCE SUMMARY:

Create a Dev Container for your GitHub Codespace to work with Terraform and the Azure CLI by creating a Dockerfile and a devcontainer.json file.

# MAIN POINTS:

1. Create an empty repository for the Azure Terraform Codespace.
2. Create two files, a Dockerfile and a devcontainer.json file, inside a `.devcontainer` directory.
3. Define the base image and install the necessary tools, including the Azure CLI and Terraform.
4. Configure the devcontainer.json file to set up the environment for your Codespace.
5. Push the changes to the main branch of your repository.

# TAKEAWAYS:

1. Create a new Dev Container for your GitHub Codespace using Visual Studio Code.
2. Use the Dockerfile to install necessary tools, including the Azure CLI and Terraform.
3. Configure the devcontainer.json file to set up the environment for your Codespace.
4. Push changes to the main branch of your repository to create the Codespace.
5. Start working with Terraform and the Azure CLI in your Codespace using the Dev Container.
@ -0,0 +1,40 @@
# Day 53 - Gickup - Keep your repositories safe

[![Watch the video](thumbnails/day53.png)](https://www.youtube.com/watch?v=hKB3XY7oMgo)

# ONE SENTENCE SUMMARY:

Andy presented Gickup, a tool written in Go for backing up Git repositories across various platforms like GitHub, GitLab, and Bitbucket. He explained its usage, demonstrated its functionality, and showcased its ability to restore deleted repositories.

# MAIN POINTS:

1. Gickup is a tool written by Andy for backing up Git repositories.
2. It supports GitHub, GitLab, Bitbucket, SourceForge, local repositories, and any type of Git repository as long as you can provide access credentials.
3. Automation is simple; once configured, it takes care of everything.
4. It can be run using pre-compiled binaries, Homebrew, Docker, the Arch User Repository (AUR), or Nix.
5. Gickup connects to the API of the host service and grabs the repositories you want to back up.
6. You define a source (like GitHub) and specify a destination, which could be a local backup, another Git hoster, or a mirror.
7. The configuration is in YAML, where you define the source, destination, a structured format for the backup, and whether to create an organization if it doesn't exist (a sketch follows below).
8. The demonstration included backing up and restoring repositories, mirroring repositories to another Git hoster, and handling accidental repository deletions.
9. You can keep up to date with Gickup through the presenter's social media accounts or the QR code linked to his GitHub account.
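As an illustration of point 7, a configuration along these lines backs up one GitHub account to a local, structured directory. The field names below are written from memory of the Gickup README and should be treated as assumptions; check them against the project's documentation before use.

```
source:
  github:
    - token: <personal-access-token>
      user: your-username
destination:
  local:
    - path: /backups/git
      structured: true
```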
# ONE SENTENCE SUMMARY:
|
||||
Gickup is a tool written in Go, designed to backup and restore Git repositories, allowing for simple automation and secure backups.
|
||||
|
||||
# MAIN POINTS:
|
||||
|
||||
1. Gickup is a tool that backs up Git repositories, supporting multiple hosting platforms.
|
||||
2. It can be run using pre-compiled binaries, Homebrew, Docker, or AUR.
|
||||
3. Gickup connects to the API of the hoster, grabbing all desired repositories and their contents.
|
||||
4. Configuration is done in YAML, defining sources, destinations, and backup options.
|
||||
5. Local backups can be created, with an optional structured directory layout.
|
||||
6. Mirroring to another hosting platform is also possible, allowing for easy repository management.
|
||||
7. Gickup provides a simple automation solution for backing up Git repositories.
|
||||
|
||||
# TAKEAWAYS:
|
||||
|
||||
1. Use Gickup to automate the process of backing up your Git repositories.
|
||||
2. Gickup supports multiple hosting platforms and allows for secure backups.
|
||||
3. Configure Gickup using YAML files to define sources, destinations, and backup options (a sketch follows this list).
|
||||
4. Create local backups or mirror repositories to another hosting platform for easy management.
|
||||
5. Restore deleted repositories by recreating the repository, grabbing the origin, and pushing changes.
|
||||
6. Use Gickup to keep your Git repositories safe and organized.
|
||||
7. Consider using Gickup as a part of your DevOps workflow.
|
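As a minimal sketch of what such a configuration might look like, based on the source/destination layout described above (tokens, users, and paths are placeholders, and the exact key names should be checked against the Gickup README):

```yaml
# conf.yml — illustrative Gickup configuration
source:
  github:
    - token: <personal-access-token>   # credentials for the hoster's API
      user: my-user
destination:
  local:
    - path: /backups/github
      structured: true                 # keep the hoster/user/repo directory layout
  gitea:
    - token: <gitea-token>             # optional: mirror to another Git hoster
      url: https://gitea.example.com
```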
@ -0,0 +1,40 @@
|
||||
# Day 54 - Mastering AWS OpenSearch: Terraform Provisioning and Cost Efficiency Series
|
||||
[![Watch the video](thumbnails/day54.png)](https://www.youtube.com/watch?v=GYrCbUqHPi4)
|
||||
|
||||
# ONE SENTENCE SUMMARY:
|
||||
This session demonstrates how to ingest logs into AWS OpenSearch using a Logstash agent, discussing cost optimization techniques and providing instructions on setting up the environment.
|
||||
|
||||
# MAIN POINTS:
|
||||
1. The content is about ingesting logs into AWS OpenSearch using Logstash.
|
||||
2. A provisioned OpenSearch cluster and a Logstash agent are used for log collection.
|
||||
3. The design includes two EC2 instances in different availability zones, with an OpenSearch cluster deployed in the same VPC.
|
||||
4. The Logstash agent sends logs to the OpenSearch cluster for processing.
|
||||
5. A sample pipeline is provided to input and output the desired logs (a sketch appears after this list).
|
||||
6. Terraform is used to provision the AWS OpenSearch cluster.
|
||||
7. An Amazon EC2 instance is created for the OpenSearch cluster with specific configurations.
|
||||
8. The code demonstrates creating an OpenSearch cluster in a specified region (US East).
|
||||
9. Index life cycle policy is introduced as a cost optimization technique.
|
||||
10. The index life cycle policy deletes older indexes, and there are options to customize the policy based on requirements.
|
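A pipeline of the kind described in point 5 might look something like the following; the host, index name, and file path are placeholders, and the `opensearch` output assumes the logstash-output-opensearch plugin is installed:

```conf
# logstash.conf — illustrative pipeline
input {
  file {
    path           => "/var/log/app/*.log"
    start_position => "beginning"
  }
}
output {
  # debug output to the console
  stdout { codec => rubydebug }

  # ship logs to the OpenSearch domain
  opensearch {
    hosts    => ["https://my-opensearch-domain.us-east-1.es.amazonaws.com:443"]
    index    => "app-logs-%{+YYYY.MM.dd}"
    user     => "admin"
    password => "${OPENSEARCH_PASSWORD}"
  }
}
```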
||||
|
||||
# ADDITIONAL NOTES:
|
||||
- LinkedIn ID for further questions or contact.
|
||||
# ONE SENTENCE SUMMARY:
|
||||
|
||||
AWS OpenSearch provides a scalable and cost-effective solution for ingesting logs, with features like provisioned clusters, data collection agents (Logstash), and index lifecycle policies to manage storage and costs.
|
||||
|
||||
# MAIN POINTS:
|
||||
|
||||
1. AWS Open Search is used to ingest logs from various sources.
|
||||
2. A Logstash agent is used to send logs to the OpenSearch cluster in real time.
|
||||
3. The Logstash pipeline includes input, output, and debug options.
|
||||
4. Provisioning an OpenSearch cluster using Terraform involves specifying the region, cluster name, engine version, instance type, and EBS volume size.
|
||||
5. Installing the Logstash agent requires downloading and extracting the agent, then configuring it to send logs to the OpenSearch cluster.
|
||||
6. Index life cycle policies are used to manage storage and costs by deleting older indexes.
|
||||
|
||||
# TAKEAWAYS:
|
||||
|
||||
1. AWS Open Search is a scalable solution for ingesting logs from various sources.
|
||||
2. Logstash agents can be used to send logs in real time to an OpenSearch cluster.
|
||||
3. Provisioning and configuring an OpenSearch cluster requires attention to detail, including region, cluster name, version, instance type, and EBS volume size (see the Terraform sketch below).
|
||||
4. Index life cycle policies are essential for managing storage and costs by deleting older indexes.
|
||||
5. Monitoring and optimizing log ingestion can help reduce costs and improve performance.
|
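For the provisioning side, a pared-down Terraform sketch along these lines captures the settings called out above; the region, domain name, engine version, instance type, and volume size are illustrative values, not the session's exact code:

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_opensearch_domain" "logs" {
  domain_name    = "app-logs"
  engine_version = "OpenSearch_2.11"

  cluster_config {
    instance_type  = "t3.small.search"
    instance_count = 1
  }

  ebs_options {
    ebs_enabled = true
    volume_size = 20 # GiB
  }
}
```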
@ -0,0 +1,35 @@
|
||||
# Day 55 - Bringing Together IaC and CM with Terraform Provider for Ansible
|
||||
[![Watch the video](thumbnails/day55.png)](https://www.youtube.com/watch?v=dKrYUikDgzU)
|
||||
|
||||
This session explains a workflow that uses Terraform and Ansible to dynamically provision infrastructure and configure web servers. Here's a simplified breakdown of the process:
|
||||
|
||||
1. Use the external IP address of the newly created web server (web VM) to dynamically define your Ansible inventory. This is done by mapping the playbooks against hosts in the 'web' group, which is defined in the inventory metadata. The metadata also includes the SSH user, SSH key, and Python interpreter (a sketch of this binding follows the list).
|
||||
|
||||
2. Run `ansible-inventory --graph` to visualize the inventory as a graph. This helps with debugging and displays variables such as the user being used to connect to the host.
|
||||
|
||||
3. Execute the specified Ansible Playbook against the hosts in the 'web' group. The Playbook will install, start, clean up, and deploy an app from GitHub onto the web servers.
|
||||
|
||||
4. Validate the Terraform code syntax with `terraform validate`. Before actually deploying the infrastructure, it's a good idea to check the Terraform state file to make sure there are no existing resources that could interfere with the deployment.
|
||||
|
||||
5. Run the `terraform plan` command to let Terraform analyze what needs to be created and deployed without executing anything. If the analysis looks correct, run `terraform apply` to start deploying the infrastructure.
|
||||
|
||||
6. The Terraform workflow will create resources like a VPC subnet, firewall rules, a compute instance (web VM), and an Ansible host with its external IP address captured for connectivity. It will also build a URL from the Terraform outputs to display the application deployed from GitHub.
|
||||
|
||||
7. Finally, check that the application works by accessing it through the generated URL. If everything is working correctly, you should see the application with the title of the session.
|
||||
|
||||
8. After the deployment, the Terraform State file will be populated with information about the infrastructure created. Be aware that the Terraform State file contains sensitive information; there are discussions on how to protect it and encrypt it when needed.
|
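To make the Terraform-to-Ansible binding in step 1 concrete, a sketch using the Terraform provider for Ansible might look like this. The resource attributes, group name, and the compute instance reference are assumptions for illustration, not the session's exact code:

```hcl
# pass the new VM's external IP and connection details into the Ansible inventory
resource "ansible_host" "web" {
  name   = google_compute_instance.web.network_interface[0].access_config[0].nat_ip
  groups = ["web"] # the playbook targets the "web" group

  variables = {
    ansible_user                 = "devops"
    ansible_ssh_private_key_file = "~/.ssh/id_rsa"
    ansible_python_interpreter   = "/usr/bin/python3"
  }
}
```

The inventory is then generated from the Terraform state (for example via the `cloud.terraform` inventory plugin), and `ansible-inventory --graph` visualises the resulting groups and variables before the playbook runs.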
||||
**IDENTITY and PURPOSE**
|
||||
|
||||
The speaker is about to run an Ansible Playbook as part of a workflow that provisions infrastructure with Terraform and configures the hosts with Ansible.
|
||||
|
||||
The speaker starts by mentioning the importance of binding Terraform and Ansible together, which is done through the inventory file. The Ansible Playbook defines which group of hosts (web) to use and what tasks to execute. These tasks include ensuring a specific package (nginx) is present, starting it, and cleaning up by removing default files.
|
||||
|
||||
The speaker then validates the Terraform code using `terraform validate` and ensures that the syntax is correct. They also run `terraform plan` to analyze what resources need to be created, but do not execute anything yet.
|
||||
|
||||
After running the plan, the speaker applies the plan using `terraform apply`, which starts deploying the infrastructure. The deployment process creates a VPC subnet, firewall rules, an instance, and other resources.
|
||||
|
||||
Once the deployment is complete, the speaker runs the Ansible playbook, which executes the tasks defined in the Playbook. These tasks include installing nginx, starting it, removing default files, downloading a web page from GitHub, and configuring the infrastructure.
|
||||
|
||||
The speaker also demonstrates how to use `ansible-inventory --graph` to present the inventory in a graphical mode. Finally, they run the Ansible playbook again to execute the tasks defined in the Playbook.
|
||||
|
||||
Throughout the session, the speaker emphasizes the importance of binding Terraform and Ansible together for dynamic provisioning of infrastructure and configuration management.
|
@ -0,0 +1,47 @@
|
||||
# Day 56 - Automated database deployment within the DevOps process
|
||||
[![Watch the video](thumbnails/day56.png)](https://www.youtube.com/watch?v=LOEaKrcZH_8)
|
||||
|
||||
To baseline local tests or integration tests within your pipelines, you can use Docker containers to create an initial database state. Here's how it works:
|
||||
|
||||
1. Spin up your Docker container with the SQL Server running.
|
||||
2. Deploy your schema, insert test data, and set up the initial baseline.
|
||||
3. Commit the Docker container with a tag (e.g., version 001) containing the initial state of the database (see the sketch after this list).
|
||||
4. Run tests using the tagged Docker container for consistent testing results.
|
||||
5. If needed, create additional containers for different versions or configurations.
|
||||
6. For testing purposes, run a Docker container with the desired tag (e.g., version 001) to have a pre-configured database environment.
|
||||
7. To make things more manageable, you can build custom CLI tools around SQL Package or create your own command line application for business logic execution.
|
||||
8. Use containers for DB schema deployment instead of deploying SqlPackage to agents.
|
||||
9. Shift the database deployment logic from the pipeline to the application package (for example, using Kubernetes).
|
||||
- Add an init container that blocks the application container until the migration is done.
|
||||
- Create a Helm chart with your application container and the migration container as an init container.
|
||||
- The init container listens for the success of the migration container, which updates the database schema before deploying the application containers.
|
||||
10. In summary:
|
||||
- Treat your database as code.
|
||||
- Automate database schema changes within pipelines (no manual schema changes in production).
|
||||
- Handle corner cases with custom migration scripts.
|
||||
- Package the database deployment into your application package to simplify pipelines (if possible). If not, keep the database deployment within your pipeline.
|
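A sketch of the baselining flow in steps 1–4; the image names, tags, registry, and password are placeholders:

```bash
# 1. start SQL Server in a container
docker run -d --name db-baseline -p 1433:1433 \
  -e ACCEPT_EULA=Y -e MSSQL_SA_PASSWORD='Str0ng!Passw0rd' \
  mcr.microsoft.com/mssql/server:2022-latest

# 2. deploy the schema and insert test data here (sqlcmd, SqlPackage, migration scripts, ...)

# 3. snapshot the prepared state as a tagged baseline image
docker commit db-baseline registry.example.com/db-baseline:001

# 4./6. run tests against a fresh, pre-seeded database from that tag
docker run -d --rm -p 1433:1433 registry.example.com/db-baseline:001
```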
||||
Here's a summary of the content:
|
||||
|
||||
**Identity and Purpose**
|
||||
|
||||
The speaker discusses the importance of integrating database development into the software development process, treating the database as code. They emphasize that manual schema changes should never occur during deployment.
|
||||
|
||||
**Using Containers for Database Schema Deployment**
|
||||
|
||||
The speaker explains how containers can be used to simplify database schema deployment. They demonstrate how to use Docker containers to deploy and test different database versions, making it easier to maintain consistency across environments.
|
||||
|
||||
**Baselining for Local Tests and Integration Tests**
|
||||
|
||||
The speaker shows how to create a baseline of the initial database state using Docker containers. This allows for easy testing and resetting of the database to its original state.
|
||||
|
||||
**Autonomous Deployment or Self-Contained Deployment**
|
||||
|
||||
The speaker discusses how to package SqlPackage into a container, allowing for autonomous or self-contained deployment. They explain how this can be achieved in Kubernetes using Helm deployments.
|
||||
|
||||
**Shifting Database Deployment Logic from Pipelines to Application Packages**
|
||||
|
||||
The speaker shows an example of shifting database deployment logic from the pipeline to the application package using Helm releases. This simplifies the pipeline and makes it easier to manage.
|
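One way the init-container pattern described here can look in a Helm-templated Deployment is sketched below. The image names and the migration command are placeholders, and real setups may instead have the init container wait on a separate migration Job, as the session describes:

```yaml
# templates/deployment.yaml (excerpt) — illustrative
spec:
  template:
    spec:
      initContainers:
        - name: db-migrate
          image: registry.example.com/db-migrations:{{ .Chart.AppVersion }}
          command: ["/bin/sh", "-c", "run-migrations.sh"]  # blocks the app until the schema is up to date
      containers:
        - name: app
          image: registry.example.com/app:{{ .Chart.AppVersion }}
```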
||||
|
||||
**Recap**
|
||||
|
||||
The speaker summarizes the key points, emphasizing the importance of treating databases as code, automating schema changes, handling corner cases with custom migration scripts, and packaging database deployment into application packages or using pipelines for deployment.
|
@ -0,0 +1,28 @@
|
||||
# Day 57 - A practical guide to Test-Driven Development of infrastructure code
|
||||
[![Watch the video](thumbnails/day57.png)](https://www.youtube.com/watch?v=VoeQWkboSUQ)
|
||||
|
||||
This session describes a CI/CD pipeline in GitHub Actions that uses tools such as Terraform, Bicep, PSRule (for Azure policies), Snyk, and Pester to validate the security, compliance, and functionality of infrastructure code before deploying it to the actual environment. Here's a summary of the steps:
|
||||
|
||||
1. Run tests locally using tools like Terraform, Bicep, and Azure Policies (PS Rule) before committing the code. This ensures that the changes are secure, compliant, and follow best practices.
|
||||
2. In the CI/CD pipeline, use a workflow file in GitHub to combine these tests. The workflow includes jobs for linting, validation using pre-flight validation with ARM deploy action, running Azure Policies (PS Rule), Snyk, and Pester tests.
|
||||
3. Use GitHub Actions to run these tests in the pipeline. For example, use the PSRule action to assert against the PSRule for Azure rules module, providing the input path, output format, and output file name (a workflow sketch follows this list).
|
||||
4. Approve the test results before deploying changes to the environment. This ensures that it is safe to push the deploy button.
|
||||
5. After deployment, run tests to verify that the deployment succeeded as intended and that the deployed resources have the right properties as declared in the code. Use tools like BenchPress (which is built on Pester) or Pester itself to call the actual deployed resources and assert against their properties.
|
||||
6. Optionally, use infrastructure testing tools such as smoke tests to validate the functionality of the deployed resources (e.g., a website).
|
||||
7. To make it easier to install and configure these tools, consider using a Dev Container in Visual Studio Code. This allows you to define what tools should be pre-installed in the container, making it easy to set up an environment with all the necessary tools for developing infrastructure code.
|
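A trimmed-down sketch of such a workflow job is shown below; the paths, module name, and output locations are illustrative, and the real pipeline in the session also includes linting, pre-flight ARM validation, and Snyk:

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # assert the infrastructure code against the PSRule for Azure rules
      - name: Run PSRule analysis
        uses: microsoft/ps-rule@v2
        with:
          modules: PSRule.Rules.Azure
          inputPath: infra/
          outputFormat: NUnit3
          outputPath: reports/ps-rule-results.xml

      # unit tests for the infrastructure code
      - name: Run Pester tests
        shell: pwsh
        run: Invoke-Pester -Path ./tests -CI
```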
||||
|
||||
Overall, this is a great approach to ensure that your infrastructure code is secure, compliant, and functional before deploying it to the actual environment. Thanks for sharing this valuable information!
|
||||
|
||||
1. **Azure DevOps**: The speaker discussed using Azure Pipelines to automate infrastructure testing and deployment.
|
||||
2. **Security Testing**: They mentioned using Snyk to run security tests in a continuous integration pipeline, allowing for automated testing and deployment.
|
||||
3. **Deployment**: The speaker emphasized the importance of testing and verifying the actual deployment before pushing changes to production.
|
||||
4. **Testing Types**: They introduced three types of tests: unit tests (Pester), infrastructure tests (BenchPress or Pester), and smoke tests.
|
||||
5. **Dev Container**: The speaker discussed using a Dev Container in Visual Studio Code to pre-configure and pre-install tools for developing Azure infrastructure code.
|
||||
|
||||
These key takeaways summarize the main topics and ideas presented by the speaker:
|
||||
|
||||
* Automating infrastructure testing and deployment with Azure Pipelines
|
||||
* Leveraging Snyk for security testing in CI pipelines
|
||||
* Emphasizing the importance of verifying actual deployments before pushing changes to production
|
||||
* Introducing different types of tests (unit, infrastructure, smoke) for ensuring the quality of infrastructure code
|
||||
* Utilizing Dev Containers in Visual Studio Code to streamline development and deployment processes
|
@ -0,0 +1,93 @@
|
||||
# Day 58 - The Reverse Technology Thrust
|
||||
[![Watch the video](thumbnails/day58.png)](https://www.youtube.com/watch?v=tmwjQnSTE5k)
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_01.png)
|
||||
|
||||
## Description
|
||||
This session provides lessons learned from my work as an AppDev Solutions Specialist at Red Hat with large-scale public institutions. Despite investing heavily in technology, their return on agility, operations, and time to market could have been much higher. The leading root cause for failing to achieve these goals is the unmet need for change in their culture and processes at the organizational level. They faced the painful need to learn to unlearn and to reskill their personnel in DevOps practices, instead of investing in tooling to accelerate innovation.
|
||||
|
||||
## Author
|
||||
|
||||
Rom Adams (né Romuald Vandepoel) is an open-source strategy and C-Suite advisor with over 20 years of experience in the IT industry. He is a cloud-native expert who helps customer and partner organizations modernize and transform their data center strategies with enterprise open-source solutions. He is also a facilitator, advocate, and contributor to open-source projects, advising companies and lawmakers on their open-source and sustainability strategies.
|
||||
|
||||
Previously, he was a Principal Architect at Ondat, a cloud-native storage company acquired by Akamai, where he designed products and implemented hybrid cloud solutions for enterprise customers. He also held various roles at Tyco, NetApp, and Red Hat, gaining certifications and becoming a subject matter expert in storage and hybrid cloud infrastructure. He has participated as a moderator and speaker for several events and publications, sharing his insights and best practices on culture, process, and technology adoption. Rom is passionate about driving transformation and innovation with open-source and cloud-native technologies.
|
||||
|
||||
<p align="center">
|
||||
<a href="https://www.linkedin.com/in/romdalf/">
|
||||
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" />
|
||||
</a>
|
||||
<a href="https://twitter.com/romdalf">
|
||||
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" />
|
||||
</a>
|
||||
<a href="https://github.com/romdalf">
|
||||
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" />
|
||||
</a>
|
||||
</p>
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_02.png)
|
||||
|
||||
## A quote that I like
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_03.png)
|
||||
|
||||
|
||||
## Talking points
|
||||
|
||||
### The Tooling Trail
|
||||
Forrester surveys of Container Adoption Journey engagements have shown that the biggest benefits for an organization come from application modernization opportunities rather than from the operational or infrastructure side. Yet the default behavior is to embark on a new Tooling Trail, an endless journey of seeking and testing new tools. These might provide substantial benefits, but the organization tends to evaluate them from a standstill point of view rather than with an innovative, forward-looking mindset.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_04.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_05.png)
|
||||
|
||||
|
||||
### Practices vs Tools
|
||||
As we are the only masters of our fate and destiny, we tend to have tunnel vision when improving our daily tasks, whether with automation, containerization, security tooling, or the cloud.
|
||||
However, introducing such a tool benefits the individual, or at best the individual's team, and only if it is adopted. This is a pocket initiative; it may significantly improve the initiator's daily work but not the team's, given the learning curve on top of the existing workload. It can even become a source of fragmentation and entropy for the team, and at a larger scale for the organization.
|
||||
Adopting a new tool has to become a strategic decision at an organizational level to benefit a larger group, which involves changing the culture and processes.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_06.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_07.png)
|
||||
|
||||
|
||||
### People and Process
|
||||
From our open-source roots and customer engagements, we have established that technology adoption can only be successful if there is an opportunity to evolve the organization's culture and processes. This crucial step requires either an organizational change initiative (see [Kotter's approach](https://www.kotterinc.com/methodology/8-steps/) as an example) or a compelling event.
|
||||
Most of the slowdown from a time-to-market perspective is spent to avoid embracing the changes. A typical example is the adoption of Kubernetes. It will be another painful platform trail when retrofitting 20 years of legacy experience into its design and implementation instead of creating a safe greenfield to learn the new patterns and build the platform iteratively.
|
||||
Although individuals or teams may consider it a given, it is not integrated into the organization's culture and processes.
|
||||
It is often observed that individuals in organizations are grouped into silos based on their domain knowledge. However, it is interesting to note that every individual from one silo relies on another silo to accomplish their daily mission. Despite this interdependence, organizations (like society) tend to sort, classify, and isolate individuals rather than promote a sense of collectivism. This creates a significant benefit in terms of management but a challenge for collaboration.
|
||||
The first significant change is to create a core adoption team composed of volunteers with a set of competencies that will constitute a guiding coalition fostering changes from a culture, processes, and technology standpoint.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_08.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_09.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_10.png)
|
||||
|
||||
|
||||
The DevOps model calls for deeper collaboration and interaction at a cross-functional level. It starts by the end, defining the business value and requirements and creating a set of fragmented tasks with meaningful outcomes.
|
||||
If we think about this process, it is basically breaking a waterfall plan down into small iterative chunks, each corresponding to a milestone.
|
||||
The core adoption team will then start building based on the targeted outcomes in short cycles and enabling the relevant Ops team to operate the solution. This approach reduces the cognitive overload on the entire organization.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_11.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_12.png)
|
||||
|
||||
|
||||
### Fail, Learn, Repeat
|
||||
For some reason, the practice of failure is often associated with stigma or trauma. However, embracing it with a collective analysis capability enriches the knowledge and know-how. Avoiding it will result in larger and out-of-control incidents with limited capability to respond.
|
||||
This is the reason military or fire drills exist.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_13.png)
|
||||
|
||||
|
||||
### All the above applied to application modernization
|
||||
In this example, a legacy application is considered to be containerized. The first obvious question would be: what value does this work bring to the business?
|
||||
As you can imagine, if the answer is vague or cannot be measured, then the effort should not be carried out.
|
||||
The actual business requirement is to provide autoscaling capability to some modules of the application to cope with unpredictable usage. Then the containerization of the application would not help, but the modernization of it leveraging a hybrid software architecture with microservices would.
|
||||
A core adoption team will be created with members having knowledge of the application, cloud-native middleware, and microservices.
|
||||
The first module is extracted as a microservice. At this stage, part of the original domain knowledge-based team will be trained on the changes. Having a new set of team members enabled on the first iteration will help to carry on on the second. As the work continues on this respective application or a new one, the organization will move towards a Platform-as-a-Product team.
|
||||
On a larger scale, the enablement team will move towards the so-called SRE (Site Reliability Engineering) model for the organization.
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_14.png)
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_15.png)
|
||||
|
||||
|
||||
## Thank you!
|
||||
|
||||
![](../2024/Images/theReverseTechnologyThrust/CY24-90DevOps_The_Reverse_Technology_Thrust_16.png)
|
@ -0,0 +1,57 @@
|
||||
# Day 59 - Continuous Delivery pipelines for cloud infrastructure
|
||||
[![Watch the video](thumbnails/day59.png)](https://www.youtube.com/watch?v=L8hqM3Y5pTo)
|
||||
|
||||
The three principles of Continuous Delivery for Infrastructure Code are:
|
||||
|
||||
1. Everything is code, and everything is in version control: This means that all infrastructure components are treated as code, stored in a version control system, and can be easily tracked, managed, and audited.
|
||||
|
||||
2. Continuously test and deliver all the work in progress: This principle emphasizes the importance of testing every change before it's deployed to production, ensuring that the infrastructure is always stable and functional. It also encourages automating the deployment process to reduce manual errors and improve efficiency.
|
||||
|
||||
3. Work in small simple pieces that you can change independently: This principle suggests dividing infrastructure components into smaller, independent modules or stacks that can be changed without affecting other parts of the infrastructure. This reduces complexity, shortens feedback cycles, and allows for more effective management of permissions and resources.
|
||||
|
||||
In terms of organizing technology capabilities within an organization, these are usually structured in a layered approach, with business capabilities at the top, technology capabilities in the middle, and infrastructure resources at the bottom. Technology capabilities are further broken down into infrastructure stacks, which are collections of cloud infrastructure resources managed together as a group.
|
||||
|
||||
Examples of infrastructure stacks include a Kubernetes cluster with node groups and a load balancer, a Key Vault for managing secrets, and a Virtual Private Cloud (VPC) network. Good criteria for slicing infrastructure stacks include team or application boundaries, change frequency, permission boundaries, and technical capabilities.
|
||||
|
||||
To get started with infrastructure automation, teams can implement what is called the "Walking Skeleton" approach, which involves starting simple and gradually improving over time. This means setting up a basic pipeline that runs a Terraform apply on a development or test stage in the initial iteration, then iterating and improving upon it as the project progresses.
|
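A walking skeleton can be as small as a single job that applies one stack to a development environment, with tests, further stages, and further stacks layered on later. A minimal sketch, assuming GitHub Actions and illustrative stack and variable-file names:

```yaml
name: infrastructure-walking-skeleton
on:
  push:
    branches: [main]

jobs:
  dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Apply the network stack to the dev environment
        working-directory: stacks/network   # one small, independent stack
        run: |
          terraform init
          terraform apply -auto-approve -var-file=dev.tfvars
```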
||||
|
||||
Challenges faced when automating infrastructure code include blast radius (the potential damage a given change could make to the infrastructure) and disaster recovery (the ability to recover from a state where all infrastructure code has been lost). To address these challenges, teams should regularly practice deploying from scratch, design their pipelines to test both spinning up infrastructure from zero and applying changes to the existing infrastructure, and ensure that their infrastructure code is modular and independent.
|
||||
|
||||
Recommended resources for diving deeper into this topic include the book "Infrastructure as Code" by Kief Morris, which provides practical guidance on implementing Continuous Delivery for infrastructure.
|
||||
Here is the summary of the presentation:
|
||||
|
||||
**IDENTITY and PURPOSE**
|
||||
|
||||
The presenter discussed how to bring together AWS and Google Cloud platforms, specifically focusing on building technology capabilities. They introduced the concept of "infrastructure Stacks" - collections of cloud infrastructure resources managed together as a group.
|
||||
|
||||
The presenter then presented criteria for slicing infrastructure stacks:
|
||||
|
||||
1. Team or application or domain boundaries
|
||||
2. Change frequency (e.g., updating Kubernetes clusters more frequently than VPCs)
|
||||
3. Permission boundaries (to provide least privileges and prevent over-privileging)
|
||||
4. Technical capabilities (e.g., building a kubernetes cluster as one capability)
|
||||
|
||||
The presenter emphasized the importance of starting with infrastructure automation early in a project, using a "walking skeleton" approach to reduce complexity and improve feedback cycles.
|
||||
|
||||
**CHALLENGES**
|
||||
|
||||
Two challenges were highlighted:
|
||||
|
||||
1. Blast radius: the potential damage a given change could make to a system
|
||||
2. Immutable deployments: replacing old container images with new ones, making it difficult to practice Disaster Recovery
|
||||
|
||||
The presenter recommended rethinking how infrastructure changes are handled in a pipeline to include testing from zero to latest version.
|
||||
|
||||
**SUMMARY**
|
||||
|
||||
The presentation concluded by summarizing the three principles of continuous delivery for infrastructure:
|
||||
|
||||
1. Everything is code and everything is in version control
|
||||
2. Continuously test and deliver all work in progress
|
||||
3. Work in small, simple pieces that can be changed independently
|
||||
|
||||
The presenter also mentioned the importance of promoting a code base that does not change throughout the individual stages of the pipeline.
|
||||
|
||||
**FURTHER READING**
|
||||
|
||||
The presenter recommended checking out the book "Infrastructure as Code" by Kief Morris (who is currently working on the third edition) on O'Reilly.
|
@ -0,0 +1,44 @@
|
||||
# Day 60 - Migrating a monolith to Cloud-Native and the stumbling blocks that you don’t know about
|
||||
[![Watch the video](thumbnails/day60.png)](https://www.youtube.com/watch?v=Bhr-lxHvWB0)
|
||||
|
||||
In transitioning to the cloud native space, there are concerns about cost savings and financial management. Traditionally, capital expenditure (CapEx) allows for depreciation write-offs, which is beneficial for companies, especially at larger scales. However, cloud services are typically paid as operational expenditure (OpEx), often on a credit card, which cannot be depreciated in the same way. This can create problems for CFOs, who need predictability and projectability in their financial planning.
|
||||
|
||||
To address these concerns, it is essential to have open discussions with decision-makers about the nature of cloud native solutions and how leasing hardware rather than owning it may affect spending patterns. You will find that costs can fluctuate significantly from month to month due to factors like scaling up or down resources based on demand.
|
||||
|
||||
Here are some steps you can take to improve your chances of success in the cloud native space:
|
||||
|
||||
1. Assess the current state of your applications and containers: Determine if your application was truly containerized, or if it has just been wrapped using a pod. This is crucial because many organizations still follow an outdated approach to containerization based on early promises from Docker.
|
||||
|
||||
2. Prioritize optimization over features: Encourage your teams to focus on optimizing existing applications rather than adding new features, as this will help drive efficiency and save engineering time.
|
||||
|
||||
3. Build future cloud native applications from the ground up: If possible, design new cloud-native applications with the appropriate tools for optimal performance. This will prevent you from going into the red while trying to adapt an existing application to fit a cloud native environment.
|
||||
|
||||
4. Use the right tool for the job: Just as using a saw when you need a hammer won't work effectively, migrating an application without careful consideration may not be ideal or successful. Ensure that your team understands the specific needs of the application and chooses the appropriate cloud native solution accordingly.
|
||||
|
||||
**Main Themes:**
|
||||
|
||||
1. **Tribal Knowledge**: The importance of sharing knowledge across teams and microservices in a cloud-native space.
|
||||
2. **Monitoring and Visibility**: Recognizing that multiple monitoring applications are needed for different teams and perspectives.
|
||||
3. **Cloud Native Economics**: Understanding the differences between data center and cloud native economics, including Opex vs. Capex and the need for projectability.
|
||||
4. **Containerization**: The importance of truly containerizing an app rather than just wrapping a pod and moving on.
|
||||
|
||||
**Purpose:**
|
||||
|
||||
The purpose of this conversation seems to be sharing lessons learned from experience in the cloud-native space, highlighting the importance of:
|
||||
|
||||
1. Recognizing tribal knowledge and sharing it across teams.
|
||||
2. Adapting to the changing landscape of monitoring and visibility in cloud-native environments.
|
||||
3. Understanding the unique economics of cloud native and its implications for decision-making.
|
||||
4. Emphasizing the need for true containerization and optimization rather than just wrapping a pod.
|
||||
|
||||
**Takeaways:**
|
||||
|
||||
1. Share knowledge across teams and microservices to avoid silos.
|
||||
2. Be prepared for multiple monitoring applications in cloud-native environments.
|
||||
3. Understand the differences between data center and cloud native economics.
|
||||
4. Prioritize true containerization and optimization over quick fixes.
|
||||
|
||||
By: JJ Asghar
|
||||
Slides: [here](https://docs.google.com/presentation/d/1Nyh_rfB-P4C1uQI6E42qHMEfAj-ZTXGDVKaw1Em8H5g/edit?usp=sharing)
|
||||
|
||||
If you're looking to have a deeper conversation, never hesitate to reach out to JJ [here](https://jjasghar.github.io/about).
|
@ -0,0 +1,45 @@
|
||||
# Day 61 - Demystifying Modernisation: True Potential of Cloud Technology
|
||||
[![Watch the video](thumbnails/day61.png)](https://www.youtube.com/watch?v=3069RWgZt6c)
|
||||
|
||||
In summary, the speaker discussed six strategies (Retire, Retain, Rehost, Replatform, Repurchase, and Re-Architect/Refactor) for modernizing applications within the context of moving them to the cloud. Here's a brief overview of each strategy:
|
||||
|
||||
1. Retire: Applications that are no longer needed or no longer provide value can be deprecated and removed from the system.
|
||||
|
||||
2. Retain: Keep existing applications as they are, often due to their strategic importance, high cost to modify, or compliance requirements.
|
||||
|
||||
3. Rehost: Move an application to a different infrastructure (such as the cloud) without changing its architecture or functionality.
|
||||
|
||||
4. Replatform: Adapt the application's underlying technology stack while preserving its core functionality.
|
||||
|
||||
5. Repurchase: Buy a new commercial off-the-shelf software solution that can replace an existing one, either because it better meets the organization's needs or is more cost-effective in the long run.
|
||||
|
||||
6. Re-Architect/Refactor: Completely redesign and modernize an application to take full advantage of new technologies and improve its performance, scalability, and security.
|
||||
|
||||
Application modernization differs from cloud migration in that the former focuses on enhancing the architecture of existing applications, while the latter primarily involves shifting those applications to a cloud environment. Both processes are essential components of a comprehensive digital transformation strategy, as they help organizations improve agility, scalability, and efficiency, ultimately giving them a competitive edge in the digital economy.
|
||||
|
||||
The speaker emphasized that it's not enough just to move an application to the cloud; instead, organizations should aim to optimize their applications for success in the digital landscape by modernizing both their infrastructure and data in addition to their applications. They can do this by understanding these three interconnected components of digital modernization: infrastructure modernization (using technologies like Google Cloud Platform), data modernization (managing and analyzing data efficiently), and application modernization (enhancing the functionality, performance, and security of existing applications).
|
||||
|
||||
The speaker concluded by encouraging businesses to embrace the power of cloud technology through a comprehensive journey of transforming their applications, infrastructure, and data to fully capitalize on the benefits offered by the digital landscape. They invited listeners to connect with them for further discussions or questions on this topic.
|
||||
|
||||
|
||||
**Application Migration Strategies**
|
||||
|
||||
1. **Rehost**: Lift and shift applications from existing infrastructure to cloud, with no changes to the application core architecture.
|
||||
2. **Replatform**: Replace database backends or re-platform an application using cloud provider's services, while keeping the application core architecture the same.
|
||||
3. **Repurchase**: Fully replace a Legacy application with a SaaS-based solution that provides similar capabilities.
|
||||
|
||||
**Application Modernization**
|
||||
|
||||
* Refactoring or rebuilding: Redesign an application in a more Cloud-native manner, breaking down monolithic applications into smaller microservices and leveraging services like Cloud Run or Cloud Functions.
|
||||
|
||||
**Digital Transformation Components**
|
||||
|
||||
1. **Infrastructure Modernization**: Updating and refactoring existing infrastructure to take advantage of new technologies and cloud computing platforms.
|
||||
2. **Data Modernization**: Migrating data from existing storage solutions to cloud-native services, such as Cloud Storage, Cloud SQL, or Firestore.
|
||||
3. **Application Modernization**: Refactoring or rebuilding applications to take advantage of new technologies and cloud computing platforms.
|
||||
|
||||
**Key Takeaways**
|
||||
|
||||
* Application modernization is a process that updates and refactors existing applications to take advantage of new technologies and cloud computing platforms.
|
||||
* It involves infrastructure, data, and application architecture modernization.
|
||||
* The three components of digital transformation - infrastructure, data, and application modernization - are interconnected and essential for comprehensive digital transformation.
|
@ -0,0 +1,38 @@
|
||||
# Day 62 - Shifting Left for DevSecOps Using Modern Edge Platforms
|
||||
[![Watch the video](thumbnails/day62.png)](https://www.youtube.com/watch?v=kShQcv_KLOg)
|
||||
|
||||
In this discussion, the participants are discussing a CI/CD workflow with a focus on security (Secure DevOps). The idea is to shift left the security practices from testing and production to the early stages of development. This approach helps mitigate issues that can arise during deployment and operations.
|
||||
|
||||
To measure success in this context, they suggest monitoring several metrics:
|
||||
- Application coverage: Ensure a high percentage of all applications across the organization are covered under the same process, including software composition analysis (SCA), static application security testing (SAST), dynamic application security testing (DAST), web application protection, and API protections.
|
||||
- Frequency of releases and rollbacks: Track how often releases have to be rolled back due to security vulnerabilities, with a focus on reducing the number of production rollbacks since these are more costly than addressing issues earlier in the process.
|
||||
- Mean Time To Detect (MTTD) and Mean Time To Respond (MTTR) for vulnerabilities within the organization: Strive to reduce the time from disclosure of a vulnerability to detection, response, and resolution within the organization. A mature organization should aim for a short MTTD and MTTR.
|
||||
- Cost and revenue implications: In the current interest rate environment, profitability is crucial. Security practices can impact both costs (e.g., internal costs related to fixing vulnerabilities) and revenue (e.g., ability to close deals faster by addressing security concerns in the Redline discussions).
|
||||
Here's a summary of the conversation:
|
||||
|
||||
**Identity**: The importance of shifting left in the development process, specifically in the context of web application and API protection.
|
||||
|
||||
**Purpose**: To discuss the benefits of integrating security into the DevOps lifecycle, including reducing mean time to detect (MTTD) and mean time to resolve (MTTR), as well as improving revenue and profitability.
|
||||
|
||||
**Key Points**:
|
||||
|
||||
1. **Mean Time to Detect (MTTD)**: Measure how long it takes from vulnerability disclosure to detection within your organization.
|
||||
2. **Mean Time to Resolve (MTTR)**: Track how quickly you can resolve vulnerabilities after they are detected.
|
||||
3. **Cost Savings**: Shifting left can reduce internal costs, such as those related to code reviews and testing.
|
||||
4. **Revenue Implications**: Integrating security into the DevOps lifecycle can help close deals faster by demonstrating a commitment to security and minimizing risk.
|
||||
5. **False Positives**: Reduce false positives by incorporating security checks earlier in the development process.
|
||||
|
||||
**Metrics to Track**:
|
||||
|
||||
1. MTTD (meantime to detect)
|
||||
2. MTTR (mean time to resolve)
|
||||
3. Revenue growth
|
||||
4. Cost savings
|
||||
|
||||
**Takeaways**:
|
||||
|
||||
1. Shifting left is essential for reducing MTTD and MTTR.
|
||||
2. Integrating security into the DevOps lifecycle can improve revenue and profitability.
|
||||
3. Measuring success through metrics such as MTTD, MTTR, and revenue growth is crucial.
|
||||
|
||||
Overall, the conversation emphasized the importance of integrating security into the development process to reduce risks and improve business outcomes.
|
@ -0,0 +1,34 @@
|
||||
# Day 63 - Diving into Container Network Namespaces
|
||||
[![Watch the video](thumbnails/day63.png)](https://www.youtube.com/watch?v=Z22YVIwwpf4)
|
||||
|
||||
In summary, the user created two network namespaces named orange and purple. They added a default route in the orange namespace that directs traffic for unknown destinations to the bridge (192.168.52.0 network), which allows outbound traffic to reach the external world.
|
||||
|
||||
The user also enabled IP forwarding on both network namespaces so that traffic can flow between them and to the outside world. They were able to ping a website from the orange namespace, indicating successful communication with the outside world.
|
||||
|
||||
For production scale, the user plans to use a container networking interface (CNI) system, which automates the onboarding and offboarding process using network namespaces for containers. The CNI also manages IP addresses and provides an offboarding mechanism for releasing IPs back into the pool when needed.
|
||||
|
||||
The user ended by thanking the audience and expressing hope to see them in future episodes of 90 Days of DevOps. They were addressed as Marino Wi, and Michael Cade was acknowledged along with the rest of the community.
|
||||
|
||||
**Identity and Purpose**
|
||||
|
||||
The speaker, Marino, is discussing a scenario where he created two network namespaces (orange and purple) and wants to enable communication between them. He explains that they are isolated from each other by default, but with some configuration changes, they can be made to communicate.
|
||||
|
||||
**Main Points**
|
||||
|
||||
1. The speaker creates two network namespaces (orange and purple) and brings their interfaces online.
|
||||
2. Initially, he cannot ping the bridge IP address (192.168.52.0) from either namespace.
|
||||
3. He enables IP forwarding and sets up an iptables rule to allow outbound traffic from the orange namespace.
|
||||
4. He adds a static route to the default route table in each namespace to enable communication with the outside world (an illustrative command sequence follows this list).
|
||||
5. With these changes, he is able to ping the bridge IP address (192.168.52.0) from both namespaces.
|
||||
6. The speaker explains that this scenario demonstrates how pod networking works, using network namespaces and the container networking interface (CNI) specification.
|
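An illustrative command sequence for the orange namespace (the purple one is symmetric); the bridge name, interface names, and addresses are assumptions built around the 192.168.52.0/24 range mentioned in the session:

```bash
# create the namespace and a bridge on the host
sudo ip netns add orange
sudo ip link add br0 type bridge
sudo ip addr add 192.168.52.1/24 dev br0
sudo ip link set br0 up

# veth pair: one end inside the namespace, the other attached to the bridge
sudo ip link add veth-orange type veth peer name veth-orange-br
sudo ip link set veth-orange netns orange
sudo ip link set veth-orange-br master br0
sudo ip link set veth-orange-br up

# address the namespace side and bring the interfaces online
sudo ip netns exec orange ip addr add 192.168.52.2/24 dev veth-orange
sudo ip netns exec orange ip link set veth-orange up
sudo ip netns exec orange ip link set lo up

# default route towards the bridge, IP forwarding and NAT for the outside world
sudo ip netns exec orange ip route add default via 192.168.52.1
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 192.168.52.0/24 -j MASQUERADE
```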
||||
|
||||
**Key Takeaways**
|
||||
|
||||
1. Network namespaces can be isolated from each other by default.
|
||||
2. With proper configuration changes, they can be made to communicate with each other.
|
||||
3. IP forwarding and static routing are necessary for communication between network namespaces.
|
||||
4. The CNI specification is used to automate the onboarding and offboarding process of containers in a network.
|
||||
|
||||
**Purpose**
|
||||
|
||||
The purpose of this exercise is to demonstrate how pod networking works, using network namespaces and the CNI specification. This is relevant to production-scale scenarios where multiple containers need to communicate with each other.
|
@ -0,0 +1,48 @@
|
||||
# Day 64 - Let’s Do DevOps: Writing a New Terraform /Tofu AzureRm Data Source — All Steps!
|
||||
[![Watch the video](thumbnails/day64.png)](https://www.youtube.com/watch?v=AtqivV8iBdE)
|
||||
|
||||
This session explains the process of creating a Terraform data source in Go and testing it with unit tests in Visual Studio Code, using an environment file (.env) to store the secrets needed for authentication when running the tests. Here's a summary:
|
||||
|
||||
1. Create a Go project and, at the root of the project, create an environment file (.env) containing the secrets required for authentication.
|
||||
|
||||
2. Write unit tests for your Terraform data source in Visual Studio Code, using those credentials to authenticate with Azure or other services when running the tests.
|
||||
|
||||
3. Run the acceptance tests from the command line with a command along the lines of `make acctests SERVICE='network' TESTARGS='-run=TestAcc...'`, which runs all tests that match the given pattern.
|
||||
|
||||
4. To use a local provider build instead of the one published in the registry, build and install the provider (for example with `go install`), which places the binary in your Go path under the `bin` folder.
|
||||
|
||||
5. Create a `terraform.rc` (or `.terraformrc`) file in your home directory with a dev override to tell Terraform to use the local binary when called (a sketch follows this list).
|
||||
|
||||
6. Run Terraform using the command line, e.g., `terraform plan`, to see if it works as expected and outputs the desired data.
|
||||
|
||||
7. The provided Terraform code can be used by others, who only need to ensure they are on version 3.89.0 of the provider or newer and follow the instructions for finding and using existing IP groups in Terraform.
|
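The dev override in step 5 typically looks like this; the provider address and the Go bin path are examples and should be adjusted to your own GOPATH:

```hcl
# ~/.terraformrc (terraform.rc on Windows)
provider_installation {
  dev_overrides {
    "hashicorp/azurerm" = "/home/<you>/go/bin"
  }
  # fall back to the registry for every other provider
  direct {}
}
```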
||||
|
||||
Overall, the session creates a custom Terraform data source and tests it thoroughly using unit tests, Visual Studio Code, and the command line interface (CLI). More information is available on the presenter's site at [ky.url.lol9 daysof devops 2024](ky.url.lol9 daysof devops 2024).
|
||||
Here's a summary of the content:
|
||||
|
||||
The speaker, Kyler Middleton, is an expert in Terraform and the Go programming language. She presents a case study on how to create a custom Terraform data source using Go. The goal was to create a data source that could retrieve IP groups from Azure, which did not exist as a built-in Terraform data source.
|
||||
|
||||
Kyler explains the process of researching and finding a solution. She and her team realized that they could hack together a solution using external CLIs and outputs and inputs. However, this approach had limitations and was not scalable. Therefore, they decided to write their own Terraform data source in Go.
|
||||
|
||||
The speaker then walks through the steps taken:
|
||||
|
||||
1. Writing three unit tests for the provider
|
||||
2. Compiling the provider and testing it
|
||||
3. Integrating Visual Studio Code (VSCode) with the terraform provider language
|
||||
4. Running unit tests within VSCode
|
||||
5. Writing Terraform code to use the local binary that was compiled
|
||||
6. Testing the Terraform code
|
||||
7. Opening a Pull Request (PR) and getting it merged
|
||||
|
||||
Kyler concludes by stating that the custom Terraform data source is now available for everyone to use, starting from version 3.89.0 of the HashiCorp AzureRM provider.
|
||||
|
||||
|
||||
## About Me
|
||||
I'm [Kyler Middleton](https://www.linkedin.com/in/kylermiddleton/), Cloud Security Chick, Day Two Podcast host, Hashi Ambassador, and AWS Cloud Builder.
|
||||
I started my journey fixing computers on a farm, and now build automation tools in the healthcare industry. I write my [Medium blog]([https://www.linkedin.com/in/kylermiddleton/](https://medium.com/@kymidd) on how to make DevOps accessible and I'll teach anyone who will listen about the benefits of automation and the cloud.
|
||||
I think computers are neat.
|
||||
|
||||
## Life Stuff
|
||||
Kyler is married to her partner Lindsey of more than 15 years, and co-mom'ing it up raising their 2 year old toddler Kennedy, the light of her moms' eyes. Kyler and crew currently live in Madison, Wisconsin, USA.
|
||||
|
||||
## Let's do DevOps!
|
@ -0,0 +1,38 @@
|
||||
# Day 65 - Azure pertinent DevOps for non-coders
|
||||
[![Watch the video](thumbnails/day65.png)](https://www.youtube.com/watch?v=odgxmohX6S8)
|
||||
|
||||
The presentation discusses several DevOps practices, their implications, and how they can be leveraged by non-coders. Here's a summary:
|
||||
|
||||
1. Continuous Delivery (CD) practice focuses on automating the software delivery process, with the goal of reducing time to market and improving quality. For non-coders, understanding CD principles can help streamline IT operations and improve collaboration.
|
||||
|
||||
2. Infrastructure as Code (IAC) is a practice that treats infrastructure resources like software code, making it easier to manage, version, and automate infrastructure changes. Familiarity with IAC tools such as Terraform, Ansible, or Azure Resource Manager (ARM) is important for Azure administrators and folks working in infrastructure roles.
|
||||
|
||||
3. Configuration Management focuses on enforcing desired States, tracking changes, and automating issue resolution. While it has a broader organizational scope, understanding configuration management can help non-coders contribute to more efficient IT environments and improve their professional development.
|
||||
|
||||
4. Continuous Monitoring provides real-time visibility into application performance, aiding in issue resolution and improvement. Proficiency in Azure Monitor and Azure Log Analytics is beneficial for admins working to ensure the continuous performance and availability of applications and services.
|
||||
|
||||
The presentation concludes by suggesting studying for the Microsoft DevOps Engineer Expert certification (AZ-400) as a way to deepen one's knowledge of DevOps concepts and enhance career prospects. This expert-level certification focuses on optimizing practices, improving communications and collaboration, creating automation, and designing and implementing application code and infrastructure strategies using Azure technologies.
|
||||
|
||||
The presentation covers the following topics:
|
||||
|
||||
1. **GitHub**: A development platform for version control, project management, and software deployment. GitHub provides a range of services, including code hosting, collaboration tools, and automation workflows.
|
||||
2. **Agile**: An iterative approach to software development that emphasizes team collaboration, continual planning, and learning. Agile is not a process but rather a philosophy or mindset for planning work.
|
||||
3. **Infrastructure as Code (IAC)**: A practice that treats infrastructure as code, enabling precise management of system resources through version control systems. IAC bridges the gap between development and operations teams by automating the creation and tear-down of complex systems and environments.
|
||||
4. **Configuration Management**: A DevOps practice that enforces desired states, tracks changes, and automates issue resolution. This practice simplifies managing complex environments and is essential for modern infrastructure management.
|
||||
|
||||
**Key Takeaways:**
|
||||
|
||||
* Non-coders can contribute to DevOps practices, such as GitHub, agile, IAC, and configuration management.
|
||||
* These practices are essential for efficient, secure, and collaborative IT environments.
|
||||
* DevOps professionals design and implement application code and infrastructure strategies that enable continuous integration, testing, delivery, monitoring, and feedback.
|
||||
* The Azure Administrator Associate or Azure Developer Associate exam is a prerequisite to take the AZ-400: Designing and Implementing Microsoft DevOps Solutions certification exam.
|
||||
|
||||
**Next Steps:**
|
||||
|
||||
1. Study towards the official certification from Microsoft related to DevOps (DevOps Engineer Expert).
|
||||
2. Prepare for the AZ-400: Designing and Implementing Microsoft DevOps Solutions certification exam by following the Azure Learn path series.
|
||||
3. Continuously update knowledge on DevOps practices, GitHub, agile, IAC, and configuration management.
|
||||
|
||||
**Conclusion:**
|
||||
|
||||
In conclusion, the presentation has provided an overview of DevOps practices and their applications in various scenarios. Non-coders can contribute to these practices, which are essential for efficient, secure, and collaborative IT environments. The certification path outlined in this summary provides a clear roadmap for professionals looking to enhance their skills and knowledge in DevOps.
|
@ -0,0 +1,24 @@
|
||||
# Day 66 - A Developer's Journey to the DevOps: The Synergy of Two Worlds
|
||||
[![Watch the video](thumbnails/day66.png)](https://www.youtube.com/watch?v=Q_LApaLzkSU)
|
||||
|
||||
The speaker is discussing the concept of a T-shaped developer, which refers to someone who has broad knowledge and skills across multiple areas (represented by the horizontal bar of the "T") but also deep expertise in one specific area (represented by the vertical bar). This model allows developers to work effectively with others from different teams, learn new things, and mentor junior developers.
|
||||
|
||||
The speaker emphasizes that being T-shaped offers opportunities for collaboration, learning, and growth, both personally and professionally. They also mention their passion for sharing knowledge and helping others, especially those starting out in their careers, and discuss the role of a mentor within a development team.
|
||||
|
||||
Lastly, the speaker uses gardening as an analogy for personal and professional growth, emphasizing the importance of adopting a growth mindset and continuously learning and improving one's skills. They conclude by encouraging listeners to pursue their passions and not limit themselves based on career roles or labels, and to share their knowledge with others.
|
||||
|
||||
Overall, the speaker is advocating for a T-shaped approach to development, emphasizing collaboration, mentoring, growth, and the pursuit of personal passions as key elements in a successful career in the field.
|
||||
The speaker is an expert in DevOps and has shared their top seven lessons learned in the field. The main points are:
|
||||
|
||||
1. Continuous learning - always learn new things and develop your skills.
|
||||
2. T-shaped skills - become proficient in multiple areas to solve complex problems.
|
||||
3. Collaboration - work with others to achieve common goals.
|
||||
4. Synergize - combine your strengths with those of others to create something greater than the sum of its parts.
|
||||
5. Help others - mentor or help colleagues who need guidance.
|
||||
6. Grow and develop - as you learn and take on new challenges, you will grow professionally and personally.
|
||||
|
||||
The speaker also emphasizes the importance of having a positive mindset and being open to change and learning.
|
||||
|
||||
As for the purpose of identity, the speaker believes that it is important to define what you want to achieve in your career and be willing to put in the effort required to get there. They encourage others to do the same and not limit themselves to specific roles or labels. The speaker also quotes a book they read, "Daily Stoics" by Robert Green, which says, "At the center of your being you have the answer; you know who you are and you know what you want."
|
||||
|
||||
The speaker's key takeaway is to be true to oneself and follow one's passions, saying "Do what you love and love what you do." They also offer a QR code to access their online book on DevOps and invite others to join their user group.
|
@ -0,0 +1,35 @@
# Day 67 - Art of DevOps: Harmonizing Code, Culture, and Continuous Delivery

[![Watch the video](thumbnails/day67.png)](https://www.youtube.com/watch?v=NTysb2SgfUU)

A discussion of various trends and technologies in DevOps, MLOps, GitOps, and data engineering. Here is a summary of the main points:

1. Data Engineering: Research into running data workloads on Kubernetes is ongoing and is covered at KubeCon and other conferences focusing on these topics.

2. GitOps, AIOps, MLOps: GitOps automates and controls infrastructure using Kubernetes; Argo is a popular project for this (a minimal Argo CD example is sketched after this list). AIOps and MLOps aim to simplify data preparation, model training, and deployment for machine learning engineers and data scientists; Kubeflow is one such project.

3. Simplified Infrastructure: Companies and startups should look towards infrastructure solutions that are scalable and cost-efficient. AWS Lambda and similar services are gaining traction in this area.

4. Microservices Architecture: Service mesh and cloud infrastructure are becoming increasingly popular due to the range of services they offer to companies. AWS, Google Cloud, and others are focusing on Lambda-style services to compete.

5. Platform Engineering: An emerging field that focuses on simplifying the cycle between DevOps and SRE. It involves creating platforms that let companies work effectively, taking into account the latest tools and trends in the industry. Platform Engineering Day at KubeCon is a good resource to learn more about this trend.

6. Resources for Learning DevOps: Several resources were mentioned for learning DevOps from scratch, including the Cloud Talks podcast, the 90DaysOfDevOps repo, the DevOps roadmap by Sam, the DevOps commune (which has around 10K members), and videos by Nana, Viktor's DevOps Toolkit, Kunal, and Rock Cod.
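
As a concrete illustration of the GitOps point above, here is a minimal sketch of an Argo CD `Application` manifest that keeps a cluster in sync with a Git repository. The repository URL, path, and namespaces are hypothetical placeholders, not anything referenced in the talk.

```yaml
# Hypothetical example: declare an app whose desired state lives in Git,
# and let Argo CD continuously reconcile the cluster towards it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-app.git   # placeholder repo
    targetRevision: main
    path: deploy            # directory containing Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true           # remove resources deleted from Git
      selfHeal: true        # revert manual drift back to the Git state
```
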
The speaker discussed various trends in DevOps, including:

1. **Security**: Cybersecurity is crucial, with AI-powered tools being used extensively.
2. **Terraform and Pulumi**: Infrastructure as Code (IaC) helps maintain infrastructure through code.
3. **CI/CD implementation**: Automates the software development life cycle for easier management.
4. **Data on Kubernetes**: Research is ongoing to improve data management on Kubernetes.
5. **GitOps, AIOps, and MLOps**: Automation of pipelines using GitOps, AI-powered tooling, and Machine Learning Operations (MLOps).
6. **Serverless computing and microservices**: Focus on scalable and cost-efficient infrastructure for service-based architectures.
7. **Platform Engineering**: Emerging field simplifying the cycle between DevOps and SRE teams.
8. **Data observability and platform engineering**: Key trends for the next year, with platform engineering a particular area of focus.

The speaker also mentioned various resources for learning DevOps, including:

* Podcasts: Cloud Talks, 90 Days of DevOps
* Videos: Viktor's DevOps Toolkit, Kunal's videos on networking, and Rock Cod
* Communities: DevOps Commune (10K members), Reddit

Overall, the speaker emphasized the importance of cybersecurity, automation, and infrastructure management in DevOps.
@ -0,0 +1,78 @@
# Day 68 - Service Mesh for Kubernetes 101: The Secret Sauce to Effortless Microservices Management

[![Watch the video](thumbnails/day68.png)](https://www.youtube.com/watch?v=IyFDGhqpMTs)

In a service mesh, there are two main components: the data plane and the control plane.

1. Data Plane: Composed of Envoy proxies, which act as sidecars deployed alongside microservices. These proxies manage all communication between microservices and collect telemetry on network traffic. Envoy is an open-source layer 7 proxy designed to move networking logic into a reusable container. It simplifies the network by providing common features that can be used across different platforms, enabling easy communication among containers and services.

2. Control Plane: Consists of istiod (Istio's control-plane component), which configures the proxies to route and secure traffic, enforce policies, and collect telemetry data on network traffic. The control plane handles essential tasks such as service discovery, traffic management, security, reliability, observability, and configuration management in a unified manner.

The service mesh architecture works by transferring all networking logic to the data plane (the proxies), allowing microservices to communicate indirectly through proxies without needing direct contact. This provides numerous benefits, such as:

- Simplified service-to-service communication
- Comprehensive observability features (distributed tracing, logging, monitoring)
- Efficient traffic management (load balancing, traffic shaping, routing, A/B testing, gradual rollouts)
- Enhanced security (built-in support for end-to-end encryption, mutual TLS, access control policies between microservices); see the mutual TLS sketch after this list
- Load balancing capabilities
- Simplified service discovery (automatic registration and discovery of services)
- Consistent configuration across all services
- Policy enforcement (rate limiting, access control, retry logic)
- Ease of scaling (automatic load balancing to adapt to changing traffic patterns)
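
To make the mutual TLS benefit above concrete, here is a minimal sketch of an Istio `PeerAuthentication` resource that requires mTLS for every workload in a namespace. The namespace name is a hypothetical placeholder; the talk does not show this exact manifest.

```yaml
# Hypothetical sketch: enforce mutual TLS for all workloads in the "demo" namespace.
# Sidecar proxies then encrypt and authenticate all service-to-service traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo           # placeholder namespace
spec:
  mtls:
    mode: STRICT            # reject any plain-text traffic between sidecars
```
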
Best practices for using a service mesh include:

1. Incremental adoption
2. Ensuring uniformity across all services
3. Monitoring and logging
4. Strong security policies
5. Proper documentation and training
6. Testing (integration testing)
7. Regular updates
8. Performance optimization
**Key Components**

The main topic is the service mesh architecture, which consists of two components: the data plane (Envoy proxies) and the control plane (istiod).

1. **Data Plane (Envoy proxy)**:
   * Open-source project
   * Layer 7 proxy that moves networking logic into a reusable container
   * Runs as a sidecar alongside each microservice
   * Routes requests between proxies, simplifying network communication

2. **Control Plane (istiod)**:
   * Acts as the brain of the service mesh
   * Provides control and management capabilities
   * Configures proxies to route and secure traffic
   * Enforces security policies and collects telemetry data
   * Handles important aspects such as service discovery, traffic management, security, reliability, observability, and configuration management

**Architecture Example**

A simple architecture diagram is shown, where two services (Service A and Service B) are connected through proxies. The proxies talk to each other and are configured by the control plane (istiod). This demonstrates how all networking logic is transferred to the data plane, eliminating direct communication between microservices.

**Benefits and Use Cases**

Some benefits of a service mesh include:

1. **Service-to-Service Communication**: Simplified communication between microservices
2. **Observability**: Comprehensive observability features like distributed tracing, logging, and monitoring
3. **Traffic Management**: Efficient traffic management with load balancing, traffic shaping, routing, and A/B testing (a weighted-routing sketch follows this list)
4. **Security**: Enhanced security with built-in support for end-to-end encryption, mutual TLS, and access control policies
5. **Load Balancing**: Built-in load balancing capabilities
6. **Service Discovery**: Simplified service discovery by automatically registering and discovering services
7. **Consistent Configuration**: Ensures uniform configuration and policies across all services
8. **Policy Enforcement**: Enforces policies consistently across all services without modifying code
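
As a sketch of the traffic-management benefit above, the following hypothetical Istio manifests split traffic between two versions of a service, which is how gradual rollouts and A/B tests are typically expressed. The service name, subsets, and weights are assumptions for illustration, not taken from the talk.

```yaml
# Hypothetical sketch: send 90% of traffic to v1 and 10% to v2 of a "reviews" service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews              # placeholder service name
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90          # stable version keeps most of the traffic
        - destination:
            host: reviews
            subset: v2
          weight: 10          # canary version receives a small share
```
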
**Best Practices**

To get the most out of a service mesh, follow these best practices:

1. **Incremental Adoption**: Adopt the service mesh gradually, starting with non-critical services (see the namespace-injection sketch after this list)
2. **Uniformity**: Ensure consistent configuration and policies across all services
3. **Monitoring and Logging**: Leverage observability features for monitoring, logging, and diagnosing issues
4. **Strong Security Policies**: Implement strong security policies, including mutual TLS, access control, and end-to-end encryption
5. **Documentation and Training**: Provide comprehensive documentation and training for development and operations teams
6. **Testing**: Conduct thorough testing to ensure the service mesh behaves as expected
7. **Regular Updates**: Keep the service mesh components and configuration up to date to benefit from the latest features, improvements, and security patches
8. **Performance Optimization**: Regularly monitor and optimize performance to meet the required scaling and latency targets
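
One common way to adopt a mesh incrementally, as recommended above, is to opt namespaces in one at a time by labelling them for automatic sidecar injection. A minimal sketch, assuming Istio's standard injection label and a hypothetical namespace name:

```yaml
# Hypothetical sketch: workloads deployed into this namespace get an Envoy sidecar
# injected automatically; other namespaces stay outside the mesh until labelled.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                  # placeholder namespace
  labels:
    istio-injection: enabled    # Istio's automatic sidecar-injection label
```
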
@ -0,0 +1,25 @@
# Day 69 - Enhancing Kubernetes security, visibility, and networking control logic at the Linux kernel

[![Watch the video](thumbnails/day69.png)](https://www.youtube.com/watch?v=mEc0WoPoHdU)

Summary of a presentation about using Cilium, with its Hubble observability and Tetragon kernel-level security components, in a Kubernetes environment. The main focus is on investigating an incident where the Death Star, a hypothetical system, has been compromised through a vulnerability in its exhaust port.

1. The user checks the Hubble dashboard to see the incoming requests and finds that it was a TIE fighter (not a rebel ship) that caused the damage.

2. To find out more about the incident, they investigate using forensics and root-cause-analysis techniques and identify which node caused the problem (a worker node in this case).

3. To dig deeper, they inspect the Tetragon logs related to any connection to the specific HTTP path, where they find the kill command that was executed with its arguments and the TCP traffic being passed. This helps them understand what happened during the incident.

4. The user also shows how to view this data as JSON, which provides more detailed information about the incident, including the start time, Kubernetes pod labels, workload names, and the capabilities the container was running with.

5. Finally, the user demonstrates capturing the flag for this challenge by providing the binary and its arguments in an editor.

Throughout the tutorial, the user emphasizes the importance of network observability, network policies (a sample L7 policy is sketched below), transparent encryption, mutual TLS, and runtime visibility and enforcement using Tetragon. They also mention that more details can be found on Isovalent's website and encourage viewers to join their weekly AMA and request a demo of the enterprise version of their platform.
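
To illustrate the network-policy point, here is a minimal sketch of a Cilium L7 network policy in the spirit of the Star Wars scenario: it only allows Empire-labelled endpoints to POST to the landing endpoint, so a request to the exhaust-port path would be denied. The labels and paths are assumptions modelled on Cilium's public demo, not taken verbatim from the talk.

```yaml
# Hypothetical sketch: L7-aware policy protecting the deathstar service.
# Only empire-labelled endpoints may call POST /v1/request-landing;
# anything else (e.g. a request to /v1/exhaust-port) is dropped at layer 7.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: protect-deathstar
spec:
  endpointSelector:
    matchLabels:
      org: empire             # placeholder labels
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "POST"
                path: "/v1/request-landing"
```
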
The main points from this content are:

1. Using a Star Wars scenario as an analogy to demonstrate how attackers can exploit vulnerabilities in a system.
2. The use of Tetragon to investigate and analyze network traffic and logs to identify potential security threats.
3. The importance of network observability, network policies, transparent encryption, and runtime visibility and enforcement in securing the environment.
4. The value of conducting forensics and root cause analysis to identify the source of a security breach.
5. The use of JSON output to view the data and export it for further analysis.

Overall, this content emphasizes understanding how a system can be attacked, as well as using the available tools and techniques to analyze and secure network traffic and logs.
@ -0,0 +1,12 @@
# Day 70 - Simplified Cloud Adoption with Microsoft's Terraform Azure Landing Zone Module

[![Watch the video](thumbnails/day70.png)](https://www.youtube.com/watch?v=r1j8CrwS36Q)

The speaker provides guidance on implementing a landing zone in Azure using the Cloud Adoption Framework (CAF) landing zone with Terraform. Here are the key points:

1. Use Azure Policy to enable tag inheritance, which tags more resources automatically and improves cost management.
2. Review the CAF review checklist for best practices in building and customizing landing zones.
3. Stay up to date by checking the "What's new" page on the CAF website, following blog posts, and attending community calls.
4. Utilize resources like the Terraform team's roadmap to know what features are being worked on and when.
5. Contribute feedback or issues to the relevant repositories (such as the Enterprise-Scale Azure Landing Zone repo) to collaborate with the development teams.
6. The speaker recommends watching recorded community calls, especially those held in Australian time zones, at 2x speed, pausing where necessary, for maximum efficiency.
7. The speaker also shares their LinkedIn profile and Bluesky handle for further communication or feedback.
@ -0,0 +1,25 @@
# Day 71 - Chatbots are going to destroy infrastructures and your cloud bills

[![Watch the video](thumbnails/day71.png)](https://www.youtube.com/watch?v=arpyvrktyzY)

The speaker explains that their chatbot application takes a long time to respond because it has many dependencies, and the current infrastructure runs everything on the ECS container service in the Paris region. They suggest separating the chat functionality from the API, since the chat (LLM) workload and the API calls have very different resource profiles. They also recommend monitoring costs closely to avoid unnecessary expenses, especially where large language models (LLMs) are involved. They advise putting the chat in its own container or using Lambda functions for better scalability and cost control, as sketched below. Separating components into microservices helps manage dependencies and optimize performance, and tools like Sentry can identify slow queries so they can be optimized. The speaker concludes that these changes would improve the application's stability and efficiency and could save costs over time.
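
As one possible shape for the "chat in its own Lambda" advice, here is a minimal sketch of an AWS SAM template that deploys the chat path as a separate serverless function behind its own HTTP endpoint. The handler, runtime, sizing, and path are hypothetical assumptions for illustration; the talk does not prescribe this exact setup.

```yaml
# Hypothetical sketch: isolate the chat (LLM) path on its own Lambda function,
# so it scales and is billed independently of the rest of the API.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ChatFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./chat/           # placeholder directory for the function code
      Handler: chat.handler      # placeholder module and function name
      Runtime: python3.12
      MemorySize: 1024           # Lambda CPU scales with memory; tune for the LLM calls
      Timeout: 60                # keep slow LLM responses bounded
      Events:
        ChatApi:
          Type: HttpApi
          Properties:
            Path: /chat
            Method: post         # chat requests hit this route only
```
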
Here are some key takeaways:

**Architecture:**

1. Separate the chat (LLM) from the API, as they require different resources.
2. Run the chat in its own instance on a container service like ECS, reducing overhead and improving scalability.
3. Consider using Lambdas for chat responses to reduce costs and improve performance.

**Operations:**

1. Keep the chatbot's code separate from your main application to prevent performance issues and high costs.
2. Use a microservices architecture to break down complex applications into smaller, more manageable components.
3. Monitor your costs regularly to ensure you're not overspending on infrastructure or services.

**Lessons learned:**

1. Put the chat in its own container and instance to reduce costs and improve scalability.
2. Separate dependencies and components using a microservices architecture.
3. Monitor your cloud bills and optimize your resources accordingly.

By following these guidelines, you can create a more efficient, scalable, and cost-effective infrastructure for your chatbot or application.