Merge branch 'MichaelCade:main' into main

.github/workflows/deploy-blog-posts.yml (vendored)
@@ -22,7 +22,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Publish articles on dev.to
         uses: sinedied/publish-devto@v2
         id: publish_devto
@@ -54,4 +54,4 @@ jobs:
           Changes result:
           ```
           ${{ steps.publish_devto.outputs.result_summary }}
           ```
@@ -80,7 +80,6 @@ My advice is to watch all of the below and hopefully you also picked something u
 - [Continuous Testing - IBM YouTube](https://www.youtube.com/watch?v=RYQbmjLgubM)
 - [Continuous Integration - IBM YouTube](https://www.youtube.com/watch?v=1er2cjUq1UI)
 - [Continuous Monitoring](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
-- [The Remote Flow](https://www.notion.so/The-Remote-Flow-d90982e77a144f4f990c135f115f41c6)
 - [FinOps Foundation - What is FinOps](https://www.finops.org/introduction/what-is-finops/)
 - [**NOT FREE** The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win](https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/1942788290/)

@@ -170,6 +170,5 @@ You can customise this portal with your branding and this might be something we
 - [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
 - [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
 - [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
-- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

 See you on [Day 31](day31.md)
@@ -43,43 +43,44 @@ Once deployed, we operate it. And operating it can involve something like you…

## Monitoring

All of the parts above lead to this final step, because you need monitoring in place, especially around operational issues such as auto-scaling and troubleshooting: you won't know there is a problem if you don't have monitoring to tell you there is one. Some of the things you might build monitoring for are memory utilisation, CPU utilisation, disk space, and API endpoint response time (how quickly that endpoint is responding).

A big part of this is also logs. Logs give developers the ability to see what is happening without having to access production systems.
## Rinse & Repeat

Once that's in place, you go right back to the beginning, to the planning stage, and go through the whole thing again.

## Continuous

Many tools help us achieve the continuous process above. All of this code, and the ultimate goal of being completely automated, in cloud infrastructure or any other environment, is often described as Continuous Integration / Continuous Delivery / Continuous Deployment, or "CI/CD" for short. We will spend a whole week on CI/CD later in the 90 days, with some examples and walkthroughs to grasp the fundamentals.

### Continuous Delivery

Continuous Delivery = Plan > Code > Build > Test

### Continuous Integration

This is effectively the outcome of the Continuous Delivery phases above plus the outcome of the Release phase. This is the case for both failure and success, and the result is either fed back into Continuous Delivery or moved on to Continuous Deployment.

Continuous Integration = Plan > Code > Build > Test > Release

### Continuous Deployment

If you have a successful release from your Continuous Integration, you then move to Continuous Deployment, which brings in the following phases:

CI Release Success = Continuous Deployment = Deploy > Operate > Monitor
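The three phase chains above can be sketched in a few lines of Go, purely as an illustration of how Continuous Integration extends Continuous Delivery by one phase:

```go
package main

import "fmt"

func main() {
	// The "Continuous" practices are different cuts of the same lifecycle.
	delivery := []string{"Plan", "Code", "Build", "Test"}
	// Continuous Integration is Continuous Delivery plus the Release phase.
	integration := append(delivery, "Release")
	// A successful CI release then feeds into Continuous Deployment.
	deployment := []string{"Deploy", "Operate", "Monitor"}

	fmt.Println("Continuous Delivery:   ", delivery)
	fmt.Println("Continuous Integration:", integration)
	fmt.Println("Continuous Deployment: ", deployment)
}
```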
You can see these three Continuous notions above as a simple collection of the phases of the DevOps lifecycle.

This last bit was a bit of a recap of Day 3 for me, but I think it makes things clearer.

### Resources

- [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
- [Techworld with Nana - DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)

If you made it this far, you will know whether this is where you want to be or not.

See you on [Day 6](day06.md).
2022/pt-br/Days/day06.md (new file, 75 lines)

@@ -0,0 +1,75 @@
---
title: '#90DaysOfDevOps - DevOps - The Real Stories - Day 6'
published: false
description: 90DaysOfDevOps - DevOps - The Real Stories
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048855
---

## DevOps - The Real Stories

In the beginning, DevOps was considered out of reach for many of us because we didn't have companies like Netflix or the Fortune 500 practising it, but I think that is now shifting towards the norm as businesses begin to adopt a DevOps practice.

You will see from the references below that there are many different industries and verticals using DevOps and therefore having a hugely positive effect on their business objectives.

The overarching benefit here is that DevOps, if done correctly, should help improve the speed and quality of your business's software development.

I wanted to take this day to look at successful companies that have adopted a DevOps practice and share some resources about them. This will be a great opportunity for the community to dive in and help here. Have you adopted a DevOps culture in your business? Was it successful?

I mentioned Netflix above and will touch on it again, as it is a very good and quite advanced model compared to what we generally see today, but I will also mention some other big brands that are succeeding at this.

## Amazon

In 2010 Amazon moved its physical server footprint to the AWS (Amazon Web Services) cloud. This allowed them to save resources by scaling capacity up and down in very small increments. We also know that AWS went on to generate high revenues while running Amazon's retail arm.

In 2011 (according to the link below) Amazon adopted a continuous deployment process where its developers could deploy code whenever they wanted, to whichever servers they needed. This enabled Amazon to deploy new software to production servers on average every 11.6 seconds!

## Netflix

Who doesn't use Netflix? It is a high-quality streaming service and, personally speaking, it provides a great user experience.

Why is that user experience so good? Well, the ability to deliver a service with no personal recollection of failures requires speed, flexibility and attention to quality.

Netflix developers can automatically build pieces of code into deployable web images without relying on IT operations. As the images are updated, they are integrated into Netflix's infrastructure through a custom web-based platform.

Continuous monitoring is in place so that if the deployment of the images fails, the new images are rolled back and traffic is redirected back to the previous version.

There is a great talk listed below that goes into more detail about the dos and don'ts within Netflix's teams.

## Etsy

As with many of us and many companies, there was a real struggle around slow and painful deployments. Along the same lines, we may also have experience of working in companies with lots of silos and teams that don't work well together.

From what I could tell reading about Amazon and Netflix, Etsy may have adopted letting developers deploy their own code around the end of 2009, which may have been even before the other two. (Interesting!)

An interesting takeaway I read here was that they realised that when developers feel responsible for deployment, they will also take responsibility for application performance, uptime and other goals.

A learning culture is a key part of DevOps; even failure can be a success if lessons are learned. (I'm not sure where that quote came from, but it makes sense!)

I have added some other stories where DevOps changed the game at some of these massively successful companies.

## Resources

- [How Netflix Thinks of DevOps](https://www.youtube.com/watch?v=UTKIT6STSVM)
- [16 Popular DevOps Use Cases & Real Life Applications [2021]](https://www.upgrad.com/blog/devops-use-cases-applications/)
- [DevOps: The Amazon Story](https://www.youtube.com/watch?v=ZzLa0YEbGIY)
- [How Etsy makes DevOps work](https://www.networkworld.com/article/2886672/how-etsy-makes-devops-work.html)
- [Adopting DevOps @ Scale - Lessons learnt at Hertz, Kaiser Permanente and IBM](https://www.youtube.com/watch?v=gm18-gcgXRY)
- [Interplanetary DevOps at NASA JPL](https://www.usenix.org/conference/lisa16/technical-sessions/presentation/isla)
- [Target CIO explains how DevOps took root inside the retail giant](https://enterprisersproject.com/article/2017/1/target-cio-explains-how-devops-took-root-inside-retail-giant)

### A recap of our first days looking at DevOps

- DevOps is a combination of Development and Operations that allows a single team to manage the whole application development lifecycle, consisting of **Development**, **Testing**, **Deployment** and **Operations**.

- The main focus and aim of DevOps is to shorten the development lifecycle while frequently delivering features, fixes and functionality in close alignment with business objectives.

- DevOps is a software development approach through which software can be delivered and developed reliably and quickly. You may also see this referred to as **continuous development, testing, deployment and monitoring**.

If you made it this far, you will know whether this is where you want to be or not. See you on [Day 7](day07.md).

On Day 7 we will be diving into a programming language. I don't aim to be a developer, but I want to understand what developers are doing.

Can we achieve that in a week? Probably not, but if we spend 7 days or 7 hours learning something, we will know more than when we started.
2022/pt-br/Days/day07.md (new file, 71 lines)

@@ -0,0 +1,71 @@
---
title: '#90DaysOfDevOps - The Big Picture: Learning a Programming Language - Day 7'
published: false
description: '90DaysOfDevOps - The Big Picture: Learning a Programming Language'
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048856
---

## The Big Picture: DevOps & Learning a Programming Language

I think it is fair to say that to be successful in the long term as a DevOps engineer, you need to know at least one programming language at a foundational level. I want to take this first session of the section to explore why this is such an important skill to have, and hopefully by the end of this week or section you will have a better understanding of the why, the how, and what to do to progress on your learning journey.

I think if I asked on social media whether you need programming skills for DevOps-related roles, the answer would most likely be a resounding yes. Let me know if you think otherwise. OK, but then a bigger question, and this is where you won't get such a clear answer: which programming language? The most common answer I have seen here has been Python, or, increasingly, Golang (Go) as the language you should learn.

To be successful in DevOps you need a good grounding in programming skills, or at least that is what I have concluded. But we have to understand why we need it in order to choose the right path.

## Understand why you need to learn a programming language

The reason Python and Go are recommended so often for DevOps engineers is that many DevOps tools are written in Python or Go, which makes sense if you are going to be building DevOps tools. This matters because it will largely determine what you should learn and what is likely to be most beneficial. If you are going to be building DevOps tools, or joining a team that does, then it would make sense to learn that same language. If you are heavily involved in Kubernetes or containers, then it is more than likely you will want to choose Go as your programming language. For me, the company I work for (Kasten by Veeam) is in the cloud-native ecosystem, focused on data management for Kubernetes, and everything is written in Go.

But then you might not have a clear-cut rationale like that to choose by: you might be a student, or transitioning careers, with no real decision made for you. I think in this situation you should choose the language that seems to resonate with, and fit, the applications you want to work with.

Remember, I am not looking to become a software developer here; I just want to understand a little more about the programming language so that I can read and understand what these tools are doing, which may then lead us to how we can help improve things.

It is also important to know how you interact with the DevOps tools themselves, be that Kasten K10 or Terraform and HCL. These are what we call configuration files, and they are how you interact with those DevOps tools to make things happen; commonly these will be YAML. (We may use the last day of this section to dive a little into YAML.)

## Did I just talk you out of learning a programming language?

Most of the time, or depending on the role, you will be helping engineering teams implement DevOps into their workflow, doing lots of testing around the application and making sure the workflow built aligns with the DevOps principles we mentioned in the first few days. But in reality, a lot of the time you will be troubleshooting an application performance issue or something similar. This comes back to my original point and reasoning: is the programming language I need to know the one the code is written in? If the application is written in NodeJS, it won't help much if you have a Go or Python badge.

## Why Go?

Why is Golang the next programming language for DevOps? Go has become a very popular programming language in recent years. According to the StackOverflow survey for 2021, Go came fourth among the most wanted programming, scripting and markup languages, with Python at the top, but hear me out. [StackOverflow 2021 Developer Survey – Most Wanted link](https://insights.stackoverflow.com/survey/2021#section-most-loved-dreaded-and-wanted-programming-scripting-and-markup-languages)

As I also mentioned, some of the best-known DevOps tools and platforms are written in Go, such as Kubernetes, Docker, Grafana and Prometheus.

What are some of the characteristics of Go that make it great for DevOps?

## Building and deploying Go programs

An advantage of using an interpreted language like Python in a DevOps role is that you don't need to compile a Python program before running it. Especially for smaller automation tasks, you don't want to be slowed down by a build process that requires compilation. Although Go is a compiled programming language, **Go compiles directly into machine code**, and Go is also known for fast compile times.

## Go vs Python for DevOps

Go programs are statically linked. This means that when you compile a Go program, everything is included in a single binary executable and no external dependencies need to be installed on the remote machine, which makes deploying Go programs easy, compared with a Python program that uses external libraries, where you have to make sure all of those libraries are installed on the remote machine you want to run on.

Go is a platform-independent language, which means you can produce binary executables for all the operating systems (Linux, Windows, macOS etc.), and it is very easy to do. With Python, it is not as easy to create these binary executables for specific operating systems.

Go is a very performant language: it has fast compilation and a fast run time, with lower usage of resources such as CPU and memory, especially compared with Python. Numerous optimisations implemented in the Go language make it this efficient. (Resources below)

Unlike Python, which generally requires the use of third-party libraries to implement a given Python program, Go includes a standard library that has most of the functionality needed for DevOps built directly into it. This includes file processing functionality, HTTP web services, JSON processing, native support for concurrency and parallelism, as well as built-in testing.
This is by no means throwing Python under the bus; I am simply giving my reasons for choosing Go, and they are not really the Go vs Python points above. It is generally because it makes sense, since the company I work for develops software in Go, and that's why.

I will also say (or at least I have been told, as I am not that many pages into this chapter yet) that once you have learned your first programming language, it becomes easier to pick up other languages. You will probably never have a single job at any company anywhere where you don't have to deal with managing, architecting, orchestrating and debugging JavaScript and NodeJS applications.

## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)

Over the next 6 days on this topic, I aim to work through some of the resources listed above and document my notes each day. You will notice they are generally around 3 hours as a complete course. I wanted to share my full list so that, if you have the time, you can work through each one; I will stick to my hour of learning each day.

See you on [Day 8](day08.md).
2022/pt-br/Days/day08.md (new file, 112 lines)

@@ -0,0 +1,112 @@
---
title: '#90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World - Day 8'
published: false
description: 90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048857
---

## Setting up your DevOps environment for Go & Hello World

Before we get into some of the fundamentals of Go, we should install it on our workstation and do what every "programming 101" module teaches us: create the Hello World application. As this one walks through the steps of installing Go on your workstation, we will try to document the process in pictures so that people can follow along easily.

First of all, head to [go.dev/dl](https://go.dev/dl/) and you will be greeted with some available options for downloads.

![](../../Days/Images/Day8_Go1.png)

If you have made it this far, you probably know which workstation operating system you are running, so select the appropriate download and we can get installing. I am using Windows for this walkthrough; basically, from the next screen onwards we can leave all the defaults in place for now. **_(I will note that at the time of writing this was the latest version, so the screenshots might be out of date.)_**

![](../../Days/Images/Day8_Go2.png)

Also note that if you have an older version of Go installed, you will have to remove it before installing. Windows has this built into the installer, which will remove the old version and install the new one in a single step.

Once finished, you should open a command prompt/terminal, and we want to check that we have Go installed. If you do not get the output we see below, then Go is not installed and you will need to retrace your steps.

`go version`

![](../../Days/Images/Day8_Go3.png)

Next, we want to check our environment for Go. It is always good to check that your working directories are configured correctly; as you can see below, we need to make sure you have the following directory on your system.

![](../../Days/Images/Day8_Go4.png)

Did you check? Are you following along? You will probably get something like the below if you try to navigate there.

![](../../Days/Images/Day8_Go5.png)

OK, let's create that directory to be safe. I am going to use the mkdir command in my PowerShell terminal. We also need to create 3 folders inside the Go folder, as you will see below.

![](../../Days/Images/Day8_Go6.png)

We now have Go installed and our Go working directory ready for action. Next we need an integrated development environment (IDE). There are many available that you can use, but the most common, and the one I use, is Visual Studio Code, or Code. You can learn more about IDEs [here](https://www.youtube.com/watch?v=vUn5akOlFXQ).

If you have not already downloaded and installed VSCode on your workstation, you can do so by heading [here](https://code.visualstudio.com/download). As you can see below, you have different OS options.

![](../../Days/Images/Day8_Go7.png)

Much the same as with the Go installation, we are going to download and install and keep the defaults. Once complete, you can open VSCode, select Open File, and navigate to the Go directory we created above.

![](../../Days/Images/Day8_Go8.png)

You may get a pop-up about trust; read it if you want, and then hit Yes, trust the authors. (I am not responsible later on if you start opening things you don't trust!)

You should now see the three folders we created earlier, and what we want to do now is right-click the src folder and create a new folder called `Hello`.

![](../../Days/Images/Day8_Go9.png)

Pretty easy stuff so far, I would say? Now we are going to create our first Go program with no understanding of anything we put into this next phase.

Next, create a file called `main.go` in your `Hello` folder. As soon as you hit enter on main.go, you will be asked if you want to install the Go extension and also the packages. You can also check that empty pkg folder we made a few steps back and notice that we should have some new packages in there now.

![](../../Days/Images/Day8_Go10.png)

Now let's get this Hello World app going: copy the following code into your new main.go file and save it.

```go
package main

import "fmt"

func main() {
    fmt.Println("Hello #90DaysOfDevOps")
}
```

I appreciate that the above might make no sense at all, but we will cover more about functions, packages and the rest later. For now, let's run our app. Back in the terminal and our Hello folder, we can now check that all is working. Using the command below, we can check that our generic learning program is working.

```
go run main.go
```

![](../../Days/Images/Day8_Go11.png)

It doesn't end there, though. What if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command:

```
go build main.go
```

![](../../Days/Images/Day8_Go12.png)

If we run that, we would see the same output:

```bash
$ ./main.exe
Hello #90DaysOfDevOps
```
|
||||||
|
|
||||||
|
## Resources
|
||||||
|
|
||||||
|
- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
See you on [Day 9](day09.md).

![](../../Days/Images/Day8_Go13.png)
|
![](../../Days/Images/Day28_Cloud1.png)

So do public cloud services require a DevOps mindset? My answer here is no, but to really take advantage of cloud computing, and to avoid huge compute bills, it is important to think about DevOps and the Cloud together.

If we look at what the public cloud means at a high level, it removes the responsibility of managing servers and lets us focus more on other, more important aspects: the application and its users. After all, the public cloud is just someone else's computer.
---
title: '#90DaysOfDevOps - The Big Picture: DevOps and Linux - Day 14'
published: false
description: 90DaysOfDevOps - The Big Picture DevOps and Linux
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1049033
---

## The Big Picture: DevOps and Linux
Linux and DevOps share very similar cultures and perspectives; both are focused on customization and scalability. Both of these aspects of Linux are of particular importance for DevOps.
A lot of technologies start on Linux, especially if they are related to software development or managing infrastructure.
Also, lots of open-source projects, especially DevOps tools, were designed to run on Linux from the start.
From a DevOps perspective, or in fact from any operations role's perspective, you are mostly going to come across Linux. There is a place for WinOps, but the majority of the time you are going to be administering and deploying Linux servers.
I have been using Linux daily for a number of years, but my go-to desktop machine has always been either macOS or Windows. However, when I moved into the Cloud Native role I am in now, I took the plunge and made my laptop fully Linux based as my daily driver. While I still needed Windows for work-based applications, and a lot of my audio and video gear does not run on Linux, I forced myself to run a Linux desktop full time to get a better grasp of a lot of the things we are going to touch on over the next 7 days.
## Getting Started
I am not suggesting you do the same as me by any stretch, as there are easier and less destructive options, but I will say that taking that full-time step forces you to learn faster how to make things work on Linux.
For the majority of these 7 days, I am going to deploy a virtual machine in VirtualBox on my Windows machine. I am also going to deploy a desktop version of a Linux distribution, whereas a lot of the Linux servers you will be administering will likely come with no GUI, where everything is shell-based. However, as I said at the start, a lot of the tools that we cover throughout these 90 days started out on Linux, so I would also strongly encourage you to take the dive into running that Linux desktop for the learning experience.

For the rest of this post, we are going to concentrate on getting an Ubuntu Desktop virtual machine up and running in our VirtualBox environment. Now, we could just download [Virtual Box](https://www.virtualbox.org/) and grab the latest [Ubuntu ISO](https://ubuntu.com/download) from the sites linked and go ahead and build out our desktop environment, but that wouldn't be very DevOps of us, would it?
Another good reason to use most Linux distributions is that they are free and open source. We are choosing Ubuntu as it is probably the most widely deployed distribution, not counting mobile devices and enterprise Red Hat Enterprise Linux servers. I might be wrong there, but given the history around CentOS, I bet Ubuntu is high on the list, and it's super simple.
## Introducing HashiCorp Vagrant

Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use Vagrant to spin virtual machines up and down across many different platforms, including vSphere, Hyper-V, VirtualBox and also Docker. It does have other providers, but we will stick with VirtualBox here, so we are good to go.

The first thing we need to do is get Vagrant installed on our machine. When you go to the downloads page, you will see all the operating systems listed for your choice: [HashiCorp Vagrant](https://www.vagrantup.com/downloads). I am using Windows, so I grabbed the binary for my system and installed it.

Next up, we also need to get [Virtual Box](https://www.virtualbox.org/wiki/Downloads) installed. Again, this can be installed on many different operating systems; a good reason to choose this and Vagrant is that whether you are running Windows, macOS, or Linux, we have you covered here.

Both installations are pretty straightforward, and both have great communities around them. If you have issues, feel free to reach out and I can try to assist too.

## Our first VAGRANTFILE

The VAGRANTFILE describes the type of machine we want to deploy. It also defines how the configuration and provisioning of this machine should look.

When it comes to saving these and organising your VAGRANTFILEs, I tend to put them in their own folders in my workspace. You can see below how this looks on my system. Hopefully, following this, you will play around with Vagrant and see the ease of spinning up different systems; it is also great for that rabbit hole known as distro hopping for Linux desktops.

![](../../Days/Images/Day14_Linux1.png)

Let's take a look at that VAGRANTFILE then and see what we are building.
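The VAGRANTFILE contents were trimmed from this excerpt; a minimal sketch consistent with the description that follows might look like this (the box name here is an assumption, pick any desktop Ubuntu box from the public catalog):

```
Vagrant.configure("2") do |config|
  # the "box" is the base image; this Ubuntu desktop box name is illustrative
  config.vm.box = "chenhan/ubuntu-desktop-20-04"
  config.vm.provider :virtualbox do |v|
    v.memory = 8192   # 8GB of memory, as described below
    v.cpus = 4        # 4 CPUs
  end
end
```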
This is a very simple VAGRANTFILE. Overall, we are saying we want a specific "box", a box being either a public image or a private build of the system you are looking for. You can find a long list of publicly available boxes in the [public catalog of Vagrant boxes](https://app.vagrantup.com/boxes/search).
On the next line, we are saying we want to use a specific provider, in this case `VirtualBox`, and then we want to set our machine's memory to `8GB` and our number of CPUs to `4`. My experience also tells me that you may want to add the following line if you experience display issues. This will set the video memory to what you want; I would ramp this right up to `128MB`, but it depends on your system.
```
v.customize ["modifyvm", :id, "--vram", ""]
```
I have also placed a copy of this specific Vagrantfile in the [Linux Folder](Linux/VAGRANTFILE).
## Provisioning our Linux Desktop

We are now ready to get our first machine up and running. In your workstation's terminal (in my case, PowerShell on my Windows machine), navigate to your projects folder, where you will find your VAGRANTFILE. Once there, you can type the command `vagrant up` and, if everything is correct, you will see something like the below.

![](../../Days/Images/Day14_Linux2.png)

Another thing to add here is that the network will be set to `NAT` on your virtual machine. At this stage we don't really need to know about NAT; I plan to have a whole session talking about it in the Networking section. But know that it is the easy button when it comes to getting a machine onto your home network, and it is also the default networking mode in VirtualBox. You can find out more in the [Virtual Box documentation](https://www.virtualbox.org/manual/ch06.html#network_nat).

Once `vagrant up` is complete, we can use `vagrant ssh` to jump straight into the terminal of our new VM.

![](../../Days/Images/Day14_Linux3.png)

This is where we will do most of our exploring over the next few days, but I also want to dive into some customisations I have made for your developer workstation; they make your life much simpler when running this as your daily driver. And of course, are you really in DevOps unless you have a cool nonstandard terminal?

But just to confirm, in VirtualBox you should see the login prompt when you select your VM.

![](../../Days/Images/Day14_Linux4.png)

Oh, and if you made it this far and have been asking "WHAT IS THE USERNAME & PASSWORD?"
- Username = vagrant
- Password = vagrant
Tomorrow we are going to get into some of the commands and what they do. The terminal is going to be the place to make everything happen.

## Resources

- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)

There are going to be lots of resources I find as we go through, and much like the Go resources, I am generally going to keep them to FREE content so we can all partake and learn here.

As I mentioned, next up we will take a look at the commands we might be using daily in our Linux environments.

See you on [Day15](day15.md)
---
title: '#90DaysOfDevOps - Linux Commands for DevOps (Actually everyone) - Day 15'
published: false
description: 90DaysOfDevOps - Linux Commands for DevOps (Actually everyone)
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048834
---

## Linux Commands for DevOps (Actually everyone)
I mentioned [yesterday](day14.md) that we are going to be spending a lot of time in the terminal using commands to get stuff done.

I also mentioned that with our Vagrant-provisioned VM we can use `vagrant ssh` to gain access to our box. You will need to be in the same directory we provisioned it from.

For SSH you won't need the username and password; you will only need those if you decide to log in to the VirtualBox console.

This is where we want to be, as per below:

![](../../Days/Images/Day15_Linux1.png)
## Commands

Obviously, I cannot cover all the commands here; there are pages and pages of documentation covering them. But if you are ever in your terminal and just need to understand the options for a specific command, we have the `man` pages, short for manual. We can use these for each of the commands we touch on during this post to find out more options for each one. We can run `man man`, which will give you the help for the manual pages themselves. To escape the man pages, press `q` for quit.

![](../../Days/Images/Day15_Linux2.png)
![](../../Days/Images/Day15_Linux3.png)
`sudo` - If you are familiar with Windows and the right-click `run as administrator`, you can think of `sudo` as very much the same thing. When you run a command with `sudo`, you will run it as `root`; it will prompt you for your password before running the command.

![](../../Days/Images/Day15_Linux4.png)
For one-off jobs like installing applications or services, you might need `sudo`, but what if you have several tasks to deal with and want to live as `sudo` for a while? This is where you can use `sudo su`; as with `sudo`, once entered you will be prompted for your `root` password. In a test VM like ours this is fine, but I would find it very hard for us to roll around as `root` for prolonged periods; bad things can happen. To get out of this elevated position, simply type `exit`.

![](../../Days/Images/Day15_Linux5.png)
I find myself using `clear` all the time. The `clear` command does exactly what it says: it clears the screen of all previous commands, putting your prompt at the top and giving you a nice clean workspace. In the Windows cmd prompt, I think the equivalent is `cls`.

![](../../Days/Images/Day15_Linux6.png)
Let's now look at some commands where we can actually create things within our system and then visualise them in our terminal. First of all, we have `mkdir`, which allows us to create a folder in our system. With the following command we can create a folder in our home directory called Day15: `mkdir Day15`

![](../../Days/Images/Day15_Linux7.png)
`cd` allows us to change directory, so to move into our newly created directory we can use `cd Day15`; tab can also be used to autocomplete the available directory. If we want to get back to where we started, we can use `cd ..`

![](../../Days/Images/Day15_Linux8.png)
`rmdir` allows us to remove a directory. If we run `rmdir Day15` then the folder will be removed (note that this will only work if the folder is empty).

![](../../Days/Images/Day15_Linux9.png)
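Putting those three commands together, here is a quick round trip you can try in your home directory:

```bash
mkdir Day15        # create the directory
cd Day15           # move into it
cd ..              # step back out
rmdir Day15        # remove it again (only works because it is empty)
```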
I am sure we have all navigated to the depths of our file system and not known where we are. `pwd` gives us a printout of the working directory; as much as it looks like password, it stands for print working directory.

![](../../Days/Images/Day15_Linux10.png)
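A tiny demonstration of getting lost and then asking where you are:

```bash
mkdir -p depths/of/the/file/system   # somewhere deep to wander into
cd depths/of/the/file/system
pwd                                  # prints the full path you are sitting in
```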
We know how to create folders and directories, but how do we create files? We can create files using the `touch` command: running `touch Day15` would create a file. Ignore `mkdir` for now; we are going to see this again later.

![](../../Days/Images/Day15_Linux11.png)
`ls` - I can bet my house on this one; you will use this command many times. It lists all the files and folders in the current directory. Let's see if we can see the file we just created.

![](../../Days/Images/Day15_Linux12.png)
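A quick check of `touch` and `ls` together:

```bash
touch Day15        # create an empty file called Day15
ls                 # list everything in the current directory
ls -al             # long listing: hidden files, owners and permissions too
```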
How can we find files on our Linux system? `locate` allows us to search our file system. If we use `locate Day15`, it will report back the location of the file. A bonus round: if you know the file exists but you get a blank result, run `sudo updatedb`, which will index all the files in the file system, then run your `locate` again. If you do not have `locate` available, you can install it with `sudo apt install mlocate`.

![](../../Days/Images/Day15_Linux13.png)
What about moving files from one location to another? `mv` allows you to move your files. For example, `mv Day15 90DaysOfDevOps` will move your file into the 90DaysOfDevOps folder.

![](../../Days/Images/Day15_Linux14.png)
We have moved our file, but what if we now want to rename it to something else? We can do that using the `mv` command again... WOT!!!? Yep, we can simply use `mv Day15 day15` to change the case, or `mv day15 AnotherDay` to change it altogether; now use `ls` to check the file.

![](../../Days/Images/Day15_Linux15.png)
Enough is enough; let's now get rid of (delete) our file, and maybe even our directory if we have one created. Simply, `rm AnotherDay` will remove our file. We will also use `rm -R` quite a bit, which works recursively through a folder or location. We might also use `rm -R -f` to force the removal of all of those files. Spoiler: run `rm -R -f /` with sudo added and you can say goodbye to your system....!

![](../../Days/Images/Day15_Linux16.png)
We have looked at moving files around, but what if I just want to copy files from one folder to another? Simply put, it is very similar to the `mv` command, but we use `cp`, so we can now say `cp Day15 Desktop`.

![](../../Days/Images/Day15_Linux17.png)
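The move, copy and remove commands above, run end to end:

```bash
touch Day15                         # a file to play with
mkdir 90DaysOfDevOps                # and a folder to move it into
mv Day15 90DaysOfDevOps/            # move the file into the folder
cp 90DaysOfDevOps/Day15 AnotherDay  # copy it back out under a new name
rm AnotherDay                       # delete the copy
rm -R 90DaysOfDevOps                # recursively delete the folder and its contents
```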
We have created folders and files, but we haven't actually put any contents into our folder. We can add contents a few ways, but an easy way is `echo`. We can also use `echo` to print out a lot of things in our terminal; I personally use echo a lot to print out system variables, to know whether they are set. We can use `echo "Hello #90DaysOfDevOps" > Day15` and this will add that text to our file. We can also append to our file using `echo "Commands are fun!" >> Day15`.

![](../../Days/Images/Day15_Linux18.png)
Another one of those commands you will use a lot! `cat` is short for concatenate. We can use `cat Day15` to see the contents of the file. Great for quickly reading those configuration files.

![](../../Days/Images/Day15_Linux19.png)
If you have a long, complex configuration file and you want or need to find something fast in that file versus reading every line, then `grep` is your friend. It allows us to search the file for a specific word using `cat Day15 | grep "#90DaysOfDevOps"`.

![](../../Days/Images/Day15_Linux20.png)
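Here are `echo`, `cat` and `grep` working together on the same file:

```bash
echo "Hello #90DaysOfDevOps" > Day15   # > creates/overwrites the file
echo "Commands are fun!" >> Day15      # >> appends a second line
cat Day15                              # show both lines
cat Day15 | grep "#90DaysOfDevOps"     # show only the matching line
```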
If you are like me and use that `clear` command a lot, then you might miss some of the commands previously run. We can use `history` to find out all the commands we have run before. `history -c` will remove the history.
When you run `history` and would like to pick a specific command, you can use `!3` to choose the 3rd command in the list.
You are also able to use `history | grep "Command"` to search for something specific.
On servers, to trace back when a command was executed, it can be useful to append the date and time to each command in the history file.
The following system variable controls this behaviour:

```
HISTTIMEFORMAT="%d-%m-%Y %T "
```
You can easily add it to your bash_profile:

```
echo 'export HISTTIMEFORMAT="%d-%m-%Y %T "' >> ~/.bash_profile
```
It is also useful to allow the history file to grow bigger:

```
echo 'export HISTSIZE=100000' >> ~/.bash_profile
echo 'export HISTFILESIZE=10000000' >> ~/.bash_profile
```

![](../../Days/Images/Day15_Linux21.png)
Need to change your password? `passwd` allows us to change our password. Note that when you add your password hidden like this, it will not be shown in `history`; however, if your command contains `-p PASSWORD`, then it will be visible in your `history`.

![](../../Days/Images/Day15_Linux22.png)
We might also want to add new users to our system. We can do this with `useradd`; we have to add the user using our `sudo` command. We can add a new user with `sudo useradd NewUser`.

![](../../Days/Images/Day15_Linux23.png)
Creating a group again requires `sudo`, and we can use `sudo groupadd DevOps`. Then, if we want to add our new user to that group, we can do so by running `sudo usermod -a -G DevOps NewUser`; `-a` is append and `-G` is the group name.

![](../../Days/Images/Day15_Linux24.png)
How do we add users to the `sudo` group? This would be a very rare occasion, but to do it you would run `sudo usermod -a -G sudo NewUser`.
### Permissions
Read, write and execute are the permissions we have on all of our files and folders on our Linux system.
A full list:
- 0 = None `---`
- 1 = Execute only `--X`
- 2 = Write only `-W-`
- 3 = Write & Execute `-WX`
- 4 = Read Only `R--`
- 5 = Read & Execute `R-X`
- 6 = Read & Write `RW-`
- 7 = Read, Write & Execute `RWX`

You will also see `777` or `775`; these represent the same numbers as the list above, but each digit represents **User - Group - Everyone**.

Let's take a look at our file. With `ls -al Day15` you can see the 3 groups mentioned above: user and group have read & write, but everyone only has read.

![](../../Days/Images/Day15_Linux25.png)
We can change this using `chmod`. You might find yourself doing this if you are creating binaries a lot on your systems and need to give yourself the ability to execute them. `chmod 750 Day15` makes the change, and `ls -al Day15` will show it. If you want to apply this to a whole folder, you can use `-R` to do it recursively.
![](Images/Day15_Linux26.png)
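Here is a small runnable sketch of the same idea (the file name mirrors the example above; no `sudo` is needed since you own any file you create yourself):

```shell
# create a scratch file and give it 750 = rwx for user, r-x for group, nothing for everyone
touch Day15
chmod 750 Day15
ls -l Day15          # shows -rwxr-x---
stat -c '%a' Day15   # prints 750, the octal form of the same permissions
```

`stat -c '%a'` is the GNU coreutils way to read the permissions back as octal, which is handy for checking that a `chmod` did what you expected.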
What about changing the owner of the file? We can use `chown` for this operation. If we wanted to change the ownership of `Day15` from user `vagrant` to `NewUser`, we could run `sudo chown NewUser Day15`; again, `-R` can be used for directories.
![](Images/Day15_Linux27.png)
A command that you will come across is `awk`. Where this comes in really useful is when you have output that you only need specific data from. For example, running `who` gives us lines full of information, but maybe we only need the names; we can run `who | awk '{print $1}'` to get just that first column.
![](Images/Day15_Linux28.png)
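If nobody is logged in on your machine, `who` prints nothing, so here is a self-contained version that feeds `awk` some sample `who`-style lines instead (the usernames are made up):

```shell
# sample output in the same shape as `who`: name, tty, login time
printf 'vagrant  pts/0  2022-01-13 10:22\nmichael  pts/1  2022-01-13 11:04\n' |
  awk '{print $1}'
# prints:
# vagrant
# michael
```

`awk` splits each line on whitespace by default, so `$1` is the first field of every line regardless of how the rest of the line is laid out.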
If you are looking to read streams of data from standard input and then generate and execute command lines (meaning it can take the output of one command and pass it as arguments to another), then `xargs` is a useful tool. If, for example, I want a list of all the Linux user accounts on the system, I can run `cut -d: -f1 < /etc/passwd` and get the long list we see below.
![](Images/Day15_Linux29.png)
If I want to compact that list, I can do so by using `xargs` in a command like this: `cut -d: -f1 < /etc/passwd | sort | xargs`
![](Images/Day15_Linux30.png)
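You can see the compacting effect with any multi-line input; here is a self-contained version using a few sample account names rather than your real `/etc/passwd`:

```shell
# three lines in, one space-separated line out
printf 'root\nsyslog\nvagrant\n' | sort | xargs
# prints: root syslog vagrant
```

With no command given, `xargs` defaults to `echo`, which is why all the newline-separated items end up on one line.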
I didn't mention the `cut` command either; it allows us to remove sections from each line of a file. It can cut parts of a line by byte position, character or field. The command `cut -d " " -f 2 list.txt` removes that first letter we have and just displays our numbers. There are so many combinations that can be used with this command; I am sure I have spent too much time trying to use it when I could have extracted the data quicker manually.
![](Images/Day15_Linux31.png)
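To try this yourself, here is a sketch that builds a `list.txt` of letter-number pairs (the exact contents are assumed from the screenshot) and then cuts out the second field:

```shell
# build a sample list.txt: a letter, a space, then a number on each line
printf 'a 1\nb 2\nc 3\n' > list.txt
cut -d " " -f 2 list.txt
# prints:
# 1
# 2
# 3
```

`-d " "` tells `cut` to split on spaces instead of its default (tab), and `-f 2` selects the second field of each line.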
Also to note: if you type a command and you are no longer happy with it and want to start again, just hit Ctrl+C; this will cancel the current line and start you fresh.
## Resources
- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)
See you on [Day16](day16.md)
This is a pretty heavy list already, but I can safely say that I have used all of these commands in my day-to-day, be it administering Linux servers or using my Linux desktop. In Windows or macOS it is very easy to navigate the UI, but on Linux servers that isn't there; everything is done through the terminal.
2023.md
@ -4,7 +4,7 @@
<img src="logo.png?raw=true" alt="90DaysOfDevOps Logo" width="50%" height="50%" />
</p>
English Version | [한국어](2023/ko/README.md) | [Tiếng Việt](2023/vi/2023.md)
This repository is used to document my journey on getting a better foundational knowledge of "DevOps".
@ -42,4 +42,4 @@ Once you have access to your free tier account, there are a few additional steps
[Create your free AWS account](https://youtu.be/uZT8dA3G-S4)
[Generate credentials, budget, and billing alarms via CLI](https://youtu.be/OdUnNuKylHg)
See you in [Day 51](day51.md).
@ -156,8 +156,8 @@ Once the control plane is initialised, the bootstrap machine is destroyed. If yo
We have covered the components that make up a Red Hat OpenShift Container Platform environment, why they are important to the environment, and what enterprise features they bring over a vanilla Kubernetes environment. We then dived into the methods available to deploy an OpenShift Cluster and the process that a Cluster build undertakes.
In [Day 58](../day58.md) we will cover the steps to install Red Hat OpenShift to a VMware vSphere environment.
# Resources
- [Glossary of common terms for OpenShift Container Platform architecture](https://docs.openshift.com/container-platform/4.12/architecture/index.html#openshift-architecture-common-terms_architecture-overview)
2023/ko/days/day49.md
@ -0,0 +1,48 @@
# Day 49: AWS Cloud Overview

Welcome to the AWS section of 90DaysOfDevOps! Choosing seven topics to cover was difficult for a number of reasons:

1. At the last count, there were over 250 AWS services
2. Each of them could fill days of deep dives on its own 😅

For that reason, we are going to start easy, introduce some services that matter a great deal to DevOps, and finish with a section capstone project that will give you plenty of exposure to the AWS DevOps services.

I hope you have as much fun over the next 7 days as I had making them. If you have any questions, feel free to ask!

AWS Cloud is the cloud computing platform provided by Amazon Web Services (AWS). It offers a wide range of services, including compute, storage, networking, databases, analytics, machine learning, security, and more. AWS Cloud gives businesses and organizations access to these services on a pay-as-you-go basis, so they pay only for the resources they use and can scale those resources up or down as needed.

![](../../images/day49-1.png)

## Flexibility

One of the main benefits of AWS Cloud is its flexibility. You can choose the services that best fit your needs and pay only for what you use. This makes it an ideal solution for small businesses, startups, and large enterprises alike, giving them access to the resources they need without a large upfront investment in infrastructure.

## Security

Another benefit of AWS Cloud is security. AWS applies a number of security measures to protect your data and resources, including encryption, identity and access management, and network security. It also runs a variety of compliance programs, covering HIPAA, PCI DSS, and GDPR, to help ensure your data stays secure and compliant with the relevant regulations.

AWS Cloud also provides a range of tools and services to help you manage your resources and infrastructure. For example, the AWS Management Console lets you monitor and control your resources from a single, centralized dashboard, while the AWS Command Line Interface (CLI) lets you manage resources from the command line, making it easy to automate tasks and integrate with other tools.

## EC2

One of the most popular services AWS Cloud offers is Amazon Elastic Compute Cloud (EC2). EC2 makes it easy to launch and manage virtual servers in the cloud and to scale resources up or down as your needs change. You can choose from a range of instance types and sizes, and you pay only for the resources you use.

![](../../images/day49-2.png)

## S3

Another popular service is Amazon Simple Storage Service (S3). S3 is an object storage service that lets you store and retrieve large amounts of data from anywhere on the internet. It is highly scalable, durable, and secure, making it an ideal solution for storing and managing data in the cloud.

![](../../images/day49-3.png)

## Databases

AWS Cloud also offers a variety of database services, such as Amazon Relational Database Service (RDS) for database management, Amazon Redshift for data warehousing and analytics, and Amazon Elasticsearch Service for search and analytics. These services let you build and manage complex applications in the cloud without having to worry about the underlying infrastructure or scaling.

![](../../images/day49-4.png)

Overall, AWS Cloud is a powerful and flexible cloud computing platform offering a wide range of services and tools to businesses and organizations of all sizes. Whether you are a small business, a startup, or a large enterprise, AWS Cloud has something to offer. With its pay-as-you-go model, security measures, and management tools, it is an ideal solution for anyone looking to take advantage of cloud computing.

## Resources

See you in [Day 50](day50.md).
2023/ko/days/day50.md
@ -0,0 +1,45 @@

# Day 50: Get a Free Tier Account & Enable Billing Alarms

AWS offers a free tier account that lets you access and experiment with various AWS services for a limited period without incurring any charges. In this article, we will walk through the steps to sign up for a free tier AWS account.

## Step 1: Go to the AWS website

The first step is to go to the AWS website at https://aws.amazon.com. On the website, click the "Create an AWS Account" button at the top right of the page.

![](../../images/day50-1.png)

## Step 2: Create an AWS account

Clicking the "Create an AWS Account" button takes you to the AWS sign-in page. If you already have an AWS account, you can sign in with your email address and password. If not, enter your email address and an AWS account name, then click the "Verify email address" button; an email containing a verification code will be sent to you to enter back in.

![](../../images/day50-2.png)

![](../../images/day50-3.png)

## Step 3: Provide your account information

On the next page, you will be prompted for your account information. You will need to provide a password, your full name, company name, and phone number. After entering the information, click the "Continue" button.

![](../../images/day50-5.png)

![](../../images/day50-4.png)

## Step 4: Provide payment information

To sign up for the free tier account, you must provide payment information. AWS requires this to verify your identity and prevent fraud; however, you will not be charged, as the free tier services are free for one year. After providing your payment information, click the "Verify and Continue" button. The next page will send an SMS or voice call to your phone to verify your identity.

![](../../images/day50-6.png)

![](../../images/day50-7.png)

## Step 5: Choose a support plan

After providing payment information, you will be taken to the support plan page. Here you can choose your desired level of support; for our needs, the *Basic support - Free* option will do. After providing this information, click the "Complete sign up" button.

![](../../images/day50-8.png)

## Next steps:

Once you have access to your free tier account, there are a few additional steps to take. Arguably the most important of these is creating a billing alarm, *so don't skip it!!*

1. [Create a billing alarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html)
2. [Enable MFA on the root user](https://docs.aws.amazon.com/accounts/latest/reference/root-user-mfa.html)
3. [Create an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for everyday tasks, and *never* use the root user account except when it is truly required.

## Resources

[Create your free AWS account](https://youtu.be/uZT8dA3G-S4)

[Generate credentials, budget, and billing alarms via CLI](https://youtu.be/OdUnNuKylHg)

See you in [Day 51](day51.md).
2023/ko/days/day51.md
@ -0,0 +1,27 @@

# Day 51: Infrastructure as Code (IaC) and CloudFormation

Infrastructure as Code (IaC) is the process by which developers and operations teams can manage and provision infrastructure through code rather than manual processes. With IaC, infrastructure resources are managed using configuration files and automation tools, enabling faster, more consistent, and more reliable infrastructure deployments.

One of the most popular IaC tools is AWS CloudFormation, which lets operations, DevOps, and developers define infrastructure resources using templates in YAML or JSON format. These templates can be version-controlled and shared across teams, making collaboration easy and reducing the chance of configuration drift.

![](../../images/day51-1.png)
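As a rough sketch of what such a template looks like, here is a minimal CloudFormation template in YAML that provisions a single S3 bucket (the bucket name is purely illustrative; a real one must be globally unique):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example template that provisions one S3 bucket
Resources:
  ExampleBucket:                # logical ID, used to reference this resource within the template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-90daysofdevops-bucket   # hypothetical name, change before deploying
```

You could deploy a template like this with `aws cloudformation deploy --template-file template.yaml --stack-name example`, and CloudFormation would create, update, or roll back the resources as a single stack.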
CloudFormation offers several benefits for anyone looking to implement IaC. One of the main ones is the ability to automate infrastructure deployment and management, saving time and reducing the risk of human error. Developers and operations teams can use CloudFormation to define infrastructure resources such as virtual machines, databases, and networking configuration, then deploy them in a repeatable, consistent way.

Another advantage of CloudFormation is that it tracks changes to your infrastructure resources. When a CloudFormation template changes, the service can automatically update the resources to reflect the new configuration. This keeps all resources in sync and reduces the chance of configuration errors.

CloudFormation also provides the ability to manage dependencies between resources. This means resources can be provisioned in the right order and with the right configuration, reducing the chance of errors and making the deployment process more efficient.

In addition to these benefits, CloudFormation offers a range of other features, such as the ability to roll back changes and to create templates that can deploy entire applications. These features make it easier to manage your infrastructure resources and ensure consistent, reliable deployments.

## Resources:

[What is AWS CloudFormation? Pros & Cons?](https://youtu.be/0Sh9OySCyb4)

[CloudFormation Tutorial](https://www.youtube.com/live/gJjHK28b0cM)

[AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)

[AWS CloudFormation Getting Started step-by-step guides](https://aws.amazon.com/cloudformation/getting-started/)

See you in [Day 52](day52.md).
2023/ko/days/day52.md
@ -0,0 +1,56 @@

# Day 52: Identity and Access Management (IAM)

As cloud computing continues to grow in popularity, more and more organizations are turning to cloud platforms to manage their infrastructure. With that, however, comes the need to ensure proper security measures are in place to protect data and resources. One of the most important tools for managing security in AWS is Identity and Access Management (IAM).

## What is AWS IAM?

|![](../../images/day52-1.png)|
|:-:|
| <i>IAM is (1) WHO (2) CAN ACCESS (3) WHAT</i>|

AWS IAM is a web service that lets you manage users and their access to AWS resources. With IAM, you can create and manage AWS users and groups, control access to AWS resources, and set permissions that determine what actions users can perform on those resources. IAM provides fine-grained access control, so you can grant or deny permissions to specific resources at a granular level.

IAM is an essential tool for securing your AWS resources. Without it, anyone with access to your AWS account would have unrestricted access to all of your resources. With IAM, you can control who can access your resources, what actions they can perform, and which resources they can access. IAM also lets you create and manage multiple AWS accounts, which is essential because large organizations invariably have many accounts that need some level of interaction with each other:

|![](../../images/day52-2.png)|
|:-:|
| <i>Multi-Account IAM access is essential knowledge</i>|

## How to get started with AWS IAM

Getting started with AWS IAM is straightforward. Here are the steps to follow:

### Step 1: Create an AWS account

The first step is to create an AWS account if you don't already have one. We did this on Day 50, so you should be good to go 😉

### Step 2: Set up IAM

Once you have an AWS account, you can set up IAM by navigating to the IAM console. From the console you can manage IAM users, groups, roles, and policies.

### Step 3: Create an IAM user

The next step is to create an IAM user. An IAM user is an entity you create in IAM that represents a person or service that needs access to your AWS resources. When you create an IAM user, you can specify the permissions it should have. One of the homework items from Day 50 was to [create an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html); if you haven't done that yet, go back and create one now.

### Step 4: Create an IAM group

After creating an IAM user, the next step is to create an IAM group. An IAM group is a collection of IAM users. When you create an IAM group, you can specify the permissions the group should have. To work through this, watch "IAM Basics" and read "IAM User Guide: Getting Started" in the resources section.

### Step 5: Assign permissions to the IAM group

Once you have created the IAM group, you can assign permissions to it. This involves creating an IAM policy that defines the permissions the group should have, then attaching that policy to the group. Watch the "IAM Video Tutorials & Deep Dive" and work through the IAM tutorial in the resources section to achieve this.
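For a sense of what such a policy looks like, here is a minimal sketch of an identity-based IAM policy document (the bucket name is purely illustrative) that grants read-only access to a single S3 bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyAccessToOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Note that `ListBucket` applies to the bucket ARN itself while `GetObject` applies to the objects inside it, which is why both `Resource` entries are needed.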
### Step 6: Test the IAM user

After assigning permissions to the IAM group, you can test that the IAM user has the correct permissions. To do this, sign in to the AWS Management Console using the IAM user's credentials and try performing the actions the user should be able to perform.

## Resources:

[IAM Basics](https://youtu.be/iF9fs8Rw4Uo)

[IAM User Guide: Getting Started](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html)

[IAM Video Tutorials & Deep Dive](https://youtu.be/ExjW3HCFG1U)

[IAM Tutorial: Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html)

See you in [Day 53](day53.md).
2023/ko/days/day53.md
@ -0,0 +1,50 @@

# Day 53: AWS Systems Manager

![](../../images/day53-01.png)

AWS Systems Manager is a fully managed service that lets users manage and automate operational tasks across both their AWS and on-premises resources. It provides a centralized platform for managing AWS resources, virtual machines, and applications, and lets DevOps professionals automate operational work, maintain compliance, and reduce operational costs.

With AWS Systems Manager, users can perform tasks such as automating patch management, automating OS and application deployments, creating and managing Amazon Machine Images (AMIs), and monitoring resource utilization. It also provides a set of tools for configuring and managing instances, including Run Command, State Manager, Inventory, and Maintenance Windows.

AWS Systems Manager also provides a unified view of operational data, letting users visualize and monitor data across their AWS infrastructure, including EC2 instances, on-premises servers, and AWS services. This helps users identify and resolve issues faster, improving operational efficiency and reducing downtime.

## How do you get started with AWS Systems Manager?

Getting started with AWS Systems Manager is as easy as 1, 2, 3, 4 😄:

![](../../images/day53-03.png)

### Step 1: Navigate to the AWS Systems Manager console

Once you have an AWS account, create two Windows servers and two Linux servers (free tier of course 😉) and navigate to the AWS Systems Manager console. The console provides a unified interface for managing AWS resources, including EC2 instances, on-premises servers, and other resources:

![](../../images/day53-02.png)

Click the "Get Started" button and choose your preferred region (I chose us-east-1).

### Step 2: Choose a configuration type

The next step is to configure AWS Systems Manager to manage your resources. You can do this by choosing one of the Quick Setup common tasks (or creating a custom setup type of your own choosing):

![](../../images/day53-04.png)

For our needs, we will choose "Patch Manager". The resources below offer additional scenarios you can test. See "Patch and manage your AWS Instances in MINUTES with AWS Systems Manager" to walk through this step.

### Step 3: Refine your configuration options

Each configuration type has its own set of parameters to apply at this step...

|![](../../images/day53-05.png)|
|:-:|
| <i>Different options will appear depending on the Quick Setup configuration you chose.</i>|

...so I won't go over the arguments required for each resource. Generally, the next step is to organize your resources by creating a resource group. A resource group is a collection of resources that share common attributes. Grouping resources gives you a holistic view of them and lets you apply policies and actions to them together. To work through this step, see "Patch and manage your AWS Instances in MINUTES with AWS Systems Manager".

### Step 4: Deploy, review, and manage your resources

Once you have created a resource group, you can view and manage your resources from the AWS Systems Manager console. You can also create automation workflows, run patch management, and perform other operations on your resources.

## Resources:

[Introduction to AWS Systems Manager](https://youtu.be/pSVK-ingvfc)

[Patch and manage your AWS Instances in MINUTES with AWS Systems Manager](https://youtu.be/DEQFJba3h4M)

[Getting started with AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/getting-started-launch-managed-instance.html)

See you in [Day 54](day54.md).
2023/ko/days/day54.md
@ -0,0 +1,32 @@

# Day 54: AWS CodeCommit

![](../../images/day54-01.png)

AWS CodeCommit is a fully managed source control service provided by Amazon Web Services (AWS) that makes it easy for developers to host and manage private Git repositories. Think "GitHub but with fewer features" 🤣 (j/k, see the "CodeCommit vs GitHub" resource for more detail). It lets teams collaborate on code and store that code securely in the cloud, with support for secure access control, encryption, and automatic backups.

With AWS CodeCommit, developers can easily create, manage, and collaborate on Git repositories with powerful code review and workflow tools. It integrates seamlessly with other AWS services such as AWS CodePipeline and AWS CodeBuild, making it easy to build and deploy applications in a fully automated way.

Some key features of AWS CodeCommit include:

- Git-based repositories with support for code reviews and pull requests
- Integration with AWS Identity and Access Management (IAM) for secure access control (a big plus)
- Encryption of data at rest and in transit
- High scalability and availability, with automatic backup and failover capabilities
- Integration with other AWS developer tools such as AWS CodePipeline and AWS CodeBuild

To leverage CodeCommit effectively, you of course need to know how to use Git. There are [many](https://www.youtube.com/playlist?list=PL2rC-8e38bUXloBOYChAl0EcbbuVjbE3t) [excellent](https://youtu.be/tRZGeaHPoaw) [Git](https://youtu.be/USjZcfj8yxE) [tutorials](https://youtu.be/RGOj5yH7evk) out there (and it's not my section anyway 😉), so I won't go into that here.

Overall, AWS CodeCommit is a powerful tool for teams that need to collaborate on code, manage their repositories securely, and streamline their development workflows.
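As a minimal sketch of the local Git workflow you would use against a CodeCommit repository (the remote URL below is hypothetical; real CodeCommit HTTPS URLs follow the `https://git-codecommit.<region>.amazonaws.com/v1/repos/<name>` pattern), the basics boil down to:

```shell
# initialise a local repository and make a first commit
mkdir demo-repo && cd demo-repo
git init
echo "# Demo" > README.md
git add README.md
git -c user.name="Demo" -c user.email="demo@example.com" commit -m "Initial commit"

# point the repo at the remote (hypothetical CodeCommit URL)
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/demo-repo
git log --oneline          # shows the commit we just made
# git push -u origin main  # would push, given valid CodeCommit credentials
```

The push itself requires CodeCommit credentials set up through IAM (one of the reasons the IAM integration matters), which is covered in the user guide linked below.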
## Resources:

[AWS CodeCommit User Guide](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html)

[AWS CodeCommit Overview](https://youtu.be/5kFmfgFYOx4)

[AWS CodeCommit tutorial: your first Repo, Commit and Push](https://youtu.be/t7M8pHCh5Xs)

[AWS CodeCommit vs GitHub: Which will shine in 2023?](https://appwrk.com/aws-codecommit-vs-github)

See you in [Day 55](day55.md).
2023/ko/days/day55.md
@ -0,0 +1,67 @@

# Day 55: AWS CodePipeline

<i>For our final day on AWS services, we are going to talk about a big one, with a lot of moving parts and integrations. There are some free resources that will help with learning and understanding it, but honestly, some of the best resources cost money. I will call those out by listing them separately in the resources section, but not mentioning them would be a miss, because they are fantastic for learning this complex service.</i>

<b>CodePipeline</b> is a fully managed continuous delivery service that lets you automate your IaC or software release processes. It lets you create pipelines that reliably and continuously build, test (performing the appropriate tests), and deploy your code changes:

![](../../images/day55-01.jpg)

With CodePipeline, you can create pipelines that automate your build, test, and deployment workflows, ensuring your code changes are deployed reliably to your target environments. This helps you achieve faster release cycles, improve collaboration between development and operations teams, and improve the overall quality and reliability of your software releases.

AWS CodePipeline integrates with other AWS services:

- [Source Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-source)
- [Build Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-build)
- [Test Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-test)
- [Deploy Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-deploy)
- [Approval Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-approval)
- [Invoke Action Integrations](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-invoke)

It also integrates with third-party tools such as GitHub, Jenkins, and Bitbucket. You can use AWS CodePipeline to manage application updates across multiple AWS accounts and regions.

## Getting started with AWS CodePipeline

To get started with AWS CodePipeline, the [AWS User Guide](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) has some excellent [tutorials](https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials.html). They all essentially break down into three steps:

### Step 1: Create an IAM role

You need to create an IAM role that can access the AWS resources required to run your pipeline in AWS CodePipeline. To create an IAM role, check the steps from [Day 52](day52.md).

### Step 2: Create a CodePipeline pipeline

To create a CodePipeline pipeline, navigate to the AWS CodePipeline console, click the "Create pipeline" button, and follow the instructions. You will need to specify the source location of your code, the build provider to use, the deployment provider to use, and the IAM role you created in the previous step.

### Step 3: Test and deploy your code changes

Once you have created your CodePipeline pipeline, you can test and deploy your code changes. AWS CodePipeline will automatically build, test, and deploy your code changes to your target environments. You can monitor the progress of your pipeline in the AWS CodePipeline console.

## Capstone project

To tie this AWS section of 90DaysOfDevOps together, I recommend working through Adrian Cantrill's excellent mini-project, [CatPipeline](https://www.youtube.com/playlist?list=PLTk5ZYSbd9MgARTJHbAaRcGSn7EMfxRHm). In it you will get hands-on with CodeCommit, CodeBuild, CodeDeploy, and CodePipeline in a fun little project that gives you a taste of a day in the life of a DevOps engineer.

- [YouTube CatPipeline Playlist](https://www.youtube.com/playlist?list=PLTk5ZYSbd9MgARTJHbAaRcGSn7EMfxRHm)
- [GitHub CatPipeline Repo](https://github.com/acantril/learn-cantrill-io-labs/tree/master/aws-codepipeline-catpipeline)

## Resources (Free)

[AWS: Real-world CodePipeline CI/CD Examples](https://youtu.be/MNt2HGxClZ0)

[AWS CodePipeline User Guide](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)

[AWS CodePipeline Tutorials](https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials.html)

[AWS CodeCommit tutorial: your first Repo, Commit and Push](https://youtu.be/t7M8pHCh5Xs)

[AWS CodeCommit vs GitHub: Which will shine in 2023?](https://appwrk.com/aws-codecommit-vs-github)

## Resources (Paid)

There are a number of <i>excellent</i> instructors out there and picking two or three is always tricky, but [Adrian Cantrill](https://learn.cantrill.io/), [Andrew Brown](https://www.exampro.co/), and [Stephane Maarek](https://www.udemy.com/user/stephane-maarek/) always come to mind when talking about fantastic content.

## Final thoughts

I hope this AWS section of 90DaysOfDevOps has given you a sense of what is available in the AWS ecosystem.

Good luck with your studies! Next up is Red Hat OpenShift!

See you in [Day 56](day56.md).
2024.md
@ -22,95 +22,116 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## Agenda
|
## Agenda
|
||||||
|
|
||||||
## Agenda
|
- [✔️][✔️] ♾️ 1 > [2024 - Community Edition - Introduction](2024/day01.md) - Michael Cade
|
||||||
|
- [✔️][✔️] ♾️ 2 > [The Digital Factory](2024/day02.md) - Romano Roth
|
||||||
|
- [✔️][✔️] ♾️ 3 > [High-performing engineering teams and the Holy Grail](2024/day03.md) - Jeremy Meiss
|
||||||
|
- [✔️][✔️] ♾️ 4 > [Manage Kubernetes Add-Ons for Multiple Clusters Using Cluster Run-Time State](2024/day04.md) - Gianluca Mardente
|
||||||
|
- [✔️][✔️] ♾️ 5 > [Cross-functional empathy](2024/day05.md) - Chris Kranz
|
||||||
|
- [✔️][✔️] ♾️ 6 > [Kubernetes RBAC with Ansible](2024/day06.md) - Elif Samedin & Andrei Buzoianu
|
||||||
|
- [✔️][✔️] ♾️ 7 > [Automate like a pro: Dealing with test automation hassles](2024/day07.md) - Mesut Durukal
|
||||||
|
- [✔️][✔️] ♾️ 8 > [Culinary Coding: Crafting Infrastructure Recipes with OpenTofu](2024/day08.md) - Kaiwalya Koparkar
|
||||||
|
- [✔️][✔️] ♾️ 9 > [Why should developers care about container security?](2024/day09.md) - Eric Smalling
|
||||||
|
- [✔️][✔️] ♾️ 10 > [Is Kubernetes Too Complicated?](2024/day10.md) - Julia Furst
|
||||||
|
- [✔️][✔️] ♾️ 11 > [Building Resilience: A Journey of Crafting and Validating Our Disaster Recovery Plan](2024/day11.md) - Yedidya Schwartz
|
||||||
|
- [✔️][✔️] ♾️ 12 > [Know your data: The Stats behind the Alerts](2024/day12.md) - Dave McAllister
|
||||||
|
- [✔️][✔️] ♾️ 13 > [Architecting for Versatility](2024/day13.md) - Tim Banks
|
||||||
|
- [✔️][✔️] ♾️ 14 > [An introduction to API Security in Kubernetes](2024/day14.md) - Geoff Burke
|
||||||
|
- [✔️][✔️] ♾️ 15 > [Using code dependency analysis to decide what to test](2024/day15.md) - Patrick Kusebauch
|
||||||
|
- [✔️][✔️] ♾️ 16 > [Smarter, Better, Faster, Stronger - Testing at Scale](2024/day16.md) - Ada Lündhé
|
||||||
|
- [✔️][✔️] ♾️ 17 > [From Chaos to Resilience: Decoding the Secrets of Production Readiness](2024/day17.md) - Alejandro Pedraza Borrero
|
||||||
|
- [✔️][✔️] ♾️ 18 > [Platform Engineering Is Not About Tech](2024/day18.md) - Nicolò Cambiaso Erizzo & Francesca Carta
|
||||||
|
- [✔️][✔️] ♾️ 19 > [Building Efficient and Secure Docker Images with Multi-Stage Builds](2024/day19.md) - Pradumna V Saraf
|
||||||
|
- [✔️][✔️] ♾️ 20 > [Navigating the Vast DevOps Terrain: Strategies for Learning and Staying Current](2024/day20.md) - Kunal Kushwaha
|
||||||
|
- [✔️][✔️] ♾️ 21 > [Azure ARM now got Bicep](2024/day21.md) - Tushar Kumar
|
||||||
|
- [✔️][✔️] ♾️ 22 > [Test in Production with Kubernetes and Telepresence](2024/day22.md) - Mohammad-Ali A'râbi
|
||||||
|
- [✔️][✔️] ♾️ 23 > [SQL Server 2022 on Linux Containers and Kubernetes from Zero to a Hero!](2024/day23.md) - Yitzhak David
|
||||||
|
- [✔️][✔️] ♾️ 24 > [DevSecOps - Defined, Explained & Explored](2024/day24.md) - Sameer Paradkar
|
||||||
|
- [✔️][✔️] ♾️ 25 > [Kube-Nation: Exploring the Land of Kubernetes](2024/day25.md) - Siddhant Khisty & Aakansha Priya
- [✔️][✔️] ♾️ 26 > [Advanced Code Coverage with Jenkins and API Mocking](2024/day26.md) - Oleg Nenashev
- [✔️][✔️] ♾️ 27 > [From Automated to Automatic - Event-Driven Infrastructure Management with Ansible](2024/day27.md) - Daniel Bodky
- [✔️][✔️] ♾️ 28 > [Talos Linux on VMware vSphere](2024/day28.md) - Michael Cade
- [✔️][✔️] ♾️ 29 > [Practical introduction to OpenTelemetry tracing](2024/day29.md) - Nicolas Fränkel
- [✔️][✔️] ♾️ 30 > [How GitHub delivers GitHub using GitHub](2024/day30.md) - April Edwards
- [✔️][✔️] ♾️ 31 > [GitOps on AKS](2024/day31.md) - Richard Hooper, Wesley Haakman, Karl Cooke
- [✔️][✔️] ♾️ 32 > [Cracking Cholera’s Code: Victorian Insights for Today’s Technologist](2024/day32.md) - Simon Copsey
- [✔️][✔️] ♾️ 33 > [GitOps made simple with ArgoCD and GitHub Actions](2024/day33.md) - Arsh Sharma
- [✔️][✔️] ♾️ 34 > [How to Implement Automated Deployment Pipelines for Your DevOps Projects](2024/day34.md) - Neel Shah
- [✔️][✔️] ♾️ 35 > [Azure for DevSecOps Operators](2024/day35.md) - Kevin Evans
- [✔️][✔️] ♾️ 36 > [Policy-as-Code Super-Powers! Rethinking Modern IaC With Service Mesh And CNI](2024/day36.md) - Kat Morgan & Marino Wijay
- [✔️][✔️] ♾️ 37 > [The Lean DevOps Playbook: Make it a success from Day one](2024/day37.md) - Aman Sharma
- [✔️][✔️] ♾️ 38 > [Open Standards: Empowering Cloud-Native Innovation](2024/day38.md) - Kunal Verma
- [✔️][✔️] ♾️ 39 > [Is TLS in Kubernetes really that hard to understand?](2024/day39.md) - Shivang Shandilya
- [✔️][✔️] ♾️ 40 > [Infrastructure as Code - A look at Azure Bicep and Terraform](2024/day40.md) - Sarah Lean
- [✔️][✔️] ♾️ 41 > [My journey to reimagining DevOps: Ushering in the Second Wave](2024/day41.md) - Brit Myers
- [✔️][✔️] ♾️ 42 > [The North Star: Risk-driven security](2024/day42.md) - Jonny Tyers
- [✔️][✔️] ♾️ 43 > [Let's go sidecarless in Ambient Mesh!](2024/day43.md) - Leon Nunes
- [✔️][✔️] ♾️ 44 > [Exploring Firecracker](2024/day44.md) - Irine Kokilashvili
- [✔️][✔️] ♾️ 45 > [Microsoft DevOps Solutions or how to integrate the best of Azure DevOps and GitHub](2024/day45.md) - Peter De Tender
- [✔️][✔️] ♾️ 46 > [Mastering AWS Systems Manager: Simplifying Infrastructure Management](2024/day46.md) - Adit Modi
- [ ][✔️] ♾️ 47 > [Azure logic app, low / no code](2024/day47.md) - Ian Engelbrecht
- [ ][ ] ♾️ 48 > [From Puddings to Platforms: Bringing Ideas to life with ChatGPT](2024/day48.md) - Anthony Spiteri
- [ ][✔️] ♾️ 49 > [From Confusion To Clarity: How Gherkin And Specflow Ensures Clear Requirements and Bug-Free Apps](2024/day49.md) - Steffen Jørgensen
- [ ][✔️] ♾️ 50 > [State of cloud native 2024](2024/day50.md) - Saiyam Pathak
- [ ][ ] ♾️ 51 > [](2024/day51.md)
- [ ][ ] ♾️ 52 > [Multi-Model Databases and its place in DevOps](2024/day52.md) - Pratim Bhosale
- [ ][ ] ♾️ 53 > [Implementing SRE (Site Reliability Engineering)](2024/day53.md) - Andy Babiec
- [ ][ ] ♾️ 54 > [](2024/day54.md)
- [ ][✔️] ♾️ 55 > [Bringing Together IaC and CM with Terraform Provider for Ansible](2024/day55.md) - Razvan Ionescu
- [ ][ ] ♾️ 56 > [Automated database deployment within the DevOps process](2024/day56.md) - Marc Müller
- [ ][ ] ♾️ 57 > [](2024/day57.md)
- [ ][ ] ♾️ 58 > [OSV Scanner: A Powerful Tool for Open Source Security](2024/day58.md) - Paras Mamgain
- [ ][ ] ♾️ 59 > [Continuous Delivery pipelines for cloud infrastructure](2024/day59.md) - Michael Lihs
- [ ][ ] ♾️ 60 > [Migrating a monolith to Cloud-Native and the stumbling blocks that you don’t know about](2024/day60.md) - JJ Asghar
- [ ][✔️] ♾️ 61 > [Demystifying Modernisation: True Potential of Cloud Technology](2024/day61.md) - Anupam Phoghat
- [ ][ ] ♾️ 62 > [Chatbots are going to destroy infrastructures and your cloud bills](2024/day62.md) - Stanislas Girard
- [ ][ ] ♾️ 63 > [Introduction to Database Operators for Kubernetes](2024/day63.md) - Juarez Junior
- [ ][ ] ♾️ 64 > [The Invisible Guardians: Unveiling the Power of Monitoring and Observability in the Digital Age](2024/day64.md) - Santosh Kumar Perumal
- [ ][✔️] ♾️ 65 > [Azure pertinent DevOps for non-coders](2024/day65.md) - Sucheta Gawade
- [ ][✔️] ♾️ 66 > [A Developer's Journey to the DevOps: The Synergy of Two Worlds](2024/day66.md) - Jonah Andersson
- [ ][ ] ♾️ 67 > [Art of DevOps: Harmonizing Code, Culture, and Continuous Delivery](2024/day67.md) - Rohit Ghumare
- [ ][ ] ♾️ 68 > [Service Mesh for Kubernetes 101: The Secret Sauce to Effortless Microservices Management](2024/day68.md) - Mohd Imran
- [ ][ ] ♾️ 69 > [Enhancing Kubernetes security, visibility, and networking control logic](2024/day69.md) - Dean Lewis
- [ ][✔️] ♾️ 70 > [Simplified Cloud Adoption with Microsoft's Terraforms Azure Landing Zone Module](2024/day70.md) - Simone Bennett
- [ ][ ] ♾️ 71 > [](2024/day71.md)
- [ ][ ] ♾️ 72 > [Infrastructure as Code with Pulumi](2024/day72.md) - Scott Lowe
- [ ][ ] ♾️ 73 > [E2E Test Before Merge](2024/day73.md) - Natalie Lunbeck
- [ ][ ] ♾️ 74 > [Workload Identity Federation with Azure DevOps and Terraform](2024/day74.md) - Arindam Mitra
- [ ][ ] ♾️ 75 > [Achieving Regulatory Compliance in Multi-Cloud Deployments with Terraform](2024/day75.md) - Eric Evans
- [ ][ ] ♾️ 76 > [All you need to know about AWS CDK.](2024/day76.md) - Amogha Kancharla
- [ ][ ] ♾️ 77 > [Connect to Microsoft APIs in Azure DevOps Pipelines using Workload Identity Federation](2024/day77.md) - Jan Vidar Elven
- [ ][ ] ♾️ 78 > [Scaling Terraform Deployments with GitHub Actions: Essential Configurations](2024/day78.md) - Thomas Thornton
- [ ][✔️] ♾️ 79 > [DevEdOps](2024/day79.md) - Adam Leskis
- [ ][ ] ♾️ 80 > [Unlocking K8s Troubleshooting Best Practices with Botkube](2024/day80.md) - Maria Ashby
- [ ][✔️] ♾️ 81 > [Leveraging Kubernetes to build a better Cloud Native Development Experience](2024/day81.md) - Nitish Kumar
- [ ][ ] ♾️ 82 > [Dev Containers in VS Code](2024/day82.md) - Chris Ayers
- [ ][ ] ♾️ 83 > [IaC with Pulumi and GitHub Actions](2024/day83.md) - Till Spindler
- [ ][✔️] ♾️ 84 > [Hacking Kubernetes For Beginners](2024/day84.md) - Benoit Entzmann
- [ ][✔️] ♾️ 85 > [Reuse, Don't Repeat - Creating an Infrastructure as Code Module Library](2024/day85.md) - Sam Cogan
- [ ][✔️] ♾️ 86 > [Tools To Make Your Terminal DevOps and Kubernetes Friendly](2024/day86.md) - Maryam Tavakkoli
- [ ][✔️] ♾️ 87 > [Hands-on Performance Testing with k6](2024/day87.md) - Pepe Cano
- [ ][✔️] ♾️ 88 > [What Developers Want from Internal Developer Portals](2024/day88.md) - Ganesh Datta
- [ ][✔️] ♾️ 89 > [Seeding Infrastructures: Merging Terraform with Generative AI for Effortless DevOps Gardens](2024/day89.md) - Renaldi Gondosubroto
- [ ][ ] ♾️ 90 > [Fighting fire with fire: Why we cannot always prevent technical issues with more tech](2024/day90.md) - Anaïs Urlichs
- [ ][ ] ♾️ 91 > [Day 91 - March 31st 2024 - Closing](2024/day90.md) - Michael Cade
Day 2: The Digital Factory
=========================

## Video

[![Day 2: The Digital Factory](https://img.youtube.com/vi/xeX4HGLeJQw/0.jpg)](https://youtu.be/xeX4HGLeJQw?si=CJ75C8gUBcdWAQTR)

## About Me
I'm [Romano Roth](https://www.linkedin.com/in/romanoroth/), Chief of DevOps and Partner at [Zühlke](https://www.zuehlke.com/en). My journey with Zühlke began 21 years ago. Over the years, I've evolved from an expert software engineer and software architect to a consultant. Throughout this journey, one question has always fueled my passion: **How can we continuously deliver value while ensuring quality and automation?**

When the DevOps movement began to gain momentum, I was naturally drawn to it. Today, I'm one of the organizers of the monthly [DevOps Meetup in Zürich](https://www.meetup.com/de-DE/DevOps-Meetup-Zurich/) and president of [DevOps Days Zürich](https://www.devopsdays.ch/), an annual conference that is part of the global DevOps movement. DevOps isn't just a professional interest; it's my passion. That's why I have my own [YouTube channel](https://www.youtube.com/c/RomanoRoth), where I've curated over 100 videos on DevOps, architecture, and leadership.

![Romano Roth](Images/day02-1.jpg)
## What is DevOps?

DevOps is a mindset, a culture, and a set of technical practices. It provides communication, integration, automation, and close cooperation among all the people needed to plan, develop, test, deploy, release, and maintain a product.

In short: **Bringing People, Process, and Technology together to continuously deliver value!**

![What is DevOps](Images/day02-2.png)
## What are the challenges with DevOps?

**Cultural Resistance:** One of the biggest challenges is changing the organizational culture. DevOps requires shifting from traditional siloed roles to a collaborative approach with shared responsibility, which can meet resistance from teams used to working in silos.

**Cognitive Load:** Numerous technical practices and tools exist for the various stages of the DevOps lifecycle, from ideation through continuous integration and continuous deployment to release on demand. Integrating and maintaining all of these practices and tools while developing great products can be challenging.

**Scaling DevOps:** What works for a small team or a single project might not work for an entire organization. Scaling DevOps practices while maintaining speed and reliability is a significant challenge.

![What are the challenges with DevOps](Images/day02-3.jpg)
## How can we scale DevOps?

Scaling DevOps, especially in larger organizations, requires a strategic approach that goes beyond tools and technologies. Here are some considerations for scaling DevOps effectively:

- **Cultural Transformation**: Foster a collaborative environment that values learning from failures.
- **Standardization**: Adopt consistent tools and processes across teams to maintain uniformity.
- **Automation**: Streamline operations by automating tasks from ideation through continuous integration and continuous deployment to release on demand.
- **Modular Architecture**: Utilize architecture styles like microservices to reduce interdependencies.
- **Metrics**: Use metrics to measure performance, identify bottlenecks, and drive continuous improvement.
- **Continuous Training**: Invest in ongoing skill development to ensure team members have the skills needed to work in a DevOps environment.
- **Feedback Loops**: Establish efficient channels for feedback to identify and address issues quickly.
- **Decentralized Decision-making**: Empower teams to make decisions locally, reducing the need for top-down approvals and speeding up development.
- **Pilot Programs**: Test and refine DevOps practices through specific pilot projects.
- **Collaboration Platforms**: Use tools that enhance team communication, such as GitLab, GitHub, and Azure DevOps.
- **Regular Reviews**: Continuously assess and adjust DevOps practices as the organization grows and changes.
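
The standardization and automation points above are often realized as a shared CI pipeline template that every team can adopt as-is. A minimal sketch using GitHub Actions (which this repository itself uses for publishing); the job name and `make` targets are placeholders, not a prescribed convention:

```yaml
# Minimal shared CI workflow: every push and pull request
# runs the same build and test steps for every team.
name: ci
on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build   # placeholder build command
        run: make build
      - name: Test    # placeholder test command
        run: make test
```

Keeping this template in one place and reusing it across repositories is one practical way the platform team enforces uniformity without blocking individual teams.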

## What is Platform Engineering?

Platform Engineering and DevOps are not the same, but they are closely related and often overlap in many organizations.

**DevOps** is a mindset, a culture, and a set of technical practices. It provides communication, integration, automation, and close cooperation among all the people needed to plan, develop, test, deploy, release, and maintain a product and deliver continuous value to the customer.

**Platform engineering** is the discipline of designing and building toolchains and workflows that enable self-service capabilities for product teams, so that those teams can deliver continuous value to the customer.

**Platform engineering** applies **DevOps** practices and, in turn, enables product teams to do DevOps.

![What is Platform Engineering?](Images/day02-4.png)
## What is a Digital Factory?

Throughout my work on various projects across diverse industries and clients, I've observed that **many companies share common challenges and objectives**. They all want to build great products, achieve a faster time to market, and become more efficient. What they want, in other words, is to **build up a digital factory**.

At the top of a company, you find the board of directors and the executive board. They shape the company's vision, mission, and strategy. All big ideas are prioritized in a **portfolio kanban**. The board prioritizes these ideas and gives product management the most important and promising ones. For example: _building drones that carry heavy loads will increase our market share_. Product management takes that idea and defines what features such a drone needs: modified software, bigger batteries, and better engines. These features are handed down to the teams, and the existing teams start working on them. For the new engine, a new team needs to be established. For that, **the platform engineering team provides a standardized continuous delivery environment** so that the team can start right away. All the parts get assembled, and the drones can now be continuously delivered to customers.

The teams constantly monitor the drones. **Telemetry and business data are collected**, such as how many drones have been sold and how satisfied customers are. These metrics are fed back to the portfolio level, where they inform the board's future decisions.

![What is a Digital Factory?](Images/day02-5.jpg)
## How can we implement a Digital Factory?

Building a digital factory requires a holistic approach:

- **Architecture:** Design architectures that align with your technology strategy, ensuring adaptability, scalability, and flexibility.
- **DevOps:** Use platform engineering to design and build toolchains and workflows that give product teams self-service capabilities, enabling them to build in quality and do DevOps.
- **Data:** Streamline data pipelines for timely, actionable insights. Harness data science to extract value that informs decision-making.
- **Customer experience:** Place user feedback at the heart of product development. Aim for a seamless end-to-end experience.
- **Agile Programme Delivery:** Adopt a multi-team organization to optimize workflows and performance. Continuous discovery, coupled with transparent reporting, drives growth.
- **Product Management for Maximized Value:** Connect strategy with execution. Align product initiatives with company goals. Continuously refine management practices and leverage feedback for prioritization.

![How can we implement a Digital Factory?](Images/day02-6.jpg)
# Day 3: 90DaysofDevOps

## High-performing engineering teams and the Holy Grail

***Jeremy Meiss***

- [Twitter](https://twitter.com/IAmJerdog)
- [LinkedIn](https://linkedin.com/in/jeremymeiss)
- [Dev.to](https://dev.to/jerdog)

### Overview

“High-performing engineering teams” are the Holy Grail for every CTO. But what are they, are they attainable, and if so, how? In this talk, we'll look at CI/CD data from over 15 million anonymous workflows run on the CircleCI platform over the last few years, explore this rare specimen in its native habitat (right there in your organization), and see how to activate such teams using better DevOps and Continuous Delivery practices.

### Resources

- [2023 State of Software Delivery Report](https://go.jmeiss.me/SoSDR2023)
- [2023 State of DevOps Report](https://cloud.google.com/devops/state-of-devops)
- [2023 State of Continuous Delivery Report](https://cd.foundation/state-of-cd-2023/)
Day 9: Why should developers care about container security?
=========================

## Video

[![Day 9: Why should developers care about container security?](https://img.youtube.com/vi/z0Si8aE_W4Y/0.jpg)](https://youtu.be/z0Si8aE_W4Y)

## About Me

[Eric Smalling](https://about.me/ericsmalling)<br>
Staff Solutions Architect at [Chainguard](https://chainguard.dev)

For about 30 years, I've been an enterprise software developer, architect, and consultant, with a focus on CI/CD, DevOps, and container-based solutions over the last decade.

I am also a Docker Captain, am certified in Kubernetes (CKA, CKAD, CKS), and have been a Docker user since 2013.

![Eric Smalling](Images/day09-1.jpg)

## Description
Container scanning tools, industry publications, and application security experts are constantly telling us about best practices for building our images and running our containers. Often these non-functional requirements seem abstract and are not explained well enough for those of us without an appsec background to fully understand why they are important.

This session explores several of the most common secure container practices, shows examples of how workloads can be exploited when those practices are not followed, and, most importantly, shows how to easily find and fix issues when building containers, BEFORE you ship them. Additionally, we'll discuss tactics to minimize exploit exposure by hardening runtime container and Kubernetes configurations.
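
As one illustration of the kind of runtime hardening the session covers, a Pod's `securityContext` can drop root privileges and Linux capabilities. This is a minimal sketch, not an example from the talk; the image name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0    # placeholder image
      securityContext:
        runAsNonRoot: true                   # refuse to start if the image runs as root
        allowPrivilegeEscalation: false      # block setuid-style privilege escalation
        readOnlyRootFilesystem: true         # force writes onto explicit volumes only
        capabilities:
          drop: ["ALL"]                      # drop every Linux capability
```

Settings like these shrink what an attacker can do even after a successful exploit of the application itself.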

## Links referenced in the video

- Security Context blog: https://snyk.co/k8s-securitycontext
- Network Policy recipes: https://github.com/ahmetb/kubernetes-...
- Ko Build tool: https://ko.build
- Jib Build tool: https://github.com/GoogleContainerToo...
Day 11: Building Resilience: A Journey of Crafting and Validating Our Disaster Recovery Plan
=========================

## Video

[![Day 11: A Journey of Crafting and Validating Our Disaster Recovery Plan](https://i.ytimg.com/vi/cWUUJYKvbAk/hqdefault.jpg)](https://youtu.be/cWUUJYKvbAk)

## About Me

I'm [Yedidya Schwartz](https://www.linkedin.com/in/yedidyas/), Software Architect & DevOps Lead @ [OwnID](https://ownid.com).
I lead the company's infrastructure and backend domains, designing and implementing complex architectures and bringing observability and performance to the top level.
I'm a certified AWS Solutions Architect with more than 12 years of experience in various software development positions, from team lead to tech lead.
I'm an international speaker, hold B.A. and M.A. degrees in Philosophy, and play the piano and guitar for relaxation. I'm married and a father of two.

[My Sessionize page](https://sessionize.com/yedidya), where you can explore more of the talks and conferences in which I have participated.

![Yedidya Schwartz](https://sessionize.com/image/d34a-400o400o2-S7YpvQxzS99s1gzvUSNTxH.png)
## DRP Resoruces
|
||||||
|
|
||||||
|
#### Code examples
|
||||||
|
|
||||||
|
|
||||||
|
[IaC](https://github.com/yedidyas/DRP/tree/main/IaC)
|
||||||
|
|
||||||
|
|
||||||
|
[Github Actions](https://github.com/yedidyas/DRP/tree/main/GithubActions)
|
||||||
|
|
||||||
|
|
||||||
|
<br/>
|
||||||
|
|
||||||
|
|
||||||
|
#### Recommended DRP sessions

[DRP for an account take over](https://www.youtube.com/watch?v=IOZyIEpdVGs)

[AWS re:Invent 2022 - Building resilient multi-site workloads using AWS global services and Netflix case study](https://www.youtube.com/watch?v=62ZQHTruBnk)

[Valarie Regas - Disaster Recovery & You, The Gift of Paranoia](https://www.youtube.com/watch?v=6uor5VYaBvQ)

[DevOps Disaster Recovery-Lessons from 50 Years of Aviation Disasters](https://www.youtube.com/watch?v=q0ZZXRkAdp4)

[Disaster Recovery of Workloads on AWS](https://www.youtube.com/watch?v=cJZw5mrxryA)

[Validate Your Disaster Recovery Strategy Ensuring Your Plan Works](https://www.youtube.com/watch?v=Du9GyTp-NL4)

[DR in DevOps: How to Guarantee an Effective Disaster Recovery Plan with DevOps](https://www.bunnyshell.com/blog/disaster-recovery-devops/)

["Adventures in Devops" Podcast: DR](https://open.spotify.com/episode/3haGR250LTlmVgoZ8GGGjS?si=F1-HLTRTQ4WOoieyVTPdSQ)

<br/>
#### AWS Services

[Fault Injection Simulator](https://aws.amazon.com/fis/)

[Resilience Hub](https://aws.amazon.com/resilience-hub/)

[Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/)

<br/>
#### AWS Resources

[AWS Well-Architected Framework: Recovery in the Cloud - Full PDF](https://docs.aws.amazon.com/pdfs/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.pdf)

[Resiliency](https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.concept.resiliency.en.html)

[Strategies for recovery in the cloud](https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-i-strategies-for-recovery-in-the-cloud/)

[Pilot light and warm standby](https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/)

[RPO and RTO](https://aws.amazon.com/blogs/mt/establishing-rpo-and-rto-targets-for-cloud-applications/)

[Fault isolation boundaries](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/control-planes-and-data-planes.html)

[RDS read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.XRgn)

[DNS failover](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html)

[Multi region secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html)

[S3 replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html#crr-scenario)

<br/>
#### Azure Resources

[Global VS Regional services](https://learn.microsoft.com/en-us/azure/reliability/availability-service-by-category)

[Control plane VS Data plane](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/control-plane-and-data-plane)

[Azure Site Recovery - product page](https://azure.microsoft.com/en-us/products/site-recovery)

[Azure Site Recovery - tutorial](https://learn.microsoft.com/en-us/azure/site-recovery/)

[Azure Backup - product page](https://azure.microsoft.com/en-us/products/backup)

[Azure Backup - tutorial](https://learn.microsoft.com/en-us/azure/backup/backup-overview)

<br/>
#### Google Cloud Resources

[Google's DRP guide](https://cloud.google.com/architecture/dr-scenarios-planning-guide)

[Global VS Regional services](https://cloud.google.com/compute/docs/regions-zones/global-regional-zonal-resources)

[Google Cloud Backup and DR - introduction](https://cloud.google.com/blog/products/storage-data-transfer/introducing-google-cloud-backup-and-dr)

[Google Cloud Backup and DR - marketplace](https://console.cloud.google.com/marketplace/product/google/backupdr.googleapis.com?pli=1)

<br/>
#### General Resources

[Terraform case study: use a separate repository for DRP environment](https://xebia.com/blog/aws-disaster-recovery-strategies-poc-with-terraform/)

[Step by step: building a disaster recovery project with multi region replication](https://medium.com/@jerome.decoster/disaster-recovery-with-multi-region-architecture-331fec6456f)

[CrashPlan's DRP guide](https://www.crashplan.com/resources/guide/data-disaster-recovery-plan-using-3-2-1-backup-strategy/)
Using code dependency analysis to decide what to test
===================

By [Patrick Kusebauch](https://github.com/patrickkusebauch)

> [!IMPORTANT]
> Find out how to save 90+% of your test runtime and resources by eliminating 90+% of your tests while keeping your test
> coverage and confidence. Save over 40% of your CI pipeline runtime overall.

## Introduction
Tests are expensive to run, and the larger the code base, the more expensive it becomes to run them all. At some point your test runtime might become so long that it is impossible to run every test on every commit, because your rate of incoming commits is higher than your ability to test them. But how else can you be confident that your changes have not broken existing code?

Even if your situation is not that dire yet, the time it takes to run tests makes it hard to get fast feedback on your changes. It might even force you to compromise on other development techniques: lumping several changes into larger commits because there is no time to test each small individual change (type fixes, refactoring, documentation, etc.). You might like to do trunk-based development, but use feature branches instead, so that you can open PRs and test a whole slew of changes at once. Your DORA metrics are compromised by your slow rate of development. Instead of being reactive to customer needs, you have to plan your projects and releases months in advance, because that is how often you are able to fully test all the changes.

Slow testing can have huge consequences for what the whole development process looks like. While speeding up test execution per se is a very individual problem in every project, there is another technique that can be applied everywhere: becoming more picky about which tests to run. So how do you decide what to test?
## Theory

### What is code dependency analysis?

Code dependency analysis is the process of (usually statically) analysing code to determine what code is used by other code. The most common example is analysing the declared dependencies of a project to find potential vulnerabilities; this is what tools like [OWASP Dependency Check](https://owasp.org/www-project-dependency-check/) do. Another use case is generating a Software Bill of Materials (SBOM) for a project.

There is one other use case that not many people talk about: using code dependency analysis to create a Directed Acyclic Graph (DAG) of the various components/modules/domains of a project. This DAG can then be used to determine how changes to one component will affect other components.
Imagine you have a project with the following structure of components:

![Project Structure](Images/day15-01.png)

The `Supportive` component depends on the `Analyser` and `OutputFormatter` components. The `Analyser` in turn depends on 3 other components - `Ast`, `Layer` and `References`. Lastly, `References` depends on the `Ast` component.

If you make a change to the `OutputFormatter` component, you will want to run the **contract tests** for `OutputFormatter` and the **integration tests** for `Supportive`, but no tests for `Ast`. If you make changes to `References`, you will want to run the **contract tests** for `References` and the **integration tests** for `Analyser` and `Supportive`, but no tests for `Layer` or `OutputFormatter`. In fact, there is no single module you can change that would require you to run all the tests.

> [!NOTE]
> By **contract tests** I mean tests that test the defined API of the component - in other words, what the component promises (by contract) to outside users to always be true about its usage. Such a test mocks out all interaction with any other component.
>
> By contrast, **integration tests** in this context mean tests that verify that the interaction with a dependent component is properly programmed. For that reason the underlying (dependent) component is not mocked out.
### How do you create the dependency DAG?

There are very few tools that can do this as of today, even though the concept is simple - so simple that you can do it yourself if there is no tool available for your language of choice.

You need to parse and lex the code to create an Abstract Syntax Tree (AST) and then walk the AST of every file to find its dependencies. This is the same functionality your IDE uses every time you "Find references..." and what your language server sends over [LSP (Language Server Protocol)](https://en.wikipedia.org/wiki/Language_Server_Protocol).

You then group the dependencies by predefined components/modules/domains and combine them all into a single graph.
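If your language ships an AST parser in its standard library, the whole pipeline fits in a few dozen lines. A minimal Python sketch follows; the in-memory sources and the "component = directory under `src/`" convention are illustrative assumptions, not part of any tool:

```python
import ast

# Hypothetical mini-project: file path -> source. In reality you would read
# the files from disk; inline strings keep the sketch self-contained.
FILES = {
    "src/analyser/run.py": "import ast_utils.parse\nimport layer.rules\n",
    "src/ast_utils/parse.py": "import tokenize\n",
    "src/layer/rules.py": "",
}

def component_of(path: str) -> str:
    return path.split("/")[1]  # component = top-level directory under src/

def imports_of(source: str) -> set[str]:
    """Collect imported module names by walking the file's AST."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module)
    return mods

def component_graph(files: dict[str, str]) -> dict[str, set[str]]:
    """Group file-level imports into a component -> dependencies graph."""
    components = {component_of(p) for p in files}
    graph = {c: set() for c in components}
    for path, source in files.items():
        src_comp = component_of(path)
        for mod in imports_of(source):
            dep_comp = mod.split(".")[0]  # map a module to its component
            if dep_comp in components and dep_comp != src_comp:
                graph[src_comp].add(dep_comp)
    return graph

print(component_graph(FILES)["analyser"])  # depends on ast_utils and layer
```

External imports (like `tokenize` above) fall outside the known components and are simply ignored; only intra-project edges end up in the graph.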
### How do you use the DAG to decide what to test?

Once you have the DAG, there is a 4-step process to run your testing:

1. Get the list of changed files (for example by running `git diff`).
2. Feed the list to the dependency analysis tool to get the list of changed components (and optionally the list of depending components as well, for integration testing).
3. Feed the list to your testing tool of choice to run the test-suites corresponding to each changed component.
4. Revel in how much time you have saved on testing.
## Practice

This is not just a theoretical idea, but something you can try out yourself today. If you are lucky, there is already an open-source tool in your language of choice that lets you do it. If not, the following demonstration will give you enough guidance to write it yourself. If you do, please let me know - I would love to see it.

The tool I have used for this demonstration is [deptrac](https://qossmic.github.io/deptrac/), and it is written in PHP, for PHP.

All you have to do to create a DAG is to specify the modules/domains:
```yaml
# deptrac.yaml
deptrac:
  paths:
    - src

  layers:
    - name: Analyser
      collectors:
        - type: directory
          value: src/Analyser/.*
    - name: Ast
      collectors:
        - type: directory
          value: src/Ast/.*
    - name: Layer
      collectors:
        - type: directory
          value: src/Layer/.*
    - name: References
      collectors:
        - type: directory
          value: src/References/.*
    - name: Contract
      collectors:
        - type: directory
          value: src/Contract/.*
```
### The 4-step process

Once you have the DAG, you can combine it with the list of changed files to determine which modules/domains to test. A simple git command gives you the list of changed files:

```bash
git diff --name-only
```
You can then use this list to find the modules/domains that have changed, and then use the DAG to find the modules that depend on them.

```bash
# to get the list of changed components
git diff --name-only | xargs php deptrac.php changed-files

# to get the list of changed components with the depending components
git diff --name-only | xargs php deptrac.php changed-files --with-dependencies
```
If you pick the popular PHPUnit framework for your testing and follow [their recommendation for organizing code](https://docs.phpunit.de/en/10.5/organizing-tests.html), it is very easy to create a test-suite per component. To run the tests for a component, you just pass the parameter `--testsuite {componentName}` to the PHPUnit executable:

```bash
git diff --name-only |\
xargs php deptrac.php changed-files |\
sed 's/;/ --testsuite /g; s/^/--testsuite /g' |\
xargs ./vendor/bin/phpunit
```
Or, if you have integration tests for the dependent modules and decide to name your integration test-suites `{componentName}Integration`:

```bash
git diff --name-only |\
xargs php deptrac.php changed-files --with-dependencies |\
sed '1s/;/ --testsuite /g; 2s/;/Integration --testsuite /g; /./ { s/^/--testsuite /; 2s/$/Integration/; }' |\
sed ':a;N;$!ba;s/\n/ /g' |\
xargs ./vendor/bin/phpunit
```
### Real life comparison results

I have run the following script on a set of changes to compare what the savings were:
```shell
# Compare timing
iterations=10

total_time_with=0
for ((i = 1; i <= $iterations; i++)); do
  # Run the command
  runtime=$(
    TIMEFORMAT='%R'
    time (./vendor/bin/phpunit >/dev/null 2>&1) 2>&1
  )

  milliseconds=$(echo "$runtime" | tr ',' '.')
  total_time_with=$(echo "$total_time_with + $milliseconds * 1000" | bc)
done

average_time_with=$(echo "$total_time_with / $iterations" | bc)
echo "Average time (not using deptrac): $average_time_with ms"

# Compare test coverage
tests_with=$(./vendor/bin/phpunit | grep -oP 'OK \(\K\d+')
echo "Executed tests (not using deptrac): $tests_with tests"

echo ""

total_time_without=0
for ((i = 1; i <= $iterations; i++)); do
  # Run the command
  runtime=$(
    TIMEFORMAT='%R'
    time (
      git diff --name-only |
        xargs php deptrac.php changed-files --with-dependencies |
        sed '1s/;/ --testsuite /g; 2s/;/Integration --testsuite /g; /./ { s/^/--testsuite /; 2s/$/Integration/; }' |
        sed ':a;N;$!ba;s/\n/ /g' |
        xargs ./vendor/bin/phpunit >/dev/null 2>&1
    ) 2>&1
  )

  milliseconds=$(echo "$runtime" | tr ',' '.')
  total_time_without=$(echo "$total_time_without + $milliseconds * 1000" | bc)
done

average_time_without=$(echo "$total_time_without / $iterations" | bc)
echo "Average time (using deptrac): $average_time_without ms"
tests_execution_without=$(git diff --name-only |
  xargs php deptrac.php changed-files --with-dependencies |
  sed '1s/;/ --testsuite /g; 2s/;/Integration --testsuite /g; /./ { s/^/--testsuite /; 2s/$/Integration/; }' |
  sed ':a;N;$!ba;s/\n/ /g' |
  xargs ./vendor/bin/phpunit)
tests_without=$(echo "$tests_execution_without" | grep -oP 'OK \(\K\d+')
tests_execution_without_time=$(echo "$tests_execution_without" | grep -oP 'Time: 00:\K\d+\.\d+')
echo "Executed tests (using deptrac): $tests_without tests"

execution_time=$(echo "$tests_execution_without_time * 1000" | bc | awk '{gsub(/\.?0+$/, ""); print}')
echo "Time to find tests to execute (using deptrac): $(echo "$average_time_without - $tests_execution_without_time * 1000" | bc | awk '{gsub(/\.?0+$/, ""); print}') ms"
echo "Time to execute tests (using deptrac): $execution_time ms"

echo ""

percentage=$(echo "scale=3; $tests_without / $tests_with * 100" | bc | awk '{gsub(/\.?0+$/, ""); print}')
echo "Percentage of tests not needing execution given the changed files: $(echo "100 - $percentage" | bc)%"
percentage=$(echo "scale=3; $execution_time / $average_time_with * 100" | bc | awk '{gsub(/\.?0+$/, ""); print}')
echo "Time saved on testing: $(echo "$average_time_with - $execution_time" | bc) ms ($(echo "100 - $percentage" | bc)%)"
percentage=$(echo "scale=3; $average_time_without / $average_time_with * 100" | bc | awk '{gsub(/\.?0+$/, ""); print}')
echo "Time saved overall: $(echo "$average_time_with - $average_time_without" | bc) ms ($(echo "100 - $percentage" | bc)%)"
```
with the following results:

```
Average time (not using deptrac): 984 ms
Executed tests (not using deptrac): 721 tests

Average time (using deptrac): 559 ms
Executed tests (using deptrac): 21 tests
Time to find tests to execute (using deptrac): 491 ms
Time to execute tests (using deptrac): 68 ms

Percentage of tests not needing execution given the changed files: 97.1%
Time saved on testing: 916 ms (93.1%)
Time saved overall: 425 ms (43.2%)
```
Some interesting observations:

- Only **3% of the tests** that normally run on the PR needed to be run to cover the change with tests. That is a **saving of 700 tests** in this case.
- **Test execution time has decreased by 93%**. You are mostly left with the constant cost of set-up and tear-down of the testing framework.
- **Pipeline overall time has decreased by 43%**. Since the analysis time grows orders of magnitude slower than the test runtime (it is not completely constant - more files still mean more to statically analyse), this number is only bound to get better the larger the codebase is.
And these savings apply to arguably the worst possible SUT (System Under Test):

- It is a **small application**, so it is hard to get the savings of skipping tests for a vast number of components, as would be the case for large codebases.
- It is a **CLI script**, so it has no database, no external APIs to call, and minimal slow I/O tests. Those are the tests you want to skip the most, and they are barely present here.
## Conclusion

Code dependency analysis is a very useful tool for deciding what to test. It is not a silver bullet, but it can help you reduce the number of tests you run and the time it takes to run them. It can also help you decide which tests to run in your CI pipeline. It is not a replacement for a good test suite, but it can make your test suite more efficient.

## References

- [deptrac](https://qossmic.github.io/deptrac/)
- [deptracpy](https://patrickkusebauch.github.io/deptracpy/)

See you on [Day 16](day16.md).
# Smarter, Better, Faster, Stronger

#### Simulation Frameworks as the Future of Performance Testing

|             |                                              |
| ----------- | -------------------------------------------- |
| By          | [Ada Lundhe](https://github.com/scorbettum/) |
| Where       | [Datavant](https://datavant.com/)            |
| Twitter     | [@sc_codeum](https://twitter.com/sc_codeum)  |
| Code Source | [Hedra](https://github.com/scorbettum/hedra) |
| Keywords    | devops, simulation-framework, distributed, graph, testing, performance |

<br/>
## Overview

Performance testing has long been a critical part of a DevOps engineer's toolbox, allowing engineers to better understand the systems they build and maintain by simulating varying degrees of traffic at scale. However, existing performance frameworks are often limited in their ability to simulate realistic user scenarios at scale. Additionally, performance testing frameworks are almost universally infamous for their poor developer/user experience, difficulty integrating with CI/CD and modern cloud environments, and lack of built-in quality reporting options.

<b>Simulation frameworks</b> represent the next step in performance testing, delivering on performance frameworks' core value proposition(s) while extending their functionality to embrace new testing techniques and modern developer needs. Simulation frameworks achieve this by:

- Allowing developers to write integration-test-like code, using full programming languages, that executes at the concurrency and speed of performance frameworks.

- Utilizing machine learning and statistical methods to provide features such as learned configuration values, A/B testing, etc.

- Providing chaos-testing facilities to allow request-level fault injection.

- Providing ample built-in reporting integrations to facilitate results submission to common DevOps analytics platforms like Datadog, Grafana, etc.

- Embracing multi-datacenter distributed execution as a core part of functionality, while minimizing the test code and configuration changes necessary to do so.

- Making developer/user experience a top priority via modern CLI interfaces, carefully constructed APIs, etc.

In doing so, simulation frameworks allow DevOps and infrastructure teams to:

- Reproduce the sophisticated usage patterns of actual users, thus surfacing underlying infrastructure and/or application issues.

- Easily verify the impact of changes on different environments.

- Minimize the number of tests teams maintain by emphasizing parameterization, while reducing time spent identifying "ideal" test parameters via learned configuration values.

- Ensure metrics from tests are available wherever they need to be, maximizing context and empowering teams to make better decisions without having to maintain or compile external plugins or integrations.

- Minimize CI/CD or runtime environment complexity by reducing the changes required to run between local, pipeline, and modern cloud environments.

<br/>
## Why Performance Testing?

Performance testing differs from functional testing in that it focuses less on determining whether the target executes expected behaviors and more on how quickly and efficiently the target handles heavy amounts of traffic or usage. For distributed services, websites, and modern applications in general, this information can be critical in determining:

- Stability of services when a sudden spike in user traffic occurs

- CPU and memory usage of the target system under simulated increasing or defined traffic levels

- Memory leaks or tricky transient issues

- Page load or API response times

as well as other behavioral characteristics of the target system, such as read/write times to file or database, how quickly autoscaling adapts to a simulated traffic or usage influx, etc. For these reasons, performance testing is now widely considered a component of "chaos testing" - testing that determines how a system behaves under unexpected conditions.

However, we would argue that performance testing's domain and usefulness extend far beyond this area, and that it is as critical a component of determining the quality, scalability, and durability of software as any type of functional testing. It's not enough that systems function; they must function well under a variety of usage levels and use cases.

> [A 2019 study by Portent](https://www.portent.com/blog/analytics/research-site-speed-hurting-everyones-revenue.htm) shows the impact of page load times on conversion rates. These load times are a product of both UI and API performance, and can increase drastically if a system is experiencing significant traffic.

Performance testing provides essential metrics that illustrate which parts have the greatest impact on your application's overall speed, stability, and efficiency. Hedra is one such tool that can provide these insights.

<br/>
## Limitations and Frustrations

Many load testing tools were developed back when running on bare metal or small clusters of VMs was the predominant means of hosting/running software. Web applications were likewise in their infancy, so targeting and testing a handful of static URLs was perfectly acceptable.

Today's applications run in complicated cloud environments that are more sophisticated and distributed, such that targeting static URLs provides a woefully incomplete picture of application performance. At the same time, much of the tooling we use now needs to run in these resource-restrictive environments, both to facilitate ease of use by developers and to ensure proper integration.

The majority of performance testing frameworks neglect, or are outright incompatible with, these needs, placing the burden of customization and configuration on developers to the point that performance test frameworks effectively require in-house engineering teams to deliver any sort of impactful insight. Frequent pain points include:

- Having to write a variety of custom execution engines to test with more modern protocols or libraries such as HTTP/2, gRPC, UI testing via Playwright/Selenium, etc.

- Having to write custom reporting plugins, or integrate clunky third-party options, in order to publish test results to modern DevOps analytics platforms.

- Having to effectively micromanage deployment to avoid OOM or other compatibility issues with CI/CD pipelines or distributed environments like Kubernetes.

- Having to "guess" appropriate testing configuration values, not just for initial testing but after infrastructure or application changes, resulting in significant time wasted manually tweaking tests to get good signal on the impact of changes.

- Having to maintain a library of often similar tests, because performance frameworks do not allow for easy parameterization of tests and do not offer A/B testing for simultaneously targeting multiple environments.

<br/>
## Simulating Realism in Tests - Workflows as Graphs

When we want to execute a collection of tasks in some defined order, we often try to organize that work into contained steps and orchestrate them as a workflow. Computational frameworks such as Airflow and Spark have particularly popularized this approach, and made its power evident for data science and data analysis.

What Airflow, Spark, and other workflow-centric tooling commonly share is the use of graphs to characterize the dependencies between tasks, determine and group execution order, and even provision required resources. Graphs are powerful data structures that make determining the relationships between disparate "things" computationally efficient.

The benefits of graphs in representing and managing workflows are numerous:

- Graphs make determining parallelizable work easy

- Graphs make isolating and handling failures or errors in workflows efficient

- Graphs make authoring complex workflows natural

- Graphs make workflow progress visualization intuitive

Graphs also translate more naturally to distributed execution. Because graphs allow us to better determine the relationships between tasks, we can more easily isolate and delegate that work to disparate nodes in a cluster. We can also better handle failure, not just of work but of nodes: since graphs make keeping track of progress easy, we can simply decide whether we want a recovered node to resume that work, skip the work, or halt execution of the workflow as a whole.

Finally, graphs allow us to use a wide variety of interesting algorithms and possibilities - from shortest-path algorithms helping us determine how best to optimize a workflow, to probabilistic graphs allowing us to inject degrees of simulated human uncertainty.

<br/>
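The first of these benefits falls directly out of a topological sort: each "generation" of the graph is a batch of tasks that are safe to run in parallel. A small sketch using Python's standard-library `graphlib` (the workflow itself is a made-up example, not Hedra code):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: task -> tasks it depends on.
WORKFLOW = {
    "login": set(),
    "get_book": {"login"},
    "get_author": {"login"},
    "report": {"get_book", "get_author"},
}

def execution_batches(graph):
    """Group tasks into batches; every task in a batch can run in parallel."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = list(ts.get_ready())  # all tasks whose dependencies are done
        batches.append(sorted(ready))
        ts.done(*ready)
    return batches

print(execution_batches(WORKFLOW))
# [['login'], ['get_author', 'get_book'], ['report']]
```

Note how `get_book` and `get_author` land in the same batch: the graph tells the scheduler they are independent once `login` completes.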
## Tests as Workflows

When writing tests, we are most familiar with writing test code as a series of steps executed in sequential order. When testing realistic usage of a system, this representation might be suitable for simulating a user's surface-level interactions, but the underlying processes and events triggered by those interactions are not sequential.

Consider a user submitting a form. The user enters some text into inputs, clicks some checkboxes, and then submits the form. On the surface this appears to be a perfectly sequential series of events, and thus can be simulated and tested as a sequential series of tasks in test code.

Behind the UI, modern applications and systems execute a bevy of concurrent tasks for each sequential user "step" - from API calls to validate critical fields, to submitting user input to machine learning pipelines for analysis and recommendations as the user types, to capturing page interaction events, to batching calls to third-party providers to return relevant advertisements, etc. Accurate tests capture more than the surface-level interaction; they test and validate the complex, interconnected work each interaction triggers.
```python
# An example simulating, from the API level, an authorized user
# searching for a book by author.

import os
from hedra import (
    Workflow,
    step,
)
from hedra.core.engines import (
    HTTPResult,
    HTTP2Result
)
from typing import Literal


class Test(Workflow):
    vus = 1000
    duration = '1m'
    username = os.getenv('USERNAME')
    password = os.getenv('PASSWORD')

    def get_book_title(self) -> Literal['Shakespeare Collected Works']:
        return 'Shakespeare Collected Works'

    @step()
    async def login_via_api(self) -> HTTP2Result:
        return await self.client.http2.post(
            'https://myapi.com/api/v1/login',
            auth=(
                self.username,
                self.password
            )
        )

    @step('login_via_api')
    async def get_book(
        self,
        auth_response: HTTP2Result
    ) -> HTTPResult:
        auth_token = auth_response.headers.get('X-API-TOKEN')
        title = self.get_book_title()

        return await self.client.http.get(
            f'https://myapi.com/api/v1/books?title={title}',
            headers={
                'X-API-TOKEN': auth_token
            }
        )

    @step('login_via_api')
    async def get_author(
        self,
        auth_response: HTTP2Result
    ) -> HTTPResult:
        auth_token = auth_response.headers.get('X-API-TOKEN')

        return await self.client.http.get(
            'https://myapi.com/api/v1/authors?author=william&shakespeare',
            headers={
                'X-API-TOKEN': auth_token
            }
        )
```

Given that the underlying pieces of work have clear, defined relationships with each other, it makes sense both mentally and computationally to model this work as a graph. The ideal test representative of a system under usage is then also a graph, composed of discrete tasks simulating and validating the functionality of the underlying components responsible for the work generated by user interaction events.

Ideally, integration and end-to-end tests would accomplish this. However, these tests lack the granularity to validate the underlying work triggered by user interaction. Individual step code in integration or end-to-end tests becomes entangled and intertwined with other test code when attempting to accomplish this, and it becomes difficult to maintain the distinct task boundaries required by graph structures. Eventually, most integration and end-to-end tests devolve into sequential workflows for the sake of stability and scalability.
```
# As sequential steps

[Authorized User Search] ->
[Get API Token] ->
[Search for Author] ->
[Search for Book]

# As a concurrent workflow

[Authorized User Search] ->
[Get API Token] ->
[Search cache for popular authors, send search analytics, hit third party search API via fan out] ->
[Search cache for matching popular books, send search analytics, hit third party search API via fan out]
```
By contrast, unit tests are written to be fast, efficient, and discrete, because they focus on validating the functionality of the smallest testable components of a system. When we break down the complex work generated by user interaction into its smallest components, we find that it corresponds directly to the orchestrated execution of these discrete components.

To accurately test user impact on a system, it therefore makes more sense to compose unit tests into graph workflows than to rely on integration or end-to-end tests.

However, the granularity of unit tests can make composing them into meaningful tests arduous compared to writing integration or end-to-end tests. The answer is a balance - tests that allow for assessing higher-level integrated functionality while retaining as much of the efficiency and independence of unit tests as possible. This sort of test maximizes the benefits of test workflow orchestration while ensuring test workflows do not become overly complex.

<br/>
### A|B Testing, Chaos, and Learned Configuration

One of the most frequent (and awkward) questions engineers encounter when setting up performance testing is - "how should I set up this test?" While existing application analytics can provide insight into how an application performs now, we want to account for unexpected and future scenarios.

Simply setting concurrency to maximum will likely cause the application to fail, but will provide no insight as to where issues <i>begin</i> to arise. Likewise, setting concurrency too low means the application will not be placed under proper stress, and the test results will deliver no value. This challenge compounds when testing different environments (such as development or staging) which may have differing levels of resource allocation.

Conveniently, performance tests themselves contain a potential solution. Performance tests can involve millions of requests per run, and test results contain the contextual information we need (errors, status codes, etc.) to establish metrics that we can seek to maximize or minimize. For example, we could aim to maximize the number of successful requests completed:
> <br/>
> R_success = R_total - R_error
>
> <br/>

<br/>
We could use any other generally measurable outcome that can be expressed as some subset of the total set of output results. We can then express this via mathematical optimization as a loss function, where we seek to minimize the "distance" (error) from our goal:
> <br/>
> E_success = 1/(R_success) = 1/(R_total - R_error)
>
> <br/>

<br/>
As successful requests increase, the error decreases and trends towards a limit of zero.

More importantly, <i>most full programming languages contain libraries that allow for the automated optimization of this sort of function</i>. By repeatedly running short "subset" tests that output these error metrics, we can automate the identification of configuration values such as concurrency.
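As an illustrative sketch of this loop - `run_subset_test` below is a stand-in simulator, not a real framework API - an optimizer can search concurrency values by minimizing the loss `1/(R_total - R_error)` over repeated short runs:

```python
# Hypothetical sketch: tuning concurrency by minimizing the loss
# E_success = 1/(R_total - R_error) over short "subset" test runs.

def run_subset_test(concurrency: int) -> tuple[int, int]:
    # Stand-in for a real subset run: successes grow with concurrency
    # until the service saturates around 500 VUs, then errors climb.
    total = concurrency * 60
    errors = max(0, concurrency - 500) * 100
    return total, min(errors, total - 1)

def loss(concurrency: int) -> float:
    total, errors = run_subset_test(concurrency)
    return 1 / (total - errors)

def tune_concurrency(low: int = 1, high: int = 2000, iters: int = 40) -> int:
    # Ternary search suffices here because the simulated loss is unimodal;
    # a real framework might use a library optimizer instead.
    for _ in range(iters):
        m1 = low + (high - low) // 3
        m2 = high - (high - low) // 3
        if loss(m1) < loss(m2):
            high = m2
        else:
            low = m1
    return (low + high) // 2

best = tune_concurrency()
print(best)
```

Under these assumptions the search converges on the concurrency just below the simulated saturation point.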
Frameworks can extend this functionality by embracing A|B testing, allowing engineers to divert subsets of simulated traffic to differing environments, API versions, etc.

For example, we could define a test where:

- 20% of traffic is randomly diverted to the Development environment
- 40% is randomly diverted to Staging
- 40% is randomly diverted to Production
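A weighted traffic split like the one above can be sketched with the standard library alone; the host URLs here are hypothetical:

```python
# Hypothetical sketch of weighted traffic diversion for A|B testing
# across environments, mirroring the 20/40/40 split above.
import random

ENVIRONMENTS = {
    'https://dev.myapi.com': 0.2,      # Development
    'https://staging.myapi.com': 0.4,  # Staging
    'https://myapi.com': 0.4,          # Production
}

def pick_environment() -> str:
    # random.choices handles the weighting for each simulated request
    hosts = list(ENVIRONMENTS)
    weights = list(ENVIRONMENTS.values())
    return random.choices(hosts, weights=weights, k=1)[0]

counts = {host: 0 for host in ENVIRONMENTS}
for _ in range(10_000):
    counts[pick_environment()] += 1
print(counts)
```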
Using automated configuration, we can largely account for differences in environment resources, since the framework can automatically search for concurrency values that maximize successful requests. This allows us to determine, from a single test, the actual impact that infrastructure changes first deployed to Development may have versus the existing environments in Staging and Production.

We can also pair this with protocol-level fault injection (i.e. sending a randomly selected subset of requests as intentionally malformed) to determine how changes adapt to and handle common attacks like request smuggling. Although most libraries now protect against these sorts of attacks, handling them can significantly slow the processing of requests.
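A minimal sketch of what protocol-level fault injection might look like, assuming a hypothetical request builder; the conflicting framing headers mimic a classic request-smuggling probe:

```python
# Hypothetical sketch: a fixed fraction of outgoing requests gets an
# intentionally malformed header set. A hardened server should reject
# or normalize these rather than hang or mis-frame the body.
import random

FAULT_RATE = 0.05

def build_headers(payload: bytes) -> dict:
    headers = {'Content-Length': str(len(payload))}
    if random.random() < FAULT_RATE:
        # Deliberately conflicting framing headers (CL + TE), the
        # ambiguity exploited by request-smuggling attacks.
        headers['Transfer-Encoding'] = 'chunked'
        headers['Content-Length'] = str(len(payload) + 7)
    return headers

sample = [build_headers(b'{"q": "test"}') for _ in range(1000)]
faulted = sum('Transfer-Encoding' in h for h in sample)
print(faulted)
```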
<br/>

## Integrations and Reporting
Simulating realism in tests takes more than orchestrating test execution a certain way - it requires being able to test an application at every level of the stack, from UI to DB.

The majority of existing performance frameworks facilitate plain `HTTP/1.1` requests, with a subset supporting `HTTP/2`. However, modern applications often use UDP, Websockets, GraphQL, and more. Frameworks that facilitate additional protocols or libraries often only allow for the use of a single protocol during a test (i.e. only `HTTP/1.1` <i>or</i> `HTTP/2` <i>or</i> a third-party `Selenium` extension).

Simulation frameworks address these limitations by allowing any supported protocols or libraries to be used concurrently.
```python
import os
from hedra import (
    Workflow,
    step,
)
from hedra.core.engines import (
    HTTPResult,
    HTTP2Result,
    PlaywrightResult
)
from typing import Literal


class Test(Workflow):
    vus = 1000
    duration = '1m'
    username = os.getenv('USERNAME')
    password = os.getenv('PASSWORD')

    def get_book_title(self) -> Literal['Shakespeare Collected Works']:
        return 'Shakespeare Collected Works'

    @step()
    async def login_via_api(self) -> HTTP2Result:
        return await self.client.http2.post(
            'https://myapi.com/api/v1/login',
            auth=(
                self.username,
                self.password
            )
        )

    @step()
    async def login_via_ui(self) -> PlaywrightResult:
        await self.client.playwright.goto('https://myapi.com/login')

        await self.client.playwright.input_text(
            '[data-test-id="username-input"]',
            self.username
        )

        await self.client.playwright.input_text(
            '[data-test-id="password-input"]',
            self.password
        )

        return await self.client.playwright.click('[data-test-id="login-button"]')

    @step('login_via_api')
    async def get_book(
        self,
        auth_response: HTTP2Result
    ) -> HTTPResult:
        auth_token = auth_response.headers.get('X-API-TOKEN')
        title = self.get_book_title()

        return await self.client.http.get(
            f'https://myapi.com/api/v1/books?title={title}',
            headers={
                'X-API-TOKEN': auth_token
            }
        )

    @step('login_via_api')
    async def get_author(
        self,
        auth_response: HTTP2Result
    ) -> HTTPResult:
        auth_token = auth_response.headers.get('X-API-TOKEN')

        return await self.client.http.get(
            'https://myapi.com/api/v1/authors?author=william&shakespeare',
            headers={
                'X-API-TOKEN': auth_token
            }
        )

    @step('login_via_ui')
    async def get_author_and_book_via_search(self) -> PlaywrightResult:
        await self.client.playwright.click('[data-test-id="author-search"]')
        await self.client.playwright.input_text(
            '[data-test-id="author-search-input"]',
            'William Shakespeare'
        )

        title = self.get_book_title()
        await self.client.playwright.click('[data-test-id="book-search"]')
        await self.client.playwright.input_text(
            '[data-test-id="book-search-input"]',
            title
        )

        return await self.client.playwright.click('[data-test-id="search-button"]')
```

Existing performance frameworks have made efforts to improve their reporting options, but often delegate the responsibility of non-file/non-CLI output reporting to engineers, who must integrate or even write the necessary extensions. This increase in developer burden is often enough to prevent developers from integrating their tests into their workflows at all.

Simulation frameworks recognize the importance of integrations for reporting by offering plentiful options that are declared in-test and run concurrently. For example:
```python
import os
import statistics
from hedra import (
    Workflow,
    step,
    depends,
    Metric
)
from hedra.core.engines import (
    HTTPResult,
    HTTP2Result,
    PlaywrightResult
)
from hedra.reporting import (
    JSONResults,
    DatadogResults,
    KafkaResults
)
from typing import List


@depends(Test)
class SubmitResults(Workflow):
    reporters = [
        JSONResults(
            path='./events.json'
        ),
        DatadogResults(
            api_key=os.getenv('DD_API_KEY'),
            app_key=os.getenv('DD_APP_KEY')
        ),
        KafkaResults(
            host=os.getenv('KAFKA_HOST'),
            topic='myapi_testing_results'
        )
    ]

    @step()
    async def ui_vs_api_timing(
        self,
        results: List[
            HTTPResult |
            HTTP2Result |
            PlaywrightResult
        ]
    ) -> Metric['median_api_vs_ui_time_ratio']:

        median_api_timing = statistics.median([
            result.total_time for result in results if isinstance(
                result,
                (HTTPResult, HTTP2Result)
            )
        ])

        median_ui_timing = statistics.median([
            result.total_time for result in results if isinstance(
                result,
                PlaywrightResult
            )
        ])

        return median_api_timing/median_ui_timing
```

<br/>

## Developer Experience as a Priority

The primary source of adoption cost for existing performance testing frameworks is poor developer experience: opaque CLI interfaces, sprawling APIs, and significant changes in test code or framework configuration required to run tests via CI/CD or distributed versus locally.

Drawing inspiration from modern web development tooling and frameworks like K6, we can begin to improve by:
- Running tests via a single CLI whether locally or distributed.

- Embracing code generation to help test, devops, and application developers write tests more quickly via "starter" templates.

- Providing comprehensive, CLI-configurable metrics output of results to give additional visual feedback on test runs.

- Facilitating management of tests as "projects" - collections of related work - as opposed to offloading the entirety of test organization and management onto developers.

Code generation in particular is critical in helping developers rapidly prototype and develop tests. For example, a developer running the command
```bash
hedra test generate my-test --using http,http2,playwright --tags service=myapi.com,environment=staging
```

generates the following template code:
```python
# Generated by Hedra - my_test.py
import os
from hedra import (
    Workflow,
    step,
)
from hedra.core.engines import (
    HTTPResult,
    HTTP2Result,
    PlaywrightResult
)


class MyTest(Workflow):
    vus = 1000
    duration = '1m'
    tags = {
        'service': 'myapi.com',
        'environment': 'staging'
    }

    @step()
    async def get_http(self) -> HTTPResult:
        return await self.client.http.get('<ADD_URL_HERE>')

    @step()
    async def get_http2(self) -> HTTP2Result:
        return await self.client.http2.get('<ADD_URL_HERE>')

    @step()
    async def goto_url(self) -> PlaywrightResult:
        return await self.client.playwright.goto('<ADD_URL_HERE>')
```

When combined with linting:

```bash
hedra test lint my_test.py

Linting my_test.py...
OK!
```
project management features:

```bash
hedra test submit my_test.py --project github.com/myorg/tests

Submitting my_test.py to github.com/myorg/tests...
Repo updated!
```
And RPC remote execution:

```bash
hedra cloud test my_test.py --send staging

Sending to - staging - cluster at 155.020.313.33:6883
```
this allows developers to focus on value delivery as opposed to maintaining a plethora of extensions, disorganized tests, and execution environments.

<br/>

## Summing it Up
While performance frameworks are valuable tools, their inherent limitations have made them difficult to adopt and their value proposition increasingly questionable. We can build upon their strengths by:

- Facilitating simulation of realistic user interactions via workflows
- Allowing for concurrent use of multiple protocols/libraries in a single test/workflow
- Embracing statistical frameworks like A|B testing, using optimization to automate configuration, and providing protocol-level fault injection for chaos testing
- Including modern developer experience features like starter template code generation, "unified experience" CLIs, test linting, and project management
to create a new class of tooling, simulation frameworks, that both deliver upon and exceed the value proposition of performance testing frameworks.
# Day 21: Advanced Code Coverage with Jenkins, GitHub and API Mocking

Presentation by [Oleg Nenashev](https://linktr.ee/onenashev),
Jenkins core maintainer, developer advocate and community builder at Gradle

**TL;DR:** I will talk about how modern Jenkins allows you to analyze
and improve code coverage with the help of the new Coverage Plugin for Jenkins,
support for standard formats (Cobertura, JaCoCo, gcov, JUnit, etc.),
test parallelization, and the GitHub Checks API.
We will also delve into increasing integration test coverage with the help of WireMock and Testcontainers.
![Jenkins and GitHub Checks](./Images/day26-1.png)

## Resources

- [Video Recording](https://www.youtube.com/watch?v=ZBaQ71CI_lI)
- [Slides](https://speakerdeck.com/onenashev/advanced-code-coverage-with-jenkins-github-and-api-mocking/) (Premier - January 21)
## Full Abstract

In 2015-2018, I talked about how to use the Jenkins Pipeline and custom libraries to do advanced integration tests and analyze code coverage.
Coverage plugins were rather weak, and one needed scripts and hacks to make them work, and to DIY distributed testing. In 2021 the situation changed significantly thanks to the Coverage and Checks API plugins.
Distributed integration testing also became easier thanks to better coverage collectors and integrations with API mocking tools. So, a good time to be alive… and use Jenkins!
![Jenkins and GitHub Checks](./Images/day26-2.png)

We will talk about how modern Jenkins allows you to improve and analyze code coverage.
We will talk about unit and integration testing with WireMock,
the new Coverage Plugin,
support for standard formats (Cobertura, JaCoCo, gcov, JUnit, etc.),
parallelization for heavy integration tests and API mocking, and integration with the GitHub Checks API.
How can you analyze code coverage in Jenkins, and when do you need to create your own libraries?
And what's the fuss about Testcontainers and WireMock for integration testing?
![Jenkins and GitHub Checks](./Images/day26-3.png)

## References

- [Jenkins Coverage Plugin](https://plugins.jenkins.io/coverage/)
- [GitHub Checks Plugin](https://plugins.jenkins.io/github-checks/)
- [WireMock](https://wiremock.org/)
- [Testcontainers](https://www.testcontainers.org/)
- Demo - Coming soon
Contribute to open source projects:
[Jenkins](https://www.jenkins.io/participate),
[WireMock](https://wiremock.org/participate),
[Testcontainers](https://java.testcontainers.org/contributing/),
[Gradle](https://gradle.org/resources/)
# Day 27: 90DaysofDevOps

## From Automated to Automatic - Event-Driven Infrastructure Management with Ansible

**Daniel Bodky**

- [Twitter](https://twitter.com/d_bodky)
- [LinkedIn](https://linkedin.com/in/daniel-bodky)
- [Blog](https://dbodky.me)
## Overview

A universal truth and recurring theme in the DevOps world is automation. From provisioning infrastructure to testing code to deploying to production, many parts of the DevOps lifecycle are already automated. One popular technology for managing infrastructure and configuration in an automated way is Ansible, but are we fully utilizing its capabilities yet?

This presentation will give a broad overview of Ansible and its architecture and use-cases, before exploring a relatively new feature, Event-Driven Ansible (EDA). Analyzing applications of event-driven Ansible, participants will see that automated management is nice, but automatic management is awesome - not just regarding DevOps principles, but also in terms of reaction times, the human tendency for minor mistakes, and toil for operators.

Participants will get first-hand insights into Ansible, its strengths, weaknesses, and the potential of event-driven automation within the DevOps world.
> [!NOTE]
> The below content is a copy of the [lab repository's](https://github.com/mocdaniel/lab-event-driven-ansible) README for convenience.

---
# Event-Driven Ansible Lab

This is a lab designed to demonstrate Ansible and how Event-Driven Ansible (**EDA**) builds on top of its capabilities.

The setup is done with Ansible, too. It will install **Ansible, EDA, Prometheus**, and **Alertmanager** on a VM to demonstrate some of the capabilities of EDA.
## Prerequisites

To follow along with this lab in its entirety, you will need three VMs:

> [!NOTE]
> If you want to skip Ansible basics and go straight to EDA, you'll need just the `eda-controller.example.com` VM and can skip the others.
| VM name                    | OS               |
|----------------------------|------------------|
| eda-controller.example.com | CentOS/Rocky 8.9 |
| company.example.com        | CentOS/Rocky 8.9 |
| webshop.example.com        | Ubuntu 22.04     |

**You'll need to be able to SSH to each of these VMs as root using SSH keys.**
## Lab Setup

### Clone the repository and create a Python virtual environment

```bash
git clone https://github.com/mocdaniel/lab-event-driven-ansible.git
cd lab-event-driven-ansible
python3 -m venv .venv
source .venv/bin/activate
```
### Install Ansible and other dependencies

```bash
pip install -r requirements.txt
```
### Create the inventory file

```yaml
---
# hosts.yml
webservers:
  hosts:
    webshop.example.com:
      ansible_host: <ip-address>
      webserver: apache2
    company.example.com:
      ansible_host: <ip-address>
      webserver: httpd
eda_controller:
  hosts:
    eda-controller.example.com:
      ansible_host: <ip-address>
```
### Install Needed Roles and Collections

```bash
ansible-galaxy install -r requirements.yml
```
### Run the Setup Playbook

After you have created the inventory file and filled in the IP addresses, you can run the setup playbook:

```bash
ansible-playbook playbooks/setup.yml
```
> [!CAUTION]
> Due to a known bug with Python on macOS, you need to run `export NO_PROXY="*"` before running the playbook.
---

## Demos

### Lab 1: Ansible Basics

<details>

<summary>Ansible from the CLI via ansible</summary>

#### Ansible from the CLI via `ansible`

The first example installs a webserver on all hosts in the `webservers` group. The installed webserver is defined as a **host variable** in the inventory file `hosts.yml` (*see above*).
```console
ansible \
  webservers \
  -m package \
  -a 'name="{{ webserver }}"' \
  --one-line
```
Afterwards, we can start the webserver on all hosts in the `webservers` group.

```console
ansible \
  webservers \
  -m service \
  -a 'name="{{ webserver }}" state=started' \
  --one-line
```

Go on and check if the web servers are running on the respective hosts.
> [!TIP]
> Ansible is **idempotent** - try running the commands again and see how the output differs.

</details>
<details>

<summary>Ansible from the CLI via ansible-playbook</summary>

#### Ansible from the CLI via `ansible-playbook`

The second example utilizes the following **playbook** to **gather** and **display information** for all hosts in the `webservers` group, utilizing the **example** role from the lab repository.
```yaml
---
- name: Example role
  hosts: webservers
  gather_facts: false
  vars:
    greeting: "Hello World!"
  pre_tasks:
    - name: Say Hello
      ansible.builtin.debug:
        msg: "{{ greeting }}"
  roles:
    - role: example
  post_tasks:
    - name: Say goodbye
      ansible.builtin.debug:
        msg: Goodbye!
```
```console
ansible-playbook \
  playbooks/example.yml
```

</details>
### Lab 2: Event-Driven Ansible

<details>

<summary>Receive Generic Events via Webhook</summary>

#### Receive Generic Events via Webhook

If you followed the setup instructions for the EDA lab, you should already have a running EDA instance on the `eda-controller.example.com` VM.

If you navigate to `/etc/edacontroller/rulebook.yml` on the VM, you'll see the following rulebook:
```yaml
---
- name: Listen to webhook events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Debug event output
      condition: 1 == 1
      action:
        debug:
          msg: "{{ event }}"

- name: Listen to Alertmanager alerts
  hosts: all
  sources:
    - ansible.eda.alertmanager:
        host: 0.0.0.0
        port: 9000
        data_alerts_path: alerts
        data_host_path: labels.instance
        data_path_separator: .
  rules:
    - name: Restart MySQL server
      condition: event.alert.labels.alertname == 'MySQL not running' and event.alert.status == 'firing'
      action:
        run_module:
          name: ansible.builtin.service
          module_args:
            name: mysql
            state: restarted
    - name: Debug event output
      condition: 1 == 1
      action:
        debug:
          msg: "{{ event }}"
```
For this part of the lab, the **first rule** is the one we're interested in: It listens to a generic webhook on port `5000` and prints the event's **metadata** to its logs.

To test this, we can use the `curl` command to send a `POST` request to the webhook `/endpoint` from the VM itself:
```console
curl \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"foo": "bar"}' \
  http://localhost:5000/endpoint
```

If you now check the logs of the EDA controller, you should see the following output:
```console
journalctl -fu eda-controller

Jan 11 16:35:29 eda-controller ansible-rulebook[56882]: {'payload': {'foo': 'bar'}, 'meta': {'endpoint': 'endpoint',
'headers': {'Host': 'localhost:5000', 'User-Agent': 'curl/7.76.1', 'Accept': '*/*', 'Content-Length': '21',
'Content-Type': 'application/x-www-form-urlencoded'}, 'source': {'name': 'ansible.eda.webhook', 'type': 'ansible.eda.webhook'},
'received_at': '2024-01-11T15:35:29.798401Z', 'uuid': '6ebf8dd2-60a2-455a-9383-97b81f535366'}}
```
A rule that always evaluates to `true` is not very useful, so let's change the rule to print the value of `foo` if the `foo` key is present in the event's payload, and `no foo :(` otherwise:
```yaml
---
- name: Listen to webhook events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Foo
      condition: event.payload.foo is defined
      action:
        debug:
          msg: "{{ event.payload.foo }}"
    - name: No foo
      condition: 1 == 1
      action:
        debug:
          msg: "no foo :("
```
Send the same `curl` request again and check the logs; you should see a line saying `bar` now.

Let's also try a `curl` request with a different payload:
```console
|
||||||
|
curl \
|
||||||
|
-X POST \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{"bar": "baz"}' \
|
||||||
|
http://localhost:5000/endpoint
|
||||||
|
```
|
||||||
|
|
||||||
|
This time, the output should be `no foo :(`.

</details>

<details>

<summary>Restarting Services Automatically with EDA</summary>

#### Restarting Services Automatically with EDA

The last lab is more of a demo - it shows how you can use EDA to react automatically to events observed by **Prometheus** and **Alertmanager**.

For this demo, the second **ruleset** in our rulebook is the one we're interested in:

```yaml
- name: Listen to Alertmanager alerts
  hosts: all
  sources:
    - ansible.eda.alertmanager:
        host: 0.0.0.0
        port: 9000
        data_alerts_path: alerts
        data_host_path: labels.instance
        data_path_separator: .
  rules:
    - name: Restart MySQL server
      condition: event.alert.labels.alertname == 'MySQL not running' and event.alert.status == 'firing'
      action:
        run_playbook:
          name: ./playbook.yml

    - name: Debug event output
      condition: 1 == 1
      action:
        debug:
          msg: "{{ event }}"
```

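The `run_playbook` action points at `./playbook.yml`, whose contents aren't shown here. A minimal sketch of what such a playbook could look like (the service name and module choice are assumptions; adjust them to your distribution):

```yaml
---
- name: Restart MySQL when EDA fires the rule
  hosts: all
  become: true
  tasks:
    # Restart the service so a stopped MySQL comes back up.
    - name: Ensure the MySQL service is running
      ansible.builtin.service:
        name: mysql
        state: restarted
```
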
With this rule, we can restart our MySQL server if it's not running! But how do we get the event to trigger? With **Prometheus** and **Alertmanager**!

When you ran the setup playbook, it installed **Prometheus** and **Alertmanager** on the `eda-controller.example.com` VM. You can access the **Prometheus** UI at `http://<eda-controller-ip>:9090` and the **Alertmanager** UI at `http://<eda-controller-ip>:9093`.

It also installed a **Prometheus exporter** for the **MySQL** database that runs on the server.

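The alert itself comes from a Prometheus alerting rule that the setup playbook configures. The exact rule isn't shown here, but conceptually it looks something like this sketch (the `mysql_up` metric name is an assumption based on the common MySQL exporter, and the alert name must match the rulebook condition):

```yaml
groups:
  - name: mysql
    rules:
      - alert: MySQL not running     # must match the rulebook's alertname condition
        expr: mysql_up == 0          # exporter reports 0 when MySQL is unreachable
        for: 30s                     # avoid firing on a brief blip
        labels:
          severity: critical
        annotations:
          summary: "MySQL is down on {{ $labels.instance }}"
```
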
With this setup, we can now shut down our MySQL server and see what happens - make sure to watch the output of the EDA controller's logs:

```console
systemctl stop mysql
journalctl -fu eda-controller
```

Within 30-90 seconds, you should see EDA running our **playbook** and restarting the MySQL server. You can track that process by watching the Prometheus/Alertmanager UIs for firing alerts.

Once you see the playbook being executed in the logs, you can check the MySQL state once more:

```console
systemctl status mysql
```

MySQL should be up and running again!

</details>
@ -0,0 +1,33 @@
Day 30: How GitHub Builds GitHub with GitHub
=========================

Hello! 👋

I am April Edwards, and I am a senior developer advocate at GitHub. I've been at GitHub for almost a year; prior to that, I worked for Microsoft. Having spent over 24 years in the tech industry, I started my journey in ops and then moved into development. DevOps was a natural fit for me, especially when I started focusing on cloud deployments in 2013.

In this session I am going to show you how GitHub builds GitHub with GitHub. GitHub is not just a place where resources live; it's a platform where you can start your DevOps journey and carry it all the way through to delivering your code.

💻 You can connect with me here:

- [GitHub](https://github.com/scubaninja)
- [LinkedIn](https://www.linkedin.com/in/azureapril/)
- [Twitter](https://twitter.com/TheAprilEdwards)

## Resources

- Learn about [GitHub Projects](https://docs.github.com/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects)
- [Quickstart for using GitHub Projects](https://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/quickstart-for-projects)
- Read the docs on [GitHub Issues](https://docs.github.com/issues)
- Read more about [GitHub Codespaces](https://docs.github.com/codespaces/overview)
- Get started with a [Codespaces template](https://github.com/codespaces)
- Read about [GitHub Advanced Security (GHAS)](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security)
- Learn more about all of the ways to work with the [GitHub API](https://docs.github.com/en/rest?apiVersion=2022-11-28)

## Video

[![Day 30:](https://img.youtube.com/TODO.jpg)](https://youtu.be/TODO)
@ -0,0 +1,23 @@
# Day 32: 90DaysofDevOps

## Cracking Cholera’s Code: Victorian Insights for Today’s Technologist

### Overview

As Steve Jobs reminded us, technology can be a bicycle for the mind. It can be a force multiplier, helping us achieve what we otherwise could not.

However, it is all too easy for technology to constrain rather than enable us. Transformations too often fall short, and new technology rarely creates the expected bottom-line benefits.

What can Victorian London teach us about avoiding this trap?

#### Resources

- [Related Books](https://www.goodreads.com/review/list/68511315-simon?ref=nav_mybooks&shelf=cracking-choleras-code)

### Simon Copsey

I'm a Delivery & Transformation Consultant, on a mission to help organisations understand and unwind complex, cross-functional obstacles - enabling happier staff to deliver better software to customers sooner.

My career has taken me from being a developer in the trenches to helping various organisations take pragmatic steps from a place of chaos and paralysis to one where it becomes a little easier to see the wood for the trees.

- [Learn More](https://curiouscoffee.club/)
- [LinkedIn](https://linkedin.com/in/simoncopsey)
@ -0,0 +1,6 @@
Extra resources which would be good to include in the description:

- Blog: https://arshsharma.com/posts/2023-10-14-argocd-github-actions-getting-started/
- GitHub repo used for the sample: https://github.com/RinkiyaKeDad/gitops-sample
- Argo CD docs for installation: https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/
@ -0,0 +1,5 @@
Here are additional resources:

- https://firecracker-microvm.github.io/
- https://itnext.io/microvm-another-level-of-abstraction-for-serverless-computing-5f106b030f15
- https://github.com/alexellis/firecracker-init-lab
1563
Logo/2. print ready file.pdf
Normal file
1581
Logo/2. source file.ai
Normal file
206
Logo/2. vector file.svg
Normal file
@ -0,0 +1,206 @@
|
|||||||
|
<?xml version="1.0" encoding="utf-8"?>
|
||||||
|
<!-- Generator: Adobe Illustrator 24.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
|
||||||
|
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
|
||||||
|
viewBox="0 0 2000 1500" style="enable-background:new 0 0 2000 1500;" xml:space="preserve">
|
||||||
|
<style type="text/css">
|
||||||
|
.st0{fill:url(#SVGID_1_);}
|
||||||
|
.st1{fill:url(#SVGID_2_);}
|
||||||
|
.st2{fill:url(#SVGID_3_);}
|
||||||
|
.st3{fill:url(#SVGID_4_);}
|
||||||
|
.st4{fill:url(#SVGID_5_);}
|
||||||
|
</style>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<path d="M381.7,1188.6l32.6-42.5c-3.3,1.1-6.2,1.7-8.7,1.7c-10.5,0-19.1-3.2-25.7-9.5c-6.6-6.3-10-14.6-10-24.6
|
||||||
|
c0-10.5,3.6-18.9,10.9-25.1c7.3-6.3,16.7-9.4,28.3-9.4c12,0,21.7,3.2,29.2,9.7c7.4,6.4,11.2,15,11.2,25.7
|
||||||
|
c0,6.2-1.2,11.8-3.5,16.8c-2.4,5-5.8,10.7-10.4,16.9l-30.4,40.4H381.7z M409.7,1134.3c6.8,0,11.8-1.9,15.1-5.7
|
||||||
|
c3.3-3.8,4.9-8.9,4.9-15.3c0-6-1.7-10.8-5.1-14.3c-3.4-3.5-8.2-5.3-14.5-5.3c-6.5,0-11.6,1.8-15.3,5.5c-3.7,3.7-5.5,8.5-5.5,14.5
|
||||||
|
c0,6.4,1.7,11.4,5.2,15.1C397.8,1132.4,402.9,1134.3,409.7,1134.3z"/>
|
||||||
|
<path d="M516.4,1187.1c-4.6,2-9.9,3-15.7,3s-11.1-1-15.7-3c-4.7-2-8.5-4.7-11.5-8.1c-3-3.4-5.6-7.5-7.6-12.2
|
||||||
|
c-2-4.7-3.5-9.8-4.3-15.1c-0.9-5.3-1.3-10.9-1.3-17c0-6.1,0.4-11.9,1.3-17.2c0.9-5.4,2.4-10.4,4.4-15.2c2.1-4.8,4.6-8.9,7.7-12.2
|
||||||
|
c3.1-3.4,6.9-6.1,11.5-8c4.6-2,9.8-3,15.6-3s11,1,15.6,3c4.6,2,8.4,4.7,11.5,8.1c3,3.4,5.6,7.5,7.6,12.3c2,4.8,3.5,9.8,4.4,15.2
|
||||||
|
c0.9,5.3,1.3,11.1,1.3,17.2c0,6-0.4,11.7-1.3,17c-0.9,5.3-2.3,10.3-4.3,15.1c-2,4.8-4.5,8.9-7.6,12.2
|
||||||
|
C524.9,1182.5,521.1,1185.1,516.4,1187.1z M490.7,1171.4c2.8,2.3,6.1,3.4,10.1,3.4c3.9,0,7.3-1.1,10-3.4c2.8-2.3,4.9-5.4,6.3-9.4
|
||||||
|
c1.4-4,2.5-8.3,3.1-12.8c0.6-4.5,1-9.6,1-15.1c0-26.5-6.8-39.8-20.4-39.8c-13.5,0-20.4,13.2-20.6,39.5c0,5.6,0.3,10.7,1,15.2
|
||||||
|
c0.6,4.6,1.7,8.9,3.2,12.9C485.8,1166,487.9,1169.2,490.7,1171.4z"/>
|
||||||
|
<path d="M560,1188.6v-107.6h31.1c9.4,0,17.8,1.1,25.3,3.2c7.4,2.2,13.8,5.4,19.2,9.7c5.4,4.3,9.5,9.9,12.3,16.7
|
||||||
|
c2.9,6.8,4.3,14.7,4.3,23.7c0,17.4-5.2,30.8-15.7,40.1c-10.5,9.4-25,14.1-43.6,14.1H560z M579.9,1172.6h14.3
|
||||||
|
c12.3,0,21.6-3.2,28.1-9.7c6.4-6.4,9.6-15.8,9.6-28.2c0-13.2-3.3-22.8-9.9-28.8c-6.6-6-16.6-9-29.9-9h-12.1V1172.6z"/>
|
||||||
|
<path d="M690.6,1190.1c-7.5,0-13.6-2.1-18.2-6.2c-4.6-4.1-6.9-10.1-6.9-17.9c0-8.4,2.7-14.6,8.2-18.6c5.5-4,13.8-6.6,24.8-7.7
|
||||||
|
c1.5-0.2,3.2-0.4,4.9-0.6c1.8-0.2,3.7-0.4,6-0.6c2.2-0.2,3.9-0.4,5.2-0.5v-4.3c0-4.9-1.1-8.5-3.4-10.7c-2.3-2.2-5.7-3.3-10.3-3.3
|
||||||
|
c-6.6,0-14.7,1.8-24.3,5.5c0-0.1-0.9-2.4-2.4-6.7c-1.6-4.3-2.4-6.5-2.4-6.6c9.5-4.1,19.8-6.1,30.9-6.1c10.9,0,18.8,2.4,23.7,7.1
|
||||||
|
c4.9,4.7,7.4,12.4,7.4,23v52.7h-14.3c0-0.2-0.6-1.8-1.6-4.9c-1-3.1-1.5-4.7-1.5-4.9c-4,3.9-7.9,6.8-11.8,8.6
|
||||||
|
C700.7,1189.2,696.1,1190.1,690.6,1190.1z M695.8,1176.4c4.5,0,8.3-1.1,11.6-3.2c3.3-2.1,5.7-4.7,7.1-7.7v-15.6
|
||||||
|
c-0.1,0-1.4,0.1-3.9,0.3c-2.4,0.2-3.7,0.3-3.9,0.3c-7.8,0.7-13.5,2.2-17.1,4.4c-3.6,2.3-5.4,5.9-5.4,10.9c0,3.4,1,6.1,3,7.9
|
||||||
|
C689.2,1175.5,692.1,1176.4,695.8,1176.4z"/>
|
||||||
|
<path d="M757.4,1222.6c-3.1,0-6.5-0.2-10-0.6l-1-14.9c2.3,0.2,5.1,0.4,8.6,0.4c4.5,0,8-0.9,10.7-2.6c2.7-1.8,4.9-4.9,6.7-9.4
|
||||||
|
c0.2-0.5,1.2-3.3,2.9-8.3l-32.5-79.3h20.6l21.2,58.5c1.6-5.8,5.1-16.6,10.5-32.5c5.5-15.8,8.5-24.5,9-26.1h20.6
|
||||||
|
c-22.4,59.2-33.7,89-33.9,89.5c-3.5,9.1-7.9,15.5-13.1,19.4C772.5,1220.6,765.7,1222.6,757.4,1222.6z"/>
|
||||||
|
<path d="M860.9,1190c-11.9,0-21.4-2-28.6-6.1l2-14.5c3.2,1.8,7.3,3.3,12.3,4.7c5,1.4,9.5,2.1,13.7,2.1c4.3,0,7.6-0.8,10-2.4
|
||||||
|
c2.4-1.6,3.6-3.9,3.6-7c0-2.8-1.1-5-3.3-6.6c-2.2-1.6-6.5-3.6-12.7-6c-2.2-0.8-3.6-1.3-4.2-1.5c-7.4-2.9-12.8-6.1-16.1-9.7
|
||||||
|
c-3.3-3.5-4.9-8.4-4.9-14.4c0-7.3,2.7-13,8-16.9c5.3-3.9,12.7-5.9,22.2-5.9c10.3,0,19.3,1.9,27.1,5.8l-4.8,13.3
|
||||||
|
c-7.7-3.6-15-5.4-22-5.4c-3.9,0-6.9,0.6-9.1,1.9c-2.2,1.3-3.3,3.3-3.3,5.9c0,2.5,1.1,4.4,3.2,5.8c2.1,1.4,6.2,3.2,12.3,5.5
|
||||||
|
c0.2,0,0.8,0.3,1.9,0.7c1.1,0.4,1.9,0.7,2.5,1c7.4,2.7,12.8,5.9,16.3,9.7c3.5,3.7,5.2,8.6,5.2,14.7c0,8.1-2.8,14.3-8.2,18.7
|
||||||
|
C878.6,1187.8,870.9,1190,860.9,1190z"/>
|
||||||
|
<path d="M949.1,1173c-6.4-1.4-11.5-4.5-15.3-9.3c-5.3-6.8-8-16.4-8-28.7c0-12.5,2.7-22.2,8-29.1c3.8-4.9,8.9-8.1,15.2-9.5v-16.4
|
||||||
|
c-12.3,1.4-22.3,6.1-30,14.1c-9.4,9.7-14.1,23.3-14.1,40.7c0,17.3,4.7,30.8,14.1,40.7c7.7,8.1,17.7,12.8,30.1,14.3V1173z"/>
|
||||||
|
<path d="M995.8,1094.1c-7.7-8.1-17.8-12.8-30.3-14.2v16.4c6.5,1.4,11.7,4.6,15.6,9.6c5.3,6.9,8,16.6,8,29.2c0,12.4-2.7,22-8,28.8
|
||||||
|
c-3.8,4.9-9,8-15.6,9.4v16.6c12.4-1.4,22.5-6.2,30.2-14.3c9.4-9.8,14.1-23.4,14.1-40.5C1009.9,1117.4,1005.2,1103.8,995.8,1094.1
|
||||||
|
z"/>
|
||||||
|
<path d="M1032.1,1188.6v-67.9h-12.8l1.2-11.3l11.5-1.5v-4.5c0-4.9,0.5-9.1,1.4-12.6c0.9-3.5,2.4-6.5,4.4-8.9
|
||||||
|
c2.1-2.5,4.8-4.3,8.3-5.5c3.5-1.2,7.7-1.8,12.7-1.8c4.9,0,9.9,0.5,15,1.4l-1.7,14.3c-4-0.5-7.2-0.8-9.5-0.8c-4.1,0-7,1-8.8,3
|
||||||
|
c-1.7,2-2.6,5.3-2.6,9.9v5.5h19v12.8h-19v67.9H1032.1z"/>
|
||||||
|
<path d="M1084.7,1188.6v-107.6h31.1c9.4,0,17.8,1.1,25.3,3.2c7.4,2.2,13.8,5.4,19.2,9.7c5.4,4.3,9.5,9.9,12.3,16.7
|
||||||
|
c2.9,6.8,4.3,14.7,4.3,23.7c0,17.4-5.2,30.8-15.7,40.1c-10.5,9.4-25,14.1-43.6,14.1H1084.7z M1104.6,1172.6h14.3
|
||||||
|
c12.3,0,21.6-3.2,28.1-9.7c6.4-6.4,9.6-15.8,9.6-28.2c0-13.2-3.3-22.8-9.9-28.8c-6.6-6-16.6-9-29.9-9h-12.1V1172.6z"/>
|
||||||
|
<path d="M1231.3,1190.1c-13,0-23.1-3.7-30.3-11.2c-7.2-7.5-10.8-17.8-10.8-31c0-12.8,3.4-23,10.2-30.6
|
||||||
|
c6.8-7.6,16.1-11.4,27.8-11.5c11.1,0,19.7,3.5,25.9,10.4c6.1,7,9.2,16.2,9.2,27.7c0,0.8,0,2.2,0,4c0,1.9,0,3.2,0,4.1h-53.7
|
||||||
|
c0.2,7.5,2.3,13.4,6.2,17.5c3.9,4.1,9.4,6.2,16.3,6.2c8.8,0,17.3-2.3,25.4-6.9l2.7,13.7
|
||||||
|
C1252.1,1187.6,1242.4,1190.1,1231.3,1190.1z M1209.9,1140h35.3c0-6.6-1.5-11.6-4.5-15.1c-3-3.5-7.1-5.3-12.4-5.3
|
||||||
|
c-4.9,0-9.1,1.7-12.5,5.1C1212.3,1128.1,1210.4,1133.2,1209.9,1140z"/>
|
||||||
|
<path d="M1301.3,1188.6l-32.2-80.7h20.4c0.8,2.2,3.3,9,7.5,20.5c4.2,11.5,7,19.3,8.3,23.5c2.8,8.1,4.8,14.3,6,18.4
|
||||||
|
c0,0,0.4-1.6,1.2-4.8c0.8-3.1,1.6-6.1,2.4-8.9c0.8-2.8,1.2-4.4,1.4-5c0-0.1,2.3-6.9,6.7-20.3c4.4-13.4,7.1-21.3,7.9-23.5h20.3
|
||||||
|
l-30.6,80.7H1301.3z"/>
|
||||||
|
<path d="M1402.7,1172.9c-6.2-1.4-11.3-4.5-15-9.3c-5.3-6.8-8-16.4-8-28.7c0-12.5,2.7-22.2,8-29.1c3.7-4.8,8.7-8,15-9.4v-16.5
|
||||||
|
c-12.2,1.5-22.1,6.2-29.8,14.1c-9.4,9.7-14.1,23.3-14.1,40.7c0,17.3,4.7,30.8,14.1,40.7c7.7,8,17.6,12.8,29.8,14.2V1172.9z"/>
|
||||||
|
<path d="M1449.7,1094.1c-7.8-8.1-18-12.9-30.6-14.2v16.3c6.6,1.3,11.9,4.5,15.8,9.6c5.3,6.9,8,16.6,8,29.2c0,12.4-2.7,22-8,28.8
|
||||||
|
c-3.9,5-9.2,8.1-15.8,9.4v16.5c12.5-1.4,22.7-6.1,30.5-14.3c9.4-9.8,14.1-23.4,14.1-40.5
|
||||||
|
C1463.7,1117.4,1459,1103.8,1449.7,1094.1z"/>
|
||||||
|
<path d="M1480.7,1222.1v-114.3h16.5l2.1,11.3c2.6-4.3,6.2-7.6,10.5-9.9c4.4-2.3,9.3-3.5,14.7-3.5c6.6,0,12.4,1.8,17.4,5.5
|
||||||
|
c5,3.7,8.8,8.7,11.4,15c2.6,6.3,3.9,13.5,3.9,21.5c0,12.5-3,22.7-9.1,30.5c-6,7.9-14.2,11.8-24.6,11.8c-5.1,0-9.7-1-13.8-3
|
||||||
|
c-4.1-2-7.6-4.8-10.4-8.3c0.4,7.1,0.6,11,0.6,11.9v30.5L1480.7,1222.1z M1519.5,1175.4c5.7,0,10.2-2.4,13.6-7.2
|
||||||
|
c3.3-4.8,5-11.7,5-20.6c0-9.1-1.7-15.9-5-20.4c-3.3-4.6-8-6.8-13.9-6.8c-12.7,0-19.2,8.7-19.3,26c0,10,1.7,17.3,5,22
|
||||||
|
C1508.1,1173.1,1513,1175.4,1519.5,1175.4z"/>
|
||||||
|
<path d="M1598.8,1190c-11.9,0-21.4-2-28.6-6.1l2-14.5c3.2,1.8,7.3,3.3,12.3,4.7c5,1.4,9.5,2.1,13.7,2.1c4.3,0,7.6-0.8,10-2.4
|
||||||
|
c2.4-1.6,3.6-3.9,3.6-7c0-2.8-1.1-5-3.3-6.6c-2.2-1.6-6.5-3.6-12.7-6c-2.2-0.8-3.6-1.3-4.2-1.5c-7.4-2.9-12.8-6.1-16.1-9.7
|
||||||
|
c-3.3-3.5-4.9-8.4-4.9-14.4c0-7.3,2.7-13,8-16.9c5.3-3.9,12.7-5.9,22.2-5.9c10.3,0,19.3,1.9,27.1,5.8l-4.8,13.3
|
||||||
|
c-7.7-3.6-15-5.4-22-5.4c-3.9,0-6.9,0.6-9.1,1.9c-2.2,1.3-3.3,3.3-3.3,5.9c0,2.5,1.1,4.4,3.2,5.8c2.1,1.4,6.2,3.2,12.3,5.5
|
||||||
|
c0.2,0,0.8,0.3,1.9,0.7c1.1,0.4,1.9,0.7,2.5,1c7.4,2.7,12.8,5.9,16.3,9.7c3.5,3.7,5.2,8.6,5.2,14.7c0,8.1-2.8,14.3-8.2,18.7
|
||||||
|
C1616.5,1187.8,1608.8,1190,1598.8,1190z"/>
|
||||||
|
</g>
|
||||||
|
</g>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="1091.0248" y1="832.8361" x2="1396.0465" y2="832.8361">
|
||||||
|
<stop offset="0" style="stop-color:#0025AD"/>
|
||||||
|
<stop offset="1" style="stop-color:#A50664"/>
|
||||||
|
</linearGradient>
|
||||||
|
<path class="st0" d="M1098.8,811.9c4.1,0,7.5-3.2,7.7-7.3l63.4-10.1c1.1,2.9,3.8,4.9,7,5l4.7,29.8c-1.6,1.1-2.6,2.9-2.6,4.9
|
||||||
|
c0,0.4,0.1,0.9,0.1,1.3l-20.5,20.4c-0.6-0.2-1.3-0.3-2-0.3c-3.4,0-6.2,2.8-6.2,6.2s2.8,6.2,6.2,6.2s6.2-2.8,6.2-6.2
|
||||||
|
c0-0.4,0-0.8-0.1-1.2l20.7-20.7c0.4,0.1,0.9,0.2,1.4,0.2c3.3,0,5.9-2.6,5.9-5.9c0-0.5-0.1-0.9-0.2-1.4l7.9-7.9
|
||||||
|
c10.8-10.8,24.4-17.3,38.4-19.6c1.5,3,4.5,5,8.1,5c3.8,0,7.1-2.4,8.4-5.8c16.4,1.2,32.4,8,44.9,20.5c0.6,0.6,1.1,1.1,1.6,1.7
|
||||||
|
c0,0.2,0,0.4,0,0.7c0,3.9,2.9,7.2,6.7,7.7c12.5,18.2,15.5,41.1,8.9,61.5c-2.3,0.8-3.9,3-3.9,5.5c0,1,0.2,1.9,0.7,2.7
|
||||||
|
c-3.4,7.3-8.1,14.1-14.1,20.1c-8.3,8.3-18.6,14.4-29.8,17.7l-13.6,4l14,2c6.1,0.9,11.3,1.3,16.3,1.3c0,0,0,0,0,0
|
||||||
|
c39.3,0,75.8-20.8,95.7-53.1c2.7-0.2,4.9-2.5,4.9-5.2c0-0.9-0.2-1.7-0.6-2.4c1.3-2.6,2.5-5.3,3.6-8c5.7-14,8-28.7,6.9-43.7
|
||||||
|
c-0.6-8-2-15.9-4.4-23.8c1-1.5,1.6-3.3,1.6-5.2c0-4.3-2.9-7.9-6.9-9c-0.6-1.2-1.1-2.5-1.7-3.7c-8.4-17.2-21.2-33.2-36.8-46.1
|
||||||
|
c-10.6-8.8-22.4-16-35.3-21.4c-0.4-3-3-5.4-6.1-5.4c-1.5,0-2.9,0.6-4,1.5c-1.8-0.6-3.7-1.2-5.6-1.8
|
||||||
|
c-18.2-5.4-37.3-7.4-56.9-6.1c-11.5,0.8-22.7,2.7-33.6,5.8c-2.3-3.9-6.6-6.5-11.4-6.5c-7.4,0-13.3,6-13.3,13.3
|
||||||
|
c0,0.7,0.1,1.4,0.2,2.1c-17.1,7.8-32.3,18.3-45.2,31.1l-34.5,34.6c-0.9-0.4-1.9-0.6-2.9-0.6c-4.3,0-7.7,3.5-7.7,7.7
|
||||||
|
S1094.5,811.9,1098.8,811.9z M1194.6,742.4c6,0,11-3.9,12.7-9.3l34.5,7.3c0.1,0.5,0.3,1,0.5,1.5l-60.2,43.7
|
||||||
|
c-0.4-0.3-0.8-0.6-1.3-0.8l11.8-42.6C1193.3,742.4,1193.9,742.4,1194.6,742.4z M1300.4,731.5c0.8,1.9,2.6,3.4,4.7,3.7l2.1,23.9
|
||||||
|
c-6.6,1.2-11.6,7-11.6,14c0,1,0.1,2,0.3,2.9l-43.4,20.2c-1.3-1.8-3.2-3.2-5.4-3.7l2.1-47.2c2.6-0.5,4.6-2.7,4.9-5.4
|
||||||
|
L1300.4,731.5z M1374.9,811.7l-14.3,8.5c-2.1-2.6-5.3-4.3-9-4.3c-5.8,0-10.6,4.3-11.4,9.9h-25.1c-0.6-2.9-2.9-5.3-5.8-6
|
||||||
|
l1.3-32.5c5.3-0.3,9.9-3.6,12-8.3l52.7,24.8c-0.8,1.3-1.2,2.9-1.2,4.6C1374.3,809.6,1374.6,810.7,1374.9,811.7z M1375.3,890.7
|
||||||
|
l-52.4,8.9c-0.3-0.7-0.7-1.3-1.3-1.8c0.3-1,0.6-2,0.9-3l25.8-56.5c1,0.3,2.2,0.5,3.3,0.5c1.1,0,2.2-0.2,3.2-0.5l22.1,49.2
|
||||||
|
C1376.2,888.4,1375.5,889.5,1375.3,890.7z M1194.2,820.5l-8,7.9c-0.4-0.1-0.9-0.2-1.3-0.2c-0.1,0-0.2,0-0.3,0l-4.6-29.4
|
||||||
|
c2.9-1.1,4.9-3.9,4.9-7.2c0-1.2-0.3-2.3-0.7-3.3l59.7-44.3c0.6,0.5,1.3,0.9,2.1,1.1l-2.1,47.2c-3.7,0.5-6.7,3.2-7.6,6.7
|
||||||
|
C1220.9,801.5,1206.1,808.7,1194.2,820.5z M1255,798.5l41.9-19.5c2,4.3,6,7.5,10.8,8.2l-1.3,32.5c-1.2,0.2-2.3,0.7-3.3,1.4
|
||||||
|
c-0.2-0.2-0.3-0.3-0.4-0.5C1289.4,807.3,1272.4,799.9,1255,798.5z M1312.9,833.1c1.2-1.1,2.1-2.6,2.4-4.2h25.1
|
||||||
|
c0.5,3.4,2.4,6.4,5.2,8.1l-20.8,45.4C1326.5,865.5,1322.5,848,1312.9,833.1z M1285.3,943.7c-0.9,0-1.7,0-2.6,0
|
||||||
|
c7.4-3.7,14.2-8.6,20.1-14.4c6.4-6.4,11.4-13.6,15-21.3c3-0.1,5.4-2.4,5.7-5.3l52-8.8C1356.7,924.2,1322.3,943.7,1285.3,943.7z
|
||||||
|
M1383.1,878.8c-1,2.5-2.1,4.9-3.3,7.3l-22-49c3.2-2,5.4-5.6,5.4-9.7c0-1.6-0.3-3.1-0.9-4.5l14.2-8.5c1.7,2,4.3,3.4,7.1,3.4
|
||||||
|
c0.8,0,1.6-0.1,2.4-0.3C1390.9,834.6,1392.4,855.9,1383.1,878.8z M1378.7,798.5c0.3,0.5,0.5,1.1,0.8,1.6
|
||||||
|
c-0.7,0.4-1.4,0.8-2,1.3l-53.7-25.3c0.2-1,0.3-2,0.3-3c0-7.7-6.1-14-13.8-14.2l-2.1-23.9c0.6-0.2,1.2-0.6,1.7-1
|
||||||
|
c12,5.1,23.3,11.9,33.6,20.5C1358.5,766.8,1370.7,782.1,1378.7,798.5z M1297.1,729.2l-43.6,7.6c-1-2.2-3.1-3.7-5.7-3.7
|
||||||
|
c-2.5,0-4.6,1.5-5.6,3.6l-34.4-6.6c0-0.3,0-0.7,0-1c0-0.4,0-0.8-0.1-1.2C1237.2,719.8,1268.5,720.1,1297.1,729.2z
|
||||||
|
M1140.6,766.7c12.7-12.6,27.3-22.6,43.1-29.9c1.5,2.1,3.6,3.7,6,4.7l-11.8,42.5c-0.2,0-0.5,0-0.7,0c-4.2,0-7.6,3.3-7.7,7.4
|
||||||
|
l-63.4,10.1c0,0,0-0.1,0-0.1L1140.6,766.7z"/>
|
||||||
|
<linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="773.7637" y1="920.7142" x2="1078.7854" y2="920.7142">
|
||||||
|
<stop offset="0" style="stop-color:#0025AD"/>
|
||||||
|
<stop offset="1" style="stop-color:#A50664"/>
|
||||||
|
</linearGradient>
|
||||||
|
<path class="st1" d="M1071,941.7c-4.1,0-7.5,3.2-7.7,7.3l-63.4,10.1c-1.1-2.9-3.8-4.9-7-5l-4.7-29.8c1.6-1.1,2.6-2.9,2.6-4.9
|
||||||
|
c0-0.4-0.1-0.9-0.1-1.3l20.5-20.4c0.6,0.2,1.3,0.3,2,0.3c3.4,0,6.2-2.8,6.2-6.2c0-3.4-2.8-6.2-6.2-6.2c-3.4,0-6.2,2.8-6.2,6.2
|
||||||
|
c0,0.4,0,0.8,0.1,1.2l-20.7,20.7c-0.4-0.1-0.9-0.2-1.4-0.2c-3.3,0-5.9,2.6-5.9,5.9c0,0.5,0.1,0.9,0.2,1.4l-7.9,7.9
|
||||||
|
c-10.8,10.8-24.4,17.3-38.4,19.6c-1.5-3-4.5-5-8.1-5c-3.8,0-7.1,2.4-8.4,5.8c-16.4-1.2-32.4-8-44.9-20.5
|
||||||
|
c-0.6-0.6-1.1-1.1-1.6-1.7c0-0.2,0-0.4,0-0.7c0-3.9-2.9-7.2-6.7-7.7c-12.5-18.2-15.5-41.1-8.9-61.5c2.3-0.8,3.9-3,3.9-5.5
|
||||||
|
c0-1-0.2-1.9-0.7-2.7c3.4-7.3,8.1-14.1,14.1-20.1c8.3-8.3,18.6-14.4,29.8-17.7l13.6-4l-14-2c-6.1-0.9-11.3-1.3-16.3-1.3
|
||||||
|
c0,0,0,0,0,0c-39.3,0-75.8,20.8-95.7,53.1c-2.7,0.2-4.9,2.5-4.9,5.2c0,0.9,0.2,1.7,0.6,2.4c-1.3,2.6-2.5,5.3-3.6,8
|
||||||
|
c-5.7,14-8,28.7-6.9,43.7c0.6,8,2,15.9,4.4,23.8c-1,1.5-1.6,3.3-1.6,5.2c0,4.3,2.9,7.9,6.9,9c0.6,1.2,1.1,2.5,1.7,3.7
|
||||||
|
c8.4,17.2,21.2,33.2,36.8,46.1c10.6,8.8,22.4,16,35.3,21.4c0.4,3,3,5.4,6.1,5.4c1.5,0,2.9-0.6,4-1.5c1.8,0.6,3.7,1.2,5.6,1.8
|
||||||
|
c14.6,4.3,29.8,6.5,45.3,6.5c3.8,0,7.7-0.1,11.6-0.4c11.5-0.8,22.7-2.7,33.6-5.8c2.3,3.9,6.6,6.5,11.4,6.5
|
||||||
|
c7.4,0,13.3-6,13.3-13.3c0-0.7-0.1-1.4-0.2-2.1c17.1-7.8,32.3-18.3,45.2-31.1l34.5-34.6c0.9,0.4,1.9,0.6,2.9,0.6
|
||||||
|
c4.3,0,7.7-3.5,7.7-7.7C1078.8,945.2,1075.3,941.7,1071,941.7z M975.3,1011.1c-5.4,0-10.1,3.2-12.2,7.9l-35-12.4
|
||||||
|
c-0.1-0.6-0.2-1.2-0.5-1.7l60-37.2c0.4,0.4,0.9,0.7,1.4,1l-11.8,42.6C976.5,1011.2,975.9,1011.1,975.3,1011.1z M869.2,1021.6
|
||||||
|
c-0.9-1.7-2.5-2.9-4.5-3.3l-2.1-23.9c6.6-1.2,11.6-7,11.6-14c0-1-0.1-2-0.3-2.9l43.4-20.2c1.3,1.8,3.2,3.1,5.4,3.6l-2.1,40.3
|
||||||
|
c-2.8,0.6-4.9,3-4.9,6L869.2,1021.6z M794.9,941.8l14.3-8.5c2.1,2.6,5.3,4.3,9,4.3c5.8,0,10.6-4.3,11.4-9.9h25.1
|
||||||
|
c0.6,2.9,2.9,5.3,5.8,6l-1.3,32.5c-5.3,0.3-9.9,3.6-12,8.3l-52.7-24.8c0.8-1.3,1.2-2.9,1.2-4.6
|
||||||
|
C795.5,943.9,795.3,942.8,794.9,941.8z M794.5,862.8l52.4-8.9c0.3,0.7,0.7,1.3,1.3,1.8c-0.3,1-0.6,2-0.9,3l-25.8,56.5
|
||||||
|
c-1-0.3-2.2-0.5-3.3-0.5c-1.1,0-2.2,0.2-3.2,0.5L792.7,866C793.6,865.2,794.3,864.1,794.5,862.8z M975.6,933l8-7.9
|
||||||
|
c0.4,0.1,0.9,0.2,1.3,0.2c0.1,0,0.2,0,0.3,0l4.6,29.4c-2.9,1.1-4.9,3.9-4.9,7.2c0,1,0.2,2,0.6,2.9l-59.3,38.2
|
||||||
|
c-0.7-0.6-1.5-1.1-2.4-1.4l2-40.3c3.7-0.4,6.7-3.2,7.6-6.7C948.9,952,963.7,944.9,975.6,933z M914.8,955.1l-41.9,19.5
|
||||||
|
c-2-4.3-6-7.5-10.8-8.2l1.3-32.5c1.2-0.2,2.3-0.7,3.3-1.4c0.2,0.2,0.3,0.3,0.4,0.5C880.4,946.2,897.4,953.6,914.8,955.1z
|
||||||
|
M856.9,920.4c-1.2,1.1-2.1,2.6-2.4,4.2h-25.1c-0.5-3.4-2.4-6.4-5.2-8.1l20.8-45.4C843.3,888.1,847.3,905.6,856.9,920.4z
|
||||||
|
M884.5,809.9c0.9,0,1.7,0,2.6,0c-7.4,3.7-14.2,8.6-20.1,14.4c-6.4,6.4-11.4,13.6-15,21.3c-3,0.1-5.4,2.4-5.7,5.3l-52,8.8
|
||||||
|
C813.1,829.3,847.5,809.9,884.5,809.9z M786.7,874.7c1-2.5,2.1-4.9,3.3-7.3l22,49c-3.2,2-5.4,5.6-5.4,9.7
|
||||||
|
c0,1.6,0.3,3.1,0.9,4.5l-14.2,8.5c-1.7-2-4.3-3.4-7.1-3.4c-0.8,0-1.6,0.1-2.4,0.3C778.9,918.9,777.4,897.6,786.7,874.7z
|
||||||
|
M791.1,955c-0.3-0.5-0.5-1.1-0.8-1.6c0.7-0.4,1.4-0.8,2-1.3l53.7,25.3c-0.2,1-0.3,2-0.3,3c0,7.7,6.1,14,13.8,14.2l2.1,23.9
|
||||||
|
c-0.6,0.2-1.2,0.6-1.7,1c-12-5.1-23.3-11.9-33.6-20.5C811.3,986.7,799.1,971.5,791.1,955z M871,1023.8l45.6-13.3
|
||||||
|
c1.1,1.8,3.1,3.1,5.4,3.1c2.2,0,4.2-1.2,5.3-3l35,11.2c-0.2,0.9-0.3,1.7-0.3,2.6c0,0.4,0,0.8,0.1,1.2
|
||||||
|
C932.1,1034,900,1033.5,871,1023.8z M1029.2,986.8c-12.7,12.6-27.3,22.6-43.1,29.9c-1.5-2.1-3.6-3.7-6-4.7l11.8-42.5
|
||||||
|
c0.2,0,0.5,0,0.7,0c4.2,0,7.6-3.3,7.7-7.4l63.4-10.1c0,0,0,0.1,0,0.1L1029.2,986.8z"/>
|
||||||
|
</g>
|
||||||
|
</g>
|
||||||
|
<linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="766.1575" y1="875.5201" x2="1404.113" y2="875.5201">
|
||||||
|
<stop offset="0" style="stop-color:#0025AD"/>
|
||||||
|
<stop offset="1" style="stop-color:#A50664"/>
|
||||||
|
</linearGradient>
|
||||||
|
<path class="st2" d="M1285.3,963.2c-31.1,0-59.6-11.2-81.4-29.7c-2.6-2-5.1-4-7.5-6.4l-65.3-64l-74.9-73.5l-24.4-23.9
|
||||||
|
c-60.8-60.5-159.5-60.5-220.2,0.2c-29.6,29.6-44.7,68.2-45.5,107c16-49.3,63.2-85.1,118.8-85.1c31.1,0,59.6,11.2,81.4,29.7
|
||||||
|
c2.6,2,5.1,4,7.5,6.4l164.5,161.4c18.9,18.9,41.6,31.9,65.6,39c3.7,1.1,7.3,2,11.1,2.8c50.1,10.9,104.7-3.1,143.5-42
|
||||||
|
c29.6-29.6,44.7-68.1,45.5-107C1388.1,927.4,1340.9,963.2,1285.3,963.2z"/>
|
||||||
|
</g>
|
||||||
|
<g>
|
||||||
|
<linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="667.4361" y1="621.7529" x2="695.3327" y2="415.6279">
|
||||||
|
<stop offset="0" style="stop-color:#0025AD"/>
|
||||||
|
<stop offset="1" style="stop-color:#A50664"/>
|
||||||
|
</linearGradient>
|
||||||
|
<path class="st3" d="M612.5,534.1l8.9-3.7L765,463.2l156.1-185.7l-251.5,56.3L463.8,596.4l87-36.4
|
||||||
|
c-10.6,33.8-18.1,50.1-15.3,94.1c-6.7,9.8-13.4,21.6-7.8,29c4.4,5.8,12.3,8,20.9,6.8c1.3,2.6,2.6,5.2,3.8,7.8
|
||||||
|
c16.5,35.5-11.2,76.5-11.2,76.5l21.1-9.2c16.7-7.3,28.6-22.6,31.3-40.6c0.1-0.4,0.1-0.7,0.2-1c2.8-18.9-1.1-38.1-10.9-54.5
|
||||||
|
c-0.8-1.3-1.4-2.2-1.6-2.4c4.5-8.7,5.7-18.6,0-24.2c-6-5.8-18.6-3.3-20.3-2.9c-0.7-130,139.8-236.4,152.8-245.9
|
||||||
|
c0.6-0.4,1.4-0.5,2.1-0.3c1.6,0.6,1.9,2.6,0.7,3.7C701.7,409.5,634.4,470.2,612.5,534.1z"/>
|
||||||
|
<linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="1078.9738" y1="677.4498" x2="1106.8704" y2="471.3247">
|
||||||
|
<stop offset="0" style="stop-color:#0025AD"/>
|
||||||
|
<stop offset="1" style="stop-color:#A50664"/>
|
||||||
|
</linearGradient>
|
||||||
|
<path class="st4" d="M1498.3,650.8c-20.6-19.7-47-32.4-76.2-36.8l-7.3-1.1l0.1-7.4c0.2-19.1-4.5-38.3-13.5-55.5
|
||||||
|
c-11.7-22.3-30.2-40.8-53.3-53.4c-21.1-11.6-45.4-17.7-70-17.7c-2.3,0-4.5,0-6.8,0.2c-17.6,0.8-34.7,4.6-50.8,11.5l-9.5,4
|
||||||
|
l-2.2-10.1c-5.6-26.3-20.1-49.8-41.7-68c-25.2-21.2-57.8-32.8-91.9-32.8c-36.7,0-71,13.1-96.6,36.9c-18.3,17-30.9,38.4-36.4,61.9
|
||||||
|
l-2.6,11.1l-9.9-5.7c-6-3.5-11.4-6.1-17-8.4c-11.1-4.6-22.5-7.2-35.4-8.4l2.4-2c13.9-9.9,25.2-23.2,32.3-38.7
|
||||||
|
c3.1-6.8,4.8-12.9,5.1-18.2c0.3-5.1-0.8-9.4-3.3-12.7l-31.8-41.1L779.6,482.4l-149.5,65.5l33,42.7c2.5,3.3,6.4,5.5,11.4,6.6
|
||||||
|
c2.5,0.5,5.3,0.8,8.3,0.8c11.2,0,22.1-3.9,31.2-10.4l16.7-11.9c-0.8,5.6-1.3,11.3-1.3,17c0,4.1,0.2,8.4,0.7,12.8l0.9,7.9
|
||||||
|
l-7.8,1.4c-27.9,5.2-53,18.1-72.5,37.3c-23.7,23.2-36.7,53.5-36.7,85.2c0,32.3,13.5,63,37.9,86.4c24.6,23.6,57.8,37.2,93.3,38.4
|
||||||
|
l0.4,0l0.4,0l21.7,0c0.5-2.6,1.7-4.9,3.4-6.7c0.4-2.9,0.9-5.7,1.5-8.6c0-0.1,0.1-0.1,0.1-0.2h-18.5v0.4l-8.8-0.3
|
||||||
|
c-30.3-1.2-58.6-13-79.7-33.2c-21.4-20.5-33.2-47.6-33.2-76.2c0-28.1,11.4-54.8,32.1-75.2c20.6-20.3,48.4-32.4,78.4-34.2l1.2-0.1
|
||||||
|
l0.6,0c14.8,0.3,29,2.9,42,7.7c0,0.6-0.1,1.2-0.1,1.8c0,14,11.4,25.4,25.4,25.4c14,0,25.4-11.4,25.4-25.4S826,611.9,812,611.9
|
||||||
|
c-7.8,0-14.8,3.5-19.4,9.1c-11.6-4.2-23.8-6.9-36.4-8l-6.8-0.6l-0.9-6.8c-0.6-4.2-0.8-8.6-0.8-12.9c0-28.6,12-55.5,33.9-75.6
|
||||||
|
c21.6-20,50.4-31,81-31c16.9,0,30,2.2,42.3,7.3c10.4,4.3,20.7,10.8,30.2,17.2l3.8,2.5v117.6c-9.1,3.9-15.5,12.9-15.5,23.4
|
||||||
|
c0,14,11.4,25.4,25.4,25.4c14,0,25.4-11.4,25.4-25.4c0-11.1-7.1-20.5-17-23.9v-27.6h55.3c19,0,36.9-6.8,50.3-19.2
|
||||||
|
c13.3-12.2,20.6-28.3,20.6-45.5v-19.1c10-3.4,17.1-12.9,17.1-24c0-14-11.4-25.4-25.4-25.4c-14,0-25.4,11.4-25.4,25.4
|
||||||
|
c0,10.5,6.3,19.4,15.4,23.3v19.8c0,13.2-5.6,25.7-15.7,35c-9.9,9.2-23.1,14.2-36.9,14.2h-55.3v-69h-0.1l0-8.5
|
||||||
|
c0.1-29.9,12.4-57.7,34.8-78.5c22-20.5,51.6-31.8,83.2-31.8c29.4,0,57.5,10,79.2,28.3c20.6,17.3,33.9,40.6,37.7,65.8h0.4l0.1,1.2
|
||||||
|
h0.1l0.3,6.3l0.7,6.8h-0.4l0,0.9l-0.1,9.4V610c-9.2,3.8-15.8,12.9-15.8,23.5c0,14,11.4,25.4,25.4,25.4c14,0,25.4-11.4,25.4-25.4
|
||||||
|
c0-11-7-20.3-16.8-23.9v-96.6l4.3-2.5c17-9.7,36.5-15.3,56.5-16.2c1.9-0.1,3.9-0.1,5.8-0.1c21.2,0,42.1,5.3,60.4,15.3
|
||||||
|
c20.1,11,36.2,27.1,46.4,46.6c7.9,15,12.1,31.8,12,48.5l0,7.7l-7.6,0.8c-14.7,1.5-28.8,5.1-42,10.6c-0.3,0.1-0.6,0.3-0.9,0.4
|
||||||
|
c-3.7-2.1-7.9-3.3-12.5-3.3c-14,0-25.4,11.4-25.4,25.4c0,14,11.4,25.4,25.4,25.4c14,0,25.4-11.4,25.4-25.4c0-3.4-0.7-6.6-1.9-9.6
|
||||||
|
c12.2-4.9,25.3-7.8,39-8.6v-0.6l8.8,0.3c30.3,1.2,58.6,13,79.7,33.2c21.4,20.5,33.2,47.6,33.2,76.3c0,28.6-11.8,55.7-33.2,76.2
|
||||||
|
c-21.1,20.2-49.4,32-79.7,33.2l-8.8,0.3v-0.3h-4.7v12.2c0,1.2-0.1,2.3-0.3,3.4h13l0.5-0.1l0.4,0c35.6-1.2,68.7-14.9,93.3-38.4
|
||||||
|
c24.5-23.4,37.9-54.1,37.9-86.4C1536.2,704.9,1522.7,674.2,1498.3,650.8z"/>
|
||||||
|
</g>
|
||||||
|
</g>
|
||||||
|
</g>
|
||||||
|
</svg>
|
After Width: | Height: | Size: 19 KiB |
BIN
Logo/2.-logo-jpg-file.jpg
Normal file
After Width: | Height: | Size: 341 KiB |
BIN
Logo/a2. trsnaprent fille black text.png
Normal file
After Width: | Height: | Size: 122 KiB |
BIN
Logo/a2.transparent file white text.png
Normal file
After Width: | Height: | Size: 121 KiB |
BIN
Logo/logo-file(1).jpg
Normal file
After Width: | Height: | Size: 429 KiB |
BIN
Logo/logo-file.jpg
Normal file
After Width: | Height: | Size: 429 KiB |
BIN
Logo/logo-jpg-file.jpg
Normal file
After Width: | Height: | Size: 397 KiB |
BIN
Logo/logo-on-black-background.jpg
Normal file
After Width: | Height: | Size: 366 KiB |
BIN
Logo/logo.png
Normal file
After Width: | Height: | Size: 659 KiB |
1553
Logo/print ready file.pdf
Normal file
1552
Logo/source file.ai
Normal file
BIN
Logo/transparent file white text.png
Normal file
After Width: | Height: | Size: 118 KiB |
BIN
Logo/transparent white color.png
Normal file
After Width: | Height: | Size: 80 KiB |
BIN
Logo/trsnaprent fille black text.png
Normal file
After Width: | Height: | Size: 120 KiB |
204
Logo/vector file.svg
Normal file
@ -0,0 +1,204 @@
|
|||||||
|
<?xml version="1.0" encoding="utf-8"?>
|
||||||
|
<!-- Generator: Adobe Illustrator 24.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
|
||||||
|
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
|
||||||
|
viewBox="0 0 2000 1500" style="enable-background:new 0 0 2000 1500;" xml:space="preserve">
|
||||||
|
<style type="text/css">
|
||||||
|
.st0{fill:url(#SVGID_1_);}
|
||||||
|
.st1{fill:url(#SVGID_2_);}
|
||||||
|
.st2{fill:url(#SVGID_3_);}
|
||||||
|
.st3{fill:url(#SVGID_4_);}
|
||||||
|
.st4{fill:url(#SVGID_5_);}
|
||||||
|
</style>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
[New SVG image file: vector logo artwork — lettering paths plus linear-gradient shapes (stop colors #2196F3 → #CD00D5); raw `<path>` data omitted]
After Width: | Height: | Size: 19 KiB |
42
README.md
@@ -4,7 +4,7 @@
 <img src="logo.png?raw=true" alt="90DaysOfDevOps Logo" width="50%" height="50%" />
 </p>
 
-[![Website](https://img.shields.io/website?url=https%3A%2F%2Fwww.90daysofdevops.com)](https://www.90daysofdevops.com) [![GitHub Repo stars](https://img.shields.io/github/stars/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps) [![GitHub Repo stars](https://img.shields.io/github/forks/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps) [![GitHub Repo issues](https://img.shields.io/github/issues/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps)
+[![RepoRater](https://repo-rater.eddiehub.io/api/badge?owner=MichaelCade&name=90DaysOfDevOps)](https://repo-rater.eddiehub.io/rate?owner=MichaelCade&name=90DaysOfDevOps) [![Website](https://img.shields.io/website?url=https%3A%2F%2Fwww.90daysofdevops.com)](https://www.90daysofdevops.com) [![GitHub Repo stars](https://img.shields.io/github/stars/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps) [![GitHub Repo stars](https://img.shields.io/github/forks/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps) [![GitHub Repo issues](https://img.shields.io/github/issues/MichaelCade/90DaysOfDevOps)](https://github.com/MichaelCade/90DaysOfDevOps)
 
 This repository is used to document my journey on getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st January 2022 but the idea is that we take 90 days which just so happens to be January 1st to March 31st.
@@ -20,17 +20,37 @@ This will **not cover all things** "DevOps" but it will cover the areas that I f
 
 The two images below will take you to the 2022 and 2023 edition of the learning journey.
 
-<p align="center">
-<a href="2022.md">
-<img src="2022.png?raw=true" alt="2022" width="70%" height="70%" />
-</p>
-
-</a>
-
-<p align="center">
-<a href="2023.md">
-<img src="2023.png?raw=true" alt="2023" width="70%" height="70%" />
-</p>
-
-</a>
+<div style="display: flex; justify-content: center; align-items: center;">
+
+<div style="margin: 10px; text-align: center;">
+<a href="2022.md">
+<img src="2022.png?raw=true" alt="2022" style="border: 2px solid #555; border-radius: 8px; width: 100%; max-width: 400px;" />
+</a>
+<br />
+<em>Year 2022 - This is where we start, 110k words </em>
+</div>
+
+<div style="margin: 10px; text-align: center;">
+<a href="2023.md">
+<img src="2023.png?raw=true" alt="2023" style="border: 2px solid #555; border-radius: 8px; width: 100%; max-width: 400px;" />
+</a>
+<br />
+<em>Year 2023 - Continues... Some help from my friends</em>
+</div>
+
+<div style="margin: 10px; text-align: center;">
+<a href="2024.md">
+<img src="2024.png?raw=true" alt="2024" style="border: 2px solid #555; border-radius: 8px; width: 100%; max-width: 400px;" />
+</a>
+<br />
+<em>Year 2024 - Community Edition: 90 Sessions</em>
+</div>
+
+</div>
 
 ##
 
 From this year we have built website for 90DaysOfDevops Challenge :rocket: :technologist: - [Link for website](https://www.90daysofdevops.com/#/2023)