
Wednesday, 27 April 2022

EKS Implementation steps

 Different ways to approach it 

1. AWS Management Console 
2. eksctl utility by AWS 
3. IaC (Terraform, Ansible)
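As a quick illustration of option 2, eksctl can stand up a comparable cluster in a single command. This is only a sketch: the cluster name, region, node group name, and sizes below are placeholders, not values taken from this walkthrough.

```shell
# Sketch of option 2: eksctl creates the control plane, a dedicated VPC,
# the IAM roles, and a managed node group in one command.
# All names, the region, and the sizes are illustrative placeholders.
eksctl create cluster \
  --name ekscluster \
  --region ap-south-1 \
  --nodegroup-name eks-worker-node \
  --node-type t3.medium \
  --nodes 2
```

Because eksctl also provisions the VPC and IAM roles for you, it skips most of the manual console steps that follow below.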

 Prerequisites 
1. An AWS account with admin privileges 
2. AWS CLI access to use the Kubernetes utilities 
3. An instance to manage the cluster with kubectl

 Step-by-step method to create the cluster 
1. Sign in to your account 
a. Visit: https://aws.amazon.com 



b. Sign in with your credentials. 


2. Create an IAM role for the EKS cluster 
a. Click on Services at the top left and click on IAM 


b. Go to Roles on the left-hand side. 


c. Creating the role 
1. Click on Create role and search for EKS 
2. Then select EKS - Cluster as the use case 
3. Click Next: Permissions 
4. Select the AmazonEKSClusterPolicy and proceed to the next steps 
5. Tags are optional; click Next to review 
6. Enter a role name like eks_cluster_role (the description is optional) and create the role 
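For reference, the same role can be created from the CLI. This is a sketch using the role name above and the standard EKS trust policy; the file name is just a local scratch file.

```shell
# Sketch: create the cluster role from the CLI instead of the console.
# The trust policy lets the EKS service assume the role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks_cluster_role \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the same managed policy selected in the console step above
aws iam attach-role-policy \
  --role-name eks_cluster_role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```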


3. Create a dedicated VPC for the EKS cluster 
a. Go to Services and click on VPC 
b. You can see the default VPC that already exists 


c. Create a customized VPC for the EKS cluster with the help of a CloudFormation stack 
1. Click on Services and click on CloudFormation 



a. Click on Create stack, leave Prepare template as "Template is ready", choose "Amazon S3 URL" under Specify template, paste the URL below, and click Next: https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml 


b. Check the IP parameters as shown in the image below and adjust them for your environment, provide a stack name (here eksvpcstack), and create the stack. 


c. The stack takes some time to create. Once it is created, verify the NAT gateway and the other VPC resources it provisioned from the VPC console, then proceed. 
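The same stack can be launched from the CLI. This sketch reuses the stack name from the walkthrough and the official Amazon EKS VPC template URL given above.

```shell
# Sketch: create the eksvpcstack VPC stack from the CLI.
aws cloudformation create-stack \
  --stack-name eksvpcstack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

# Block until the stack finishes before creating the cluster
aws cloudformation wait stack-create-complete --stack-name eksvpcstack
```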


4. Create the EKS cluster 
a. Select EKS from Services, provide a name like ekscluster, choose the Kubernetes version, and select the previously created role eks_cluster_role. 


b. Click Next and choose the VPC created earlier by CloudFormation (you will see a name like eksvpcstack-VPC), choose the eksvpc security group that was created with it, and then select your network access: public, private, or public and private (I chose public and private). 


c. Next you can see Configure logging. Enable the control plane log types you need; for this test I did not enable any. Click Next, finally create the cluster, and wait until the cluster is created. The image below shows Configure logging. 
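The console steps above can also be expressed with the AWS CLI. In this sketch the account ID, subnet IDs, and security group ID are placeholders; use the outputs of the eksvpcstack CloudFormation stack instead.

```shell
# Sketch: create the cluster from the CLI. Role ARN, subnets, and
# security group are placeholders for the stack outputs.
aws eks create-cluster \
  --name ekscluster \
  --kubernetes-version 1.19 \
  --role-arn arn:aws:iam::111122223333:role/eks_cluster_role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

# Block until the control plane reports ACTIVE
aws eks wait cluster-active --name ekscluster
```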



5. Install and set up the IAM authenticator and the kubectl utility 
a. We need one instance with the IAM authenticator and kubectl configured to manage the worker nodes. So deploy an instance and, as root on that instance, follow the steps below. First, set up the AWS CLI on the instance. The access key and secret key used below are generated from IAM; see the image below and create them. 


 
$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" 
$ unzip awscli-bundle.zip 
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws 
Add your Access Key ID and Secret Access Key by running: 
$ aws configure 
then fill in your access and secret keys, or edit ~/.aws/config directly in this format: 

[default] 
aws_access_key_id = enter_your_key 
aws_secret_access_key = enter_your_key 
region = your region 

 Protect the config file: 
chmod 600 ~/.aws/config 

Optionally, you can set an environment variable pointing to the config file. This is especially important if you want to keep it in a non-standard location. For future convenience, also add this line to your ~/.bashrc file: 

export AWS_CONFIG_FILE=$HOME/.aws/config
That should be it. Try the following from your command prompt; if you have any S3 buckets, you should see them listed: 

aws s3 ls 
Here is the basic AWS CLI command structure. Keep in mind that any command you enter in the CLI will follow this standard format:

aws <command> <subcommand> [options and parameters] 
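For instance, listing EC2 instances follows that structure; the region and output format here are only examples:

```shell
# service: ec2, operation: describe-instances, then options/parameters
aws ec2 describe-instances --region ap-south-1 --output table
```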


6. Verify the AWS CLI and install the cluster utilities 
a. Once the instance is ready, check any AWS command. In my case I am checking aws iam list-users
 

b. On a fresh instance, however, we are not able to run the kubectl and aws-iam-authenticator commands, as shown in the image above. So let's install them first with the following commands 

 *********** aws-iam-authenticator steps (for Linux in my case) *********** 

$ curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator 
$ curl -o aws-iam-authenticator.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator.sha256 
$ openssl sha1 -sha256 aws-iam-authenticator 
$ chmod +x ./aws-iam-authenticator 
$ mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin 
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc 
$ aws-iam-authenticator help 

*********** kubectl steps (for Linux in my case, kubectl version 1.19) ********** 
Note: if you want another version, refer to the official AWS EKS documentation for installing kubectl 
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl 
$ chmod +x ./kubectl 
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin 
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc 
$ kubectl version --short --client 


c. $ kubectl get svc (checks the control-plane service; it does not work yet and says connection refused) 
d. $ aws eks --region ap-south-1 update-kubeconfig --name ekscluster (sets up the local kubeconfig) 
e. $ export KUBECONFIG=~/.kube/config 
f. $ kubectl get svc (this command works now) 

g. $ kubectl get nodes and $ kubectl get namespace (output shown below) 




7. Create worker nodes 
a. Create an IAM role for the worker nodes. Go to Services, select IAM, and select Roles 
b. Select EC2, click Next, then select the AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, and AmazonEC2ContainerRegistryReadOnly policies 
c. Click Next; tags are optional. Click Next again, enter the name of the role (in my case eksworkernoderole), and then create the role. 
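The worker-node role can likewise be created from the CLI. This sketch uses the role name above; the trust policy lets EC2 instances (the worker nodes) assume the role.

```shell
# Sketch: create the worker-node role and attach the three policies
# selected in the console step above.
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eksworkernoderole \
  --assume-role-policy-document file://node-trust-policy.json

for policy in AmazonEKS_CNI_Policy AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name eksworkernoderole \
    --policy-arn arn:aws:iam::aws:policy/$policy
done
```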


d. Then go to the EKS service, open Clusters, select the EKS cluster, select Configuration, and at the bottom select Compute 
e. Click on Add node group, type a name (in my case eks-worker-node), select the role eksworkernoderole (as created above), and click Next 
f. In the node group configuration, fill in all the given fields: AMI type, capacity type, instance type, and disk size, then set the min and max node counts and click Next 
g. Then specify the network fields as shown in the diagram below, click Next, and click on Create. (Please create your SSH keys first) 
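For reference, the managed node group can be created from the CLI as well. The node-role ARN and subnet IDs here are placeholders for the values created earlier; the sizes match the walkthrough's two-node setup.

```shell
# Sketch: create the managed node group from the CLI.
aws eks create-nodegroup \
  --cluster-name ekscluster \
  --nodegroup-name eks-worker-node \
  --node-role arn:aws:iam::111122223333:role/eksworkernoderole \
  --subnets subnet-aaaa subnet-bbbb \
  --instance-types t3.medium \
  --disk-size 20 \
  --scaling-config minSize=2,maxSize=2,desiredSize=2
```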


h. Then wait for the state to become Active. After that, just run $ kubectl get nodes and you can see the 2 worker nodes, as I set the size to 2 in the step above. 



i. Now check the pods and deployments to see whether any exist. Since this is a fresh setup, there are no pods and no deployments. Please see below 



8. Deploy our sample application 

$ git clone https://github.com/vmudigal/microservices-sample.git 
$ cd microservices-sample 
$ mkdir yamls 
$ cd yamls 
$ kubectl apply -f xyz.yaml (apply all the YAML files one by one like this) 
$ kubectl get svc 
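The apply-one-by-one step can be scripted. This sketch assumes the manifests live in the yamls/ directory mentioned above:

```shell
# Apply every manifest in yamls/ in filename order, then list services
for f in yamls/*.yaml; do
  kubectl apply -f "$f"
done
kubectl get svc
```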

Note: all the ports shown in the diagram below must be open in your VPC security groups, and awslink must be replaced with the external link/EIP generated when the cluster's services were created. 

 And then verify 

 Tools
Consul Management console: http://awslink:8500/ui/ 




 MONITORING AND VISUALIZATION 

 Monitoring, visualization & management of the containers in Docker is done by Weave Scope. 

Tools: Weave Scope management console: http://awslink:4040/ 





CENTRALIZED LOGGING USING ELK

Our services use Logback to create application logs and send the log data to the logging server (Logstash). Logstash formats the data and sends it to the indexing server (Elasticsearch). The data stored in the Elasticsearch server can be beautifully visualized using Kibana. 

Tools: 
Elasticsearch: http://awslink:9200/_search?pretty 
Kibana: http://awslink:5601/app/kibana 




MICROSERVICES COMMUNICATION 

Intercommunication between microservices happens asynchronously with the help of RabbitMQ. 

Tools:  RabbitMQ Management Console: http://awslink:15672/



Monday, 11 April 2022

šˆš§š¬š­šžššš šØšŸ "šˆš§šŸš«ššš¬š­š«š®šœš­š®š«šž ššš¬ š‚šØššž", š”šØš° ššØšžš¬ "šƒš«ššš° š˜šØš®š« šˆš§šŸš«ššš¬š­š«š®šœš­š®š«šž" š¬šØš®š§š? ♥

 Introducing Brainboard.co (YC W22), an amazing tool for drawing infrastructure that automatically generates Terraform files; it supports #AWS, #Azure, #GCP & #Scaleway. Think of achieving a "Single source of truth" through #Brainboard, which helps you #Design, #Deploy and #Manage all within the same platform. 


#šš«ššš¢š§š›šØššš«š š…šžššš­š®š«šžš¬

1/ Import existing #terraform files through url or from local machine. 

2/ Visually represent your #IAC with #Brainboard. 

3/ Readily available #Brainboard templates for a quick start

4/ Integrates with #Github, #Gitlab, #Jenkins, #Docker & #Kubernetes

5/ Convert cloud (AWS/Azure/GCP) environments into actionable visual IAC

6/ Real-time collaboration on all your cloud-based infrastructures.


š‘®š’†š’• š’”š’•š’‚š’“š’•š’†š’… š’˜š’Šš’•š’‰ š’‚š’ š’‚š’„š’„š’š’–š’š’• š’‰š’†š’“š’† h̳t̳t̳p̳s̳:̳/̳/̳b̳i̳t̳.̳l̳y̳/̳3̳6̳5̳Q̳f̳f̳U̳



š…šØš„š„šØš° š­š”šž š¬š­šžš©š¬ š­šØ šžššš¬š¢š„š² "šš«ššš° š²šØš®š« š¢š§šŸš«ššš¬š­š«š®šœš­š®š«šž"

➡ Select a cloud provider from AWS, Azure, GCP & Scaleway

➡ Drag & drop modules/resources of the cloud provider

➡ For each element, you get options like cloud configuration, adding connectors, turning it into an icon-only view, etc. 

➡ See design & corresponding terraform files side by side

➡ Once your design is ready, go to DEPLOY tab & deploy through Terraform's options i.e., Plan, Apply & Destroy

➡ That's pretty much it: work that usually takes hours, days, or months becomes almost instantaneous


šŒš¢š¬šœšžš„š„ššš§šžšØš®š¬ šŸšžššš­š®š«šžš¬

➡ For each resource/module, you can select different versions

➡ Version your drawings & view versions within the drawing space

➡ Tons of shortcut keys options

➡ From the drawing space, create a readme file and push it along with your infrastructure drawing

➡ Invite new members to your drawings

➡ Create "Terraform variables" & "Outputs"

➡ Create multiple environments within #Brainboard

➡ Docs at https://bit.ly/3JAHYy2


Saturday, 26 March 2022

DevOps Culture

 A shift to DevOps requires creating and nurturing a DevOps culture: a culture of transparency, effective and seamless collaboration, and common goals. 

You might have the processes and tools to support DevOps but, for successful DevOps adoption, the people of the organization must have the right mindset to nurture the DevOps culture.

There are seven core principles that can help you achieve a DevOps culture.


DevOps brings together development and operations to break down silos, align goals, and deliver on common objectives. The whole team (development, testing, security, operations, and others) has end-to-end ownership for the software they release. They work together to optimize the productivity of developers and the reliability of operations. Teams learn from each other's experiences, listen to concerns and perspectives, and streamline their processes to achieve the required results.


This increased visibility enables processes to be unified and continuously improved to deliver on business goals. The collaboration also creates a high-trust culture that values the efforts of each team member, and transfers knowledge and best practices across teams and the organization.

With DevOps, repeatable tasks are automated, enabling teams to focus on innovation. Automation provides the means to rapid development, testing, and deployment. Identify automation opportunities at every phase, such as code integrations, reviews, testing, security, deployments, and monitoring, using the right tools and services.


For example, infrastructure as code (IaC) can be used for predefined or approved environments, and versioned so that repeatable and consistent environments are built. You can also define regulatory checks and incorporate them into tests that run continuously as part of your release pipeline.

A customer-first mindset is a key factor in driving development. For example, with feedback loops, DevOps teams stay in touch with their customers and develop software that meets customer needs. With a microservices architecture, they are able to quickly switch direction and align their efforts to those needs. 


Streamlined processes and automation deliver requested updates faster and keep customer satisfaction high. Monitoring helps teams determine the success of their application and continuously align their customer focused efforts.

Applications are no longer being developed as one monolithic system with rigid development, testing, and deployment practices. Application architectures are designed with smaller, loosely coupled components. Overarching policies (such as backward compatibility, or change management) are in place and provide governance to development efforts. Teams are organized to match the required system architecture. They have a sense of ownership for their efforts. 


Adopting modern development practices, such as small and frequent code releases, gives teams the agility they need to be responsive to customer needs and business objectives.

To support continuous delivery, security must be iterative, incremental, automated, and in every phase of the application lifecycle, instead of something that is done before a release. Educate the development and operations teams to embed security into each step of the application lifecycle. This way, you can identify and resolve potential vulnerabilities before they become major issues and are more expensive to fix. 


For example, you can include security testing to scan for hard-coded access keys, or usage of restricted ports.

Inquiry, innovation, learning, and mentoring are encouraged and incorporated into DevOps processes. Teams are innovative and their progress is monitored. With innovation, failure will happen. Leadership accepts failure and teams are encouraged to see failure as a learning opportunity. 


For example, teams use DevOps tools to spin up environments on demand, enabling them to experiment and innovate, perhaps on the use of new technology to support a customer requirement.

Thoughtful metrics help teams monitor their progress, evaluate their processes and tools, and work toward common goals and continuous improvement. For example, teams strive to improve development performance measures such as throughput.


They also strive to increase stability and reduce the mean time to restore service. Using the right monitoring tools, you can set application benchmarks for usual behaviors, and continuously monitor for variations.