
Saturday, 23 October 2021

How To Create an EC2 Instance with a Key Pair, Security Group and Elastic IP via Terraform

provider "aws" {
  region = "us-east-1"
  # Do not hard-code credentials in Terraform files (they end up in version
  # control). Supply them instead via the AWS_ACCESS_KEY_ID and
  # AWS_SECRET_ACCESS_KEY environment variables or a shared credentials file.
}

resource "aws_instance" "kashmir" {
  ami           = "ami-02e136e904f3da870" # my ami
  instance_type = "t2.micro"
  key_name      = aws_key_pair.moon.key_name # reference the resource so Terraform creates the key pair first
  vpc_security_group_ids = [aws_security_group.moon_security.id]
  tags = {
    Name = "moonabid"
  }
}
resource "aws_key_pair" "moon" {
  key_name   = "moon"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmh2uYKuso2BAIbkYWaKcdSM7ufgHqTorggTt/iHGFhIYipAFyDkklPYDtKoHsnplcC/RjIAyCMDVnVQz/6Nv6GQrvzHjs0QSv2Gmhz+RKWMinihVRW0DS+kTKDltW5bftnPUJYLSkGwzbdoqH9PnV3yAk3I4RJZWWHEttUL9Xb0tN6JkMizAO7yJ/r3p1TwoYRq/HraESuv4vA1QgMdziFWMtO4ZzAr43DjejiqXlvBGqD8/mRwKESHmNypVkId9qlQG1mluE9PHfdsrVSnbJ3We2IKN33HsLjHzWP664F6hJdzDy0V6vGgM6GSGzOrU7vy9X1UsG0FpA8Lp3lq3V root@ip-172-31-0-195.ap-south-1.compute.internal"
}
resource "aws_eip" "mooneip" {
  instance = aws_instance.kashmir.id
  vpc      = true
}
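As an optional addition (not part of the original snippet), an output block can surface the allocated Elastic IP once terraform apply finishes:

```hcl
# Prints the Elastic IP address after `terraform apply`
output "instance_public_ip" {
  value = aws_eip.mooneip.public_ip
}
```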

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}
resource "aws_security_group" "moon_security" {
  name        = "moon_security"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_default_vpc.default.id
  ingress {
      description      = "TLS from VPC"
      from_port        = 443
      to_port          = 443
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
    }

  egress {
      from_port        = 0
      to_port          = 0
      protocol         = "-1" # "-1" allows all protocols; "tcp" with ports 0-0 is not valid here
      cidr_blocks      = ["0.0.0.0/0"]
    }

  tags = {
    Name = "moon_security"
  }
}


Monday, 13 September 2021

Kubernetes Interview Questions and Answers

 1. What is Kubernetes?

This is one of the most basic Kubernetes interview questions yet one of the most important ones! Kubernetes is an open-source container orchestration tool or system that is used to automate tasks such as the management, monitoring, scaling, and deployment of containerized applications. It is used to easily manage several containers (since it can handle grouping of containers), which provides for logical units that can be discovered and managed.

2. What are K8s? 

K8s is another term for Kubernetes. 

3. What is orchestration when it comes to software and DevOps? 

Orchestration refers to the integration of multiple services that allows them to automate processes or synchronize information in a timely fashion. Say, for example, you have six or seven microservices for an application to run. If you place them in separate containers, this would inevitably create obstacles for communication. Orchestration would help in such a situation by enabling all services in individual containers to work seamlessly to accomplish a single goal. 

4. How are Kubernetes and Docker related?

This is one of the most frequently asked Kubernetes interview questions, where the interviewer might as well ask you to share your experience working with either of them. Docker is an open-source platform used to handle software development. Its main benefit is that it packages the settings and dependencies that the software/application needs to run into a container, which allows for portability and several other advantages. Kubernetes orchestrates and links several such containers, running on multiple hosts, that have been created using Docker.

5. What are the main differences between the Docker Swarm and Kubernetes?

Docker Swarm is Docker’s native, open-source container orchestration platform that is used to cluster and schedule Docker containers. Swarm differs from Kubernetes in the following ways:

  • Docker Swarm is more convenient to set up but doesn’t have a robust cluster, while Kubernetes is more complicated to set up but comes with the assurance of a robust cluster
  • Docker Swarm can’t do auto-scaling (as can Kubernetes); however, Docker scaling is five times faster than Kubernetes 
  • Docker Swarm doesn’t have a GUI; Kubernetes has a GUI in the form of a dashboard 
  • Docker Swarm does automatic load balancing of traffic between containers in a cluster, while Kubernetes requires manual intervention for load balancing such traffic  
  • Docker requires third-party tools like ELK stack for logging and monitoring, while Kubernetes has integrated tools for the same 
  • Docker Swarm can share storage volumes with any container easily, while Kubernetes can only share storage volumes with containers in the same pod
  • Docker can deploy rolling updates but can’t deploy automatic rollbacks; Kubernetes can deploy rolling updates as well as automatic rollbacks

6. What are the main components of Kubernetes architecture?

There are two primary components: the master node and the worker node. Each of these is, in turn, made up of several individual components.

7. What is a node in Kubernetes?

A node is the smallest fundamental unit of computing hardware. It represents a single machine in a cluster, which could be a physical machine in a data center or a virtual machine from a cloud provider. Each machine can substitute any other machine in a Kubernetes cluster. The master in Kubernetes controls the nodes that have containers. 

8. What does the node status contain?

The main components of a node status are Address, Condition, Capacity, and Info.

9. What process runs on Kubernetes Master Node? 

The kube-apiserver process runs on the master node. It exposes the Kubernetes API, acts as the front end of the control plane, and scales horizontally by deploying more instances.

10. What is a pod in Kubernetes?

In this Kubernetes interview question, try giving a thorough answer instead of a one-liner. Pods are high-level structures that wrap one or more containers. This is because containers are not run directly in Kubernetes. Containers in the same pod share a local network and the same resources, allowing them to easily communicate with other containers in the same pod as if they were on the same machine while at the same time maintaining a degree of isolation.
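As a sketch, a minimal Pod manifest looks like this (the pod name and image are illustrative, not from the post):

```yaml
# pod.yaml - a single-container pod; any containers listed together here
# would share the pod's network namespace and volumes
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: nginx:1.21
      ports:
        - containerPort: 80
```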

11. What is the job of the kube-scheduler?

The kube-scheduler assigns nodes to newly created pods.

12. What is a cluster of containers in Kubernetes? 

A cluster of containers is a set of machine elements that are nodes. Clusters initiate specific routes so that the containers running on the nodes can communicate with each other. In Kubernetes, even the API server is typically hosted by the container engine, running as a container itself rather than as a bare process.

13. What is the Google Container Engine?

Google Kubernetes Engine (formerly Google Container Engine) is a managed environment tailor-made for Docker containers and clusters, providing support for the clusters that run in Google's public cloud services.

14. What are Daemon sets?

A DaemonSet is a set of pods that ensures a copy of a pod runs only once on each host (or on a selected set of hosts). They are used for host-layer concerns such as networking or node monitoring, which you do not need to run on a host more than once.
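A minimal DaemonSet manifest might look like the sketch below (the names and image are illustrative, not from the post):

```yaml
# daemonset.yaml - runs one monitoring pod on every node in the cluster
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
        - name: exporter
          image: prom/node-exporter:v1.3.1
```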

15. What is ‘Heapster’ in Kubernetes?

In this Kubernetes interview question, the interviewer would expect a thorough explanation. You can explain what it is and also how it has been useful to you (if you have used it in your work so far!). Heapster is a performance monitoring and metrics collection system for data collected by the kubelet. This aggregator is natively supported and runs like any other pod within a Kubernetes cluster, which allows it to discover and query usage data from all nodes within the cluster.

16. What is a Namespace in Kubernetes?

Namespaces are used for dividing cluster resources between multiple users. They are meant for environments where there are many users spread across projects or teams and provide a scope of resources.

17. Name the initial namespaces from which Kubernetes starts.

  • default
  • kube-system
  • kube-public
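Beyond these built-in namespaces, new ones can be created from a short manifest (the name here is illustrative):

```yaml
# namespace.yaml - creates a namespace for a hypothetical team
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```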

18. What is the Kubernetes controller manager?

The controller manager is a daemon that is used for embedding core control loops, garbage collection, and Namespace creation. It enables the running of multiple processes on the master node even though they are compiled to run as a single process.

19. What are the types of controller managers?

The primary controller managers that can run on the master node are the endpoints controller, service accounts controller, namespace controller, node controller, token controller, and replication controller.

20. What is etcd?

Kubernetes uses etcd as a distributed key-value store for all of its data, including metadata and configuration data, and allows nodes in Kubernetes clusters to read and write data. Although etcd was purposely built for CoreOS, it also works on a variety of operating systems (e.g., Linux, BSD, and OS X) because it is open-source. Etcd represents the state of a cluster at a specific moment in time and is a canonical hub for state management and cluster coordination of a Kubernetes cluster.

21. What are the different services within Kubernetes?

Different types of Kubernetes services include: 

  • ClusterIP service
  • NodePort service
  • ExternalName service
  • LoadBalancer service

22. What is ClusterIP?

ClusterIP is the default Kubernetes service type. It exposes a service on an internal IP inside the cluster (with no external access), so only other apps within the cluster can reach it.
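A sketch of a ClusterIP service (the names and ports are illustrative):

```yaml
# service.yaml - internal-only service; type defaults to ClusterIP when omitted
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```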

23. What is NodePort? 

The NodePort service is the most fundamental way to get external traffic directly to your service. It opens a specific port on all Nodes and forwards any traffic sent to this port to the service.
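The same service exposed as a NodePort might look like this sketch (the nodePort value must fall in the cluster's allowed range, 30000-32767 by default):

```yaml
# nodeport.yaml - opens port 30080 on every node and forwards it to the service
apiVersion: v1
kind: Service
metadata:
  name: backend-external
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```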

24. What is the LoadBalancer in Kubernetes? 

The LoadBalancer service is used to expose services to the internet. A Network load balancer, for example, creates a single IP address that forwards all traffic to your service.  

25. What is a headless service?

A headless service is used to interface with service discovery mechanisms without being tied to a ClusterIP, therefore allowing you to directly reach pods without having to access them through a proxy. It is useful when neither load balancing nor a single Service IP is required. 
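A headless service is declared by setting clusterIP to None; cluster DNS then resolves the service name to the individual pod IPs instead of a single virtual IP (names are illustrative):

```yaml
# headless.yaml - no virtual IP, no kube-proxy load balancing
apiVersion: v1
kind: Service
metadata:
  name: backend-headless
spec:
  clusterIP: None
  selector:
    app: backend
  ports:
    - port: 80
```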

26. What is Kubelet?

The kubelet is a service agent that controls and maintains a set of pods by watching for pod specs through the Kubernetes API server. It preserves the pod lifecycle by ensuring that a given set of containers are all running as they should. The kubelet runs on each node and enables the communication between the master and slave nodes.

27. What is Kubectl?

Kubectl is a CLI (command-line interface) that is used to run commands against Kubernetes clusters. As such, it controls the Kubernetes cluster manager through different create and manage commands on the Kubernetes components.

28. Give examples of recommended security measures for Kubernetes.

Examples of standard Kubernetes security measures include defining resource quotas, support for auditing, restriction of etcd access, regular security updates to the environment, network segmentation, definition of strict resource policies, continuous scanning for security vulnerabilities, and using images from authorized repositories.

29. What is Kube-proxy? 

Kube-proxy is an implementation of a load balancer and network proxy used to support service abstraction with other networking operations. Kube-proxy is responsible for directing traffic to the right container based on IP and the port number of incoming requests.

30. How can you get a static IP for a Kubernetes load balancer? 

A static IP for the Kubernetes load balancer can be achieved by reserving a static IP address with your cloud provider and assigning it to the load balancer service (for example, via the service's loadBalancerIP field); otherwise the provider assigns a new address whenever the load balancer is recreated.

Jumpstart Your Career with Kubernetes Training

Having a good understanding of DevOps and on-premises software development can be quite useful in helping you gain a holistic view of the subject matter. Ultimately, taking the Kubernetes Certification Training Course and taking your time to study and understand what you’ve learned, while preferably putting it into practice, is the best way to prepare for a Kubernetes interview, and the Kubernetes interview questions that you’ve learned here are the icing on the cake. The more familiar you are with these types of Kubernetes interview questions, the better able you will be to show off your skills and Kubernetes knowledge.

Saturday, 11 September 2021

DOCKER GUIDE: HOW TO INSTALL DOCKER AND WRITE DOCKERFILES

In this blog we will discuss how to install Docker on our system and write Dockerfiles.

Here I will use ubuntu 20.04 to install docker.

Before installing docker make sure following system requirements are met

  1. Linux kernel version >= 3.8
  2. 64-bit Processor
  3. Memory :- 2GB recommended (512 MB minimum)

To check the version of Linux kernel, use command :-

uname -r

To install docker on your system, follow the below steps:-

  1. Update your OS using the command :- sudo apt-get update to get the latest packages (here “sudo” ensures that the command runs with root access)

2. To download the latest docker packages from the docker site, some necessary certificates are required that have to be installed using the command:-

sudo apt-get install apt-transport-https ca-certificates

3. The next step is to add Docker’s official GPG key. The GPG key is used by apt to verify that the packages were signed by Docker and have not been tampered with. To add the GPG key use the below command:-

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. After adding GPG key verify it using command:-

sudo apt-key fingerprint 0EBFCD88

5. Next, set up a stable docker repository. To do this, use the following command:-

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

6. Update ubuntu packages using sudo apt-get update to ensure all packages are up to date

7. Finally install docker engine from docker website using command:-

sudo apt-get install docker-ce

8. Congratulations, now you have docker installed on your ubuntu machine. To check the docker version use the command:-

sudo docker version

DockerFiles:-

In my previous post, DOCKER GUIDE: WHAT IS DOCKER AND WHY USE IT, I wrote about images and their use in docker. A Dockerfile is simply a text file that contains instructions to build an image. All docker images have their own dockerfiles.

Some important commands to write dockerfiles:-

  1. FROM :- FROM command is used to set the base image where next instructions of dockerfile will run.
  2. COPY :- copy command is used to copy contents of one directory on host machine to another directory inside the container.
  3. RUN :- run command is used to run any command on the current image.
  4. WORKDIR :- workdir command is used to set the working directory where subsequent instructions of dockerfile will run.
  5. CMD :- CMD command is used to execute a command after creation of the container. Each dockerfile runs only one CMD instruction. If there is more than one CMD instruction in a dockerfile, then only the last CMD instruction will run and all others will be skipped.

Now suppose you have a small nodeJS application . Let us build an image of this application by writing a dockerfile.

  • FROM node:10-alpine :- set the node:10-alpine image as the base image for the rest of the instructions of the dockerfile.
  • RUN mkdir -p /home/app :- create a directory /home/app.
  • COPY ./project /home/app :- copy all content of the project folder to the /home/app directory of the container.
  • WORKDIR /home/app :- set /home/app as the working directory; all the subsequent instructions of the dockerfile will run here.
  • RUN npm install :- install all the dependencies of your project mentioned in the package.json and package-lock.json files.
  • CMD ["node", "index.js"] :- execute the application.
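Putting the steps above together, the complete dockerfile looks like this:

```dockerfile
# Dockerfile for the sample nodeJS application described above
FROM node:10-alpine
RUN mkdir -p /home/app
COPY ./project /home/app
WORKDIR /home/app
RUN npm install
CMD ["node", "index.js"]
```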

BUILD THE IMAGE :- After writing the dockerfile next step is to build the image. Use below command to build image from our dockerfile :-

sudo docker build -t <Image-name>:<tag-name> .

Note:- “sudo” is used here to run the command with root access.

IMAGE TAGGING :- In the above command the -t flag is used for tagging your image. Tagging is a process of adding metadata to your docker images, like a version number. If you do not specify any tag name for the image, docker assigns the tag “latest” by itself. Tagging is an optional process.

The above command runs each instruction specified in your dockerfile and builds your image in layers.

RUN THE IMAGE:- To run your image use command:-

sudo docker run <image>

GET YOUR IMAGE INFO:- To verify your image, run command:-

sudo docker images

The above command will give you a list of all the images on your system with their details.

Congratulations, you have successfully installed docker in your system and created your application’s image via dockerfile.