
Thursday, 2 December 2021

How to Become a DevOps Engineer in Six Months

 



What is DevOps?

DevOps is a mix of development (Dev) and operations (Ops). Beyond that, it’s all the connected tools and disciplines of both of those business areas that let an organization deliver services and software applications at high speed, so they can better serve their customers.

Microsoft defines DevOps as “the union of people, process, and technology to continually provide value to customers.” Both definitions are open to interpretation, but DevOps centers on writing code that pulls together customers (internal and external) and business processes.

Sometimes, DevOps engineering means just “being that go-to employee” who can quickly and efficiently write code to address an engineering issue. In other words, in some organizations, DevOps is the indispensable IT employee who knows how to write effective code.

The path to becoming a DevOps engineer

The question of how to become a DevOps engineer has a relatively straightforward answer. With that said, you’ll need to bring a few things to the table. First and most important to the DevOps career path is a passion for learning, knowledge, and logic.

Who can become a DevOps engineer?

DevOps engineers need to be able to read between the lines in their customers’ requirements. They also have to produce software and services that meet those requirements in a usable, testable form. Since development doesn’t happen in a vacuum, you’ll also need leadership and management skills, along with a cool head under pressure.

It’s not all about tools!

When most DevOps hiring managers look for a new employee, they’re more concerned with mindset than with tools. If you’ve got a tech background, you’re willing to learn, and you’re an engineer at heart, you’ve already got the basics of a DevOps career.

DevOps engineers are curious, constantly improving their skillsets, and focused on lifelong learning. So while you can build the core skillset in a few months, your main driver should be learning, with a goal of providing massive value to your next employer.

Learn to understand systems and processes, and you have the right mindset. That mindset will help you learn how to start a career in DevOps and, more importantly, how to be a good DevOps engineer.

How long does it take to become a DevOps engineer?

It takes about six months to become a DevOps engineer, assuming you have some basic Linux admin and networking skills, and that you apply the DevOps engineer learning path outlined below. With that said, that career won’t just happen overnight. The length of time required depends on several factors, including your mindset, your current skill level, and your career position.

With that caveat, there’s no shortage of free tools and resources you can use to help you on your journey. Some professional DevOps engineering sites even offer free or vastly reduced exams to help you grow and prove your worth.  Let’s dig into how to become a DevOps engineer, starting with the tools and skills.

What skills will I need?

To become a DevOps engineer, at the bare minimum you’ll need basic Linux admin and networking skills, plus some scripting fundamentals, along with the following DevOps skills:

1. Intermediate to advanced Linux skills

In DevOps, you’re not installing a server once and then logging in every now and then to perform a few admin tasks. You need to understand how to create highly customized Linux images from the ground up, both for VM and container use cases — unless you plan to become a Windows Server DevOps engineer.

2. Intermediate networking skills

In DevOps there’s no “network team.” All network resources are software-defined. In other words, networks are part of infrastructure as code. At a bare minimum, you’ll need a solid grasp of the OSI model, IPv4, subnetting, stateless and stateful firewalling, and DNS. These skills are usually covered in advanced cloud certifications.
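Subnetting, in particular, is worth practicing until it feels routine. As a rough sketch (the address and prefix below are arbitrary examples, not from any real network), you can derive a network address by ANDing an IP with its subnet mask using nothing but shell arithmetic:

```shell
# Derive the network address for 192.168.10.37/24 by ANDing
# the address with the subnet mask (example values only).
ip=192.168.10.37
prefix=24

# Split the dotted quad into four octets.
set -- $(echo "$ip" | tr '.' ' ')
addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))

# Build the mask from the prefix length, e.g. /24 -> 255.255.255.0.
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( addr & mask ))

network="$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))"
echo "$network"   # prints 192.168.10.0
```

Real work would use a proper tool, but being able to reason about the bit math is the point.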

3. A commitment to at least one cloud

Clouds aren’t merely managed data centers. In order for you to automate workloads in a given cloud (AWS, Azure, GCP, etc.), you need a firm grasp of their specific semantics. You’ll need to know what resources are available, how they’re organized, and what properties they have. 

4. Infrastructure automation

Once you understand the resources (and their properties) applicable to a cloud, you’re ready to automate their creation using tools such as Terraform and Ansible.
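As a taste of what that automation looks like, here is a minimal Terraform sketch that would create a single AWS virtual machine; the region, AMI ID, and tag are placeholder values, not a recommendation:

```hcl
# Minimal Terraform configuration for one EC2 instance (illustrative only).
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "devops-demo"
  }
}
```

With a file like this in place, terraform init, terraform plan, and terraform apply take you from nothing to a running instance.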

5. SDLC, CI/CD pipelines, and scripting

In DevOps, we deliver infrastructure in much the same way as applications, so you’ll need to be acquainted with the fundamentals of the software development life cycle (SDLC). This includes versioning strategies using source control management systems like Git, and CI/CD pipelines such as Jenkins and CircleCI. Advanced automation tasks may prove difficult through shell scripts alone, so you’ll often need a more powerful scripting language such as Python, Perl, or Ruby.

6. Container technology

For legacy workloads you may automate the creation of a VM image. But for new applications you’ll be working with containers. As such, you need to know how to build your own Docker images (Linux skills required!) and deploy them using Kubernetes. FaaS technology like AWS Lambda also uses container technology behind the scenes.

7. Observability technology

While all clouds have monitoring dashboards and standard telemetry hooks, most large employers use third-party monitoring tools (both commercial and open source) such as Prometheus, Dynatrace, Datadog, or the ELK stack.

Beyond that skills list, tool-building in DevOps requires a fundamental understanding of logic and how to express it in a computer-recognizable form. While that may sound a tad scary for the uninitiated, there are several good books that cover programming fundamentals without using any specific language.

Now let’s dig into the nuts and bolts of how to become a DevOps engineer — starting with education.

What education do I need?

One of the great things about DevOps is that it’s about what you can do, not what qualifications you have. Some of the best DevOps engineers in the field are self-taught, with little in the way of formal higher education. The biggest requirement is motivation and an interest in DevOps engineering.

With that said, you’ll have a much easier time both learning DevOps skills and getting a company to hire you if you have a bachelor’s degree in software development, IT, or a related field.

Don’t take forever to get trained

Start your DevOps engineer roadmap by looking through the skills list above. If you already have some of those skills — great. If not, be honest about the time you’ll need to spend to learn them. But don’t stress about getting everything perfect before you start. If you wait for mastery, you’ll never get a DevOps job.

Start by learning a few of the easy-to-learn skills. If you’re already employed in a non-DevOps job, start working on some DevOps projects now, to build mastery and proof you have the skills. Then make the switch to a full-time DevOps career.

You may even find that your own company has DevOps openings you could move into. Keep a keen eye on internal and external vacancies alike.

Here are the DevOps skills you’ll need

Let’s take a deeper look now at how to become a DevOps engineer — the DevOps career path and how to build the skills. We’ll share the reasons each of these tools is important, and how long it’ll take to learn each one. We’ll also point you to some good online classes and certifications.

You can learn most of these skills on the job — but a word of caution. In the sink-or-swim world of DevOps career growth, different companies have different requirements. There’s no one-size-fits-all approach.

Foundation knowledge: 4 months

We’ve put a plus-sign after each of the time frames below, because while you can learn the basics quickly, mastery can take much longer.

1. Intermediate to Advanced Linux and Networking: 1 month+

Linux is the OS and server platform of choice for DevOps engineers in companies of any size. Linux’s open-source nature, small operational footprint, and support from the likes of Red Hat and Canonical (Ubuntu) make it the go-to not only for DevOps, but for tool building in general. One of the best things about Linux is that you can download it and start using it today.

If you feel that your Linux skills are rusty, you can get started with the free course offered by Udemy. In fact, if you want to learn how to become a DevOps engineer exclusively from Udemy, they have an entire curriculum of core DevOps classes.

In terms of networking, you’ll pick up the necessary skills if you do an intermediate cloud certification, such as AWS Certified Solutions Architect, but it helps if you take a specialized course such as The Bits and Bytes of Computer Networking on Coursera.

2. Advanced Scripting: 2 months+

First of all, you’ll always need shell (e.g., bash) scripting skills, because this is the default for Linux and most tools. 
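To make that concrete, here is a small, hypothetical shell helper of the sort DevOps engineers write constantly; the function name and its arguments are illustrative, not from any particular tool:

```shell
# retry MAX CMD...: run CMD up to MAX times, waiting a little
# longer between attempts; returns non-zero if every attempt fails.
retry() {
  max=$1
  shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    sleep "$n"        # simple linear backoff
    n=$(( n + 1 ))
  done
}

# Example: 'true' succeeds immediately, so this sets result=ok.
retry 3 true && result=ok
```

The same retry-with-backoff pattern shows up everywhere, from health checks to deployment scripts.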

For “advanced” scripting use cases, there are quite a few languages out there, but Python is a good start if you don’t know what scripting language to pick.

You can learn the fundamentals of Python in as little as two months with online tutorials from LearnPython.org. However, many employers also use other languages such as Perl and Ruby, so be ready to learn those if need be.

3. Cloud Training and Certification: 1 month+

AWS is the 600-pound gorilla among cloud providers, and AWS and Linux go together like strawberries and cream. You’ll need to be fluent in AWS before you can call yourself part of the DevOps community.

The beauty of AWS and cloud development in general is that you only pay for what you use. That model makes cloud computing ideal for DevOps testing. You can set up an environment quickly, use it for what you need, then pull it down again.

It’s easy to start using AWS, since there’s a 12-month free tier available to anyone who signs up. You can learn professional-grade AWS skills in as little as one month, though mastery can take years of continual on-the-job use. From there, you can pursue an AWS certification.

4. Google Cloud: 1 month+

Azure offers similar employment opportunities to AWS, but what about GCP?

The Google Cloud Platform (GCP) is smaller than AWS and Azure, but it excels in data mining, artificial intelligence, and other deep learning technologies. Google’s DevOps-related offerings are becoming increasingly popular with large companies.

In the banking industry, for example, Google AI/ML tools are creating new ways of doing business, adding fraud detection and usage-pattern tracking. This saves huge amounts of time that would otherwise go into developing similar tools in-house.

Similarly, other large companies are using Google’s ML tools to bring massive data sets down to size, drawing business-driving insights from previously unmanageable seas of data.

Want to know more about how to become a DevOps engineer with Google Cloud? You can earn a Google Cloud certification in about three months, and you can learn to develop applications with Google Cloud in as little as one month.

Skills: 1 month

It doesn’t take long to learn the DevOps skills you’ll need to succeed in your new career. All these tools are free to use and experiment with. They just require a little time and effort on your part. Let’s look at how long it takes to learn the basic DevOps tools like Terraform, Git, Docker, Jenkins, ECS, and ELK Stack.

1. Configure: Terraform (and Ansible) — 1 week+

Configuration management is at the heart of fast software development. Poorly configured tools waste time, while well-configured tools save it.

As its name implies, Terraform has one purpose in life — to create infrastructure as code in an automated way that speeds up your entire process.

Ansible concerns itself with desired-state configuration, ensuring that servers are configured to spec. These two technologies are cornerstones of DevOps. Both may seem complex at first, but they’re driven by declarative configuration files: YAML playbooks for Ansible, and HCL for Terraform.
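For a flavor of Ansible's desired-state approach, a minimal playbook might look like the sketch below; the host group and package name are placeholders:

```yaml
# Ensure nginx is installed and running on every host in "webservers".
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Running the playbook repeatedly is safe: Ansible only changes what has drifted from the declared state.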

Terraform takes about a week to learn the basics. Udemy offers a great online class that bundles AWS, Terraform, and Docker.

2. Version control: Git and GitHub (GitLab) — 20 minutes+

Version control is key to any DevOps endeavor. It lets DevOps engineers and their team members create and review code faster, without wasting time sharing endless files and iterations.

Git is a standalone product that, by default, is used on local machines and networks. This is different from GitHub, which facilitates version control in the cloud, with the overhead managed by GitHub itself. In the world of infrastructure as code, version control with products like Git and GitLab is essential.

GitLab is a complete open-source DevOps platform. It helps users deliver software faster, with collaboration and security all rolled into one. Looking to learn more about how to become a DevOps engineer with Git? You can learn the basics of Git in minutes if you’re already a programmer. 
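If Git is new to you, the core workflow really is just a handful of commands. The sketch below creates a throwaway repository and makes a first commit; the file name, identity, and commit message are arbitrary examples:

```shell
# Create a repository, stage a file, and make the first commit.
mkdir demo-repo && cd demo-repo
git init -q
echo "# Demo" > README.md
git add README.md

# Identity is passed inline so the example works without
# any prior git configuration on the machine.
git -c user.email=demo@example.com -c user.name=Demo commit -qm "Initial commit"

# Confirm the history now contains exactly one commit.
count=$(git rev-list --count HEAD)
echo "$count"   # prints 1
```

From there, push the repository to GitHub or GitLab and it becomes part of your public portfolio.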

3. Package: Docker (Lambda) — 3 days+

Packaging is where build management meets release management. It’s where your code and infrastructure come together for deployment.

Without Docker there would be no DevOps as we know it. Docker lets DevOps teams run code in small, isolated containers. That way, building and replacing services becomes simpler than updating everything in one go (which is very non-DevOps).

AWS Lambda is a serverless alternative that many companies use instead of managing containers directly. Though it’s best to know both tools, Docker is an excellent starting point. You can learn Docker in just a few days. Udemy offers a solid beginner’s course online for DevOps, as well as a Docker class bundled with Kubernetes.

4. Deploy: Jenkins (CodeDeploy) — 2 days+

During deployment, you’ll take your code from version control to users of your application. Automation is a key component of this step, and Jenkins is the central way to automate.

Jenkins allows automation for all manner of tasks, including running build tests and making decisions based on whether code passes or fails the build process. You can also use Jenkins for more mundane purposes, like centralized management of scripts and executing commands via SSH (and other authentication pathways).

It’s a tool to automate those frequent and boring tasks that computers can do better than even the best DevOps engineer could. Some companies choose CodeDeploy over Jenkins, making it another useful DevOps tool to learn.

You can learn to use Jenkins in just a few days. Udemy offers a great Jenkins class online for DevOps engineers.
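For a sense of scale, a declarative Jenkins pipeline lives in a Jenkinsfile checked into your repository. The two-stage sketch below uses placeholder shell commands where your real build and test steps would go:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo building...'   // replace with your real build command
            }
        }
        stage('Test') {
            steps {
                sh 'echo testing...'    // replace with your real test command
            }
        }
    }
}
```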

5. Run: ECS (Kubernetes) — 1 day

Kubernetes is DevOps bread and butter. It builds on Docker and adds extra functionality and tools. For instance, it lets the administrator ensure that several copies of a container image are running, so that if a single VM or host is lost, the service is still available.

ECS and Kubernetes perform valuable services like this in the background. They provide automation for managing containers and their availability, and they add important capabilities such as role-based access control and centralized auditing and management.

See IBM’s Kubernetes learning path and guide for a 13-hour course.

6. Monitor: ELK Stack (Prometheus) — 2 days

Once your new application is up and running, you’ll need a real-time view of its status, infrastructure, and services. To this end, DevOps engineers love ELK.

ELK provides all the base components for effective log management and search. ELK stands for Elasticsearch, Logstash, and Kibana: three open-source applications offered by Elastic.

ELK takes data from multiple sources and lets you visualize it using charts and graphs. Its rival platform, Prometheus, is just as important for a DevOps engineer to understand. You can learn to use the ELK Stack in just a few days with Udemy’s 4-star online class.
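On the Prometheus side, configuration is a small YAML file that tells the server what to scrape. A minimal sketch, with a placeholder job name and target address, looks like this:

```yaml
# prometheus.yml: scrape one application endpoint every 15 seconds.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "my-app"                  # placeholder job name
    static_configs:
      - targets: ["localhost:8080"]     # placeholder target address
```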

Those are the basics of how to become a DevOps engineer. Now let’s look into why Git matters so much, and how to get a DevOps job.

GitHub matters

One more word on GitHub as a shortcut to starting a career in DevOps. GitHub is essentially the CV of the DevOps world. Any DevOps hiring manager will check out your GitHub profile as a first step. Better yet, you can learn GitHub and other DevOps tools while building that virtual CV at the same time.

How to get a DevOps job in 1 month+

Knowing how to become a DevOps engineer doesn’t stop with skills. The next step in your DevOps engineer career path is getting the job. That sounds daunting, but if you’ve got software development experience, the skills above, and a few DevOps achievements for your resume, you’re well on your way to getting hired.

Here’s how to get into DevOps.

Rewrite your DevOps resume

The first step in getting a DevOps job is to rewrite your resume. A DevOps resume doesn’t need to show years of experience. As this excellent DevOps resume guide shows, start with a reverse-chronological format. Then in your work history, education, and projects section, list achievements, including:

  • Development tasks from past jobs
  • Side projects in development, IT, Agile, scripting, or automation
  • Volunteer work in coding or distribution
  • Projects in Terraform or other DevOps tools
  • Linux/Unix projects
  • Scripting in Python or Ruby
  • Tasks completed with AWS, Jenkins, Maven, etc.

Start each resume bullet point with an action verb like developed, wrote, created, built, or deployed. And use numbers to quantify your work: how many projects, deployments, scripts, tests, and containers, and how many customers or team members you supported.

The more you show DevOps achievements in your history, with measurable details, the higher your chance of getting hired. Knowing how to become a DevOps cloud engineer is all about showing your projects and accomplishments.

Apply to lots of DevOps jobs

Finding a DevOps job is a numbers game. If you apply to three jobs, you won’t hear back from any. If you apply to 50, you’ll get a few responses and maybe an interview. Plan to hear back from about one in every 30 applications, and get interviewed by one in every 100.

In other words? You’ll need to apply to a lot of DevOps jobs. Probably something like 300 in a month to get one job (about 14 every weekday). But — you can vastly boost your chance of getting hired if you lean on networking. The easiest way? Start connecting with DevOps engineers on LinkedIn.

Then — don’t ask them for a job. Just ask if you can chat with them about their cool career. About 20% will be glad to share their success. You’ll learn tons about how to start a career in DevOps. And surprise surprise — some will even introduce you to their contacts.

Shun the unicorns

Don’t worry about being front-and-center in a DevOps job at Google, Amazon, or another giant company. Everybody clamors to get hired at those firms, creating stifling competition. Don’t be afraid to pursue a DevOps job at a less glamorous firm. Once you’ve got a little experience under your belt, then you can go unicorn hunting.

Consider working your way up

Is DevOps a good career for freshers? Not really. Don’t count on landing an entry-level DevOps job. DevOps is, by nature, an advanced position that requires highly skilled candidates. But — don’t let that discourage you. One of the best DevOps career paths is to start as a software developer or IT specialist in a company that also hires DevOps engineers.

If your current employer doesn’t hire DevOps pros, consider switching to one that does. Don’t stay entry-level for long. Three to six months is plenty. Once you’ve logged that time, commit to applying internally to DevOps positions in your new company. During your entry-level tenure, work to build accomplishments that look good on a DevOps resume.

Summary

Those are the basics of how to become a DevOps engineer. Though becoming a DevOps engineer takes persistence and passion, it’s not rocket science. Anyone with the drive (and a little time) can follow the DevOps career path, learn the necessary skills in five months, and get a DevOps job in one month. With the right skillset and job search strategy, you can be in your DevOps dream job very soon.

How to deploy dockerized apps to Kubernetes on CentOS 8

Kubernetes is a popular container orchestration system that lets you automate application deployment, scaling and management tasks via simple command line calls. This guide describes how to build and deploy a simple dockerized web app to a Kubernetes cluster on CentOS 8.

The guide is divided into three main parts:

1. Testing out containers
2. Building a simple dockerized app using Dockerfile
3. Deploying the app to Kubernetes

If you do not yet have a running Kubernetes cluster, have a look at our earlier tutorial on how to install Kubernetes on CentOS 8.


Testing out containers

Let’s start by running a simple docker app to test the container platform.

1. Run the hello-world app with the following command:

docker run hello-world

You should see the app print “Hello from Docker!” to your terminal towards the end of the output.

Running a known container pulls the required image from a public registry and saves it on your Docker host.

2. Check the list of docker images

docker images

After running the hello-world app once, it should show up on the images list.

REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
hello-world   latest   fce289e99eb9   15 months ago   1.84kB

Now let’s try a more advanced docker app

1. Run the Ubuntu docker app in interactive mode.

docker run -it ubuntu bash

On success, you’ll be brought directly to an Ubuntu prompt.

2. Test the internet connection from inside the container by running a command that requires network access.

At the Ubuntu prompt, run the following to check that the container has an internet connection.

apt update

On success, you should see something like below.

Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
...
Get:17 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [2496 B]
Get:18 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [4247 B]
Fetched 17.7 MB in 5s (3439 kB/s)

Then stop the Ubuntu container with the following command.

exit

In case you cannot access the internet, you might see something similar to the error underneath.

Err:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
  Temporary failure resolving 'security.ubuntu.com'
Err:2 http://archive.ubuntu.com/ubuntu bionic InRelease
  Temporary failure resolving 'archive.ubuntu.com'
...

To fix this, make sure IP masquerade is enabled at the firewall and then try again.

firewall-cmd --add-masquerade --permanent
firewall-cmd --reload

OK, we’ve completed the first part. Now let’s move on to the next step.

Building a simple dockerized app using Dockerfile

Dockerizing apps is a great way of creating consistent and reliable environments for many tasks regardless of the underlying operating system.

For this example, we are going to make a simple Golang web server that takes input via the server URL and prints out a hello message.

1. Start by creating a new directory named “goserver”.

mkdir goserver

2. Create a file named “main.go” in the goserver directory.

vi goserver/main.go

Enter the following code, then save the file.

package main

import (
   "fmt"
   "net/http"
)

func main() {
   http.HandleFunc("/", HelloServer)
   http.ListenAndServe(":8080", nil)
}

func HelloServer(w http.ResponseWriter, r *http.Request) {
   fmt.Fprintf(w, "Hello, %s!\n", r.URL.Path[1:])
}

3. Create another new file named “Dockerfile” with the following command.

vi Dockerfile

Then add the following content to that file, save and exit the text editor.

FROM golang:alpine as builder
WORKDIR /build
COPY /goserver .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main .
FROM scratch
COPY --from=builder /build/main /app/
WORKDIR /app
ENV PORT=8080
CMD ["./main"]

4. Next, build a docker image based on our goserver.

docker build -f Dockerfile -t goserver .

5. Additionally, you may wish to clean unused images from the build process.

docker system prune -f

6. Then check that your web app was added to the images list.

docker images

You should see a line such as in the example below.

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
goserver     latest   6c0fb70f56fe   55 seconds ago   7.41MB

7. Afterwards, test run the web app.

docker run -d -p 8090:8080 --name goserver goserver

8. You can verify that the deployment was successful by checking the running images.

docker ps -f name=goserver

You should see the “goserver” container running with status “Up”.

CONTAINER ID   IMAGE       COMMAND    CREATED          STATUS          PORTS                    NAMES
a16ebdf117b9   goserver   "./main"   54 seconds ago   Up 53 seconds   0.0.0.0:8090->8080/tcp   goserver

Your web app is now reachable both locally and over the internet.

9. Call it on the command line, for example by using curl with the command below.

curl http://localhost:8090/Joe

curl should already be available; if not, install it with sudo dnf install curl

You can also open the page in your web browser. Replace the public-ip-address with the IP of your Master node.

http://public-ip-address:8090/Joe

You should see the following response.

Hello, Joe!

10. Once done, stop the web server and remove the container.

docker stop goserver && docker rm goserver

Congratulations, you should now have an idea of what goes into making simple dockerized apps.

Deploying a dockerized app to Kubernetes

We’ve now tested out the container platform and built our own dockerized web app, so the last thing to do is to deploy it on our Kubernetes cluster.

This part refers to the Kubernetes configuration installed in our previous tutorial.

1. Create the following configuration file on the master node. Replace the <master_private_IP> with the private IP address of your master node.

cat > /etc/docker/daemon.json <<EOF
{
   "log-driver": "json-file",
   "log-opts": {
      "max-size": "100m"
   },
   "storage-driver": "overlay2",
   "storage-opts": [
      "overlay2.override_kernel_check=true"
   ],
   "insecure-registries": [ "<master_private_IP>:5000" ]
}
EOF

2. Add the exception for the registry also on all worker nodes. Again, replace the <master_private_IP> with the private IP address of your master node.

cat > /etc/docker/daemon.json <<EOF
{
   "insecure-registries": [ "<master_private_IP>:5000" ]
}
EOF

Then restart Docker on all nodes.

systemctl restart docker

Continue with the rest of the steps on the master node only.

We are going to be using a private registry container for storing and distributing our dockerized app in our cluster.

3. Get the latest version of the docker registry.

docker pull registry:2

4. Run docker registry on port 5000 using the master node’s private IP address. Replace the <master_private_IP> with the correct address.

docker run -dit --restart=always -p <master_private_IP>:5000:5000 --name registry -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2

5. Next, create a deployment artefact for our “goserver” app.

Please be aware that indentation matters. This artefact is using two spaces for indentation.

vi deployment.yaml

Enter the following content and again replace the <master_private_IP> with the correct address.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goserver
spec:
  selector:
    matchLabels:
      app: goserver
  replicas: 2
  revisionHistoryLimit: 0
  progressDeadlineSeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
  template:
    metadata:
      labels:
        app: goserver
    spec:
      containers:
      - name: goserver
        image: <master_private_IP>:5000/goserver:v1
        ports:
        - hostPort: 8090
          containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: goserver
  name: goserver
spec:
  selector:
    app: goserver
  ports:
  - protocol: TCP
    port: 8090
    targetPort: 8080
  type: ClusterIP

6. Tag the docker app to add the private registry address and port. As before, replace the <master_private_IP> with the private IP address of your master node.

docker tag goserver:latest <master_private_IP>:5000/goserver:v1

7. Then check to make sure the tagged app was created.

docker images

It should show up on the list.

8. Push the tagged app to the private registry so it can be pulled by Kubernetes deployment. Again, replace the <master_private_IP> with the private IP address of your master node.

docker push <master_private_IP>:5000/goserver:v1

9. Check that the tagged app was successfully pushed to the registry by querying the registry. Replace the <master_private_IP> with the private IP address of your master node.

curl -s -X GET http://<master_private_IP>:5000/v2/goserver/tags/list
{"name":"goserver","tags":["v1"]}

Now that we have set up our private registry and pushed the Docker image to it, we can use it to deploy the app onto our Kubernetes cluster.

10. Deploy our goserver using the command below.

kubectl apply -f deployment.yaml

11. Check that deployment rollout succeeded.

kubectl rollout status deployment/goserver

12. You can also check that the pods are up and running.

kubectl get pods

13. You should now be able to access the webserver on both master and worker nodes.

curl http://localhost:8090/Joe

Or open browser to http://<public-IP-address>:8090/Joe

Replace the <public-IP-address> with either the public IP address of your master or worker node.

You should see the familiar greeting.

Hello, Joe!

14. When you wish to delete the test app, revert the deployment with the following command.

kubectl delete -f deployment.yaml

All done!

How to deploy Kubernetes Dashboard quickly and easily

 




Kubernetes offers a convenient graphical user interface in its web dashboard, which can be used to create, monitor, and manage a cluster. The installation is quite straightforward but takes a few steps to set everything up in a convenient manner.

In addition to deploying the dashboard, we’ll go over how to set up both admin and read-only access to the dashboard. However, before we begin, we need to have a working Kubernetes cluster. You can get started with Kubernetes by following our earlier tutorial.


1. Deploy the latest Kubernetes dashboard

Once you’ve set up your Kubernetes cluster or if you already had one running, we can get started.

The first thing to know about the web UI is that it can only be accessed via the localhost address on the machine it runs on. This means we need an SSH tunnel to the server. On most operating systems, you can create an SSH tunnel with the command below. Replace the <user> and <master_public_IP> with the relevant details for your Kubernetes cluster.

ssh -L localhost:8001:127.0.0.1:8001 <user>@<master_public_IP>

After you’ve logged in, you can deploy the dashboard itself with the following single command.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

If your cluster is working correctly, you should see an output confirming the creation of a bunch of Kubernetes components like in the example below.

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Afterwards, you should have two new pods running on your cluster.

kubectl get pods -A
...
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v4z89   1/1     Running   0          30m
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-m8jzk        1/1     Running   0          30m
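Instead of listing every namespace with -A, you can scope the query with kubectl get pods -n kubernetes-dashboard, or pipe the full listing through grep. The filtering idiom itself is plain shell and can be tried on any text; the sample below stands in for real kubectl output:

```shell
#!/bin/sh
# Filter a pod listing for the dashboard namespace. The sample text
# is a stand-in for real `kubectl get pods -A` output.
sample='kube-system            coredns-558bd4d5db-abcde                    1/1  Running  0  30m
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-m8jzk       1/1  Running  0  30m'
printf '%s\n' "$sample" | grep kubernetes-dashboard
```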

You can then continue ahead with creating the required user accounts.

2. Creating Admin user

The Kubernetes dashboard supports a few ways to manage access control. In this example, we'll create an admin user account with full privileges to modify the cluster, authenticated with a token.

Start by making a new directory for the dashboard configuration files.

mkdir ~/dashboard && cd ~/dashboard

Create the following configuration and save it as dashboard-admin.yaml. Note that indentation matters in YAML files; use two spaces per level in a regular text editor.

nano dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Once set, save the file and exit the editor.

Then deploy the admin user role with the next command.

kubectl apply -f dashboard-admin.yaml

You should see a service account and a cluster role binding created.

serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Using this method doesn't require setting up or memorising passwords; instead, accessing the dashboard requires a token.

Get the admin token using the command below.

kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount admin-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Note that this relies on an automatically generated service account secret, which Kubernetes no longer creates as of v1.24; on newer clusters, fetch a token with kubectl -n kubernetes-dashboard create token admin-user instead.

You’ll then see an output of a long string of seemingly random characters like in the example below.

eyJhbGciOiJSUzI1NiIsImtpZCI6Ilk2eEd2QjJMVkhIRWNfN2xTMlA5N2RNVlR5N0o1REFET0dp
dkRmel90aWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlc
y5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1Y
mVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuL
XEyZGJzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZ
SI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb
3VudC51aWQiOiI1ODI5OTUxMS1hN2ZlLTQzZTQtODk3MC0yMjllOTM1YmExNDkiLCJzdWIiOiJze
XN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.GcUs
MMx4GnSV1hxQv01zX1nxXMZdKO7tU2OCu0TbJpPhJ9NhEidttOw5ENRosx7EqiffD3zdLDptS22F
gnDqRDW8OIpVZH2oQbR153EyP_l7ct9_kQVv1vFCL3fAmdrUwY5p1-YMC41OUYORy1JPo5wkpXrW
OytnsfWUbZBF475Wd3Gq3WdBHMTY4w3FarlJsvk76WgalnCtec4AVsEGxM0hS0LgQ-cGug7iGbmf
cY7odZDaz5lmxAflpE5S4m-AwsTvT42ENh_bq8PS7FsMd8mK9nELyQu_a-yocYUggju_m-BxLjgc
2cLh5WzVbTH_ztW7COlKWvSVbhudjwcl6w

The token is created each time the dashboard is deployed and is required to log into the dashboard. Note that the token will change if the dashboard is stopped and redeployed.
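The token is a JWT, so you can check which service account it belongs to by base64-decoding its payload, the middle dot-separated segment. A minimal sketch, using a fabricated stand-in token rather than a real one:

```shell
#!/bin/sh
# Decode the payload of a JWT-style token. TOKEN is a made-up example;
# substitute the output of the kubectl get secret command above.
TOKEN="aGVhZGVy.eyJzdWIiOiJhZG1pbiJ9.c2lnbmF0dXJl"
# Extract the middle segment and map the URL-safe base64 alphabet
# back to the standard one.
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# JWT segments omit base64 padding; re-pad to a multiple of 4 characters.
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
printf '%s' "$PAYLOAD" | base64 -d
# prints: {"sub":"admin"}
```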

3. Creating Read-Only user

If you wish to provide access to your Kubernetes dashboard, for example, for demonstrative purposes, you can create a read-only view for the cluster.

As with the admin account, save the following configuration as dashboard-read-only.yaml.

nano dashboard-read-only.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-only-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: read-only-clusterrole
rules:
- apiGroups:
  - ""
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding
roleRef:
  kind: ClusterRole
  name: read-only-clusterrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: read-only-user
  namespace: kubernetes-dashboard

Once set, save the file and exit the editor.

Then deploy the read-only user account with the command below.

kubectl apply -f dashboard-read-only.yaml

To allow users to log in via the read-only account, you’ll need to provide a token which can be fetched using the next command.

kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount read-only-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

The token will be a long string of characters, unique to the currently running dashboard.

4. Accessing the dashboard

We’ve now deployed the dashboard and created user accounts for it. Next, we can get started managing the Kubernetes cluster itself.

However, before we can log in to the dashboard, it needs to be made available by creating a proxy service on the localhost. Run the next command on your Kubernetes cluster.

kubectl proxy

This will start the server at 127.0.0.1:8001 as shown by the output.

Starting to serve on 127.0.0.1:8001

Now, assuming that we have already established an SSH tunnel binding to the localhost port 8001 at both ends, open a browser to the link below.

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
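This address follows kubectl proxy's general service-proxy pattern, /api/v1/namespaces/<namespace>/services/<scheme>:<service>:<port>/proxy/, where an empty port means the service's default port. A small sketch of assembling such URLs; the helper name is illustrative only:

```shell
#!/bin/sh
# Build a kubectl-proxy service URL from a namespace and a
# <scheme>:<service>:<port> triple (helper name is made up).
proxy_url() {
    printf 'http://localhost:8001/api/v1/namespaces/%s/services/%s/proxy/\n' "$1" "$2"
}
proxy_url kubernetes-dashboard https:kubernetes-dashboard:
# prints the dashboard address shown above
```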

If everything is running correctly, you should see the dashboard login window.

Signing in to Kubernetes dashboard

Select the token authentication method and copy your admin token into the field below. Then click the Sign in button.

You will then be greeted by the overview of your Kubernetes cluster.

Kubernetes dashboard overview

While signed in as an admin, you can deploy new pods and services quickly and easily by clicking the plus icon at the top right corner of the dashboard.

Creating new from input on Kubernetes dashboard

Then either paste in any configuration you wish, select a file directly from your machine, or create a new configuration from a form.

5. Stopping the dashboard

User accounts and roles that are no longer needed can be removed with kubectl delete.

kubectl delete -f dashboard-admin.yaml
kubectl delete -f dashboard-read-only.yaml

Likewise, if you want to disable the dashboard, it can be deleted just like any other deployment.

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

The dashboard can then be redeployed at any time following the same procedure as before.

6. Setting up management script

The steps to deploy or delete the dashboard are not complicated, but they can be simplified further.

The following script can be used to start, stop or check the dashboard status.

nano ~/dashboard/dashboard.sh
#!/bin/bash
showtoken=1
cmd="kubectl proxy"
count=`pgrep -cf "$cmd"`
dashboard_yaml="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml"
msgstarted="-e Kubernetes Dashboard \e[92mstarted\e[0m"
msgstopped="Kubernetes Dashboard stopped"

case $1 in
start)
   kubectl apply -f $dashboard_yaml >/dev/null 2>&1
   kubectl apply -f ~/dashboard/dashboard-admin.yaml >/dev/null 2>&1
   kubectl apply -f ~/dashboard/dashboard-read-only.yaml >/dev/null 2>&1

   if [ $count = 0 ]; then
      nohup $cmd >/dev/null 2>&1 &
      echo $msgstarted
   else
      echo "Kubernetes Dashboard already running"
   fi
   ;;

stop)
   showtoken=0
   if [ $count -gt 0 ]; then
      kill -9 $(pgrep -f "$cmd")
   fi
   kubectl delete -f $dashboard_yaml >/dev/null 2>&1
   kubectl delete -f ~/dashboard/dashboard-admin.yaml >/dev/null 2>&1
   kubectl delete -f ~/dashboard/dashboard-read-only.yaml >/dev/null 2>&1
   echo $msgstopped
   ;;

status)
   found=`kubectl get serviceaccount admin-user -n kubernetes-dashboard 2>/dev/null`
   if [[ $count = 0 ]] || [[ $found = "" ]]; then
      showtoken=0
      echo $msgstopped
   else
      found=`kubectl get clusterrolebinding admin-user -n kubernetes-dashboard 2>/dev/null`
      if [[ $found = "" ]]; then
         nopermission=" but user has no permissions."
         echo $msgstarted$nopermission
         echo 'Run "dashboard start" to fix it.'
      else
         echo $msgstarted
      fi
   fi
   ;;
esac

# Show the full command line with: ps -wfC "$cmd"
if [ $showtoken -gt 0 ]; then
   # Show token
   echo "Admin token:"
   kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount admin-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
   echo

   echo "User read-only token:"
   kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount read-only-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
   echo
fi

Once all set, save the file and exit the text editor.

Then make the script executable.

chmod +x ~/dashboard/dashboard.sh

Next, create a symbolic link to the dashboard script to be able to run it from anywhere on the system.

sudo ln -s ~/dashboard/dashboard.sh /usr/local/bin/dashboard

You can then use the following commands to run the dashboard like an application.

Start the dashboard and show the tokens.

dashboard start

Check whether the dashboard is running or not and output the tokens if currently set.

dashboard status

Stop the dashboard.

dashboard stop
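As a side note, the script's start and status logic hinges on pgrep -c -f, which counts processes whose full command line matches a pattern. The idiom can be tried standalone, here using sleep as a harmless stand-in for kubectl proxy:

```shell
#!/bin/sh
# pgrep -c counts matching processes; -f matches against the full
# command line, so "sleep 300" plays the role of "kubectl proxy".
cmd="sleep 300"
nohup $cmd >/dev/null 2>&1 &
sleep 1                      # give the background process time to appear
count=$(pgrep -cf "$cmd")
echo "running instances: $count"
pkill -f "$cmd"              # clean up the test process
```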

Congratulations, you have successfully installed the Kubernetes dashboard! You can now get familiar with it by exploring the different menus and views it offers.