Prerequisites

Source: official Kubernetes documentation
+ docker
+ k8s
+ kubectl
+ minikube

Install kubectl on Linux

Install kubectl binary with curl on Linux

  1. Download the latest release with the command:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Validate the binary (optional)

Download the kubectl checksum file:

curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

Validate the kubectl binary against the checksum file:

echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

If valid, the output is:

kubectl: OK

If the check fails, sha256 exits with nonzero status and prints output similar to:

kubectl: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
  3. Install kubectl:
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
  4. Ensure the version you installed is up-to-date:
kubectl version --client

Verify kubectl configuration

In order for kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file, which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster. By default, kubectl configuration is located at ~/.kube/config.
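To see which kubeconfig file and context kubectl is currently using, you can run these standard kubectl commands as a quick check:

kubectl config view

kubectl config current-context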

Check that kubectl is properly configured by getting the cluster state:

kubectl cluster-info

If you see a URL response, kubectl is correctly configured to access your cluster.

If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.

The connection to the server <server-name:port> was refused - did you specify the right host or port?

For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.

If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use:

kubectl cluster-info dump

Install Minikube on Ubuntu

Source: Install Minikube on Ubuntu 22.04|20.04|18.04

Step 1: Update system

Run the following commands to update all system packages to the latest release:

sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get upgrade

Step 2: Install KVM or VirtualBox Hypervisor

For VirtualBox users, install VirtualBox using:

sudo apt install virtualbox virtualbox-ext-pack
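
If you prefer KVM over VirtualBox, the usual Ubuntu packages are the following (a sketch; package names can vary slightly between Ubuntu releases):

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils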

Step 3: Download minikube on Ubuntu 22.04|20.04|18.04

You need to download the minikube binary. I will put the binary under the /usr/local/bin directory since it is in $PATH.

wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Confirm the installed version:

$ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b

Create local cluster using minikube

Start K8s

Create the K8s environment using the following command:

minikube start --driver=docker
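
Once the command finishes, you can confirm the cluster is up with these standard checks:

minikube status

kubectl get nodes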

Start Dashboard

Open the Kubernetes dashboard in a browser:

minikube dashboard

Delete K8s

minikube delete

Run Locally Built Docker Images in Kubernetes

The deployment of the image slave:v1 is configured in slave.yml.
The /i path is used to mount NFS file storage.
Use a local Docker image with Minikube (see the sketch after this list):
1. Set the environment variables with eval $(minikube docker-env)
2. Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
3. Set the image in the pod spec like the build tag (e.g. my-image)
4. Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image
5. To leave the Minikube Docker environment, run eval $(minikube docker-env -u)
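
Put together, the build flow looks roughly like this (a sketch, assuming the Dockerfile of the slave image is in the current directory and the tag slave:v1 matches the Job manifest below):

eval $(minikube docker-env)      # point the local docker CLI at Minikube's Docker daemon
docker build -t slave:v1 .       # build the image inside Minikube, so no registry pull is needed
eval $(minikube docker-env -u)   # switch back to the host Docker daemon when done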
Then create a .yml file to configure the corresponding app. The following is an example of slave-pod.

apiVersion: batch/v1
kind: Job
metadata:
  name: slave
spec:
  template:
    metadata:
      name: slave-pod
    spec:
      containers:
      - name: slave
        image: slave:v1
        imagePullPolicy: Never
        volumeMounts:
        - mountPath: /i
          name: volume-i
      restartPolicy: Never
      volumes:
      - name: volume-i
        nfs:
          server: [nfs-url]
          path: [your path]

Then create the Job:

kubectl create -f slave.yml
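
To follow the Job's progress, these standard kubectl commands are enough (a quick check; the Job name slave comes from the manifest above):

kubectl get jobs

kubectl logs job/slave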

Coarse Parallel Processing Using a Work Queue

Source: Kubernetes documentation.
In this example, we will run a Kubernetes Job with multiple parallel worker processes.

As each pod is created, it picks up one unit of work from a task queue, completes it, deletes it from the queue, and exits.

Here is an overview of the steps in this example:

  1. Start a message queue service. In this example, we use RabbitMQ, but you could use another one. In practice you would set up a message queue service once and reuse it for many jobs.
  2. Create a queue, and fill it with messages. Each message represents one task to be done. In this example, a message is an integer that we will do a lengthy computation on.
  3. Start a Job that works on tasks from the queue. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached.

Starting a message queue service

Here I use RabbitMQ; however, you can adapt the example to use another AMQP-type message service.

In practice you could set up a message queue service once in a cluster and reuse it for many jobs, as well as for long-running services.

Start RabbitMQ as follows:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-service.yaml

service "rabbitmq-service" created

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.3/examples/celery-rabbitmq/rabbitmq-controller.yaml

replicationcontroller "rabbitmq-controller" created
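
You can confirm both objects exist with a quick check:

kubectl get svc rabbitmq-service

kubectl get rc rabbitmq-controller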

RabbitMQ

See the RabbitMQ docs for how to use the message queue.

  1. Example of sender.py (publisher)
#!/usr/bin/env python
import pika

# open a connection to a RabbitMQ broker running on localhost
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# declare the queue (idempotent: only created if it does not already exist)
channel.queue_declare(queue='hello')

# publish one message to the 'hello' queue via the default exchange
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
  2. Example of receive.py (subscriber)
#!/usr/bin/env python
import pika, sys, os

def main():
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    channel.queue_declare(queue='hello')

    def callback(ch, method, properties, body):
        print(" [x] Received %r" % body)

    channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)

    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
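
To try the two scripts locally against the RabbitMQ Service created above, one option is to port-forward the broker to localhost first (a sketch, assuming the Service exposes the standard AMQP port 5672):

pip install pika

kubectl port-forward svc/rabbitmq-service 5672:5672

# in a second terminal:
python receive.py

# in a third terminal:
python sender.py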

Deploy Prometheus & Grafana To Monitor Cluster

You can set up Minikube (a local Kubernetes cluster) or use a managed Kubernetes service such as Google Kubernetes Engine or Elastic Kubernetes Service to deploy Prometheus and Grafana for monitoring the cluster. Connect to the cluster and follow the steps below.

  1. Creating a monitoring namespace
    First we will create a namespace, which is good practice.

kubectl create namespace prometheus

This command creates the namespace in the cluster; in the next step we will deploy Prometheus into it.
  2. Install Prometheus Operator
    Here I use Helm (see how to install Helm) to deploy Prometheus, Grafana, and several other services used to monitor Kubernetes clusters.

helm install prometheus stable/prometheus-operator --namespace prometheus
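
After the Helm release is installed, you can watch the monitoring pods come up with a quick check:

kubectl get pods -n prometheus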

Prometheus

Prometheus is a free and open-source event monitoring tool for containers and microservices. Prometheus collects numerical data as time series. The Prometheus server works on the principle of scraping: it invokes the metrics endpoints of the various nodes it has been configured to monitor, collects the metrics at regular intervals, and stores them locally. The endpoint to be scraped is exposed on each node.
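
A scrape target is simply an HTTP endpoint serving metrics in Prometheus' plain-text format. As a quick illustration, you can look at the Kubernetes API server's own metrics endpoint:

kubectl get --raw /metrics | head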

1. Prometheus Data Retention

Prometheus data retention time is 15 days by default. The lowest retention period is 2 hours. The longer you retain data, the more disk space is used, since there will be more data. The lowest retention period is useful when configuring remote storage for Prometheus.
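
If you run the Prometheus server binary directly, the retention period is controlled by a flag (a sketch; when using the Prometheus Operator this is set in the Prometheus custom resource instead):

prometheus --storage.tsdb.retention.time=30d --storage.tsdb.path=/prometheus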

2. Prometheus With Grafana

Grafana is multi-platform visualization software that provides graphs and charts for the web, connected to a data source. Prometheus has its own built-in expression browser, but Grafana is the industry's most powerful visualization software, and it has out-of-the-box integration with Prometheus.

Grafana

Grafana is multi-platform visualization software, available since 2014. Grafana provides graphs and charts for the web, connected to a data source. It can query and visualize your data sources no matter where they are stored.

1. Visualize

Swift and extensible client-side graphs with a number of options. There are many plugins for many different ways to visualize metrics and logs. You will use custom Kubernetes metrics and plot them in a graph; we will see that in a later section.

2. Explore Metrics

In this guide, we use the kube-state-metrics list to visualize cluster state in Grafana graphs. You can split the view and compare different time ranges, queries, and data sources.

3. Explore Logs

Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.

Step 1 — Install Prometheus Operator

We’ll start by installing Prometheus Operator into the Kubernetes cluster. We’ll install all of Prometheus Operator’s Kubernetes custom resource definitions (CRDs) that define the Prometheus, Alertmanager, and ServiceMonitor abstractions used to configure the monitoring stack. We’ll also deploy a Prometheus Operator controller into the cluster.

Install the Operator using the bundle.yaml file in the Prometheus Operator GitHub repository:

kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml

You should see the following output:

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
serviceaccount/prometheus-operator created
service/prometheus-operator created

bundle.yaml installs CRDs for Prometheus objects as well as a Prometheus Operator controller and Service.
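
Once the deployment finishes rolling out, you can verify the controller is running with a quick check (bundle.yaml installs it into the default namespace):

kubectl get deploy prometheus-operator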

Step 2 — Configure Prometheus RBAC Permissions

Before rolling out Prometheus, we’ll configure its RBAC privileges using a ClusterRole, and bind this ClusterRole to a ServiceAccount using a ClusterRoleBinding object.

Prometheus needs Kubernetes API access to discover targets and pull ConfigMaps. To learn more about permissions granted in this section, please see RBAC from the Prometheus Operator docs.

First, create a directory in which you’ll store any K8s manifests used for this guide, and cd into it:

mkdir operator_k8s
cd operator_k8s

Create a manifest file called prom_rbac.yaml using your favorite editor. Paste in the following Kubernetes manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default

This creates a ServiceAccount called prometheus and binds it to the prometheus ClusterRole. The manifest grants the ClusterRole get, list, and watch K8s API privileges.

When you’re done editing the manifest, save and close it.

Create the objects using kubectl:

kubectl apply -f prom_rbac.yaml

serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created

Now that Prometheus has K8s API access, you can deploy it into the cluster.
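
You can double-check that the RBAC objects exist with a few standard kubectl commands:

kubectl get serviceaccount prometheus

kubectl get clusterrole prometheus

kubectl get clusterrolebinding prometheus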