CI/CD Pipeline Document
CI/CD Pipeline Part 2 – KVM on CentOS 7
========================================================================
Note: The original CI/CD pipeline is based on Linux Academy's cloud servers. ( http://193.200.241.161:10010/ci-cd-full-pipeline )
This article describes how the same CI/CD pipeline can be implemented on your local LAN using KVM-based VMs on CentOS 7.
Tools: GitHub, Jenkins, Docker, Kubernetes
Course Title: Implementing a Full CI/CD Pipeline
Virtual Machines: 3 to 4 VMs
GitHub Account
URL: https://github.com
→ Username: ncodeit.github@gmail.com
→ Password: ncodeit123
Docker Hub Account
URL: https://hub.docker.com/
→ Username: ncodeitdocker
→ Password: ncodeit123
Note: Everyone should use only the GitHub and Docker Hub accounts mentioned above to complete the CI/CD pipeline process.
Environment Structure
![](https://i.pinimg.com/originals/be/55/33/be5533d6c2f8bffaabfd5f3787c83624.png)
VM1 → 192.168.1.1 / ncddvps01.corp.ncodeit.com: 1) Source Code Management 2) Build Automation 3) Continuous Integration 4) Containers
VM2 → 192.168.1.2 / ncddvps02.corp.ncodeit.com: 5) Fully Automated Deployment 6) Production Server (deployment target)
VM3 → 192.168.1.3 / ncddvps03.corp.ncodeit.com: 7) Orchestration 8) Monitoring 9) Auto Scaling 10) Self Healing 11) Canary Testing
VM4 → 192.168.1.4 / ncddvps04.corp.ncodeit.com: 12) Node Server (Kubernetes)
- The diagram above shows the CI/CD pipeline implementation workflow.
- The URLs below are all available in the ncodeit GitHub account. Everyone should clone them as required; all the necessary changes have already been made, and the repositories are ready to use (see the clone sketch after this list).
→ Source Code Management: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-git
→ Jenkins: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-cd
→ Docker: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-docker
→ Docker Deploy: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-dockerdeploy
→ Orchestration: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-kubernetes
→ Monitoring: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-monitoring
→ Self-healing: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-selfhealing
→ Auto Scaling: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-autoscaling
→ Canary Testing: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-canary
→ Fully Automated Deployment: https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-autodeploy
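If you want to pull everything down in one pass, a small shell loop is enough. This is an optional convenience sketch; it assumes all ten repositories keep the cicd-pipeline-train-schedule-<suffix> naming shown above.

```bash
# Optional helper: clone every pipeline repository in one pass.
# Assumes the cicd-pipeline-train-schedule-<suffix> naming from the list above.
for suffix in git cd docker dockerdeploy kubernetes monitoring selfhealing autoscaling canary autodeploy; do
    git clone "https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-${suffix}"
done
```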
Pre-Requisites
——————————————————————————————————————————————————-
1. VM creation
2. yum update
——————————————————————————————————————————————————-
Create a new KVM-based VM:
→ Open virt-manager
$ virt-manager
→ Click on File
→ Select "Create a new virtual machine"
→ Select "Import existing disk image"
→ Select the qcow2 image from the image directory
→ Select RAM as per requirement
→ Under Network selection, select "Bridge br0"
→ Select Finish.
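For those who prefer the command line, virt-install can perform the same import. This is only a sketch: the VM name, image path, memory, and vCPU values below are placeholders to adjust to your environment.

```bash
# Command-line equivalent of the virt-manager steps above (sketch only;
# name, image path, memory, and vCPU count are placeholders).
sudo virt-install \
  --name ncddvps01 \
  --memory 2048 \
  --vcpus 2 \
  --disk /var/lib/libvirt/images/centos7.qcow2 \
  --import \
  --os-variant centos7.0 \
  --network bridge=br0 \
  --noautoconsole
```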
New user creation
——————————————————————————————————————————————————-
```bash
$ cd /tmp/customscripts
$ cd ncdcustom
$ ./createadminuser.sh
$ ./ncdcustomize.sh
$ ./updatehost.sh
```
**) If required → to disable SELinux permanently:
```bash
$ sudo vi /etc/sysconfig/selinux
# change SELINUX=enforcing to SELINUX=disabled
```
→ Save and Close.
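If you only need SELinux out of the way until the next reboot (for example, while testing), it can also be switched off for the current session:

```bash
# Temporarily set SELinux to permissive mode (reverts on reboot).
sudo setenforce 0
getenforce    # prints "Permissive" if the change took effect
```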
Source Control Management
→ No dependencies are required to implement this.
PLANNING:
1) Installing Git
2) Fork the repo (cicd-pipeline-train-schedule-git.git)
Follow these steps to achieve this plan.
Install Git:
1) Git is already installed at user creation, so there is no need to install it again.
Check whether Git is installed:
```bash
git --version
```
## Configuring Your Name and Email
```bash
git config --global user.name "your name"
git config --global user.email "your email"
```
## Setting up private key/public key access
```bash
ssh-keygen -t rsa -b 4096
cat /home/user/.ssh/id_rsa.pub
```
→ Log in to the GitHub A/C, then click on Settings
→ Click on SSH and GPG keys
→ Select New SSH key
→ Give a title of your choice and paste the key generated above
→ Click Add SSH key
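Before cloning over SSH, it is worth confirming that GitHub accepts the key; GitHub replies with a greeting containing your username rather than opening a shell:

```bash
# Verify the key is registered with GitHub (expect a greeting, not a shell).
ssh -T git@github.com
```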
## Clone the cicd-pipeline-train-schedule-git Repository from the GitHub A/C at the command line
```bash
git clone https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-git
ls -la
cd cicd-pipeline-train-schedule-git/
git status
```
```bash
git add .
git status
git commit -m "my commit"
git status
git push
git status
git branch
git status
```
## Create a new branch
```bash
git checkout -b newBranch
git branch
git checkout master
git tag myTag
git tag
```
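Note that the new branch and tag exist only in the local repository until they are pushed; a quick way to publish both:

```bash
# Push the branch (-u sets the upstream for future pushes/pulls) and the tag.
git push -u origin newBranch
git push origin myTag
```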
Build Automation
——————————————————————————————————————————————————-
→ No dependencies are required.
→ Build automation is the automation of tasks needed to process and prepare source code for deployment to production. It is an important component of continuous integration.
PLANNING:
→ Compiling
→ Dependency management
→ Executing automated tests
1) Use the following link to download Gradle: https://gradle.org
2) Download Gradle with wget on CentOS 7:
```bash
wget -O ~/gradle-4.7-bin.zip https://services.gradle.org/distributions/gradle-4.7-bin.zip
```
3) Now install the Java libraries:
```bash
sudo yum -y install unzip java-1.8.0-openjdk
```
4) Now make a directory and unzip Gradle into it:
```bash
sudo mkdir /opt/gradle
sudo unzip -d /opt/gradle/ ~/gradle-4.7-bin.zip
```
5) Now create a gradle.sh file:
```bash
sudo vi /etc/profile.d/gradle.sh
```
6) Put this text into gradle.sh:
```bash
export PATH=$PATH:/opt/gradle/gradle-4.7/bin
```
7) Then set permissions on gradle.sh:
```bash
sudo chmod 755 /etc/profile.d/gradle.sh
```
8) Finally, after logging out of the server and logging back in:
```bash
gradle --version
```
9) And the commands used to install and run the Gradle Wrapper:
```bash
cd ~/
mkdir my-project
cd my-project
gradle wrapper
./gradlew build
```
→ A Gradle build is defined in a Groovy script called build.gradle, located in the root directory of your project.
→ Use the gradle init command to initialize a new project; this also sets up the Gradle wrapper for you.
→ Gradle builds consist of a set of tasks that you call from the command line. For example:
```groovy
plugins {
    id 'com.moowork.node' version '1.2.0'
}

task sayHello << {
    println 'Hello, World!'
}

task anotherTask << {
    println 'This is another task'
}
```
```bash
./gradlew someTask someOtherTask
```
→ This command will run a task named someTask, followed by a task named someOtherTask.
The build.gradle file controls what tasks are available for your project.
```bash
gradle init
ls -la
```
```bash
./gradlew sayHello
./gradlew anotherTask
./gradlew anotherTask sayHello
./gradlew nodeSetup
./gradlew build
```
Continuous Integration
→ No dependencies are required to implement this.
PLANNING
————————————————————————————————————————————————
1) Install Jenkins
2) Create GitHub forks and create a job in Jenkins
3) Trigger builds with Git hooks
Follow these commands to achieve this plan.
————————————————————————————————————————————————
INSTALLING JENKINS
————————————————————————————————————————————————-
Follow these commands to install Jenkins:
```bash
$ sudo yum -y install java-1.8.0-openjdk
$ sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
$ sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
$ sudo yum -y install jenkins-2.121.1
$ sudo systemctl enable jenkins
$ sudo systemctl start jenkins
```
To access Jenkins in a browser: http://<ip>:8080
→ To unlock Jenkins, run the following in a terminal:
→ cat /var/lib/jenkins/secrets/initialAdminPassword (this command prints the password)
→ Enter this password to unlock Jenkins
→ Create a username, password, etc.
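On a stock CentOS 7 VM, firewalld may block port 8080 from other machines on the LAN. If it is active (an assumption about your VM image), open the port before browsing to Jenkins from another host:

```bash
# Open the Jenkins port if firewalld is running (skip if it is disabled).
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
```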
Creating GitHub Forks
→ Before forking, log in to the GitHub account, then:
* Search for the required repository → select the repo you want → select Fork (the repos are already available in the ncodeit GitHub account, so there is no need to fork)
Create a job in Jenkins
1) Go to Jenkins → create New Item → enter an item name (e.g. train-schedule) → Freestyle project → OK
2) Under Source Code Management → Git → Repository URL (https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-cd)
3) Go to Build → select Add build step → Invoke Gradle script (the project uses the Gradle wrapper script)
4) Select Use Gradle Wrapper → Tasks (add build)
After the build runs, an archive zip file is produced; to capture it, create a post-build action as in step 5.
5) Post-build Actions → select Add post-build action → select Archive the artifacts
→ Files to archive: dist/trainSchedule.zip
→ Save
6) Build this project → click on the build → check the build result in the console output
Triggering Builds with Git Hooks
→ Webhooks are event notifications made from one application to another over HTTP.
→ For this we need a GitHub API key.
1) Go to the GitHub account → select Settings → Developer settings → Personal access tokens → Generate new token
→ Token description: Jenkins
→ Select admin:repo_hook
→ Select Generate token
→ Copy this API key
→ Configure this API key:
2) Go to Jenkins → Manage Jenkins → Configure System → go to the GitHub section → Add GitHub Server
→ Name: GitHub
→ Credentials: select Add
→ Kind: Secret text
→ Secret: paste your API key
→ ID: github_key
→ Description: GitHubKey
3) Back in the GitHub section → Credentials → select GitHubKey from the drop-down
4) Make sure the "Manage hooks" checkbox is checked
→ Save
→ Utilize this webhook:
5) Select your freestyle project → Configure → Build Triggers
* Select "GitHub hook trigger for GITScm polling"
→ Save
→ Check whether your webhook was added:
6) Go to your GitHub account → select your forked repo → Settings → Webhooks → the webhook added via the API key appears here
7) Change the code in the readme.xml file, then commit
8) Go to Jenkins → the build is triggered automatically
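If you prefer to make the triggering change from the command line instead of the GitHub UI, a minimal sequence (run inside the cloned repository) looks like this:

```bash
# Make a trivial change, commit, and push; the webhook should fire a build.
echo "trigger build" >> readme.xml
git add readme.xml
git commit -m "test webhook trigger"
git push
```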
CONTAINERS
—————————————————————————————————————————————————
PLANNING
—————————————————————————————————————————————————
1. Install Docker on the production server and the master server.
2. Install Git and clone the train schedule repository on the master server.
3. Using Docker, build and run the train schedule app on the master server.
4. Create a job for the container in Jenkins.
5. Deploy the train schedule app to the production server using the created job.
6. Finally, access the train schedule app in a browser.
* To implement containers, we need two servers (a production server and a master server).
* Production server:
#Installing Docker:
```bash
$ sudo yum -y install docker
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker info
$ sudo yum install -y sshpass
```
* Master server:
#Installing Docker:
```bash
$ sudo yum -y install docker
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker info
```
#Building the Docker image:
```bash
$ cd ~/
$ git clone [github link]
$ cd [cicd pipeline directory]
```
vi Dockerfile → copy and paste the content below into the Dockerfile:
```dockerfile
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
```
```bash
$ sudo docker build -t ncodeitdocker/train-schedule .
$ sudo docker run -p 8080:8080 -d ncodeitdocker/train-schedule
```
→ Access the train schedule app in a browser at http://<ipaddress>:8080
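Before moving on, it is worth confirming locally that the container is up and answering; a quick sanity check:

```bash
# List the running container and probe the app on its published port.
sudo docker ps --filter ancestor=ncodeitdocker/train-schedule
curl -s http://localhost:8080 | head
```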
```bash
$ sudo groupadd docker
$ sudo usermod -aG docker jenkins
$ sudo systemctl restart jenkins
$ sudo systemctl restart docker
```
# Create a Docker account on Docker Hub
# Jenkins pipeline for the Docker container
1. Access Jenkins in a browser
2. Click on Manage Jenkins
3. Click on Configure System
# In the Environment variables section add:
→ Name: prod_ip
→ Value: <IP address of the production server>
4. Save it
5. Go back to the main page
6. Click on Credentials
→ Create Docker Hub, webserver, and GitHub credentials
7. Go back to the main page and create a new item of type Multibranch Pipeline
8. Now, in the Branch Sources section, add the GitHub server and save it
9. Log in to GitHub in a browser and fork this repository: https://github.com/linuxacademy/cicd-pipeline-train-schedule-dockerdeploy
10. Make the following changes in the Jenkinsfile:
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Running build automation'
                sh './gradlew build --no-daemon'
                archiveArtifacts artifacts: 'dist/trainSchedule.zip'
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    app = docker.build("[dockerhub username]/train-schedule")
                    app.inside {
                        sh 'echo $(curl localhost:8080)'
                    }
                }
            }
        }
        stage('Push Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'docker_hub_login') {
                        app.push("${env.BUILD_NUMBER}")
                        app.push("latest")
                    }
                }
            }
        }
        stage('DeployToProduction') {
            when {
                branch 'master'
            }
            steps {
                input 'Deploy to Production?'
                milestone(1)
                withCredentials([usernamePassword(credentialsId: 'webserver_login', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker pull [dockerhub username]/train-schedule:${env.BUILD_NUMBER}\""
                        try {
                            sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker stop train-schedule\""
                            sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker rm train-schedule\""
                        } catch (err) {
                            echo "Caught error: $err"
                        }
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker run --restart always --name train-schedule -p 8080:8080 -d [dockerhub username]/train-schedule:${env.BUILD_NUMBER}\""
                    }
                }
            }
        }
    }
}
```
11. Now, build the created item.
12. Access the train schedule app in a browser at http://<production server IP>:8080.
ORCHESTRATION
Pre-Requisites
———————————————————————————————————————————————————–
→ First you need to install the required Kubernetes modules:
→ kubectl, kubelet
→ kubeadm
→ Docker installation
→ After installing the required modules, start them and check their status
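The package installation itself is not shown in this document; below is a sketch of the usual CentOS 7 steps for Kubernetes 1.10-era packages. The repo definition is the standard upstream one from that period, so treat it as an assumption and adapt it to your mirror.

```bash
# Assumed install steps for kubelet/kubeadm/kubectl on CentOS 7.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
```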
```bash
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
$ sudo systemctl status kubelet
$ sudo systemctl start docker
$ sudo systemctl status docker
```
MASTER CONFIGURATION
———————————————————————————————————————————————-
→ First we need to pull the images:
```bash
$ kubeadm config images pull --kubernetes-version=1.10.0
```
→ Initialization will take a couple of minutes.
→ Create a file with vi:
→ In kube-config.yml add the code below:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.244.0.0/16
apiServerExtraArgs:
  service-node-port-range: 8000-31274
```
→ Then run this command referencing that file:
```bash
$ kubeadm init --config kube-config.yml
```
(Important: copy the generated join command; it is needed on the node server.)
Example:
```bash
kubeadm join 172.31.29.217:6443 --token p0irqu.nuutr9mh9rbek06c --discovery-token-ca-cert-hash sha256:b66b93d1b781aba0cb1f482116437451172bf8991dd24d867840448222a1fa9a
```
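If the join command scrolls away or the token expires (tokens are valid for 24 hours by default), a fresh one can be printed on the master at any time:

```bash
# Regenerate the join command on the master.
sudo kubeadm token create --print-join-command
```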
To start using your cluster, run the following commands (copy and paste them on the CLI):
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
→ kubectl get nodes (displays the list of nodes)
It shows the node details.
→ SET UP THE NETWORKING
→ Use this command to set up a pod network after initializing the master with kubeadm init:
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
```
NODE CONFIGURATION
—————————————————————————————————————————————————-
→ Join the Kubernetes node to the Kubernetes master.
→ Run the following kubeadm command (the kubeadm join command copied earlier):
```bash
$ sudo kubeadm join 172.31.96.74:6443 --token npcwsn.0metml406oht5hb --discovery-token-ca-cert-hash sha256:<hash generated when you ran kubeadm init>
```
→ After the join completes, go back to the master and check again:
```bash
$ kubectl get nodes
```
→ To check the status of some default pods:
```bash
$ kubectl get pods --all-namespaces
```
→ This is the basic Kubernetes structure:
KUBERNETES BASICS
→ Cluster: a master and one or more servers
→ Master: runs the control processes and the Kubernetes API
→ Node: a worker machine
→ Pod: a running process on a node, e.g. a running container or a group of containers
→ Service: a set of pods, possibly from different nodes, exposed together as one endpoint
DEPLOYING KUBERNETES WITH JENKINS
This demonstration uses the Kubernetes Continuous Deploy plugin to deploy to Kubernetes from Jenkins. You can find additional documentation about that plugin here: https://jenkins.io/doc/pipeline/steps/kubernetes-cd
→ Manage Jenkins → Manage Plugins → click on Available and filter with "kubernetes"; there we have the "Kubernetes Continuous Deploy" plugin
→ Check the Kubernetes Continuous Deploy box and install without restart
→ Go back to the main page and click on Credentials
→ Click on (global) → Add Credentials; under Kind select Kubernetes configuration (kubeconfig)
→ ID: kubeconfig
→ Description: Kubeconfig
→ Kubeconfig: click on the "Enter directly" radio button
→ Copy and paste the contents of the Kubernetes master's configuration file:
→ log in to the Kubernetes master's public IP
→ cat ~/.kube/config
→ copy and paste the entire contents of the configuration file
→ Click on OK and create a pipeline project:
→ create train-schedule with Multibranch Pipeline
→ add a GitHub branch source:
→ Credentials: select GIT-HUB (with token)
→ Owner: github-username
→ Repository: select cicd-pipeline-train-schedule-kubernetes from the drop-down
→ Click on Save
*) If the GitHub/Docker credentials are not shown in the drop-down, create them manually with your GitHub and Docker details:
→ Username and password
→ Username: GitHub/Docker
→ Password: *********
→ ID: github/docker token
→ Save
GO TO GITHUB (cicd-pipeline-train-schedule-kubernetes → you need to fork it)
→ Create a new file
→ Name of the file: train-schedule-kube.yml
→ Commit changes.
The yml file contains a number of documents separated by (---).
→ Go back to the project
→ Edit the JENKINSFILE
Change the line below at the beginning of the file
from: DOCKER_IMAGE_NAME = "willbla/train-schedule"
to: DOCKER_IMAGE_NAME = "<docker hub username>/train-schedule"
→ Commit changes
→ Go to the Jenkins page
→ Click on train-schedule
→ Click on master
→ Once the build is successful, go to
→ the Kubernetes master CLI
→ Check whether there are any pods:
→ kubectl get pods
Take the public IP of the Kubernetes master or node
→ Go to the browser:
http://<ip of master/node>:<port>
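The port in that URL is the NodePort assigned to the train-schedule service. Rather than guessing it, you can read it from the cluster; this assumes the service name train-schedule-service used in the yml file:

```bash
# The PORT(S) column shows 8080:<nodePort>/TCP; use the nodePort in the URL.
kubectl get service train-schedule-service
```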
MONITORING
=============================================================================
→ Install Helm and fetch the charts repo:
```bash
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /tmp/get_helm.sh
$ chmod 700 /tmp/get_helm.sh
$ DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh
$ helm init --wait
$ kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ helm ls
$ cd ~/
$ git clone https://github.com/kubernetes/charts
```
→ vi prometheus-values.yml # Enter the below code
```yaml
alertmanager:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false
```
→ Then run:
```bash
$ helm install -f prometheus-values.yml charts/stable/prometheus --name prometheus --namespace prometheus
```
→ vi grafana-values.yml
```yaml
adminPassword: password
```
Then run:
```bash
$ helm install -f grafana-values.yml charts/stable/grafana/ --name grafana --namespace grafana
```
→ vi grafana-ext.yml ## Enter the below code
```yaml
kind: Service
apiVersion: v1
metadata:
  namespace: grafana
  name: grafana-ext
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - protocol: TCP
      port: 3000
      nodePort: 8080
```
→ Then run:
```bash
$ kubectl apply -f grafana-ext.yml
```
→ You can check on the status of the prometheus and grafana pods with these commands:
```bash
$ kubectl get pods -n prometheus
$ kubectl get pods -n grafana
```
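Since grafana-ext pins the NodePort to 8080, you can confirm the port (and that the service exists) before opening the browser:

```bash
# Expect a NodePort service mapping 3000 -> 8080.
kubectl get svc -n grafana grafana-ext
```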
→ Go to Grafana in the browser
→ Select Add data source
→ Type: Prometheus
→ URL: http://prometheus-server.prometheus.svc.cluster.local
→ Go to Import
→ Dashboard ID: 3131
→ Data source: Prometheus
→ vi train-schedule-kube.yml
→ kubectl apply -f train-schedule-kube.yml
→ kubectl get pods -w
→ Check in the browser
→ Select the dashboard
→ Metrics:
→ sum(rate(http_request_duration_ms_count[2m])) by (service,route,method,code) * 60
SELF HEALING
—————————————————————————————————————————————————————
→ It needs dependencies (containers and orchestration)
→ Brief description of self healing:
→ A self-healing system can detect when something is wrong within itself and automatically take corrective action to fix it, without any human intervention.
Self-Healing implementation:
→ Check the pod status using the command below:
```bash
$ kubectl get pods
```
→ Kill the main process of the pod using the commands below:
```bash
$ kubectl exec train-schedule-deployment-58fdcc9df9-br7wd -- pkill node
$ kubectl get pods -w
```
→ You can see the train schedule deployment pod in an error state
→ After a moment, execute the command below; you will see your train schedule app running again:
```bash
$ kubectl get pods
```
→ Creating liveness probes in Kubernetes
**************************************
→ Kubernetes allows us to create liveness probes: custom checks run periodically against containers to detect whether they are healthy. If a liveness probe determines that a container is unhealthy, that container will be restarted.
Implementation:
———————————————————————————————————————————————————-
→ Go to https://github.com/linuxacademy/cicd-pipeline-train-schedule-selfhealing
→ Click on Fork
→ Select your GitHub A/C
→ Then clone the cicd-pipeline-train-schedule-selfhealing repository from your GitHub A/C using the commands below:
```bash
$ git clone https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-selfhealing
$ cd cicd-pipeline-train-schedule-selfhealing
$ cat train-schedule-kube.yml
```
→ vi train-schedule-kube.yml
```yaml
# Add the content below under "- containerPort: 8080"
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 10
```
→ Deploy the train-schedule-kube.yml file using the commands below:
```bash
$ kubectl apply -f train-schedule-kube.yml
$ kubectl get pods
$ kubectl get pods -w
```
→ Go to browser
```text
go to http://<IP>:8080        (you can see your train schedule app running)
go to http://<IP>:8080/break  (you can see "The app is now broken")
go to http://<IP>:8080        (you can see something is broken)
```
→ Go back to the master node and execute the command below to see the status of the pods:
```bash
$ kubectl get pods -w
```
After a moment, Kubernetes automatically restarts the train schedule app.
→ Go to the browser and refresh the http://<IP>:8080 page
You can see your train schedule app running.
AUTO SCALING
————————————————————————————————————————————————————–
1) Download Metrics Server from the Git repo:
```bash
$ git clone https://github.com/kubernetes-incubator/metrics-server.git
```
2) Open the Metrics Server directory and deploy it:
```bash
$ cd metrics-server/
$ kubectl create -f deploy/1.8+/
$ kubectl get pods -n kube-system
$ kubectl get --raw /apis/metrics.k8s.io/
```
3) Go to github.com and clone the repo to your local machine:
```bash
$ git clone https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-autoscaling
$ cd cicd-pipeline-train-schedule-autoscaling
```
4) Edit the train-schedule deployment (vi train-schedule-kube.yml):
```yaml
# Add at the end of the container spec:
        resources:
          requests:
            cpu: 220m
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: train-schedule
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: train-schedule-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
```
```bash
$ kubectl apply -f train-schedule-kube.yml
$ kubectl get pods
$ kubectl get hpa
$ kubectl get hpa -w
```
5) You can find the changes made to train-schedule-kube.yml in the example-solution branch of the GitHub repo. Once you have deployed the app and the HPA, you can generate CPU load to test it by spinning up a busybox shell:
```bash
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
```
Paste the following inside the busybox shell (replace the address with your node's private IP):
```bash
while true; do wget -q -O- http://<node private IP>:8080/generate-cpu-load; done
```
```bash
$ kubectl get pods
$ kubectl get hpa -w
```
CANARY TESTING
—————————————————————————————————————————————————————–
Pre-Requisites for the canary test:
→ GitHub A/C
→ Docker A/C
→ Kubernetes cluster
→ Jenkins
→ Implementing a Canary Test in Kubernetes
1) Log in to the Kubernetes master
→ Make sure we have 1 master and at least 1 node:
```bash
kubectl get nodes
cd ~/
```
2) Fork the repository cicd-pipeline-train-schedule-canary from https://github.com/linuxacademy/cicd-pipeline-train-schedule-canary into our GitHub A/C
3) Then clone the cicd-pipeline-train-schedule-canary repository from our GitHub A/C onto our Kubernetes master:
```bash
sudo git clone https://github.com/ncodeitgithub1/cicd-pipeline-train-schedule-canary
cd cicd-pipeline-train-schedule-canary
ls
sudo cat train-schedule-kube.yml
sudo vi train-schedule-kube.yml
```
→ Replace image: $DOCKER_IMAGE_NAME:$BUILD_NUMBER in the containers section with image: linuxacademycontent/train-schedule:1, so the file looks like this:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service
spec:
  type: NodePort
  selector:
    app: train-schedule
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment
  labels:
    app: train-schedule
spec:
  replicas: 2
  selector:
    matchLabels:
      app: train-schedule
      track: stable
  template:
    metadata:
      labels:
        app: train-schedule
        track: stable
    spec:
      containers:
      - name: train-schedule
        image: linuxacademycontent/train-schedule:1
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m
```
5) Create the Deployment with train-schedule-kube.yml:
```bash
$ kubectl apply -f train-schedule-kube.yml
```
6) Check the deployment status:
```bash
$ sudo kubectl get pods
```
7) Describe the deployment pod:
```bash
$ kubectl describe pod train-schedule-deployment
```
8) Create a new file as a template for canary testing:
vi train-schedule-kube-canary.yml ## add the content below
```yaml
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service-canary
spec:
  type: NodePort
  selector:
    app: train-schedule
    track: canary
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment-canary
  labels:
    app: train-schedule
spec:
  replicas: 1
  selector:
    matchLabels:
      app: train-schedule
      track: canary
  template:
    metadata:
      labels:
        app: train-schedule
        track: canary
    spec:
      containers:
      - name: train-schedule
        image: linuxacademycontent/train-schedule:latest
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m
```
9) Create a canary Deployment with train-schedule-kube-canary.yml
```bash
$ kubectl apply -f train-schedule-kube-canary.yml
```
10) Now perform the canary test by accessing it in a browser
→ Go to the browser and access the Kubernetes node at http://<IP OF KUBERNETES NODE>:8080, then the canary at:
```text
http://<IP OF KUBERNETES NODE>:8081
```
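A quick way to compare the two tracks from the command line (the IP is a placeholder, as above): the stable service answers on 8080 and the canary on 8081.

```bash
# Probe both services; each should return an HTTP 200 status line.
curl -sI http://<IP OF KUBERNETES NODE>:8080 | head -1
curl -sI http://<IP OF KUBERNETES NODE>:8081 | head -1
```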
Canary Testing in Kubernetes with Jenkins Pipelines
—————————————————————————————————————————————————-
1) Make sure the Jenkins environment is up
→ It should have GitHub credentials, Docker Hub credentials, and kubeconfig credentials
2) Now go to the GitHub A/C repository https://github.com/ncodeitgit/cicd-pipeline-train-schedule-canary
and create a new file for the canary test named train-schedule-kube-canary.yml with the content below:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service-canary
spec:
  type: NodePort
  selector:
    app: train-schedule
    track: canary
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment-canary
  labels:
    app: train-schedule
spec:
  replicas: $CANARY_REPLICAS
  selector:
    matchLabels:
      app: train-schedule
      track: canary
  template:
    metadata:
      labels:
        app: train-schedule
        track: canary
    spec:
      containers:
      - name: train-schedule
        image: $DOCKER_IMAGE_NAME:$BUILD_NUMBER
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m
```
3) Go back to the cicd-pipeline-train-schedule-canary repository
→ Open the Jenkinsfile and click on edit
→ Make sure DOCKER_IMAGE_NAME is set to "dockerusername/train-schedule"
→ Create a stage for the canary deployment below the Docker image push stage
→ Also create the environment variable and deployment steps, and set CANARY_REPLICAS of the production environment to 0, as below:
stage('CanaryDeploy') {
when {
branch 'master'
}
environment {
CANARY_REPLICAS = 1
}
steps {
kubernetesDeploy(
kubeconfigId: 'kubeconfig',
configs: 'train-schedule-kube-canary.yml',
enableConfigSubstitution: true
)
}
}
Click on Commit changes
4) Now go to the Jenkins browser
→ Click on New Item, give the name train-schedule, select the Multibranch Pipeline type, and click OK
5) Select GitHub credentials, and give the GitHub username and the cicd-pipeline-train-schedule-canary repository details under Branch Sources
→ Click on Save
→ Go back to train-schedule
→ Click on the master branch and check the stage view and build
6) Go to the command line, log in to the Kubernetes master, and check the canary pod deployment:
```bash
kubectl get pods -w
```
7) Now do the canary test on Kubernetes with the Jenkins pipeline
→ Access the Kubernetes node in a browser at http://<IP OF KUBERNETES NODE>:8081
FULLY AUTOMATED DEPLOYMENT
————————————————————————————————————————————————————–
→ Go to Jenkins → Credentials → the previously created credentials are listed there
>>> Click on the Jenkins logo → Manage Jenkins → Configure System
→ You can see Global properties
→ Check Environment variables; the variable list appears
→ List of variables:
→ Name => KUBE_MASTER_IP
→ Value => (the public IP of the Kubernetes master server) {we use this IP later in the Jenkinsfile}
→ Next, in GitHub:
→ GitHub Servers => click on Add GitHub Server => click on GitHub Server, and in that:
→ Name => GitHub
→ API URL => leave it as default
→ Credentials => select None, then click on Add and select Jenkins; the Jenkins
→ Credentials Provider page will appear; in that:
→ Kind: select Secret text
→ Next, in Secret: paste the GitHub personal access token {the same token used in credentials earlier}
→ ID: github_secret
→ Description: GitHub Secret
→ Click on Add
→ The previous page will appear
→ In Credentials, click on None and select GitHub Secret
→ Check the Manage hooks checkbox
→ Then click on Save
→ Click on Manage Jenkins
→ Select Manage Plugins
→ Click on Available
>>> In the filter search box type 'http re'; HTTP Request appears in the available list; check HTTP Request
→ Click on Install without restart
→ The next page shows HTTP Request: Success
→ Go to the main page of Jenkins by clicking on the Jenkins logo
(then click on train-schedule => master => Branch master), or
in View Configuration you can see Branch master
→ Then go to our repos on the GitHub site
→ Click on Fork at the repository page
→ Then edit the 'Jenkinsfile'
→ Click on the edit icon
→ In DOCKER_IMAGE_NAME, use your Docker Hub username instead of willbla:
```groovy
pipeline {
    agent any
    environment {
        //be sure to replace "willbla" with your own Docker Hub username
        DOCKER_IMAGE_NAME = "<Docker Hub username>/train-schedule"
        CANARY_REPLICAS = 0
    }
    stages {
        stage('Build') {
            steps {
                echo 'Running build automation'
                sh './gradlew build --no-daemon'
                archiveArtifacts artifacts: 'dist/trainSchedule.zip'
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    app = docker.build(DOCKER_IMAGE_NAME)
                    app.inside {
                        sh 'echo Hello, World!'
                    }
                }
            }
        }
        stage('Push Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'docker_hub_login') {
                        app.push("${env.BUILD_NUMBER}")
                        app.push("latest")
                    }
                }
            }
        }
        stage('CanaryDeploy') {
            when {
                branch 'master'
            }
            environment {
                CANARY_REPLICAS = 1
            }
            steps {
                kubernetesDeploy(
                    kubeconfigId: 'kubeconfig',
                    configs: 'train-schedule-kube-canary.yml',
                    enableConfigSubstitution: true
                )
            }
        }
        stage('SmokeTest') {
            when {
                branch 'master'
            }
            steps {
                script {
                    sleep (time: 5)
                    def response = httpRequest (
                        url: "http://$KUBE_MASTER_IP:8081/",
                        timeout: 30
                    )
                    if (response.status != 200) {
                        error("Smoke test against canary deployment failed.")
                    }
                }
            }
        }
        stage('DeployToProduction') {
            when {
                branch 'master'
            }
            steps {
                milestone(1)
                kubernetesDeploy(
                    kubeconfigId: 'kubeconfig',
                    configs: 'train-schedule-kube.yml',
                    enableConfigSubstitution: true
                )
            }
        }
    }
    post {
        cleanup {
            kubernetesDeploy (
                kubeconfigId: 'kubeconfig',
                configs: 'train-schedule-kube-canary.yml',
                enableConfigSubstitution: true
            )
        }
    }
}
```
→ Next, go to Jenkins
→ Click on New Item
→ Enter an item name: train-schedule, select Multibranch Pipeline, then click OK
→ Branch Sources
→ Click on Add Source => select GitHub => in Credentials => click on None => select the GitHub key
→ Owner => enter whatever GitHub username you have
→ Repository => select cicd-pipeline-train-schedule-autodeploy
→ Now click on Save
→ The build executor then shows all the build jobs
→ Cancel the queued build items except train-schedule-master #1
→ Go to the top-left menu train-schedule => then click on master
→ It shows the stage boxes; otherwise click on View Configuration
→ Each stage takes a few seconds to complete
→ While that is going on, open the Kubernetes master in a terminal:
```bash
$ kubectl get pods -w
```
→ It shows nothing at first; that means the process is still going on
→ Open Jenkins; the build should now be complete
→ Open the Kubernetes master in the terminal
→ Exit the watch and run:
```bash
$ kubectl get pods
```
→ The status shows Running, which means the deployment is complete.