Thursday, October 11, 2018

CICD Pipeline

Task   : Setup CI/CD pipeline end-to-end

Tools  : GitHub, Jenkins, Docker, Kubernetes
Source : www.linuxacademy.com
Course Title : Implementing a Full CI/CD Pipeline
Level        : Intermediate
Pre-reqs     : 3 Linux Academy cloud servers,
github.com account (free),
dockerhub account (free).
—————————————————————————————————————————–
Section No: 1
Section Heading :  Introduction
Section URL (concept & Implementation)  :
Steps : 
Execution Video : None
Errors  & Solutions : None
Other Issues (If any) : None

Section No: 2
Section Heading : Source Control Management
Section URL (concept & Implementation) :
Video Name : Installing Git
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2198/lesson/2/module/218
Video Name : Creating GitHub Forks
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2198/lesson/3/module/218
Video Name : Making Changes in Git
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2198/lesson/4/module/218
Video Name : Branches and Tags
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2198/lesson/5/module/218
Video Name : Pull Requests
Video URL: https://linuxacademy.com/cp/courses/lesson/course/2198/lesson/6/module/218
Steps : 
For Git installation instructions for your system, refer to https://git-scm.com/downloads
sudo yum -y install git
git --version
## Configuring Your Name and Email
git config --global user.name "your name"
git config --global user.email "your email"
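To confirm what Git recorded, you can read the values back. The sketch below does this in a throwaway repo with local (not --global) config so your real settings are untouched; the name and email are placeholders:

```shell
git init cfg-demo && cd cfg-demo
git config user.name "your name"                 # local config: applies to this repo only
git config user.email "your.email@example.com"
git config user.name                             # prints: your name
```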
## Setting up private key/public key access
ssh-keygen -t rsa -b 4096
cat /home/user/.ssh/id_rsa.pub
Log in to your GitHub account, then click on Settings
Click on SSH and GPG keys
Select New SSH key
Give a title of your choice and paste the public key generated above
Click Add SSH key
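The key material can also be generated non-interactively, which is handy for scripting. A sketch (the paths here are local demo files rather than the default ~/.ssh location, and the comment email is a placeholder; after uploading the public key, `ssh -T git@github.com` is the real end-to-end check):

```shell
# Generate a 4096-bit RSA key pair without prompts, into local demo files
ssh-keygen -t rsa -b 4096 -C "you@example.com" -N "" -f ./demo_key -q
ssh-keygen -lf ./demo_key.pub        # fingerprint, comparable against GitHub's UI
# ssh -T git@github.com              # run this after uploading the key to GitHub
```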
## Creating GitHub Forks (in the browser):
Go to https://github.com/linuxacademy/cicd-pipeline-train-schedule-git
Click on Fork
Select your GitHub account
## Check whether the repository was forked or not.
## Clone the cicd-pipeline-train-schedule-git repository from your GitHub account at the command line:
git clone https://github.com/Divyabharathikatike/cicd-pipeline-train-schedule-git
ls -la
cd cicd-pipeline-train-schedule-git/
git status
vi views/index.jade                  ## Add the content below ##
extends layout

block content
  h1 nCodeIT train schedule
  p Select your train below to see its current schedule.
  #wrapper
    #trainList
      h2 Trains
      #trains.d-flex.flex-row
    #trainInfo
      strong <span id='trainName'></span>
    #trainSchedule
      strong Select a train to view its current schedule.
git add .
git status
git commit -m "my commit"
git status
git push
git status
git branch
git status
## Create a new branch
git checkout -b newBranch
git branch
git checkout master
git tag myTag
git tag
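The branch and tag commands above can be tried safely in a scratch repository (a sketch; `git checkout -` returns to whatever your default branch is called, since newer Git uses `main` instead of `master`):

```shell
git init tag-demo && cd tag-demo
# identity flags avoid touching your global config; values are placeholders
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -m "initial"
git checkout -b newBranch     # create and switch to a new branch
git checkout -                # switch back to the previous (default) branch
git tag myTag                 # lightweight tag on the current commit
git branch                    # lists the default branch and newBranch
git tag                       # lists: myTag
```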
Errors  & Solutions : None
Other Issues (If any) : None

Section No: 3

Section Heading : Build Automation

Section URL (concept & Implementation) :
Video Name : Introducing Build Automation
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2199/lesson/1/module/218
Video Name : Installing Gradle
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2199/lesson/2/module/218
Video Name : Gradle Basics
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2199/lesson/3/module/218

Steps : 
1) Use the following link to download Gradle:
* https://gradle.org
2) Download Gradle with wget on CentOS 7:
* wget -O ~/gradle-4.7-bin.zip https://services.gradle.org/distributions/gradle-4.7-bin.zip
3) Now install unzip and the Java libraries:
* sudo yum -y install unzip java-1.8.0-openjdk
4) Now make a directory and unzip Gradle into it:
* sudo mkdir /opt/gradle
* sudo unzip -d /opt/gradle/ ~/gradle-4.7-bin.zip
5) Now create a gradle.sh file:
* sudo vi /etc/profile.d/gradle.sh
6) Put this text into gradle.sh:
export PATH=$PATH:/opt/gradle/gradle-4.7/bin
7) Then set permissions on gradle.sh:
* sudo chmod 755 /etc/profile.d/gradle.sh
Finally, after logging out of the server and logging back in:
* gradle --version
And the commands used to install and run the Gradle Wrapper:
* cd ~/
* mkdir my-project
* cd my-project
* gradle wrapper
* ./gradlew build
A Gradle build is defined in a Groovy script called build.gradle, located in the root directory of your project.
Use the gradle init command to initialize a new project; this also sets up the Gradle wrapper for you.
Gradle builds consist of a set of tasks that you call from the command line:
* ./gradlew someTask someOtherTask
This command will run a task named someTask, then a task named someOtherTask.
The build.gradle file controls what tasks are available for your project.
gradle init
ls -la
./gradlew sayHello
./gradlew anotherTask
./gradlew anotherTask sayHello
./gradlew nodeSetup
./gradlew build
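The sayHello and anotherTask names above come from the lab project's build script. A minimal hand-written build.gradle with tasks like these might look as follows (an illustrative sketch, not the course's actual file; running the tasks still requires the Gradle install from the steps above, so the wrapper call is left commented):

```shell
# Write a tiny Groovy build script defining two ad-hoc tasks
cat > build.gradle <<'EOF'
task sayHello {
    doLast {
        println 'Hello from Gradle'
    }
}

task anotherTask {
    doLast {
        println 'Another task ran'
    }
}
EOF
# ./gradlew sayHello anotherTask    # runs both tasks, in order
grep -c '^task ' build.gradle       # → 2
```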
Errors  & Solutions

Other Issues (If any) :

Section No: 4
Section Heading : Continuous Integration
Section URL (concept & Implementation) :
Video Name : CI Overview
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2200/lesson/1/module/218
Video Name : Installing Jenkins
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2200/lesson/2/module/218
Video Name : Setting up Jenkins Projects
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2200/lesson/3/module/218
Video Name : Triggering Builds with Git Hooks
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2200/lesson/4/module/218
Steps : 
INSTALLING JENKINS
——————-
Run the following commands to install Jenkins:
$ sudo yum -y install java-1.8.0-openjdk
$ sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
$ sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
$ sudo yum -y install jenkins-2.121.1
$ sudo systemctl enable jenkins
$ sudo systemctl start jenkins
To access Jenkins in a browser -> http://<server-ip>:8080
-> Unlock Jenkins: in the terminal, run
sudo cat /var/lib/jenkins/secrets/initialAdminPassword   (after executing this command you get the password)
Enter this password to unlock Jenkins
Create your username and password, etc.
Creating GitHub Forks
———————
Before forking, log in to your GitHub account, then:
* search for the repository you need -> select the repo you want -> select Fork (it is automatically forked to your account)
create a job in Jenkins
————————
1) Go to Jenkins -> New Item -> enter an item name (e.g. train-schedule) -> Freestyle project -> OK
2) In Source Code Management -> Git -> Repository URL (add your forked repo URL)
3) Go to Build -> Add build step -> Invoke Gradle script (the project uses the Gradle wrapper)
4) Select Use Gradle Wrapper -> Tasks (add "build")
After the build runs we get an archive zip file, so create a post-build action for it; follow step 5
5) Post-build Actions -> Add post-build action -> Archive the artifacts
Files to archive: dist/trainSchedule.zip
Save
6) Build the project -> click on the build -> check the build result in the console output
Triggering Builds with Git Hooks
———————————
Webhooks are event notifications made from one application to another over HTTP.
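For example, when a commit is pushed, GitHub sends Jenkins an HTTP POST whose JSON body identifies the repo and branch. The payload below is a simplified illustration (the real payload has many more fields, and JENKINS_IP is a placeholder):

```shell
# A trimmed-down push-event payload of the kind GitHub delivers
cat > payload.json <<'EOF'
{
  "ref": "refs/heads/master",
  "repository": { "full_name": "youruser/cicd-pipeline-train-schedule-git" }
}
EOF
# GitHub delivers it roughly like this (left commented; needs a live Jenkins):
# curl -X POST -H "Content-Type: application/json" \
#      -d @payload.json http://JENKINS_IP:8080/github-webhook/
grep -q '"ref"' payload.json && echo "payload written"
```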
For this we need a GitHub API token:
1) Go to your GitHub account -> Settings -> Developer settings -> Personal access tokens -> Generate new token
*Token description - Jenkins
*Select admin:repo_hook
*Select Generate token
*Copy this API token
Configure this API token:
1) Go to Jenkins -> Manage Jenkins -> Configure System
Go to the GitHub section -> Add GitHub Server
*Name - GitHub
*Credentials - select Add
*Kind - Secret text
*Secret - paste your API token
*ID - github_key
*Description - GithubKey
Add
Configure this on the GitHub server:

2) Go to the GitHub section -> Credentials -> from the drop-down select GithubKey
3) Make sure the Manage hooks checkbox is checked
Save
Utilize this webhook:
4) Select your freestyle project -> Configure -> Build Triggers
*Select "GitHub hook trigger for GITScm polling"
Save
Check whether your webhook was added or not:
5) Go to your GitHub account -> select your forked repo -> Settings -> Webhooks -> here you should find the Jenkins webhook that was added
6) Change the code in the README file
then commit
7) Go to Jenkins -> the build triggers automatically
Execution Video :
Errors  & Solutions
Other Issues (If any) :

Section No: 5
Section Heading : Continuous Delivery
Section URL (concept & Implementation) :
Video Name : Introducing Jenkins Pipelines
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2202/lesson/1/module/218
Video Name : Jenkins Pipeline stages and steps
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2202/lesson/2/module/218
Video Name : Deployment with Jenkins pipelines
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2202/lesson/3/module/218

Steps :
Execution Video :​_
Errors  & Solutions
Other Issues (If any) :

Section No: 6
Section Heading : Containers
Section URL (concept & Implementation) :
Video Name : Why Containers ?
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/1/module/218
Video Name : Installing Docker
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/2/module/218
Video Name : Docker Basics
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/3/module/218
Video Name : Building a Dockerfile
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/4/module/218
Video Name : Running with Docker in Production
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/5/module/218
Video Name : Installing Docker on Jenkins
Video URL: https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/6/module/218
Video Name : Jenkins Pipelines CD and a Dockerized App
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2203/lesson/7/module/218
Steps :
* To implement containers, we need two servers (a production server and a master server)
* Production server:
#Installing Docker:
$ sudo yum -y install docker
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker info
$ sudo yum install -y sshpass
* Master server:
#Installing Docker:
$ sudo yum -y install docker
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker info
#Building docker:
$ cd ~/
$ git clone [github link]
$ cd <cicd pipeline directory>
vi Dockerfile   ## Copy and paste the content below into the Dockerfile
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm","start"]
$ sudo docker build -t ncodeit/train-schedule .
$ sudo docker run -p 8080:8080 -d ncodeit/train-schedule
-> Access the train schedule app in a browser at <ip-address>:8080
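If you prefer creating the Dockerfile from the shell instead of vi, the same content can be written with a heredoc (a sketch; the build and run steps need the Docker daemon, so they are left commented):

```shell
cat > Dockerfile <<'EOF'
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
EOF
# sudo docker build -t ncodeit/train-schedule .
# sudo docker run -p 8080:8080 -d ncodeit/train-schedule
grep -q '^EXPOSE 8080$' Dockerfile && echo "Dockerfile written"
```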
$ sudo groupadd docker
$ sudo usermod -aG docker jenkins
$ sudo systemctl restart jenkins
$ sudo systemctl restart docker
# Create a Docker account on Docker Hub
# Jenkins pipeline with a Docker container
1. Access Jenkins in a browser
2. click on manage jenkins
3. click on configure system

# Environment variable section
> Name: prod_ip
> Value: <IP address of the production server>
4. Save it
5. Go back to the main page
6. Click on Credentials
> Create Docker Hub, webserver, and GitHub credentials.
7. Go back to the main page and create a new item of the Multibranch Pipeline type.
8. Now, in the Branch Sources section, add the GitHub server and save it.
9. Log in to GitHub in the browser and fork this repository: https://github.com/linuxacademy/cicd-pipeline-train-schedule-dockerdeploy
10. Make changes in the Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Running build automation'
                sh './gradlew build --no-daemon'
                archiveArtifacts artifacts: 'dist/trainSchedule.zip'
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    app = docker.build("[dockerhub username]/train-schedule")
                    app.inside {
                        sh 'echo $(curl localhost:8080)'
                    }
                }
            }
        }
        stage('Push Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'docker_hub_login') {
                        app.push("${env.BUILD_NUMBER}")
                        app.push("latest")
                    }
                }
            }
        }
        stage('DeployToProduction') {
            when {
                branch 'master'
            }
            steps {
                input 'Deploy to Production?'
                milestone(1)
                withCredentials([usernamePassword(credentialsId: 'webserver_login', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker pull [dockerhub username]/train-schedule:${env.BUILD_NUMBER}\""
                        try {
                            sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker stop train-schedule\""
                            sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker rm train-schedule\""
                        } catch (err) {
                            echo "caught error: $err"
                        }
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$prod_ip \"docker run --restart always --name train-schedule -p 8080:8080 -d [dockerhub username]/train-schedule:${env.BUILD_NUMBER}\""
                    }
                }
            }
        }
    }
}
11. Now, build the created item.
12. Access the train schedule app in the browser at <production server IP>:8080.
Execution Video : 
Errors  & Solutions
Other Issues (If any) :
Section No: 7
Section Heading : Orchestration
Section URL (concept & Implementation) :
Video Name : Orchestration
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2204/lesson/1/module/218
Video Name : Creating a Kubernetes Cluster
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2204/lesson/2/module/218
Video Name : Kubernetes Basics
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2204/lesson/3/module/218
Video Name : Deploying to Kubernetes with Jenkins
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2204/lesson/4/module/218
Steps :
MASTER CONFIGURATION
=======================================================================
First we need to pull the images:
$ kubeadm config images pull --kubernetes-version=1.10.0
Initialization will take a couple of minutes.
→ Create a config file:
In vi kube-config.yml, add the code below:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.244.0.0/16
apiServerExtraArgs:
  service-node-port-range: 8000-31274
→ Then run this command referencing that file:
$ kubeadm init --config kube-config.yml
(Main step: copy the generated join command; it is needed later on the node server)
Example key:
-----------
→ kubeadm join 172.31.29.217:6443 --token p0irqu.nuutr9mh9rbek06c --discovery-token-ca-cert-hash sha256:b66b93d1b781aba0cb1f482116437451172bf8991dd24d867840448222a1fa9a
To start using your cluster you need to run the following commands(just copy and paste the below commands on cli)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
→ kubectl get nodes (displays the list of nodes)
It will show the node details.
SETUP OUR NETWORKING
Use this command to set up a pod network after initializing the master with kubeadm init:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
NODE CONFIGURATION
===========================================================
Join the Kubernetes node to the Kubernetes master.
Run the following kubeadm command (paste the kubeadm join command copied earlier):
$ sudo kubeadm join 172.31.96.74:6443 --token npcwsn.0metml406oht5hb --discovery-token-ca-cert-hash sha256:<hash> (this was generated when you ran kubeadm init)
After the installation completes, go back to the master and check again:
$ kubectl get nodes
To check the status of some default pods:
$ kubectl get pods --all-namespaces
This is the basic Kubernetes structure.
*****************************************************
KUBERNETES BASICS
*****************************************************
Cluster: a master and one or more nodes
Master: runs the control processes and the Kubernetes API
Node: a worker machine
Pod: a running process on a node, for example a running container or a group of containers
Service: a set of pods, possibly spanning different nodes, combined behind one endpoint
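Tying those terms together, a minimal Pod manifest looks roughly like this (illustrative only; the course deploys pods via Deployments rather than writing bare Pods, and the image name is a placeholder):

```shell
# Write a minimal Pod manifest for reference
cat > demo-pod.yml <<'EOF'
apiVersion: v1
kind: Pod                  # one schedulable unit: one or more containers on a node
metadata:
  name: demo-pod
  labels:
    app: demo              # labels are how Services select their pods
spec:
  containers:
  - name: demo
    image: nginx:alpine    # placeholder image
    ports:
    - containerPort: 8080
EOF
# kubectl apply -f demo-pod.yml   # needs a live cluster, so left commented
grep -q 'kind: Pod' demo-pod.yml && echo "manifest written"
```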
DEPLOYING TO KUBERNETES WITH JENKINS
*****************************************************
This demonstration uses the Kubernetes Continuous Deploy plugin to deploy to Kubernetes from Jenkins. You can find additional documentation about that plugin here: https://jenkins.io/doc/pipeline/steps/kubernetes-cd
→ manage jenkins
→ manage plugins
→ click on available and filter with kubernetes
there we have "Kubernetes Continuous Deploy" plugin
→ check the box Kubernetes Continuous Deploy and install without restart
→ go back to the main page click on credentials
→ click on global
→ add credentials
Under Kind, select Kubernetes configuration (kubeconfig)
→ id:kubeconfig
→ Description:Kubeconfig
→ kubeconfig: click the "Enter directly" radio button
→ copy and paste the contents of the configuration file from the Kubernetes master:
→ log in to the Kubernetes master (public IP)
→ cat ~/.kube/config
→ copy and paste the entire contents of that configuration file
→ click on OK
Create a pipeline project:
→ create train-schedule with the Multibranch Pipeline type
→ add a GitHub branch source:
→ credentials: select GitHub (with token)
→ owner: your GitHub username
→ select the repository cicd-pipeline-train-schedule-kubernetes from the drop-down
→ click on Save
*) If the GitHub/Docker credentials are not shown in the drop-down, create them manually with your GitHub and Docker
→ username and password
→ username: GitHub/Docker
→ password: *********
→ id: github/docker token
→ save
GO TO GITHUB (cicd-pipeline-train-schedule-kubernetes → you need to fork it)
→ create a new file
→ name of the file: train-schedule-kube.yml
→ commit changes.
This YAML file contains a number of documents separated by (---)
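The `---` separator can be checked quickly from the shell; the sketch below writes a two-document file of the same shape (kinds only, everything else elided):

```shell
cat > demo.yml <<'EOF'
kind: Service
---
kind: Deployment
EOF
# one `---` line means the file holds two YAML documents
grep -c -- '^---$' demo.yml    # → 1
```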
→ go back to the project
→ and edit the Jenkinsfile
Change the line below at the beginning of the file from
DOCKER_IMAGE_NAME = "willbla/train-schedule"
to
DOCKER_IMAGE_NAME = "(docker hub username)/train-schedule"
→ commit changes
→ goto jenkins page
→ click on train-schedule
→ click on master
→ once the build is successful, go to the
→ Kubernetes master CLI
→ check whether there are any pods:
→ kubectl get pods
Take the public IP of the Kubernetes master or node
→ go to the browser:
<IP of master/node>:<port>
Execution Video : 
Errors  & Solutions 
Other Issues (If any) :

Section No: 8
Section Heading : Monitoring
Section URL (concept & Implementation) :
Video Name :  Monitoring
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/1/module/218
Video Name :  Installing Prometheus and Grafana
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/2/module/218
Video Name :  Cluster Monitoring
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/3/module/218
Video Name : Application Monitoring
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/4/module/218
Video Name : Alerting
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/5/module/218
Steps :
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /tmp/get_helm.sh
$ chmod 700 /tmp/get_helm.sh
$ DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh
$ helm init --wait
$ kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ helm ls
$ cd ~/
$ git clone https://github.com/kubernetes/charts
- vi prometheus-values.yml   # Enter the code below
alertmanager:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false
Then run:
$ helm install -f prometheus-values.yml charts/stable/prometheus --name prometheus --namespace prometheus
- vi grafana-values.yml
adminPassword: password
Then run:
$ helm install -f grafana-values.yml charts/stable/grafana/ --name grafana --namespace grafana
- vi grafana-ext.yml    ## Enter the code below
kind: Service
apiVersion: v1
metadata:
  namespace: grafana
  name: grafana-ext
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    nodePort: 8080
Then run:
$ kubectl apply -f grafana-ext.yml
You can check on the status of the prometheus and grafana pods with these commands:
$ kubectl get pods -n prometheus
$ kubectl get pods -n grafana
Go to Grafana in the browser
Select Add data source
Type: Prometheus
URL: http://prometheus-server.prometheus.svc.cluster.local
Go to Import
Dashboard ID: 3131
Data source: Prometheus
- vi train-schedule-kube.yml
- kubectl apply -f train-schedule-kube.yml
- kubectl get pods -w
Check in the browser
Select the dashboard
Metrics:
sum(rate(http_request_duration_ms_count[2m])) by (service,route,method,code) * 60
Errors  & Solutions
Other Issues (If any) :

Section No: 9
Section Heading : Self-Healing
Section URL (concept & Implementation) :
Video Name :  Kubernetes and Autoscaling
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2208/lesson/1/module/218
Video Name : Horizantal Pod Autoscaler in kubernetes
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2208/lesson/2/module/218
Steps :
Self-Healing:
============
It needs dependencies (containers and orchestration)
Brief description of self-healing:
Self-healing means a system is able to detect when something is wrong within itself and automatically take corrective action to fix it without any kind of human intervention.
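The idea can be sketched as a supervisor loop: run a health check, and if it fails, restart the workload; Kubernetes does exactly this with liveness probes. The snippet below is a toy illustration using a marker file as a stand-in for a running app:

```shell
health_check() { [ -f app.alive ]; }                    # stand-in for a liveness probe
restart_app()  { touch app.alive; echo "restarted"; }   # stand-in for a container restart

rm -f app.alive                 # simulate the app crashing
health_check || restart_app     # supervisor notices the failure and restarts
health_check && echo "app healthy again"
```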
Self-Healing implementation:
===========================
>>>> Check the pod status using the command below
$ kubectl get pods
>>>> Kill the main process of the pod using the commands below
$ kubectl exec train-schedule-deployment-58fdcc9df9-br7wd -- pkill node
$ kubectl get pods -w
>>>> You can see the train schedule deployment pod in an error state
>>>> After a moment, execute the command below; you can see your train schedule app is back in the running state.
$ kubectl get pods
Creating liveness probes in kubernetes
**************************************
>>> Kubernetes allows us to create liveness probes, which are custom checks run periodically against containers to detect whether the containers are healthy. If a liveness probe determines that a container is unhealthy, that container will be restarted.
Implementation:
==============
>>> Go to https://github.com/linuxacademy/cicd-pipeline-train-schedule-selfhealing
>>> Click on fork
>>> Select Your Github A/C
>>> Then clone the cicd-pipeline-train-schedule-selfhealing repository from your GitHub account using the command below
$ git clone https://github.com/Divyabharathikatike/cicd-pipeline-train-schedule-selfhealing
$ cd cicd-pipeline-train-schedule-selfhealing
$ cat train-schedule-kube.yml
vi train-schedule-kube.yml
## Add the content below under - containerPort: 8080
------------------------------------
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 10
Deploy the train-schedule-kube.yml file using the command below
$ kubectl apply -f train-schedule-kube.yml
$ kubectl get pods
$ kubectl get pods
$ kubectl get pods -w
Go to browser
Go to http://<IP>:8080 (you can see your train schedule app running)
Go to http://<IP>:8080/break (you can see "The app is now broken")
Go to http://<IP>:8080 (you can see something is broken)
>>> Go back to the master node and execute the command below to see the status of the pods
$ kubectl get pods -w
After a moment it automatically restarts the train schedule app
>>>> Go to the browser and refresh the http://<IP>:8080 page
You can see your train schedule app running again
Execution Video :
Errors  & Solutions
Other Issues (If any) :

Section No: 10
Section Heading : Auto scaling
Section URL (concept & Implementation) :
Video Name : Kubernetes and Autoscaling
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2208/lesson/1/module/218
Video Name : Horizontal Pod Autoscalers in Kubernetes
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2208/lesson/2/module/218

Steps :
1)Download Metrics Server from git repo:
$ git clone https://github.com/kubernetes-incubator/metrics-server.git
2) Open the Metrics Server directory and deploy it:
$ cd metrics-server/
$ kubectl create -f deploy/1.8+/
$ kubectl get pods -n kube-system
$ kubectl get --raw /apis/metrics.k8s.io/
3) Go to github.com, fork the repo, and clone your fork to your local machine:
$ git clone <your fork URL>
$ cd cicd-pipeline-train-schedule-autoscaling
vi train-schedule-kube.yml   ## edit the train-schedule deployment
//add at the end of the container spec//
resources:
  requests:
    cpu: 220m
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: train-schedule
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: train-schedule-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50

$ kubectl apply -f train-schedule-kube.yml
$ kubectl get pods
$ kubectl get hpa
$ kubectl get hpa -w
9) You can find the changes made to train-schedule-kube.yml in the example-solution branch of the GitHub repo. Once you have deployed the app and the HPA, you can generate CPU load to test it by spinning up a busybox shell:
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
Paste the following, replacing the IP with your node's private IP address:
while true; do wget -q -O- http://<node private IP>:8080/generate-cpu-load; done
$ kubectl get pods
$ kubectl get hpa -w
Execution Video :  
Errors  & Solutions
Other Issues (If any) :

Section No: 11
Section Heading : Canary-Testing
Section URL (concept & Implementation) :
Video Name : What is Canary Testing?
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2209/lesson/1/module/218
Video Name : Implementing a Canary Test in Kubernetes
Video URL : https://linuxacademy.com/cp/courses/lesson/course/2209/lesson/2/module/218
Video Name : Kubernetes Canary Testing with Jenkins Pipelines
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2209/lesson/3/module/218

Steps :
Pre-requisites for the canary test:
===============================
GitHub account
Docker account
Kubernetes cluster
Jenkins
Implementing a Canary Test in Kubernetes
——————————————-
1) Log into the Kubernetes master
## Make sure we have 1 master and at least 1 node
kubectl get nodes
cd ~/
2) ## Fork the repository cicd-pipeline-train-schedule-canary from https://github.com/linuxacademy/cicd-pipeline-train-schedule-canary into your GitHub account
3) ## Then clone the cicd-pipeline-train-schedule-canary repository from your GitHub account onto your Kubernetes master
sudo git clone https://github.com/Divyabharathikatike/cicd-pipeline-train-schedule-canary
cd cicd-pipeline-train-schedule-canary
ls
sudo cat cicd-pipeline-train-schedule-kube.yml
sudo vi cicd-pipeline-train-schedule-kube.yml
## Replace image: $DOCKER_IMAGE_NAME:$BUILD_NUMBER in the containers section with
image: linuxacademycontent/train-schedule:1
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service
spec:
  type: NodePort
  selector:
    app: train-schedule
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment
  labels:
    app: train-schedule
spec:
  replicas: 2
  selector:
    matchLabels:
      app: train-schedule
      track: stable
  template:
    metadata:
      labels:
        app: train-schedule
        track: stable
    spec:
      containers:
      - name: train-schedule
        image: linuxacademycontent/train-schedule:1
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m
5) ## Create the deployment with train-schedule-kube.yml
$ kubectl apply -f train-schedule-kube.yml
6) ## Check deployment status
$ sudo kubectl get pods
7) ## Describe the deployment's pod
$ kubectl describe pod train-schedule-deployment
8) ## Create a new file as a template for canary testing
vi train-schedule-kube-canary.yml   ## add the content below
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service-canary
spec:
  type: NodePort
  selector:
    app: train-schedule
    track: canary
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment-canary
  labels:
    app: train-schedule
spec:
  replicas: 1
  selector:
    matchLabels:
      app: train-schedule
      track: canary
  template:
    metadata:
      labels:
        app: train-schedule
        track: canary
    spec:
      containers:
      - name: train-schedule
        image: linuxacademycontent/train-schedule:latest
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m
Save and Close
9) ## Create the canary deployment with train-schedule-kube-canary.yml
$ kubectl apply -f train-schedule-kube-canary.yml
10) ## Now perform the canary test by accessing it in the browser
## Go to the browser and access the Kubernetes node at http://<IP OF KUBERNETES NODE>:8080 (stable)
http://<IP OF KUBERNETES NODE>:8081 (canary)
Canary Testing in Kubernetes with Jenkins Pipelines
======================================================
1) ## Make sure the Jenkins environment is up
## It should have GitHub credentials, Docker Hub credentials, and kubeconfig credentials
2) ## Now go to your GitHub repository https://github.com/Divyabharathikatike/cicd-pipeline-train-schedule-canary
## and create a new file for the canary test, train-schedule-kube-canary.yml, with the content below
kind: Service
apiVersion: v1
metadata:
  name: train-schedule-service-canary
spec:
  type: NodePort
  selector:
    app: train-schedule
    track: canary
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: train-schedule-deployment-canary
  labels:
    app: train-schedule
spec:
  replicas: $CANARY_REPLICAS
  selector:
    matchLabels:
      app: train-schedule
      track: canary
  template:
    metadata:
      labels:
        app: train-schedule
        track: canary
    spec:
      containers:
      - name: train-schedule
        image: $DOCKER_IMAGE_NAME:$BUILD_NUMBER
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
        resources:
          requests:
            cpu: 200m

Click on Commit new file
3) ## Go back to the cicd-pipeline-train-schedule-canary repository
## Open the Jenkinsfile and click on Edit
## Make sure DOCKER_IMAGE_NAME is set to "dockerusername/train-schedule"
## Create a stage for the canary deployment below the Docker image push stage
## Also create the environment variable and deployment steps, and set CANARY_REPLICAS for the production environment to 0, as below
stage('CanaryDeploy') {
    when {
        branch 'master'
    }
    environment {
        CANARY_REPLICAS = 1
    }
    steps {
        kubernetesDeploy(
            kubeconfigId: 'kubeconfig',
            configs: 'train-schedule-kube-canary.yml',
            enableConfigSubstitution: true
        )
    }
}
Click on Commit changes
4) ## Now go to the Jenkins UI
## Click on New Item, give the name train-schedule, select the Multibranch Pipeline type, and click OK
5) ## Under Branch Sources, select the GitHub credentials and give your GitHub username and the cicd-pipeline-train-schedule-canary repository details
## Click on Save
## And again go back to train-schedule
## Click on the master branch and check the stage view and the build
6) ## Go to the command line, log in to the Kubernetes master, and check the canary pod deployment
kubectl get pods -w
7) ## Now do the canary test on Kubernetes with the Jenkins pipeline
## Access the Kubernetes node in the browser at http://<IP OF KUBERNETES NODE>:8081
Execution Video : 
Errors  & Solutions
Other Issues (If any) :

Section No: 12
Section Heading : Fully Automated Deployment
Section URL (concept & Implementation) :
Video Name : Fully Automated Deployment
Video URL :  https://linuxacademy.com/cp/courses/lesson/course/2210/lesson/1/module/218

Steps :
>>> Go to Jenkins => Credentials => there we can see the existing credentials
>>> Click on the Jenkins logo => Manage Jenkins => Configure System
You can see Global properties
Check Environment variables; the list then appears
List of variables:
Name  => KUBE_MASTER_IP
Value => (the public IP of our Kubernetes master server) {we use this IP later in the Jenkinsfile}
next in Git Hub:
Git Hub Servers=>click on Add Git Hub Server=>click on Git Hub Server in that
Name=>GitHub 
API URL => leave it as default
credentials=>select none then click on add select Jenkins next jenkins 
Credentials Provider page will appeare
in that 
kind:select secret text
Next in secret:paste the github personal access token {same tocken in credentials earlear}
ID:github_secret
Descriptiion:GitHub Secret
click on add
then the previous page will appear
in Credential click on none select GitHub Secret

check write mark on manage hooks
then click on save
click on Manage Jenkins 
in that select Manage Plugins
click on available
>>>Filter search box type 'http re' then in available show HTTP Request check on HTTP Request
click on install without restart
in next page that shows HTTP Request: success
Goto main page of jenkins by clicking on jenkins logo
(then click on train-schedule=>master=>Branch master=>) or 
in view configuration we can see Branch master
Then go to GitHub site our repos
Click on Fork at the repository page
then edit 'Jenkinsfile'
click on edit icon 
in the DOCKER_IMAGE_NAME instead of willbla use your dockerhub username
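If you work from a local clone instead of the GitHub web editor, the same substitution can be done with sed. This is only a sketch: "yourname" is a stand-in for your actual Docker Hub username, and the demo file path is hypothetical.

```shell
# Sketch: swap the placeholder Docker Hub username in a Jenkinsfile.
# /tmp/Jenkinsfile.demo and "yourname" are illustrative stand-ins.
printf 'DOCKER_IMAGE_NAME = "willbla/train-schedule"\n' > /tmp/Jenkinsfile.demo

sed -i 's|willbla/train-schedule|yourname/train-schedule|' /tmp/Jenkinsfile.demo
cat /tmp/Jenkinsfile.demo
# DOCKER_IMAGE_NAME = "yourname/train-schedule"
```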

pipeline {
    agent any
    environment {
        //be sure to replace "willbla" with your own Docker Hub username
        DOCKER_IMAGE_NAME = "betawins/train-schedule"
        CANARY_REPLICAS = 0
    }
    stages {
        stage('Build') {
            steps {
                echo 'Running build automation'
                sh './gradlew build --no-daemon'
                archiveArtifacts artifacts: 'dist/trainSchedule.zip'
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    app = docker.build(DOCKER_IMAGE_NAME)
                    app.inside {
                        sh 'echo Hello, World!'
                    }
                }
            }
        }
        stage('Push Docker Image') {
            when {
                branch 'master'
            }
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'docker_hub_login') {
                        app.push("${env.BUILD_NUMBER}")
                        app.push("latest")
                    }
                }
            }
        }
        stage('CanaryDeploy') {
            when {
                branch 'master'
            }
            environment {
                CANARY_REPLICAS = 1
            }
            steps {
                kubernetesDeploy(
                    kubeconfigId: 'kubeconfig',
                    configs: 'train-schedule-kube-canary.yml',
                    enableConfigSubstitution: true
                )
            }
        }
        stage('SmokeTest') {
            when {
                branch 'master'
            }
            steps {
                script {
                    sleep (time: 5)
                    def response = httpRequest (
                        url: "http://$KUBE_MASTER_IP:8081/",
                        timeout: 30
                    )
                    if (response.status != 200) {
                        error("Smoke test against canary deployment failed.")
                    }
                }
            }
        }
        stage('DeployToProduction') {
            when {
                branch 'master'
            }
            steps {
                milestone(1)
                kubernetesDeploy(
                    kubeconfigId: 'kubeconfig',
                    configs: 'train-schedule-kube.yml',
                    enableConfigSubstitution: true
                )
            }
        }
    }
    post {
        cleanup {
            kubernetesDeploy (
                kubeconfigId: 'kubeconfig',
                configs: 'train-schedule-kube-canary.yml',
                enableConfigSubstitution: true
            )
        }
    }
}
Next go to Jenkins
Click on New Item
Enter an item name: train-schedule, select Multibranch Pipeline, then click OK
Branch Sources
Click on Add Source => select GitHub => in Credentials => click on none => select GitHub key
Owner => enter whatever GitHub username you have
Repository => select cicd-pipeline-train-schedule-autodeploy
Now click on Save
The build executor then shows builds for all the branches
Cancel those building items except train-schedule-master#1, which is in the build queue
Go to the top-left menu train-schedule => then click on master
It shows the stage-view boxes; otherwise click on View Configuration
Those boxes need a few seconds each to complete
While that is going on, log in to the Kubernetes master in a terminal:
kubectl get pods -w
If the output is blank, the deployment is still in progress
Open Jenkins; once the build is complete,
go back to the Kubernetes master terminal,
exit the watch (Ctrl+C), and run:
kubectl get pods
The pods' status shows Running, which means the deployment is complete
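This final check can also be scripted. The sketch below (all_running is a hypothetical helper, not from the course) assumes the default `kubectl get pods` column layout, where STATUS is the third column:

```shell
# Hypothetical helper: succeed only if every pod line reports Running.
# Assumes the default columns: NAME READY STATUS RESTARTS AGE.
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# Example usage against a live cluster:
# kubectl get pods | all_running && echo "deployment complete"
```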
Execution Video : 
Errors & Solutions : None
Other Issues (If any) :
