CI/CD with Python: From Code to Cloud @ Python Web Conference 2022

JFrog is a proud 2022 Sponsor for the Python Web Conference

July 16, 2022

< 1 min read


During our session, SuperFrog Daniel Keler spoke about taking Python code to the cloud by building a containerized Python application and passing it through security and compliance gates, all the way to a Kubernetes cluster using Helm.

#PWC2022 attracted nearly 375 attendees from 36 countries and 21 time zones, making it the biggest and best year yet. The highly engaging format featured 90 speakers and 6 tracks (including 80 talks and 4 tutorials), and took place virtually on March 21-25, 2022 on LoudSwarm by Six Feet Up. More information about the conference can be found at: https://2022.pythonwebconf.com

Speakers

Daniel Keler

SuperFrog & OpenShift QE DevOps Team Leader, Red Hat

Daniel Keler is a SuperFrog and is currently the OpenShift QE DevOps Team Leader at Red Hat. Prior to this, Daniel was the Automation Tech Lead at JFrog. It was 12 years ago that he moved from Brazil to Israel, where his life changed forever. It was then that Daniel fell in love with the world of tech creation, from code to cloud. He loves swimming, coding, playing guitar, containers, Muay Thai, lambdas and hiking (not necessarily in this order). Daniel is the proud father of a polyglot three-year-old butterfly and is married to the techiest geographer in the Middle East.

Video Transcript

So, I'm Daniel, and today I'm taking you on a journey of bringing our Python code to the cloud. Let me turn on my camera... oh, my camera is on. Cool. I'm a DevOps team leader at Red Hat, working in the OpenShift CNV department; OpenShift is Red Hat's enterprise offering for Kubernetes. Feel free to add me on LinkedIn, I'll be very happy to connect with you, and everything we'll be talking about today, all the code, will actually be available in my GitHub account. So let's move on, let's start.

What will we cover today? We'll bring our Python code to the cloud, and the best way to do that today is through pipelines. Here we'll actually be using a pipeline of pipelines, meaning we'll have three different pipelines that can be used independently, but that can also be connected together in order to accomplish the mission of bringing the application to the cloud. So let me go very quickly through the process, and then we can jump into some details.
Okay, we'll build our application, a Python application, and then we'll execute some tests. Once we have the tests in place, we'll publish our Python package to our registry; we have tar.gz packages and wheel packages that will be published there. In parallel, we'll create a container image for our application. Here we'll go through some best practices for creating this container image, and once we have the image we can push it to our private registry. Once it's in our private registry, we'll be able to scan this image, find the licenses inside the packages, and also find security vulnerabilities.

In parallel, we'll create a Helm chart. The Helm chart will contain all the manifests that will take our container to the cloud, to Kubernetes. So all the manifests are packaged into a Helm chart, and we'll validate this chart, these manifests, to check that everything is according to best practices. Once we have everything in place, we can publish it to our registry again, and then go ahead and deploy to the staging environment. Once the staging environment is in place we run more tests, and then we'll be able to deploy to production. Is that clear?
Cool, so let's start with the application. I started out using Flask, and then I thought, why not try something a bit different? And then I found FastAPI. It's also a Python framework, and I used it for an HTTP REST API application that exposes a few endpoints, like this. What you see here is the documentation that is automatically generated by the framework; it's Swagger documentation, and there is also ReDoc, which is another very nice documentation interface.

One of the big advantages of FastAPI that I found is that it uses ASGI instead of WSGI, and the main difference is that everything happens asynchronously, so it can definitely provide faster results and better performance. So why not use it? Also, one nice addition that comes out of the box is Pydantic, for modeling the responses and the way we put data into our database, for example. It also validates the data, which is very nice; it saves us a lot of time adding all sorts of validations. And as I said, we get the documentation out of the box, so it was a nice framework to use for this demo.
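As a rough illustration of the kind of FastAPI service described here, a minimal sketch (the endpoint and model names are mine, not the ones from the talk):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Pydantic model: FastAPI validates request payloads against it automatically.
class Item(BaseModel):
    name: str
    price: float

@app.get("/health")
async def health():
    # ASGI lets handlers run asynchronously.
    return {"status": "ok"}

@app.post("/items")
async def create_item(item: Item):
    # By the time we get here, the payload has already been validated.
    return item
```

Running it with `uvicorn main:app` serves the Swagger UI at /docs and the ReDoc UI at /redoc.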
Okay, now for the package manager. When we start dealing with dependencies, and at some point we want to create our Python package and send it to our friends or publish it to the world, I found Poetry to be a very good choice. I hope some of you have had the opportunity to use it.

Poetry, as I said, is a package and dependency manager. It makes our lives very easy when boilerplating new projects, for example, and when dealing with production and development dependencies: you can handle and separate them without any special mechanisms or tricks, it's already built into the package manager. Adding, updating and removing dependencies is very easy. As for packaging and publishing, as I said, we'll be publishing the package to Artifactory; Artifactory, by JFrog, will be our binary manager for this demo. We also have the lock mechanism in place, so it's very easy to keep things safe and have our dependencies well protected. And the virtual environment is handled out of the box as well. So let me show you.
Let's go ahead and create our first Poetry project. We tell Poetry to create a new project, so I'll create the PWC app, and this is how I create my first Poetry project. It's just a Python project, and inside it I can see the different files that were created for me. The structure goes like this: a README, my first Python module here, some tests, and the main file, at least at this stage, which is the pyproject.toml. It contains all the dependencies, the regular dependencies and the development dependencies, and all the different configurations that will be used to bring in these dependencies and also to publish. Let's take a look at how it looks.

So it looks like this. It was created automatically, but let's say I want to be a bit more specific while creating my first project with Poetry. Let's maybe remove this one and create another, so I can go to this folder and just do poetry init instead of poetry new. Now I get a very interactive way to create and declare my dependencies. Let's name it, this will be my first version, I think, and no description for now. Okay, now the license will be MIT; I can define the license, I can give a lot of different information for my project, and specify which Python version should be used.

Now I start dealing with my dependencies, first the main ones. Let's go through it: search for a package, and I'd like to start maybe with requests. Let's try requests, and yes, I found it. Cool, I'll just use the latest version, so pressing Enter again; I don't want to add another package. Now I'm handling the development dependencies; let's say I want to add pytest, and that's it. This way I have my Poetry file created, and then once I do poetry install, it creates my lock file as well.
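To recap the flow, a sketch of the Poetry commands used above (the project name is illustrative, and newer Poetry versions use `--group dev` instead of `--dev`):

```bash
poetry new pwc            # scaffold a new project with pyproject.toml
poetry init               # or: build pyproject.toml interactively
poetry add requests       # add a runtime dependency
poetry add --dev pytest   # add a development dependency
poetry install            # resolve dependencies and write poetry.lock
```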
Now, one more very interesting thing is that a virtual environment is already built into Poetry. For example, let's say I have a new file called main.py, and in it we just want to get some result from GitHub and bring back the status code. So this is where we do the magic.
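The script typed in the demo isn't fully legible in the transcript, but it is along these lines (a minimal sketch):

```python
import requests

# Fetch github.com and print the HTTP status code.
response = requests.get("https://github.com")
print(response.status_code)
```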
So when I have this first piece of code and I want to run it, I can just start it with python main.py, but I can also use poetry run, and then it will automatically use my virtual environment, according to my dependencies, the Python version I chose, and so on.

So that's it for Poetry. I had a very good time using this package manager. After finishing coding, it was very easy to create a package and configure Artifactory to be my private registry in order to publish my packages. It was very smooth, I would say; I really recommend it. Let's move on to containerizing our application.
So, for containerizing our application, there are a few things we need to keep in mind. One is to keep the images as slim and as clean as possible. I really recommend using the multi-stage pattern, if you've had a chance to write your own Dockerfile: with multi-stage builds you can use different stages inside the same Dockerfile, even with different base images for different steps, and at the end you can copy files from other stages into the last stage. My final image, the one created from the Dockerfile, will be the last stage, which eventually copies files from the other stages. This way I keep things simpler and smaller: I get fewer binaries, and once I use lighter base images for my Dockerfiles I get fewer layers, my container image becomes more portable, and eventually I get fewer vulnerabilities.
Okay, so one more thing I tried while preparing this demonstration was using Podman instead of Docker. Today a lot of people are trying to find alternatives to Docker; also, if I remember correctly, Kubernetes deprecated Docker as the runtime in version 1.20, I think, moving to the CRI standard. So yes, people are looking more and more for alternatives. There is no doubt that Docker started a big revolution in the industry, but things keep moving and the evolution needs to continue.

So I used Podman. Podman works the same as Docker, but it was actually created by Red Hat, and it does exactly the same thing while adding a lot of extra stuff. I want to show you a little bit about it. The commands with Podman are pretty much the same as with Docker; you can even make an alias from docker to podman, and then when you do docker ps you'll actually be running podman ps, and when you use docker images, I'll see these images here. Let's remove a few images; I'll be using Podman from now on, so podman rmi: we'll remove this one, and this one, and this one. Nice. So podman images again, and we're good.
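In shell terms, that drop-in compatibility looks roughly like this (a sketch):

```bash
alias docker=podman    # Podman's CLI mirrors Docker's
docker ps              # actually runs: podman ps
docker images          # actually runs: podman images
podman rmi <image-id>  # remove images you no longer need
```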
So let's say now that we want to create our Dockerfile, and we want to use the multi-stage pattern. Let's go there, and instead of calling it a Dockerfile, let's make it a bit more generic and call it a Containerfile. Let me remove the previous one and create our own Containerfile.

So here is my Containerfile. I can write it the same way I write Dockerfiles: FROM python:3.10, I just set a workdir called file, and then run a command that writes the string "this is my first file" into the first file. Cool, and that's it, this will be my first stage and my first Docker image, or rather, let's call it a container image.

Just like with Docker, I can do podman build -t, call this one my fat image, and give it this context here. I see that everything worked, all the steps; it created this image, and this is the hash for the image, the digest. Let's see its size... oh, it's almost a one-gigabyte image. I didn't do anything, I just used python:3.10, and I got an image of almost one gigabyte. So okay, let's try to go a bit further.
Let's actually create another stage. Let's name the first one builder, and I want to create another stage that, instead of plain python:3.10, uses the slim variant to try to make it a bit slimmer. I'll call it another-builder, it doesn't matter what I call it. I also set a workdir here, let's say src, and let's say I just want to run echo to create my main.py file. And that's it. So this will be the same image, going through the first stage and then the second one. Let's see what happens; but now I don't want to call it fat anymore, I'll tag it slim. Now I see that the first part was executed, and then the second one. Let's compare: I have the fat one, which is almost one gigabyte, and I actually managed to shrink it a lot; I have the same Python 3.10, but slim, and now it's 122 megabytes. We're getting better, but I think we can do even better.

So let's create one more stage, which will use the same Python 3.10, but this time on Alpine. With Alpine I also want my workdir, and what I'm going to do is use the files I created in the previous stages. Let's say I want to COPY --from=builder, going into the directory called file, and take my first file. And let's also take from the second stage: COPY --from=another-builder, that would be src, and then main.py. So now I have a third stage that copies this file from here, and this file from here. At the end, let's say we just want to serve these files; in order to see them, I'll use the http.server module and just add this port here. Cool, now let's build it again; I had slim, and now I'll tag it slimmer, an even slimmer image. Let's see how it goes... okay, it looks like we forgot to add .py here, and also my first file here; it will be relative to this working directory. So let's fix that and build again.

Cool, and now I can see that my image is 50 megabytes instead of 122, and I'm still using the same Python 3.10. So there are a few ways to make the image smaller and smaller, and again, once we have smaller images we have fewer binaries, fewer layers, more portable images, and fewer vulnerabilities.
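Put together, the Containerfile built up during the demo looks roughly like this (a reconstruction; the exact file names, stage names and port are assumptions):

```dockerfile
FROM python:3.10 AS builder
WORKDIR /file
RUN echo "this is my first file" > first_file

FROM python:3.10-slim AS another-builder
WORKDIR /src
RUN echo "print('hello')" > main.py

# Final stage: only the files copied in below end up in the image.
FROM python:3.10-alpine
WORKDIR /app
COPY --from=builder /file/first_file .
COPY --from=another-builder /src/main.py .
CMD ["python", "-m", "http.server", "8888"]
```

Each `podman build -t <tag> .` picks this file up automatically, and the resulting image contains only the final stage.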
So how do we actually run this container? We have the image, let's take the slimmer one, and we can run it in the background: let's publish the container port to a host port, pass an environment variable, for example for staging, and give it a name, let's call it my-server. Cool, and then we give it the image. Now I can see my running container here; just like with Docker, I executed this image and created a container out of it, with this name, and this command. And you can see it serves a directory listing with my main.py file.
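As a sketch, the run command is along these lines (the port mapping, variable and names are assumptions):

```bash
podman run -d -p 8888:8888 -e ENV=staging --name my-server slimmer
podman ps   # the container is up, serving the directory listing
```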
Okay. Now, one more way Podman helps us is by going even closer to Kubernetes: it brings us not only to containers but also to the pod concept that is used in Kubernetes. For example, let's say that from this container I want to create a pod definition, a pod manifest. I have this container here, and Podman has a whole, let's call it a namespace, of commands related to pods; you can see I can do a lot of pod-related things inside Podman itself. So let's generate a kube-like pod out of my server container: what I'm doing here is using podman generate kube to produce a Kubernetes-style manifest for my container. This generates the manifest for the pod, with all the information the container has, just translated into this YAML file; I can even save it, let's say for my production environment.

And one more thing Podman helps with: we just did podman generate kube with the name of the container, and now I want to play it, like Kubernetes, to simulate Kubernetes, and run my pod inside Podman. The port is already bound because it's already being used, so let's remove that container, just like in Docker we remove the container, and now my pod is there: podman pod ps, and I can see that my pod was created.

So this is how Podman can not only be an alternative to Docker, but can take us even further, to the world of pods, Kubernetes and the cloud. That's it for Podman.
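The pod round trip described above corresponds to commands like these (container and file names are assumptions):

```bash
podman generate kube my-server > pod.yaml   # emit a Kubernetes Pod manifest
podman rm -f my-server                      # free the port the pod will use
podman play kube pod.yaml                   # run the manifest with Podman
podman pod ps                               # confirm the pod is running
```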
Let's move on. Cool, so now we have our application, we've taken care of the dependencies, we've packaged them, and we've created our container; now we want to create our Helm chart. So how can Helm help us here? I just showed you how the pod manifest was created. Together with that manifest we will eventually have a lot of different manifests, and we need a way to put them together and even make their content dynamic. Helm makes it easy to find and share our application together with its dependencies, and makes all these manifests available as a package. You can even give it a version, not only for the application but also for the Helm chart itself. So you create the chart, you give it a version, you can share it, you can publish it anywhere. You can see down here how a Helm chart looks after adding the different values taken from another file; the values here were filled in dynamically, and I'll show you that in one minute.

I also want to mention the release lifecycle. With Helm it's much easier not only to release, to create new versions, and to roll your application out to Kubernetes, but even to have a revision for each of those actions. You can manage the dependencies inside the chart, and you get the history of the releases as well. Let me show you how it goes.
Okay, let's see how Helm works and what the anatomy of a Helm chart actually is. Let's say we want to go to our PWC project and create our first chart. Let's do helm create chart; it's not helm new, it's helm create, sorry. And now I've just created my first chart, that's it. Let's see how it looks: I now have this folder called chart, and inside it I can see the files that were automatically generated for me.

So let's see what we have here. We have the Chart.yaml, which contains the general information about the chart, including its version, dependencies, description, license and everything. We have the charts folder, where all the dependencies will be stored; I can keep all my dependencies there. And we have templates; the templates folder is where the magic happens, I would say, because all the values we have in the values YAML will populate these manifests, and you can even handle some logic with the .tpl files, the template helper files.
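The scaffold that `helm create` lays out looks like this (abridged):

```bash
helm create chart
# chart/
# ├── Chart.yaml     # chart metadata: name, version, dependencies
# ├── charts/        # bundled dependency charts
# ├── templates/     # the manifests, written as Go templates
# └── values.yaml    # default values injected into the templates
```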
Let's take a look at the deployment, for example. We go into chart, then, sorry, templates, and then deployment. You see that the deployment is like any regular Kubernetes manifest, in this case a Deployment, and it includes a lot of different variables. These variables use the Go template engine; because Helm is written in Go, it's very natural to have Go templating here to pull these results in from the values file. So I have here all the definitions that I need. They can come from different objects, for example from the Chart object itself, or from the values file, so all the content of my manifests becomes very, very dynamic.
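For illustration, the generated templates/deployment.yaml contains templating along these lines (a simplified excerpt, not the full generated file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}  # taken from the Chart object
          # taken from the values file:
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```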
And if I just want to see what comes out of this template, even just for troubleshooting or to see how it works, I can do helm template chart. I can template this chart, and it shows me all the manifests that will actually be used for creating my deployment, for creating my application. Now I have the deployment, and instead of all kinds of variables, we have the actual values here. The values come from the Chart object, but also from the values file. Let's take a look at the values file: it's just a YAML file with different parameters and variables, and everything can be pulled out of it. Of course, you can use the same Helm chart with different values files, and that's where the values become totally dynamic: for each environment you can use different values, for example.
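A values file fragment might look like this (an illustrative sketch; the image name and the per-environment file name are assumptions):

```yaml
# values.yaml — overridden per environment, e.g.:
#   helm template chart -f values-staging.yaml
replicaCount: 1
image:
  repository: myregistry/pwc-app
  tag: "1.0.0"
```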
So this is Helm. It's very easy to create your first Helm chart, it works out of the box, and again, it's very easy to manage releases too. By default it will use the rolling update strategy; it depends on what you choose, but in general you roll out a new version of your application, and until the new pod is able to accept traffic and connections, the previous one will not go down. Only once the application is ready are the previous pods taken down. That's the rolling update strategy; you have other strategies as well, of course, such as blue-green and other mechanisms.
So that's Helm. Now, how do we put everything together? We have the containers, the charts, the packages and everything, and we want everything to go to the cloud; I mean, everything should happen automatically, and everything should go to the cloud. But we need to take into account that if we want to add a piece of code and have that code reach production automatically, we have to keep in mind how important the tests are, and check everything before we put anything into production. Here we need to consider the whole security part, of course, and compliance: whether my company allows this or that kind of license in my third-party libraries. I need to run all the relevant tests, all of them: unit, frontend, backend, and benchmarks. Everything should be part of the pipeline.

One important thing is to keep staging as close to production as possible, so you can be very sure that what works in staging will very likely work well in production. Also, misconfigurations: during the generation of these manifests and everything around it, a lot of different misconfigurations can happen. Today there are different tools that can go through the manifests and find things that aren't really in line with best practices. Use these tools, use as many as you can, to make sure the code you put into production works well.

So here, for example, you see we use different labels for staging and production; everything happens exactly the same way, you just use different labels. In my example here I'm actually using two different clusters, one for staging and another for production, with just different labels. There are other approaches that put them in the same cluster; personally, I tend not to go that way. I prefer to have production and staging in different clusters, so if anything bad happens in staging, it won't affect production. But there are approaches that handle it with affinities, tolerations, taints and that kind of thing; that's not in the scope of this talk.
So how do I put all of this together? There are many different technologies and things happening, and I need something that will actually let me put them together and organize my pipeline. So here I have the JFrog Platform, which I used for my demonstration, and I'm using JFrog Pipelines to organize everything. And remember, it's a pipeline of pipelines: I have different pipelines that are used together; they can be executed completely independently in parts, but they are also executed together in order to accomplish the mission of bringing the code to the cloud.

I'm using Artifactory, which serves as my main package hub, because when I bring in third-party libraries to build my Python package, all those libraries stay in Artifactory. My own package, once I build it, goes to Artifactory at the end of the day; when I create my container image, I push it to Artifactory, and it's the same for the Helm chart. So everything is kept in my Artifactory instance, and this way I keep myself totally safe and totally sure that even if the internet is down tomorrow, I still have my dependencies with me, I can still build my project, I can still deliver my software. So all of that happens in Artifactory.

Then there's Xray, which is a very good way to keep all these binaries safe, meaning that everything I have in Artifactory, my first-party software, my third-party software, my images, everything is continuously being scanned and continuously bringing me information about licenses and vulnerabilities.
Okay, so let's see how the JFrog Platform actually helped me here. Let me log in. Here I am; this is the JFrog Platform, and here are my packages. So how did I configure everything to work with the different integrations I'm using? One reason to use JFrog Pipelines is that, like Python, JFrog Pipelines comes with all the batteries included, I would say. Dealing with Python packages, images, Helm, publishing, packaging and deploying all happens in very familiar steps inside the pipelines, and they are very easy to use; I'll show you in the next few minutes.

So let's move on and see how I actually configured the JFrog Platform. Here we have Artifactory, which is holding all my packages. I also configured Xray: in Xray I created a policy that gives me feedback about critical security vulnerabilities, so it will keep an eye on all the vulnerabilities with critical severity. To use this policy, I created a watch; the watch tells which resources I want to watch and which policy will be applied to each resource. In this case it's a repository; it can be a repository, it can be a build, it doesn't matter. Here I can see that my builds are configured and they're using this policy. That's how I configured Xray. I also selected resources for indexing, so all these repositories here are continuously scanned by Xray.

I have my Docker repository here, my Python repository as well, and my builds. In Artifactory I created different repositories for different purposes: as I said, I have my Docker images here, all my Python, my PyPI-like modules, and also my Helm chart packages. Everything stays here with me: everything I bring from outside and everything I create myself, it all stays here.

So that's Xray; let's go to the pipelines. For the pipelines I created these integrations: of course I have the Artifactory integration, where all my binaries are handled; I have my Git repository, the one I shared with you, this one here; I have my staging Kubernetes cluster; and I have my production-grade OpenShift cluster as well, which I'll use for production.
Now let's see how this pipeline is actually built. Let's go there and take a look. Cool, so here I have the packaging part, and I can show you what's happening in this first part. As I said, it's a pipeline of pipelines, and you can see these three steps: in the first one I package my application, in this one I create the image, and here I deploy to production. So in the first one I just build my application, and then I execute different tests; let me make it a bit bigger. I run different tests in parallel, and then I publish my app to Artifactory. All the code is here: we can see that the pipeline code is a YAML file that contains all the different steps, so handling a Helm chart looks just like this, uploading artifacts looks just like this, and Docker build and Docker push are all here.
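As a rough sketch of what such a JFrog Pipelines YAML with native steps can look like (resource, integration and repository names here are illustrative, not the actual file from the talk):

```yaml
pipelines:
  - name: build_image
    steps:
      - name: build
        type: DockerBuild
        configuration:
          affinityGroup: dbp_group          # share state with the push step
          dockerFileLocation: .
          dockerFileName: Containerfile
          dockerImageName: myregistry.jfrog.io/docker-local/pwc-app
          dockerImageTag: ${run_number}
          inputResources:
            - name: app_repo                # a GitRepo resource
          integrations:
            - name: artifactory             # an Artifactory integration
      - name: push
        type: DockerPush
        configuration:
          affinityGroup: dbp_group
          targetRepository: docker-local
          inputSteps:
            - name: build
          integrations:
            - name: artifactory
```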
Very easy to do. Now let's jump to the next step: if I go to the image part, this is another pipeline that will create the image, push the image to Artifactory, validate the Helm chart for me, and then publish the Helm chart and run the security and compliance checks. And then I have the last one, the last step, which actually deploys my application. I go to the pipeline and see my staging deploy; then I run a few more tests, benchmark and integration tests and other important tests, and then I'm able to deploy to production.