JFrog Artifactory: Cloud Native GitOps [swampUP 2020]

Ayrat Khayretdinov, CNCF Ambassador, GDG, GDE; Kyle Bassett, Founding Partner at Arctiq

July 7, 2020


Adopting a microservices architecture provides many benefits: scalability, faster releases, and polyglot application development. However, it also means more services to keep alive; more complex testing, monitoring, debugging, and networking interactions; and an increased risk of cascading failures during outages. In this talk, we will explore how Anthos Service Mesh (ASM) and JFrog Artifactory can help you build reliability into your microservices applications. First, we’ll cover how adopting a service mesh can help you decouple the network from your application code, giving you client-side load balancing, increased consistency, and fine-grained controls out of the box. Then, through demos, we will walk through a CI/CD pipeline with Tekton, JFrog Artifactory, and Argo CD, backed by the capabilities of Knative and the Istio service mesh, to demonstrate the overall Anthos experience in hybrid and multi-cloud scenarios.

Video Transcript

Hi everyone, thanks for joining us today at swampUP 2020. I’m super excited to be here and talk about cloud native GitOps with Anthos and JFrog Artifactory. I’m Archy, a hybrid cloud specialist at Google, and as you can see from the cloud suit in my background, I’m a cloud native ambassador. As a CNCF Ambassador I’m actively engaged in promoting community efforts and cloud native as an organizer of Kubernetes meetups, and this year I’m helping to organize the first Kubernetes Days in Canada. If you’d like to join us as a sponsor, speaker, or attendee, hit me up on Twitter. With that, I’d like to welcome my co-speaker today, Kyle Bassett from Arctiq. Kyle?
Alright, thanks for the introduction, Archy. I’m excited to be here to present at the JFrog swampUP 2020 event — virtual this year, obviously, but we’re making the best of it. I brought my friend here with me to keep me company during this presentation. So, a quick introduction: I’m Kyle Bassett, one of the co-founders and partners at Arctiq. We do a lot of services around DevOps and Kubernetes, helping customers on their transformation journey. I’m Canadian, and we’ve also been very lucky to be an Anthos design partner with Google from day one. Much of this presentation today is going to be a demo: I’m going to show off some of the things my team — especially Daniel — has been building, we’re going to show you the CI/CD aspects of it, and we’re going to leverage some of the JFrog toolkit. So let’s get into it.
I’ll give you a quick breakdown of the workflow and then we’ll get into the demo. One of the things we feel pretty strongly about is the roles and responsibilities of developers and operators. We feel that the operations team, in their own best interest, should be able to automate all the infrastructure aspects of their Kubernetes environments and also the CI/CD flow that goes along with them. That way developers can focus on what they do best: write code. That code should drive automated builds and automated testing so developers get instant feedback. The other side effect is that the operations team benefits from consistency across the environment; any policies and procedures they want implemented, they can implement, and the development teams will start to follow the rules based on the feedback they get from their unit testing and automated testing. Everybody is happy, and we get more results for the business and quicker releases out the door.
In this demo we’re going to get into a real, live example and show you some workflows. The base tools we’re going to use are Anthos as the platform, running in GCP on GKE; GitLab CI; Argo CD; Helm templates for the deployment aspects; JFrog Artifactory as the repository for the images and Helm templates; and JFrog Xray to make sure our code is safe and intact. The basic use case: a developer checks in some code, which kicks off a trigger; GitLab runs through and does Code Climate checks, builds with Kaniko and Jib, pushes to the JFrog Artifactory repository, and runs JFrog Xray. In the end Argo CD kicks in and we can deploy to our dev clusters and our production clusters; we’re leveraging different branches for that strategy. We’ve also got Open Policy Agent enabled, and Anthos Config Management is looking after the configuration of all these clusters. We’re leveraging a GitOps strategy, so all of our code is in Git; when it gets committed it’s pushed out to the appropriate registries, and everything is automated.
Alright, let’s get into the demo. Here in our GCP console you can see we’ve got an Anthos GKE cluster deployed, referred to as dev, and we’ve also got our production cluster deployed. We’ll show you the environments with kubectx: you can see our contexts here — this is our production environment (you can see it’s got the iceberg tag on it) and there’s our dev tag for the demo. Have a look at our namespaces: we’ve got an Argo CD namespace, a bank namespace, our Anthos Config Management namespace, Gatekeeper, and a few other things in here. Looking at the pods in our config-management namespace, you can see we’ve got an importer, a monitor, and a syncer — these are the pods that look after pulling the config from Git. You can also see we’ve got Gatekeeper deployed in the cluster; Gatekeeper looks after all the policy management, the policies and rule sets that we want to push down to the clusters.
So let’s have a look at the GitLab repo we’ve cloned here. We’ve got our dev cluster YAML, our production cluster YAML, and some selectors. In this case we’re calling the dev environment anthos-dev — this is our dev environment, and you can see the cluster selector with an annotation here for it. We’ve got a production one that works the same way: you’ll see an annotation for the selector for production.
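If you haven’t seen Anthos Config Management cluster selectors before, here is a minimal sketch of what those manifests typically look like; the selector and namespace names are illustrative, not the exact ones from the demo.

```yaml
# Illustrative ACM ClusterSelector: matches clusters labeled environment=dev.
apiVersion: configmanagement.gke.io/v1
kind: ClusterSelector
metadata:
  name: selector-env-dev
spec:
  selector:
    matchLabels:
      environment: dev
---
# A resource scoped to dev clusters only, via the cluster-selector annotation.
apiVersion: v1
kind: Namespace
metadata:
  name: bank
  annotations:
    configmanagement.gke.io/cluster-selector: selector-env-dev
```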
One thing I’ll show you is around the policy management. First we’ll do a test run of a container: you can see we’ve got this debug pod and we’ve been able to create it. Now let’s push out a policy using Gatekeeper. It’s a simple YAML file; we’ll save it, and the big thing here, as you can see, is that we’re referencing the Artifactory endpoint — this says we’re only allowed to pull images from our secure Artifactory registry in the bank namespace. Let’s commit this into our repo, and that pushes the rule out to all of our clusters. We’ll delete that pod, since it’s already deployed, and then try to deploy the exact same pod again. What happens is our Gatekeeper policy kicks in and says we’re not allowed to do it — you can see it’s denied, because we’re not pulling from the appropriate registry; we’re trying to pull from Docker Hub.
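A constraint along these lines can be expressed with the K8sAllowedRepos constraint kind from the open-source Gatekeeper policy library; the sketch below is an assumption of the shape of the demo’s policy, with a placeholder Artifactory hostname.

```yaml
# Hypothetical allowed-repos constraint: pods in the bank namespace may only
# pull images whose repository starts with the Artifactory registry prefix.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: bank-allowed-repos
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["bank"]
  parameters:
    repos:
      - "artifactory.example.com/docker-local/"
```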
Now let’s talk about the application we’re deploying. This is the Bank of Anthos sample application, a mix of Python and Java services. You can see it’s got a load generator and a frontend, some different services — a ledger, a balance reader, transaction history — and a couple of PostgreSQL databases looking after our state. What we’ve done is clone this into our GitLab infrastructure. To give you a quick view of the application: we log in as a test user and we can send some payments. We’ll send a payment to Bob and give Bob some money; that sends the payment to Bob, reduces our balance, and stores it in the database.
Let’s go back into our repo. You can see we’re declaring that we want an Argo CD namespace and our bank namespace; Anthos Config Management is looking after all of these aspects and pushes this to every cluster that’s being managed — there’s the namespace repo. If we look under the Argo CD folder we’ve got a few different things: the deployment YAML, the app-dev and app-prod definitions that reference the respective dev and production environments, and our ingress configuration with the namespaces we want configured, which we showed you earlier. Have a quick look at the YAML: this is our dev entry, the URL that we’re using, listening on port 80 — it’s a virtual service — and we also define one for our production environment. You can see we’ve got a selector here that says environment: dev, which matches our dev URL, and a prod one that matches our prod URL, so in this case we have configuration for both environments.
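As a rough illustration of the kind of virtual service entry being described — the hostname, gateway, and service names below are placeholders, not the demo’s actual values:

```yaml
# Sketch of an Istio VirtualService routing the dev hostname to the frontend
# service on port 80; all names are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-dev
  namespace: bank
spec:
  hosts:
    - dev.bank.example.com
  gateways:
    - istio-system/ingress-gateway
  http:
    - route:
        - destination:
            host: frontend
            port:
              number: 80
```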
We’re also using Argo CD: you can see we’ve got one instance running in dev, and we’ll also have one running in the production cluster. Argo CD looks after all the syncing of the application aspects; we’ve got the Helm templates and how the applications are enabled. Then, if we look back in our config management repo, we can look at this YAML configuration. Let’s look at the dev one: you can see we’ve got our annotation here, and this one’s for dev. Then there’s the image tag I want to pull — you can see the ID right there — the value for the dev URL, our secure repo pointing at the Helm templates, and our target revision.
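Put together, an Argo CD Application of the shape being described might look roughly like this; the repository URL, chart name, and parameter names are assumptions for illustration, not the demo’s exact values.

```yaml
# Hypothetical Argo CD Application pulling a Helm chart from Artifactory and
# overriding the image tag and hostname that the pipeline stages per environment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bank-dev
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: bank
  source:
    repoURL: https://artifactory.example.com/artifactory/helm-local
    chart: bank-of-anthos
    targetRevision: 1.0.7
    helm:
      parameters:
        - name: image.tag
          value: "abc1234"              # short commit SHA set by CI
        - name: frontend.hostname
          value: dev.bank.example.com   # dev URL
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```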
Let’s log in to the JFrog Platform and have a look. Here we can look at our artifacts: all our Helm templates are stored in here, in different versions. Let’s search for the bank ones — there are all the bank versions of our Helm templates, and there’s our latest. We’ve also got our Docker repo in here; these are all our Docker images, and this is the secure repo we’ve approved with Gatekeeper. If you look at the accounts DB, you can see all the different image tags we’ve got, and they’re matched to the SHAs in GitLab.
Jump back into GitLab: we’re going to simulate a merge request and show you how that would work. In this case we’ll have a look at our source repository and pull up the appropriate branch. You can see we’ve got this arctiq-feature branch and our master branch — that’s the one linked to production. We’re going to put a new feature in here, push it to our dev cluster, and then eventually push it to master once it’s been approved and gone through our workflow. Go back to merge requests, create a merge request, and pick the branch: arctiq-feature, pushed to dev. You can do a quick compare on these, and we’ll add a message. Let’s turn this into Bank of Arctiq — we’ve got Bank of Anthos, and with a small code change we can have a Bank of Arctiq, why not? We’ve got our Kubernetes cluster, so we might as well have a bank. We don’t want to delete the source branch, so we’ll go ahead and submit the merge request.
Now, as soon as we submit the merge request, you can see a pipeline has been kicked off. If we dive into the pipeline you get to see the workflow a little bit. The first thing it does is code quality; then it does the build, which runs through a bunch of different steps — you can see it’s building the different containers and going through the Docker images for all of them. It also uses Jib, a tool provided by Google for building containers. Then, in the third stage, we stage our Helm parameters in our ACM repo.
Let’s quickly show the ACM repo. We’ve got a pipeline staging folder here, and in it a YAML file that’s a skeleton: you can see we’re providing environment variables, and anything for dev gets this annotation, while prod has a separate annotation that I showed you earlier. The namespace we target is bank, there’s the server, and the parameters we want to leverage are the image tag and the hostname — you can see we variable-ize those. Back in our pipeline, here’s our image ID; it’s in GitLab, so it comes in as an environment variable, and our frontend hostname also comes in based on that annotation. If we go into our pipeline settings we can look at the variables — they’re all set in here: our frontend URL for dev, the URL for prod, all the different variables we want. We showed you that earlier; it’s also managed by the ingress configuration in Anthos Config Management.
So let’s log in to Artifactory, and you can see all our different tar files there. Go back and have a quick look at our pipeline: it’s running and making its way — it’s done its code quality and it’s starting to build the images; you can see it’s going through, and most of the images are built.
We’ll show you how this pipeline is configured. If we go back and look at our repo, this pipeline actually comes out of another, separate repo, and you can see the different stages: code quality, build, stage-helm, helm-deploy-to-dev, and helm-deploy-to-prod, along with the straightforward code that looks after the pipeline — you can see the deploy-to-prod and deploy-to-dev jobs in there. So that’s the workflow. The build stage is done and the Helm parameters are staged, so the CI process is done at this point.
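For orientation, a .gitlab-ci.yml implementing the stage layout just described could look roughly like the sketch below; the job names, container images, repository variables, and helper script are placeholders rather than the pipeline’s actual contents.

```yaml
# Assumed stage layout for a pipeline like the one in the demo.
stages:
  - code-quality
  - build
  - stage-helm
  - deploy-dev
  - deploy-prod

build-frontend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Build and push the frontend image to the Artifactory Docker registry.
    - /kaniko/executor
      --context "$CI_PROJECT_DIR/src/frontend"
      --destination "$ARTIFACTORY_DOCKER_REPO/frontend:$CI_COMMIT_SHORT_SHA"

deploy-dev:
  stage: deploy-dev
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
  script:
    # Hypothetical helper that commits the new image tag and hostname into the
    # ACM/Argo CD repo so the GitOps sync picks them up.
    - ./scripts/update-helm-values.sh dev "$CI_COMMIT_SHORT_SHA"
```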
Now, if I were a release manager, I would go ahead and merge this, provided I feel it’s in a good state and it has passed all my pipeline checks, so I trust it. I’ll merge it, and you can see another pipeline kicks off. This one has a single job: a Helm deploy to dev. We can dive into that and watch in real time what’s going on while it runs. Now let’s look at the Argo CD environment and watch the synchronization. Here’s all the configuration and deployments; you can see it’s synced, but 21 hours ago was the last sync. If we go into the namespace under Argo CD, you can see the YAML got updated 24 seconds ago. Diving in, you see the image tag — the value ending in 07 — and the hostname, which is the same. Go back to Argo CD, and now you can see it’s been updated as of a few seconds ago. If we keep an eye on a watch command looking at our Kubernetes environment and our application, you can see that some pods are getting terminated and recreated based on the new images.
If we go back to the JFrog environment, we can sort by last modified, pick the image, and have a look at its Xray status — the Xray data shows the security vulnerabilities it’s calling out in this image and any licenses we should be considering.
Now, moving on from dev: you can see our dev environment has been updated — we now have a Bank of Arctiq instead of a Bank of Anthos. Now that we’ve tested this, it’s time to push it out to production. Back in GitLab, we go to merge requests again and create a new merge request, this time from dev to prod, since we’ve done our testing. We don’t want to delete the source branch; go ahead and submit the merge request. It’s a green check, so we can merge it. As you can see, that kicks off another pipeline — this is the helm-deploy-to-prod pipeline. We’ve already got our artifacts, and we’re going to push them out to our production environment.
Now, if we go back and look at Argo CD, you can see the dev instance synced five minutes ago, while prod still has the old image tag — looking at the prod Argo CD, it hasn’t synced for 21 hours. Our pipeline is running here; you can see it’s deploying artifacts and finishing up, and now this should get picked up: it goes out of sync and does a quick synchronize. This will refresh all those pods in the production environment and redeploy our application in production. To the right you can see the containers getting killed and respawning off the new image — they’ve been up for five seconds — and prod is up and running, so prod will very shortly be in the same state as dev. If we log into our prod environment, you can see it’s Bank of Arctiq now instead of Bank of Anthos, and all our data is intact.
So you got to see the workflow and how it works. This is one way to do it — obviously you could put in more checks and balances and add more security — but it really shows the power of what we’re leveraging: those Kubernetes clusters, with Anthos Config Management managing all the environment aspects, some really good tools in GitLab and Argo CD, and obviously the Artifactory toolkit and its security scanning capabilities. As a developer I’m able to just focus on writing code and checking it in; operators are building the rule sets for me and helping make my life easier, and at the same time we’ve got secure infrastructure that’s reliable, so we can push code out in a reliable fashion and get those new releases to market. I hope you enjoyed the demo. I’ll throw things back to Archy to close things off. If you have any questions, feel free to reach out to us, and we’ll definitely put out some more content. My colleague Daniel did a great job putting this together — I hope you enjoyed it. Thank you.
Awesome, thanks Kyle. Back to our presentation: cloud native GitOps with Anthos and JFrog Artifactory. Since we are at a JFrog conference, we don’t need to explain what Artifactory is; however, you might be interested to learn more about GitOps, cloud native, and Anthos, and how this stack can work together with JFrog. First, a few words about cloud native and the CNCF. It all started with Google open sourcing and donating their internal container scheduling system, known as Kubernetes, to the CNCF. Since then the CNCF has grown into a huge ecosystem of projects that integrate nicely with each other. Right now the CNCF hosts and supports ten graduated, six incubating, and many more sandbox open source projects; you can check out the latest state of the CNCF ecosystem at landscape.cncf.io. In today’s demo we’ll be using some of the CNCF projects that are highlighted with the green check boxes. Finally, Google remains a top contributor to the CNCF and continues to support the foundation; for instance, this year’s Google Summer of Code internship will invite students to help contribute to open source software development, including Kubernetes.
OK, now let’s talk about GitOps. As Kelsey said, Kubernetes is the new SSH, and it’s becoming the standard platform for deploying software. Users want a better way of deploying code to Kubernetes natively, safely, securely, and reliably, so GitOps became a hot topic of 2020 and there is huge demand for it. GitOps is a new paradigm, a new set of principles built on core DevOps best practices around continuous software delivery. It is essentially a set of practices that use Git pull requests to manage infrastructure config, policy, and application configuration — that’s why we say “continuous everything” here. The first principle of GitOps is to specify the intended state with declarative configuration; since we deploy code to Kubernetes, these are our YAML manifests, which could be Kustomize overlays or Helm charts. The second principle: the desired state is defined and versioned in Git, so Git is treated as the single source of truth, and all changes to the configuration can be applied automatically using the pull request mechanism. Most probably you’ll be saying, “wait, we had this before, so what’s the difference?” Well, I hope the final principle can clarify what I mean.
When I say GitOps, Kubernetes is our GitOps runtime that pull-syncs the Kubernetes state from Git. On top of that there are GitOps agents — operators, or as some call them, controllers. These controllers deploy and reconcile the state from Git, and they also ensure that no configuration drift is present. Here’s a simple scenario of a GitOps continuous delivery pipeline. Step one: a developer commits code to GitHub. Step two: the code gets reviewed and tested, and then merged in the subsequent pipeline stage. Step three: the GitOps agent applies the changes if they differ from the Kubernetes cluster state. It’s worth noting that agents like Argo CD can not only apply Kubernetes manifests, they can also apply your Helm charts or Kustomize.
From what we’ve discussed so far, it is quite obvious to see the advantages of GitOps. First, consider the deployment options available today: as a developer, would you rather deploy code with a third-party CD tool or simply follow the Git merge workflow? Or ask your operations team whether they prefer a Git workflow rather than using Ansible, Bash, or Makefiles to deploy Kubernetes code through a traditional CD system. Second, increased speed of software deployment without making things complex: controllers can apply and revert changes within seconds. Third, Git as a single source of truth brings consistency and standards across config and application deployment — there is no more sprawl. There are other advantages listed here, like reliability, security, and visibility, that users gain by using GitOps.
So where can you apply GitOps today? First, you can use it for continuous delivery: you can deploy your cloud native apps like Prometheus, Jaeger, and others, you can deploy your microservices, and people are even using it for MLOps. Then you can use it for continuous operations and apply configuration to all your Kubernetes clusters to prevent configuration sprawl. You can use it for continuous security and apply your policies and compliance rules. And finally, perhaps something new to many of us, you can deploy cloud infrastructure in the GitOps fashion.
Now let’s get into the middle of our presentation and talk about flowers — or anthos, in Greek. Anthos is Google Cloud’s open platform for application modernization that gives you the flexibility to run your applications anywhere. Embracing open standards, Anthos lets you run your applications unmodified on existing on-prem hardware investments or in the public cloud. Anthos is a hundred percent software-based solution, and most importantly, it leverages open APIs such as Kubernetes, Istio, and Knative, giving you the freedom to modernize any place, at any time, at your own pace. Anthos consists of a set of services developed to ensure that platform admins, DevOps and DevSecOps engineers, and developers have a successful cloud native journey. These services cover application modernization, hybrid and multi-cloud operations and management, and even VM-to-Kubernetes migration. With Anthos you can deploy Google Kubernetes Engine not only on GCP but also on-prem, on AWS, and, later this year, on Azure — so you can end up with GKE clusters running everywhere. And now that you can run GKE clusters, and bring your own clusters, everywhere, you need a way to centrally manage and upgrade them.
This is where Anthos cluster management comes into place: it provides a single pane of glass view of all your Kubernetes clusters and workloads across clouds and on-premises. For example, in partnership with JFrog we’re enabling hybrid, multi-cloud DevOps on Anthos: in this architecture diagram, a cloud native hybrid CI/CD platform built with JFrog Artifactory and Anthos GKE allows us to deploy both on-prem and in the clouds, and now you can even deploy .NET workloads with Anthos and JFrog.
On top of cluster management sits the service management layer, or Anthos Service Mesh (ASM) for short. It’s a managed service mesh offering to connect, manage, and secure VMs and containers. ASM is built on top of Istio’s open APIs; it lets you easily create a network of deployed services and offers out-of-the-box telemetry and uniform observability. Anthos Service Mesh transparently deploys lightweight Envoy sidecar proxies throughout the environment, spanning VMs and containers. These proxies enforce uniform policies for all communication between services without requiring code or application changes. With a service mesh, developers no longer have to hard-code configurations for traffic management or authentication into the application like they did before; they can continue to ship new features without having to worry about security and operations.
Finally, the application development layer provides solutions in the Anthos Marketplace from Google and our partners; for example, in the Anthos Marketplace you can find and deploy JFrog Artifactory Enterprise on any cloud provider or on-premises. The application development layer also includes Anthos’ ability to run serverless applications via Cloud Run for Anthos, based on the Knative APIs, which provides an easy way to deploy services on Kubernetes and adds capabilities such as advanced load balancing and autoscaling on top of Kubernetes.
Operations management simply brings observability to your apps and services. However, the main focus of our presentation today is Anthos Config Management, or ACM. Anthos Config Management is a multi-cluster configuration manager that helps you keep consistent configuration and policies across your clusters, whether they are on-prem or in the cloud. It fully embraces the GitOps style: it uses a Git repository to centrally set and enforce policies and compliance for security, apps, and infrastructure. Once you declare a new desired state, ACM will apply it and will continuously check for changes or drift. ACM can easily be plugged into your existing CI/CD system, as it’s not meant to do the CD part; it’s really focused on your configs and policies.
ACM covers three GitOps use cases. The first is continuous infrastructure with Config Connector. ACM Config Connector is a Kubernetes add-on that allows you to manage your Google Cloud resources through Kubernetes configuration — yes, we can deploy anything from Spanner to even GKE using Config Connector; right now it only works with Google Cloud. Here we have an example of a Cloud SQL MySQL database manifest that will be automatically synced as soon as it is committed to the Git repository. It’s pretty dope, right? I’m just remembering my DevOps days building Terraform modules, and this looks so much easier to me right now.
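For reference, a Config Connector manifest along the lines of the one on the slide might look roughly like this; the instance name, region, and tier are illustrative, not the values from the presentation.

```yaml
# Hypothetical Config Connector SQLInstance: committing this to the ACM repo
# would provision (and reconcile) a Cloud SQL MySQL instance.
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: bank-db
spec:
  databaseVersion: MYSQL_5_7
  region: us-central1
  settings:
    tier: db-n1-standard-1
```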
The second GitOps use case is continuous operations with Config Sync. Config Sync enables configuration as code: it synchronizes the state of your clusters with your Git repository. As Kelsey said, among the day-two concerns for Kubernetes users, managing cluster-level configs is still a challenge; but with ACM Config Sync you can control Kubernetes cluster config sprawl and make it auditable. In this example, ACM is deploying Kubernetes role-based access control rules and a set of namespaces across clusters in different environments.
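As an illustration of the kind of cluster-scoped config Config Sync can distribute (the group and namespace names below are placeholders, not the example from the slide):

```yaml
# Hypothetical RBAC binding that, once committed to the config repo, Config Sync
# applies to every managed cluster, granting the developers group edit rights
# in the bank namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bank-developers
  namespace: bank
subjects:
  - kind: Group
    name: developers@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```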
You can also use ACM to deploy cloud native CD solutions such as Argo; however, its main use case today is limited to continuous operations and continuous security. Finally, there’s ACM Policy Controller, which enables GitOps continuous security.
ACM Policy Controller is integrated with the CNCF Open Policy Agent (OPA): it uses the Gatekeeper project and provides a catalog of policy templates that turn Rego policies and compliance rules into Kubernetes objects. For example, this manifest shows the enforcement of Istio mutual TLS policies across Kubernetes clusters.
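A minimal sketch of what such a mesh-wide mutual TLS policy could look like, assuming Istio’s PeerAuthentication API (the exact manifest from the slide isn’t reproduced here):

```yaml
# Illustrative mesh-wide strict mTLS policy; committing it to the ACM repo
# would push it to every managed cluster running the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```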
Or, in this diagram, you can see ACM Policy Controller applying JFrog Artifactory as the only allowed Docker repository, along with pod security policies and container limit policies that are applied across all clusters.
So today we gave you a quick look at how you can use Anthos and JFrog Artifactory to take advantage of new principles in the DevOps space, such as GitOps. There are a lot of other technologies that Anthos can enable to help modernize applications, such as Binary Authorization, serverless with Cloud Run, and more. To learn more about Anthos, we invite you to join us for a free virtual multi-week digital event, Google Cloud Next ’20, showcasing the latest innovation in cloud technology. And with that, thank you for joining us today.
