DevSecOps for Kubernetes-Based Applications [swampUP 2020]
Sven Ruppert, Developer Advocate, JFrog
July 1, 2020
Hello and welcome to DevSecOps for Kubernetes-based applications. My name is Sven Ruppert, and I'm a Developer Advocate for JFrog, specialized in DevSecOps.

What do we want to do today? We want to talk a little bit about the key points of cloud-native development, or cloud native in general, because this will lead us to something called DevOps, or rather DevSecOps. We want to see why DevSecOps is good for business and what the key points are. That will lead us to the developer's view you want to have if you are implementing security from scratch, so the main focus will be: what does it look like for you as a developer if you have to deal with DevSecOps and you're developing Kubernetes-based applications?

One point at the end will be what you can do to speed up your production line, or your CI environment, if you are dealing with long-running, complex CI pipelines: what is true immutability, and why you should go for it. And one thing for everybody: if you're dealing with new tooling or new technologies and you already have an existing infrastructure, how do you integrate JFrog tooling into your existing environment? That's it, so let's start.
A better view of cloud native: if you search for the words "cloud native" on the internet, you will find that it's a broad topic. It goes from service-oriented architecture and API-oriented communication through how to use containers, and everything is managed by DevSecOps. So it is not a single topic about how to do A or B; it's a broad topic about software development and everything around it. Reading about it will show you a lot of different meanings and opinions, but through the Cloud Native Computing Foundation you finally have the possibility to get the condensed, official definition. It took some years, but it's available.

So, cloud native, what does it mean? It means we have a service-oriented architecture. Service-oriented architecture means that you're breaking up your big application into smaller pieces: microservices, to use one word for it. Or you go to serverless functions, which means that from the big project you get very tiny sub-projects, and each sub-project can freely decide what technology it uses and how it wants to deliver that, inside a container that can be managed by Docker and, in the end, by Kubernetes. So this is a container-based infrastructure: everything is running inside Docker, managed by Kubernetes. Service-oriented architecture is the way to split all this up, and then there is API-oriented communication, which defines how the parts share information. And on top of all this comes the possibility to choose the technology freely.
That is good and bad at the same time. It's good because you can choose freely; it's good because you can get rid of legacy stuff. But it's bad because you're always dealing with all the tiny details of a new technology. Being a senior in one technology doesn't mean you're a senior in another technology as well. Your ramp-up time will maybe be faster compared to a real youngster, but in the end, if you're a senior here, you're a youngster over there again. And that means all the things like security issues, best practices, and which sub-components or dependencies are good and stable, you have to learn again. This is a bad thing, because with new technologies there are new attack vectors that can be used, new ways of breaking into your system, and you have to learn all of this. So you need tools that help you identify what kind of security breaches are there.

The other thing is how to deal with licenses. Comparing Java with JavaScript, for example: the Java ecosystem is by far the more stable one, while the JavaScript one is very active, producing a lot of stuff in a short time. But the dependency tree between components, how it is versioned, and the security and compliance issues there, that is a completely different world.
OK, so that is the technology to run it on, the technology to split it up, and the way to communicate between all the parts. And the whole thing that manages all of this is called DevSecOps. The Cloud Native Computing Foundation is explicitly talking about DevSecOps, not about DevOps. So the main question is: what is the difference between DevOps and DevSecOps? But first I want to say a bit more about which part of this cloud-native full stack we want to look at.

I want to have a detailed look at how to implement things. If you look at the path from coding software until it runs in Kubernetes: we have the application, the piece of code you're writing as a developer, and this is the first thing you're providing. It must run inside Linux, it will be packaged in Docker images, and those will be delivered via a registry to your Kubernetes universe. These are the four layers I want to look at: application, Linux, Docker, Kubernetes.

And note: if you introduce a security breach inside your application while you're coding, it will be present in all the other layers. Maybe it can be masked or hidden by something like a firewall or a reverse proxy, but in general the security hole is there. The same with licenses: if you have license issues, including from transitive dependencies, from the time you're coding the application, you have the same issues in the Linux distribution and, for sure, in Docker and Kubernetes as well. The whole life cycle of this is managed under the term DevSecOps.

So keep in mind: if you have security issues during development time, they are there until the end, and this is where the term "shift left" comes from. If you rotate this stack by 90 degrees, you see that you start with creating the application, then Linux, Docker, Kubernetes, and shift left means going in the opposite direction: even during coding time, the earliest possible time, you start with security. And inside application development itself, with its coding part, testing, and so on, shift left has the same orientation.
Now, the difference between DevOps and DevSecOps, because DevSecOps is exactly what they want to have. Pure DevOps is a term that was first mentioned, I think, in 2008 or 2009, and we have a bunch of books about it and different opinions and meanings. But in the end, if you check, you have all the stages like coding, building, testing, packaging, releasing, configuring, monitoring, and that means DevOps is just organizing the development and operations side of this. There is no explicit security, no explicit performance testing, and so on.
Where is it coming from? In the early days we had developers and an ops team, and there was this division: a few people were developing, a few people were just running things. And speaking about security, the operations part was mostly responsible for providing a firewall, a reverse proxy, and so on, and the developer was just not thinking about it. Security bugs had a very high financial impact, because mostly all this stuff was only found during operations, and then it went back to another team for bug fixing.

What we learned from this: dev and ops should be merged together, and then we have the whole thing from development until running in production. This is good, this is fast; the financial impact of having or fixing a bug is lower, because there is very transparent communication between these teams. But the security part mostly stayed the same: the security team was checking after everything was verified and ready for production, and in some environments they had the license to kill a release. So what we did with dev and ops, we now have to do with security as well, and DevSecOps is exactly that: what we learned from merging dev and ops into DevOps, we are now doing with DevOps and security.

Security is something that is everywhere in the pipeline. For example: what's the right place for security? There is no single place where you just search for security issues and then do the next step; there is no additional step in this pipeline that will just slow you down. And it's not that you now have one security guard who will check everything. It's not just a product you buy so that everything is secure, like a firewall; it's not a dedicated person you can hire, and that's it. Security as a practice will help you lower the impact of security bugs and issues. We're talking about a security-first culture: security should be everywhere, in every part, and we have to integrate it into the whole application life cycle. That is DevSecOps.
So, security as early as possible. Why will this minimize your risk? Well, it's more or less always the same. Speaking about dependencies: in the Java world you have a bunch of dependencies, because you don't want to reinvent the wheel. That makes sense; if you want to sort something, or you want to create a PDF, just use a library for it. But you have no control over it. You have to trust the author, not only to produce good quality and to maintain the library, but you also have to check, for example, for compliance issues. Is every dependency that the author is using checked as well? So not only are you checking the compliance issues for this one dependency, you have to check them at the next level again: all transitive dependencies must be checked for security as well as for compliance issues. It's easy to analyze all of this, but a lot of it is based on trust, and you have to verify security and compliance issues. That is one thing.
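The transitive checking described above can be sketched as a small graph walk. This is a minimal illustration, not how Xray works internally; the dependency graph and the "known issues" database are invented toy data.

```python
# Minimal sketch: walk a dependency graph transitively and collect
# every component that has a known security or license issue.
# Graph edges and the issue database below are invented toy data.

DEPENDENCIES = {                  # component -> direct dependencies
    "my-app": ["pdf-lib", "sort-utils"],
    "pdf-lib": ["image-codec"],
    "sort-utils": [],
    "image-codec": [],
}

KNOWN_ISSUES = {                  # hypothetical issue database
    "image-codec": "CVE-XXXX-1234 (high)",
    "sort-utils": "GPL-3.0 license conflict",
}

def scan(root):
    """Return {component: issue} for root and ALL transitive deps."""
    seen, findings = set(), {}
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in KNOWN_ISSUES:
            findings[node] = KNOWN_ISSUES[node]
        stack.extend(DEPENDENCIES.get(node, []))
    return findings

print(scan("my-app"))
```

The point of the sketch: a direct dependency that looks clean (`pdf-lib`) can still pull in a flagged component (`image-codec`), which is why scanning only the first level is not enough.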
So, to achieve this with DevSecOps: well, it's the same as with DevOps. You have to optimize everything; you have to speed up your production. That means: take the boring work away from the developer and let the CI environment do it. Integrating security just extends the tooling a little bit, but you're going through the same steps: you're using a CI environment, you're adding some checks, and you train your people so that they are security-minded and security-skilled, the same thing you did earlier with operations.

One thing that's very important here: be reproducible everywhere. Every tiny step must somehow be reproducible; otherwise you can't analyze things later if you have a security breach. So in general, DevSecOps means: as early as possible, make sure that there is no security and no compliance issue, and make sure that any issue is eliminated inside your production line as soon as possible. The next step will be security from scratch, and what this world looks like for you as a developer.
Security from scratch. We saw the basic ideas, and we're going back to a picture I showed in the beginning: the application, Linux, Docker, Kubernetes stack that is more or less the basis for cloud native. Here I explicitly exclude the concept phase. It's true that if you're talking about security, you should have an eye on it even during the concept phase; you can have security by architecture, and that has real influence. Even there you have this security idea: concepts, ways of isolation, ways of reporting, and it includes changes in processes and how to deal with all of this. But that's not part of this talk; I'm just focusing on the stack from the application down to Kubernetes. And just as a reminder: if we have a security hole in the application while we're coding, it will be in the Linux, the Docker, and the Kubernetes layers as well, so doing all of this as early as possible helps us minimize the risk.
Talking about the different layers: if you have different applications or different microservices and the freedom to choose the right technology, you immediately get the corresponding infrastructure. For example, if I'm talking about Java, then for my application I have a Maven repository. The Maven repository is a single source of truth if I have to check what is used in my application, and based on this information I can analyze the whole graph. Then, talking about Linux, it's the same: if I'm using Debian, I have a Debian repository in the background, and this is the source where I'm grabbing all my stuff; in this repository I have my binaries, the license information, and so on. After that, going to the Docker image layer, I have my Docker registry, and again this is a repository where everything is stored. If I have access to it, I have access to the whole binary stack and can analyze every single layer inside my Docker container, the Dockerfiles, and how it's all put together. And then I have my Kubernetes layer, and for Kubernetes we can talk, for example, about Helm repositories.

What we see here is a more or less common part of this stack, the Docker and Helm part, then, based on the Linux distribution, a Debian repository or an Alpine repository or whatever. But the biggest fluctuation is in the application layer: with different languages and different technologies we have npm, or Maven, or NuGet, or whatever. So we have different repositories at the different levels.
The good thing with Artifactory is that it can handle all these types; last time I checked it was, I don't know, 24 or 26 different kinds of package managers. So you can have all these repositories managed inside Artifactory: everything that comes in from the internet will be stored and held in Artifactory. The good thing is that you become more or less independent: if you grab something once and store it, you can say, OK, even if the internet connection is not available, we can still produce internally. It's caching, for sure.

Talking about security: Xray is a component that scans everything that is in Artifactory. How to do this and how to use it, I will show a little bit later, but think about Artifactory and Xray as a combination: Xray has a detailed view of every tiny binary that goes in, and you can consume that information via the web UI or via the REST API. So a machine can deal with the Artifactory-plus-Xray combination, or a human can interact with it directly via the web UI; I will show you both. The main thing is that you have one single point where all binaries and all configurations live. Even if you think, for example, about security payload injection, that payload could be part of Artifactory inside a generic artifact repository. All these layers have different behavior and different update cycles, and you need the knowledge inside the repository to deal with transitive dependencies and get the license information out of it.
How do you define all this? Artifactory gives you the repositories, Xray connects to them, and now I have to define how to search for security and compliance issues. We have three levels for this definition inside Xray. The first is a rule. A rule is a stateless definition of what should happen; you can say: if you find something with a CVSS score from A to B, please write an email, or start a webhook, or break a build. A rule is stateless, independent of the repository itself, and atomic. Then you can combine rules into a more domain-specific policy. A policy is a logical name plus an aggregation, a composite, of rules. The rule itself describes what should happen; the policy has a logical name and a bunch of rules, so here you're more domain-specific: for example, a policy for web apps, a policy for whatever. And then, to connect this to the repositories, you need watches. A watch connects build information or a repository with policies, and then you can see the reports.

The maintenance here is quite easy: you have a very fine-grained way to describe what's important for you and how to react to it in a generic way, and then you can just combine that with different repositories. The good thing is that you can create a watch for exactly the combination you need; I will explain a little bit later, when we talk about repository structure, what you can do here to kill rebuilds and be truly immutable. So keep this in mind: rules are aggregated into policies, and policies are connected via watches with the resources you want to check.
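The three-level model above can be sketched in a few lines of code. All class names and values here are illustrative toy modeling, not the real Xray API:

```python
# Sketch of the three-level model: stateless RULES are grouped into
# named POLICIES, and WATCHES connect policies to resources.
# Everything below is illustrative, not the real Xray data model.

from dataclasses import dataclass

@dataclass
class Rule:                 # stateless, atomic: condition -> action
    name: str
    min_cvss: float
    action: str             # e.g. "email", "webhook", "fail_build"

@dataclass
class Policy:               # a logical name plus a bunch of rules
    name: str
    rules: list

@dataclass
class Watch:                # connects policies with resources
    name: str
    repositories: list
    policies: list

    def evaluate(self, findings):
        """findings: {repo: [(component, cvss), ...]} -> triggered actions."""
        triggered = []
        for repo in self.repositories:
            for component, cvss in findings.get(repo, []):
                for policy in self.policies:
                    for rule in policy.rules:
                        if cvss >= rule.min_cvss:
                            triggered.append((repo, component, rule.action))
        return triggered

high = Rule("high-severity", min_cvss=7.0, action="fail_build")
web_policy = Policy("policy-web-apps", rules=[high])
watch = Watch("watch-demo", ["docker-local"], [web_policy])

print(watch.evaluate({"docker-local": [("debian:buster", 9.8), ("alpine", 3.1)]}))
```

Only the high-CVSS finding triggers the rule's action; the low-severity one passes through, which is the point of keeping rules stateless and composing them per domain.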
Now, you as a developer: you have a task to implement a use case or a proof of concept or whatever. So what do you do? You start your IDE with a tiny, fresh side project you start from scratch. I'm assuming you're using Java with Maven, but various other package managers are supported as well; it's only an example, because I'm a Java developer. So you start writing your pom.xml file, and you add the first dependency, because you don't want to reinvent the wheel; you don't want to implement this sorting algorithm or whatever yourself.
And then you immediately see information inside your IDE. We provide open source plugins for IntelliJ, Eclipse, VS Code, and so on. If you use, for example, NetBeans, for which as far as I know there is no plugin yet, you could create one based on the implementations we provide on GitHub. The main thing is how to use it: you add the dependency, and with the IDE plugin you immediately see the security information for it, red or green, you can check all transitive dependencies, and you can see the license information as well. How this looks and how the handling works, I will show you in IntelliJ; that's the next thing.
OK, next is the integration inside the IDE. I'm using IntelliJ here, but we have plugins for other IDEs as well, for example Eclipse or VS Code, so have a look at the web page and see what version and plugin is available for your IDE. For IntelliJ I have to install the plugin: I go to the plugins marketplace, search for "JFrog", and it finds the JFrog plugin. In my case it's already downloaded and installed, because I'm using it already. After you install the plugin, you have a configuration page; in IntelliJ it's under Other Settings, JFrog Xray Configuration. You add the URL, the username and password, and check that you have a connection to your instance; in my case it's an Xray version 3.2.6. And that's it; now the functionality is available inside your IDE.

For this demo I'm using a very easy, small Maven project. The typical workflow is: you start adding a dependency, and after this, depending on your IDE or your configuration, you have to trigger a reload of the definition. Some people have auto-reload activated by default; I just do it manually. So now the IDE knows I have this dependency, Commons Collections, in version 3.2, and then I can go to my plugin. Here I have the license info selected, so I see that Commons Collections in this version runs under the Apache license, and I can have a look at the security issues as well.
If this is not available, sometimes you reload; sometimes it's already loaded; sometimes you do it manually. Then you can see here that Commons Collections right now has three security issues, and the good thing is that for every security issue you can also see whether a fixed version is available. After this you can decide whether you want to fix it with an up- or downgrade of the version number, or whether you are dealing with transitive dependencies.

For example, here is something with transitive dependencies; let's see how fast it is today with my internet connection. I'm selecting another dependency from a slightly bigger project and do my Maven reload. This performs depending on the internet connection you have, and mine is not the best, so it will take a few seconds to get this information. After the IDE was able to load all dependencies, you have the new dependency tree here. Sometimes you have to say: OK, please ask JFrog now for the new dependency tree in your project. It connects to the Xray instance, and again it depends a little bit on your internet connection. Then you see the component tree of the dependency, here it's Vaadin, and if you click into it, you can navigate through the transitive dependencies, green, red, or orange for the different levels. And if you check here, for example: Vaadin Charts in this version has a transitive dependency on Jackson Databind in this version. The transitive dependencies of Jackson Databind are green, so there is no issue there, but Jackson Databind itself has some issues. Here you have the information about what is inside, and the good thing is, again, you see whether there are fixed versions; for this one, for example, we don't have a fixed version until now. So now it's up to you to decide whether you want to override the transitive dependency, whether you want to exclude Charts because you're not using it, or whether you go to a different Vaadin version itself. That is project-dependent, but the whole point is that you can navigate the entire dependency tree. That's it.
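The exclusion option mentioned above would look roughly like this in the pom.xml. The Vaadin coordinates here are illustrative, not taken from the demo project; check your own tree (for example with `mvn dependency:tree`) for the exact group and artifact IDs before excluding anything:

```xml
<!-- Illustrative: drop an unused transitive dependency so its
     vulnerable sub-tree never enters the build. Coordinates and
     version are placeholders, not the demo project's real ones. -->
<dependency>
  <groupId>com.vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version><!-- your version --></version>
  <exclusions>
    <exclusion>
      <groupId>com.vaadin</groupId>
      <artifactId>vaadin-charts</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Only exclude what you genuinely don't use; otherwise the safer route is upgrading the parent dependency or pinning a fixed version of the transitive one.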
So even if you're just adding a dependency to your project, the good thing is that you are informed immediately if you have compliance or security issues. That's it for the IDE integration. You saw how this is realized in the IDE plugin; this was just an example with IntelliJ, but it's available for other IDEs as well. This is how a developer sees it immediately.

Now I want to show you how this is available in the web UI, and for this I'm taking the next step as an example. We coded the application; now we want to wrap it in Linux and Docker so that we can provide it later on Kubernetes. So what do you do if you want to wrap your application, your fat JAR for example, so that it runs inside Docker? You start with the Dockerfile, and the first line is FROM, and bang, here we have it: we are using a base image, and this base image you have to analyze. The Docker image will be based on Debian, Ubuntu, Alpine, whatever, and you should immediately have a view of what is going on at this operating-system level. If there is a critical library somewhere in your Linux distribution, you can then decide whether you want to keep that package or explicitly uninstall it, and so on.
The next demo will show you how to get this information about Docker images, again on the Java side, but this time with the web UI. What we will see is how to create a rule, how to combine rules into policies, how to create a watch, and how the report becomes accessible inside the web UI. You will then see, from your application, your tiny JAR file, up to the operating system and the Docker image, what's going on everywhere in this tree, because Xray knows the whole dependency tree. After we combined the different technologies, Xray knows: OK, this JAR is inside this layer inside this Docker image, this Docker image is based on this Linux, and this Linux has this glibc in whatever version.

Every time we scan, or whenever the security database is updated and pushed to your installation, the whole graph is rescanned immediately. If you are not connected to the internet because you're on-prem, you can download the security database on a regular basis and provide it internally. That means: even if you delivered your binary, your Docker image, yesterday and it's running in production, if something new becomes known today, it will show you immediately that this image is affected. How this report looks, I will show you in the web UI now.
OK, let's have a look at the JFrog Platform with an Xray installation. This here is my SaaS instance, but you can have the same on-prem. If you want to try out what I'm showing here right now, I will give you the link for the trial a little bit later; you can ramp up a trial, it takes approximately ten minutes or so, and then you have a whole platform installation in a cloud and can try all of this yourself.

So, on your platform, log in and go to the menu entry Security & Compliance. Here we have two different menu entries, and you have to start with policies, because policies are used inside watches. A policy is a stateless definition of what should happen if something is found, depending on your definitions. I will now create a new policy. I have to define a logical name for it, say "policy-demo". If you have to deal with a lot of policies, think about naming schemas, because this has to scale over time. First of all, you have to decide whether it is something from the area of security or of license and compliance issues; I will select security. You can add a description, but keep in mind that the description must be kept in sync with every change you make over time, so I personally just leave it blank here.

A policy is a composition of rules, and a rule is the fine-grained thing, exactly as described a few minutes ago. You need a logical name, and then you can either use some predefined severity levels or define the CVSS score range yourself; I just say grab everything. Now you have defined how sensitive this rule should be, and the next thing is to define the action that should be triggered. "Generate violation" is just an entry in the web UI, which I will show you in a few minutes. But you can also trigger webhooks to integrate with third-party programs or other infrastructure components; you can notify platform users, or external ones as well, via email if you want; and you can block downloads. Xray is always connected to an Artifactory, and if you want to make sure that affected components are not even inside your repositories, you can say here: block download, and also block if something is unscanned. The same is possible for release bundles, and the most common action is failing a build; this can be used from Pipelines, TeamCity, Jenkins, whatever CI you're using. I'm just generating the violation. Now I have this rule inside my policy, and I can create it.
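The same policy can also be created via the REST API instead of the web UI. The payload below is a from-memory sketch of the Xray v2 policy format; field names may differ between Xray versions, so treat it as an assumption and verify against the Xray REST API documentation for your installation:

```json
{
  "name": "policy-demo",
  "type": "security",
  "rules": [
    {
      "name": "rule-all-cvss",
      "priority": 1,
      "criteria": { "cvss_range": { "from": 0, "to": 10 } },
      "actions": {
        "fail_build": false,
        "block_download": { "active": false, "unscanned": false }
      }
    }
  ]
}
```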
The next step is creating a watch. Creating a watch means connecting the policy or policies you created before with the resources you want to look at. So I select New Watch, and the same here, a logical name: "watch-demo". Now you have to decide which resources you want to look at. I'm just selecting a few repositories I have here; you can filter, and I take, for example, my Bintray and my Docker remote repositories. That's it: these two repositories are now scanned, meaning this watch is connected to these repositories. Now I have to say what should happen: I select the policies I want to combine here, so "policy-demo" is now associated with this watch, and I can create everything.
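As with the policy, the watch can be expressed as a REST payload. Again a from-memory sketch of the Xray v2 watch format, with repository names from the demo; verify the exact field names against the Xray REST API documentation before relying on it:

```json
{
  "general_data": { "name": "watch-demo" },
  "project_resources": {
    "resources": [
      { "type": "repository", "bin_mgr_id": "default", "name": "docker-remote" }
    ]
  },
  "assigned_policies": [
    { "name": "policy-demo", "type": "security" }
  ]
}
```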
After this is done, you have an overview here on the Watches page, and you can see the connected resources and calculate the number of violations. Here you will have zero, because I just created this watch and there was no trigger to recalculate anything: there was no change inside the repository, no build was triggered, nothing. But you can trigger it manually, for example over the last 90 days or whatever range you want to define, and then it starts calculating. That takes a few seconds, but I have prepared something here a little bit earlier. So if I go to the calculated one, you see 400-something violations. You get a detailed list, you can filter it if you want, and then you can grab one of the items you want to look at. You see a small text snippet that gives you short information, you see the severity classification of this security issue, and you see the resource we found it in. The component here is a Debian Buster Docker image, and it's used in my created Docker image, which is based on, or rather contains, this Debian Buster component. If you click here, you see the impact graph: the issue is in the Debian Buster iptables binary, inside this Docker layer, inside my image. So some additional information is here.

The good thing is that all of this is available via REST as well. If you want this information for your reporting system, or you want to trigger some other infrastructure parts, you can consume it via REST, or you can trigger a webhook. So this is the web UI and the core functionality of Xray: the web UI and the IDE views you saw. Now you have the knowledge of how to create rules and policies, combine them with watches, and make the results visible in the web UI and inside your IDE. You can start from scratch, and we are able to scan everything until we have a Docker image, and even that Docker image we can provide from our own repositories.
Great, Kubernetes. Now I have this Docker image created, so I have a bunch of Docker containers managed by Kubernetes. If you check the documentation, or the internet and the reports about security and Kubernetes, you will see that a bunch of the major findings are trivial, the smallest trivial things. For example: don't use plain-text passwords. OK, but again: don't use plain-text passwords; it comes up again and again. This is nothing you can do as a developer while coding your use case; it's something for the more operational person responsible for the Kubernetes cluster itself, who must handle how to deal with passwords. Then you also see a lot of comments about using insecure libraries. OK: if you kill those during the implementation phase, you don't have that challenge later.
Bugs in software: OK, that's always the same; if you have bugs, bugs are bad. Think about security payload injection, and make sure that known issues cannot be used, for example, to break in. Operating system bugs, operating system security issues: again, we are on the Linux level and the Docker level, so if you check here with Xray, you are on a good path. Outdated Docker images: sure, if you are just using provided images and not updating them yourself, that's a big security risk. So if you are using a provided Docker image, run an update of all libraries inside your own Docker image and scan it with Xray, even if it is the official Jenkins Docker image. Just grab it, scan it once, and then make sure you fix everything that is shown. Untrusted images: it's the same; it's just grabbing something from somewhere and letting it run. If you go through Artifactory instead, it will be scanned, and if it is not secure it will be blocked and will not go to production. So these are more or less operational things.
But you as a developer, what can you do? For example, using the Kubernetes security features that are provided: sure, if they are available, use them; it's maybe not trivial, but use them. What you as a developer can really actively do, though, is reduce the number of attack vectors that can be used, from the application up to the Docker layer. That means not only the secure-socket-layer communication between all nodes, so that you don't have a man-in-the-middle attack; sure, this one too, but this is more or less configuration again. But now we come to how to manage Docker images. Don't use untrusted ones: go to your repository, and use only Docker images verified by you and scanned by Xray. And then we have a challenge with Kubernetes: if something gets spawned, where is the source it is pulled from? You are hardening your images, you are hardening the operating system used inside, and you continuously rescan. So before anything passes into production, as I mentioned earlier, we scan immediately if there is a new build, if there is a change in the repository, and if there is an update in the security database. And if that is not enough, you can start a rescan manually or via the REST interface yourself, so that you can make sure: if it was not done during the last eight hours, or whatever interval you want, just rescan it again to have the information whether there is something critical. OK, updating libs, again: changing your dependencies, let Xray help you.
So we have one thing that is not easy once you leave the application/Docker/Linux layer, because we want to talk about Kubernetes and how to do the last steps. The last step is: if you have Docker images, you must make sure that your Kubernetes stack is explicitly using your repository to grab those Docker images, even if you are selecting official ones. How to make sure, how to deal with this? For this we have a tiny open source project, and you can see it's actively maintained, so grab it. It's called KubeNab, and it's available on the JFrog GitHub, so it's open source; check it out. It's a Kubernetes dynamic admission webhook, a complicated word for me.
What it is doing: it will make sure, first of all, that all images that are requested are pulled from some special repository that is under your control and scanned, and only what is inside and clean can be instantiated. And the next thing is: if there is a new version of a Docker image, you must make sure that the active instances are killed and redeployed with the new version as soon as possible, so that you have more control over this, and over where the source is that everything is grabbed from, so that Kubernetes is only grabbing clean images. This is, for example, done with KubeNab.
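The core idea of such a dynamic admission webhook can be sketched in a few lines. This is not KubeNab's actual code, just an illustration of the kind of check it performs; the registry name is a made-up example.

```python
# Sketch of the check a dynamic admission webhook like KubeNab performs:
# only admit pods whose images come from a registry under your control.
# "registry.mycompany.com" is a placeholder registry name (assumption).
ALLOWED_REGISTRY = "registry.mycompany.com"

def image_allowed(image: str) -> bool:
    """True if the image reference is pulled from the allowed registry."""
    return image.startswith(ALLOWED_REGISTRY + "/")

def admit_pod(images) -> dict:
    """Build a simplified AdmissionReview-style verdict for a pod's images."""
    denied = [img for img in images if not image_allowed(img)]
    return {
        "allowed": not denied,
        "message": "" if not denied else "untrusted images: " + ", ".join(denied),
    }

print(admit_pod(["registry.mycompany.com/app:1.2"]))  # allowed
print(admit_pod(["nginx:latest"]))                    # denied
```

In a real cluster this logic sits behind a ValidatingWebhookConfiguration, so the API server calls it before any pod is created.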
So have a look at it. And now we have everything: from application development, over how to update the Linux stack, how to deal with Docker images and make sure you have clean layers, up to how to manage these Docker images in Kubernetes, where it is under control how long they will live, how fast they will be updated, in which way they will be updated, and which Docker images are used. The next thing will be how to optimize your repository structure; see you in a second.
OK, speed up your CI, and true immutability. This is a very powerful concept, and I will show you right now what it is. We saw that most of the things we are doing are, more or less, connecting at some point to a repository, so a repository is something very important in your CI environment when you want to speed things up. Just think about it: there is an outside repository and you have an inside one, because you want to scan for security and compliance issues. And if you grab something from the outside world, it's stored in your own repository. You can think of this node as a caching node as well: it comes from outside, over low bandwidth, and then it's inside your network, nearby. Talking about Maven, for example: it comes from Maven Central, is stored in Artifactory locally, and then, on the machine you are compiling on, you will have a copy in the .m2 folder as well. So this is just caching, and the simple structure, but you can do way more with repositories. There are some challenges, though.
For example, a hierarchy of repositories. You want a hierarchy because inside your company you have different departments, different groups working on different products, and you want to share information. So you have a single point of entrance where you can scan everything coming in, and then you want to share some libraries internally. Say you have one parent and two children, and you are working within one child: you can push information there, you can check there for something, and something from the outside world will go through the whole hierarchy and end up in your dedicated cache. Now you are on develop, and you want to share this snapshot with some other teams or other parts; why doesn't matter right now, you just want to share the snapshot. So you push it to your own repository, everything is green, and then: how to share it? You could push to the sibling's repository directly, or you could push to the parent, so that it will be collected or requested by the others, because they are under the same parent. Whatever you choose, the challenge is that the process will be different each time: sometimes you just compile and push to your own repository, which is easy, and sometimes you push to another repository as well. So it's just a question of time until you have a bunch of different push-to-wherever processes, and a question of time until something goes wrong. The other thing is: every child is a superset of its parents. That means if you want to clean this stuff up, you have to maintain it. How do you make sure these repositories are not constantly growing? Because you have a lot of feature builds and a lot of developer builds that you just don't need anymore, because they are not relevant later in the lifetime. So this is the question: how to deal with this?
So think about what the right place is to push something to, to share it in between, and how to make sure that no bad version is bleeding between different builds. How to get rid of this? With a classical tree-like organization, you have the challenge that sometimes snapshots bleed across different builds, and you have to rebuild and rebuild again, or make sure that the caching of Maven, for example, is not working against you. So how to deal with this?
Artifactory has one very powerful concept called the virtual repository. Mostly, people use virtual repositories just to aggregate several external repositories under one logical name and request that one. This is good if you want a single entrance for different resources; you just add one more external resource behind it. But you can use it for several other things. Keep in mind that a virtual repository is a very lightweight thing, and you can declare that Xray scans this virtual repository, or the corresponding remote or local repositories. And a virtual repository also supports pushing; it's not only read, it's read-write. If you push to a virtual repository, you can declare what the default deployment repository is to push through to: you push to the virtual one, and inside it there is a default location, the local repository where things should normally go. So this is a good thing, you have it both ways. But just for aggregating, it's not really perfect. Think about the frequency of created binaries.
If you are building on feature branches, you are creating a lot of binaries you can throw away immediately: once you merge to develop, all changes from the feature branch are in develop and are rebuilt, so you can immediately delete all feature builds from the past. What this means: if you do this at every stage, the amount of binaries you have to store gets smaller and smaller, and the frequency gets lower. But you are not getting rid of rebuilds. A rebuild means you have a long-running thing, something goes wrong, and then you start from scratch and do the whole thing again. It's time-consuming, it's resource-intensive, it's not fast. How to deal with this?
Think about virtual repositories again: everything in Artifactory can be created via the REST API, as I mentioned before, and virtual repositories are very lightweight. You can create a lot of tiny local repositories (remote repositories as well, but here we are talking about local ones). Now think about a chain inside a build: you want to build the develop branch, so you have step one, two, three, and so on. The first step does something and creates a binary, and this will be the input for the next step. What you can do is say: I have a virtual repository, and in this virtual repository there is not only my parent, to collect stuff that already exists, but also a local repository that I push the result of my step to. The next step uses the same virtual repository and creates its own local repository, and everything the second step produces is pushed to that second local repository. The third step will now see the results from step one, from step two, and from the parent, and so it goes on. If, for example, step whatever fails, you can just restart at that step, because all interim results are persisted.
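Since everything in Artifactory can be created over REST, the per-step repositories could be set up roughly like this. The JSON shapes follow Artifactory's repository configuration API, but treat the exact fields, names, and URL here as assumptions to verify against your Artifactory version.

```python
import json
import urllib.request

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"  # example URL (assumption)

def local_repo_config(key):
    """Repository configuration JSON for a tiny per-build-step local repo."""
    return {"key": key, "rclass": "local", "packageType": "maven"}

def virtual_repo_config(key, members, default_deploy):
    """Virtual repo aggregating the step repos; pushes land in default_deploy."""
    return {
        "key": key,
        "rclass": "virtual",
        "packageType": "maven",
        "repositories": members,
        "defaultDeploymentRepo": default_deploy,
    }

def create_repo(config, token):
    """Sketch: PUT the config to Artifactory's repositories API."""
    req = urllib.request.Request(
        ARTIFACTORY_URL + "/api/repositories/" + config["key"],
        data=json.dumps(config).encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )
    urllib.request.urlopen(req)

# Build the configs for a three-step chain (no server needed for this part):
steps = ["build-42-step1", "build-42-step2", "build-42-step3"]
step_configs = [local_repo_config(s) for s in steps]
virtual = virtual_repo_config("build-42", ["libs-release"] + steps, steps[-1])
print(virtual["repositories"])
```

Each step then resolves through the virtual repository `build-42` and deploys to its own local repository, so a failed pipeline can resume at any step.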
That means if you have a long-running pipeline, you can break at any time and restart at any time; you can freely choose at which stage you want to start the rebuild, and it makes sense to start rebuilding at the failed step. So don't rebuild stuff: if you have a binary, use it as long as possible. It will save time, it will save resources, and it will minimize the amount of storage you are using. So use the concept of virtual repositories across the whole production line. That's it. And the last thing I want to mention
here is the power of integration, if you have an existing infrastructure. The power of integration is one of the good things. You know that all JFrog products are available in the cloud, on-prem, or in hybrid architectures; some parts can be in the cloud, connected to an internal one, combined freely, so you have both ways, to the cloud and back, and everything in between. But mostly you will already have some kind of infrastructure. So looking at Xray here, for example, because we are talking about security topics: you can not only break builds or send notifications via mail, you can also start webhooks, and everything is available via the REST API.
What does that mean? If something happens, a webhook can start a different path in your pipeline, so you can have dynamic paths or dynamic processes: you can switch, and you can trigger external processes. The next thing: if you trigger, for example, a compliance tool that you use for documentation, it can go to Xray and request the data via the REST API to generate the reports, or whatever else. So third-party products can integrate with a whole infrastructure that is built on JFrog, and you can start working with dynamic workflows; you can create your own reports with your compliance toolings, and so on.
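A dynamic path triggered by such a webhook could, as a minimal stdlib-only sketch, look like this. The payload field `severity` and the path names are assumptions about what your Xray policy sends, so inspect a real webhook payload first.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def choose_pipeline_path(payload: dict) -> str:
    """Decide which pipeline branch to trigger based on the webhook payload.
    The field name "severity" is an assumption; check a real Xray payload."""
    severity = payload.get("severity", "").lower()
    if severity == "critical":
        return "block-and-notify"
    if severity in ("high", "medium"):
        return "create-ticket"
    return "continue"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        path = choose_pipeline_path(payload)
        # Here you would trigger the chosen pipeline path, e.g. via your CI's API.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(path.encode("utf-8"))

# To actually receive webhooks:
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
print(choose_pipeline_path({"severity": "Critical"}))  # → block-and-notify
```

The routing function is deliberately separated from the HTTP handler, so the same decision logic can also be driven by data pulled over the REST API.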
So have a look at this REST API, and at the possibility to start webhooks and be managed via REST; this is a very powerful thing. If you want to try all of this yourself, it is easy: just go to the platform and start a trial. It will take approximately 10 or 15 minutes, and then you have the whole ecosystem ramped up for you in the environment of your choice: Amazon, Google, and so on, you will see. If you don't want to go to the cloud, you can have it on-prem as a trial; but if you ramp it up in the cloud, after 15 minutes you have the trial running: you have Artifactory, you have Xray, you have Pipelines as your CI environment, and then you can start with all this stuff.
What you should do is just start this trial, then create a remote Maven repository so that you can grab stuff (or a Docker repository, whatever technology you want to use). Then create a watch that looks at exactly this repository; how that's done I showed you before, you can watch it again later, and there are some screencasts about it on YouTube. Then connect your Maven POM file to the remote repository you just created, and run a `mvn clean verify` after you have added some dependencies.
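The POM wiring in that step could look roughly like this; the repository id, the host name, and the repository name `libs-release` are placeholders for whatever you actually created in the trial:

```xml
<!-- Sketch: point Maven at your Artifactory repository.
     "mycompany" and "libs-release" are placeholder names. -->
<repositories>
  <repository>
    <id>artifactory</id>
    <url>https://mycompany.jfrog.io/artifactory/libs-release</url>
  </repository>
</repositories>
```

With this in place, `mvn clean verify` resolves dependencies through Artifactory, where the watch you created can inspect them.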
And surprise, surprise: you can see what is going wrong with the dependencies you just added and are using. And yes, you can do it with your IDE plugin as well. All in all, if you go through this, you can do it in less than an hour or two; if you have more time, take more time, but it can be done very easily. So just try it. I will say thank you at this point. If you want to reach me, because you have more questions or you just want to stay in touch, the best way to reach me is Twitter, and I am more than happy to get feedback and start a discussion with you about the topics: DevSecOps, Java, Kotlin, mutation testing, TDD, whatever. Just contact me. And with that, I would say thank you very much for attending.