Best Practices In Implementing Container Image Promotion Pipelines @ Toronto Enterprise DevOps User Group Meetup – 2021

March 11, 2021


Best Practices In Implementing Container Image Promotion Pipelines

Surprisingly, implementing secure, robust and fast promotion pipelines for container images is not as easy as it might sound. Automating dependency resolution (base images), implementing multiple registries for different maturity stages, and making sure that the containers we run in production actually come from the images we intended can be tricky. In this talk, we will compare different approaches, compile a wish-list of features and create a pipeline that checks all the boxes using free and open-source tools.

Talk Materials

View Slides Here

Speakers

Baruch Sadogursky

Developer Advocate @JFrog

Baruch Sadogursky (a.k.a JBaruch) is the Head of Developer Relations and a Developer Advocate at JFrog. His passion is speaking about technology. Well, speaking in general, but doing it about technology makes him look smart, and 18 years of hi-tech experience sure helps. When he’s not on stage (or on a plane to get there), he learns about technology, people and how they work, or more precisely, don’t work together. He is a CNCF ambassador, Developer Champion, and a professional conference speaker on DevOps, DevSecOps, Go, Java and many other topics, and is a regular at the industry’s most prestigious events including DockerCon, GopherCon, Devoxx, DevOps Days, OSCON, Qcon, JavaOne and many others. You can see some of his talks at jfrog.com/shownotes

Video Transcript

00:00
recording and it’s recording now
00:03
um uh hi everybody welcome to toronto
00:05
devops uh enterprise devops user group
00:08
um the user group has been around for
00:10
two three years now
00:12
it’s been a while um we meet every
00:14
second thursday of the month
00:15
today is wednesday i realized that
00:18
and to be honest he booked it by mistake
00:21
but
00:22
uh it worked out for presenters without
00:24
for me worked out for everybody so
00:26
the next meeting is on second thursday
00:28
um then the next meeting we have is
00:31
going to be on the test automation
00:34
and how to use it um in the agile teams
00:37
um the the presenters are really really
00:40
good
00:40
uh in that area so again if you’re
00:43
interested in the test automation please
00:45
attempt if you know somebody who might
00:47
be interested uh share that link
00:49
um we again we meet every second thursday
00:52
of the month uh the topics vary
00:54
uh but they always devops related it’s
00:57
related to devops processes
00:58
people’s tools technologies and so on
01:01
and so forth that work inside the
01:03
enterprise
01:04
um not the unicorn shops but inside the
01:07
enterprise on nine to five kind of shops
01:09
where most of us work uh if there are
01:12
topics that you would like to hear on or
01:14
present on
01:15
please let us know we are always uh
01:18
looking for um uh for good speakers
01:22
um again then if you’re gonna be
01:23
interested in a specific topic let us
01:25
know we i know quite a few people so i
01:27
can
01:28
i can always find the person who can
01:30
talk on the matter
01:32
okay um any
01:35
questions that you have uh before we
01:38
start anybody’s any questions
01:42
all right um okay so uh today uh
01:46
we have um uh and again
01:49
apologies if i butcher the name baruch
01:52
uh from uh jfrog uh who will be
01:56
uh talking on the best practices on in
01:58
implementing the container image
02:00
uh promotion pipelines uh i honestly
02:03
found myself uh
02:05
even on the projects i have right now
02:07
it’s it’s a it’s a
02:09
it’s a very very uh good topic to um
02:12
to cover because it’s going to come in
02:14
handy one way or another
02:16
uh without uh further comments uh
02:19
i’m yielding the floor to the presenter
02:21
it’s all yours please take away
02:25
thank you max thank you for having me
02:27
excited to be here
02:28
i mean i’ve been to a couple of meetups
02:30
in toronto in person
02:32
in the last when when it was the thing
02:35
and obviously
02:37
great city great community uh looking
02:40
forward getting back to all of you
02:42
in person bringing you some swag i’m
02:44
going to talk about that in a sec
02:47
a but um in thursday we’re going to go
02:51
what in person that might not take that
02:54
might not happen for a while but
02:56
well you will invite me when it will
02:58
happen uh we definitely will thank you
03:01
that’s that’s that’s the idea right
03:03
that’s the idea um
03:05
but i’m bringing i’m bringing i’m
03:07
carrying gifts
03:09
when i come so it’s it’s a win-win
03:12
um so yeah best practices and
03:14
implementing container image promotion
03:16
pipelines
03:17
um one disclaimer i know that there are
03:20
different containers there are different
03:23
implementations
03:24
um but most of us still consider
03:29
containers as in docker containers and
03:33
this is what i’m going to talk about
03:34
mostly docker containers
03:37
um so with this disclaimer aside
03:40
this is a venn diagram the software that
03:43
you know really well
03:44
and the software that you like are
03:46
usually different things and the more
03:48
you know the software that
03:50
usually the less you you you like it
03:53
it’s literally like the sausage factory
03:57
and it’s the same with docker the
03:59
same with containers
04:00
we like it but we don’t really
04:04
trust it we’re going to talk about how
04:07
can we reestablish this trust
04:09
and my name is baruch sadogursky and
04:12
i am the chief sticker officer of jfrog
04:15
it means that when
04:16
the in-person meetups will come back i
04:19
will come
04:20
with stickers and t-shirts and
04:23
tons of awesome swag and a new talk and
04:26
we’ll have
04:27
hell of a time for now the most
04:29
important piece of
04:31
on this on this slide is my twitter
04:33
handle
04:34
jbaruch and this is the easiest way to
04:36
communicate with me
04:38
um now online this is the most important
04:41
slide
04:42
of this presentation if you go to
04:45
jfrog.com/shownotes you will find the
04:48
slides already there
04:50
the video this is why max i got excited
04:53
when you told me that you are recording
04:55
please share with me or with re the
04:57
recording when you have it
04:58
we will upload it to
05:01
jfrog.com/shownotes
05:02
and it will be in the same place all the
05:05
links to everything that i’m going to
05:07
talk about and mention
05:08
um all the resources are there a place to
05:11
comment to
05:12
rate and a very nice raffle
05:17
tonight we are going to raffle off a
05:19
baby yoda lego set
05:21
so if you are fans of star wars the
05:24
mandalorian
05:25
and or lego that’s uh um that’s a great
05:28
chance we’re not a lot of people so we
05:30
have a very good chance of winning
05:32
please go and and participate now
05:35
um if somewhere down the line you’re
05:37
like ah what was this url again because
05:39
i really want to see
05:41
this link to this particular resource
05:45
the url is on the bottom on every slide
05:48
my twitter handle is also on the bottom
05:50
of every slide so whenever you feel
05:53
to follow me um a please do
05:56
so with that let’s start talking about
05:58
patterns
05:59
when we need a pattern for example
06:02
promotion
06:03
we ask ourselves do we have an existing
06:06
pattern
06:07
and if we do can we use it as is or
06:10
what needs to be changed how can we
06:12
adopt it
06:14
with ci cd patterns containers or not
06:18
this is something that we do for years
06:22
and by years i mean like more than
06:24
20 years
06:25
and obviously we know exactly how to
06:27
promote
06:28
how to build cicd pipelines how do we do
06:31
that
06:31
we have the promotion pyramid we have
06:34
builds that are a lot in the beginning
06:38
and then as they go through quality
06:41
gates
06:42
they become less and less builds but the
06:44
tests
06:45
become more and more elaborate
06:49
they take the longer time they check for
06:51
more things
06:52
in the end of the day they end up in
06:54
production
06:56
i i i’m 100 sure you’re all familiar
07:00
with that
07:00
because you are awesome developers
07:04
another view on the same promotion is
07:08
the pipeline view in the pipeline view
07:11
you have your sources it’s your
07:13
application sources
07:16
it’s your docker files whatever it is
07:19
and then it’s all crunched by the ci
07:22
server
07:23
and what is
07:26
getting out of the ci server are the
07:28
binaries that then you
07:30
start to promote and you start to
07:32
promote them
07:33
by deploying them to a corresponding
07:37
environment like the integration or the
07:39
system testing
07:40
or the staging you run tests in this
07:43
particular
07:44
environment and then if those binaries
07:47
satisfy
07:48
your quality requirements all kind of
07:51
tests
07:52
quality their security tests compliance
07:55
tests
07:56
stress tests performance tests whatever
07:58
they are
07:59
you move them into a new area
08:02
the new area again you deploy to new
08:05
environment
08:06
and this goes all the way to production
08:10
again i’m sure you all know that
08:14
i’m not sure if you all do it but at
08:16
least you all know that that’s for sure
08:18
now you say okay how it’s different with
08:22
docker
08:23
well docker has a very interesting way
08:27
of building and i mean docker build
08:31
docker build is simple and powerful way
08:35
to pack your containers with all kind of
08:38
stuff
08:39
and it’s so simple and so powerful that
08:43
a lot of people are inclined to
08:46
docker build all the things what i mean
08:49
docker build all the things
08:51
instead of promoting the binary after
08:54
building it twice
08:55
once they tag their
08:58
docker file with different status
09:02
it can be docker file which is in a
09:04
development docker file and staging
09:06
docker file production and then just
09:09
rebuild
09:10
in each and every environment and looks
09:13
like
09:13
it works right i mean there is nothing
09:15
technically impossible
09:17
you just rebuild and every
09:20
uh um on every stage and
09:23
you get you end up with the same
09:27
outcome but is it the same outcome
09:31
i will claim that it’s not that fast and
09:33
cheap builds
09:34
are not always the way to go and they
09:37
are not always the way to go
09:39
because when you look at the typical
09:41
docker file and
09:43
i wanted to come with like a ridiculous
09:45
example and write some
09:47
horrible docker file but i didn’t have
09:49
to i just went to the internet and found
09:52
tons of docker files which are as
09:54
horrible as possible
09:56
and again just a reminder jfrog.com/
09:58
shownotes
09:59
if you go there under resources you will
10:01
find a live link to this file
10:04
it is real life everything in this file
10:07
except one or two lines is
10:11
resolving a latest version of
10:13
dependencies
10:15
a latest version of the base image a
10:18
latest version
10:19
of python and node.js and then
10:24
a latest version of the application
10:27
itself
10:27
there is one line creating a directory
10:30
which is not a dependency manager
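The kind of Dockerfile he is describing might look like the following sketch — a hypothetical reconstruction of that style of file, not the actual example from the slides. Nearly every line resolves "whatever is latest right now":

```dockerfile
# Base image: resolves whatever "latest" points to today
FROM ubuntu:latest

# System packages: apt-get resolves the newest available versions
RUN apt-get update && \
    apt-get install -y git python3 python3-pip nodejs npm

# Application dependencies: pip and npm resolve the latest matching versions
COPY requirements.txt package.json ./
RUN pip3 install -r requirements.txt && npm install

# The one line that is NOT a dependency resolution
RUN mkdir -p /opt/app

# The application itself: cloned at whatever HEAD happens to be right now
RUN git clone https://github.com/example/app.git /opt/app
```

Every FROM and RUN here can produce a different result on each build, which is exactly the repeatability problem being discussed.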
10:32
and this is just wrong then you say
10:35
well that’s true but we can fix it
10:39
well let’s fix it we can nail the
10:42
version
10:43
of the base image to a to
10:46
a version we can say well not the latest
10:48
ubuntu but 19.04
10:50
is it better well it’s better to an
10:53
extent
10:55
yes we won’t get the latest version
10:57
which is what
10:58
20.10 almost 21.04
11:02
instead we will keep getting 19.04 but
11:05
there have been two years since 19.04 is
11:08
out
11:08
and in those two years canonical who is
11:11
the maintainer of the ubuntu
11:13
image pushed bunch of changes
11:16
directly into this version and that will
11:19
be mostly
11:20
security patches while it’s important
11:23
to get security patches it’s still
11:26
something that will change
11:28
under our hands every time we build
11:31
this docker file in you there is a very
11:34
good chance
11:34
that we will get different results
11:36
because it downloads
11:38
new version of ubuntu 19.04
11:42
the real way to actually
11:46
tell docker to go and grab
11:49
the exact file from docker hub is using
11:52
the fingerprint
11:54
now if i’m using the fingerprint sha
11:56
256
11:57
i can be 100 sure that the
12:01
version of ubuntu that i download now
12:04
will be
12:05
absolutely the same the real question
12:08
now
12:09
is what version is it how can you know
12:13
which version of ubuntu i’m downloading
12:15
now
12:16
you know what maybe it’s not even a
12:18
valid fingerprint maybe it’s my cat
12:20
now going through my keyboard
12:24
and this is what i typed there is
12:26
actually no
12:27
way for me to to know
12:30
which version of ubuntu am i using
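Pinning by digest looks like this — the digest below is a made-up placeholder, not a real Ubuntu digest:

```dockerfile
# Pull by content digest: byte-for-byte the same image on every build,
# but the digest alone tells a human nothing about which version it is
FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```

You get perfect repeatability at the cost of readability, which is exactly the trade-off he is pointing at.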
12:34
now what about the rest of them
12:38
how about those whether you can
12:41
lock those to a version and whether this
12:45
version is truly immutable
12:48
i know people people who know
12:51
apt
12:52
if you know apt-get i can ask you hey
12:56
can i pin the version down and you can
12:58
tell me yes or no
13:00
and then i’ll ask you is it immutable
13:02
you can tell me yes or no
13:03
and then you can ask me and then i can
13:06
ask
13:07
you what about transitive dependencies
13:09
in the end of the day if you know apt
13:11
really well you can tell me yes you know
13:14
what i’m sure
13:15
it’s immutable but you know what
13:19
how about this mvn clean install
13:25
which version of dependencies it
13:27
installs now
13:29
if you don’t know what is it and it’s
13:31
maven you can say
13:32
well i have no idea if you know me then
13:35
you can say well yes you can go
13:37
into the pom file and nail down the
13:40
versions
13:41
if you know me even really well
13:44
you will tell me well i have no idea
13:46
because the way maven behaves
13:48
it is almost impossible to nail down
13:52
all the different dependencies to
13:53
their version so in the end of the day
13:57
you need to have tons of knowledge about
14:00
each and every package manager
14:03
and dependency manager in the world in
14:05
order to create
14:08
truly immutable docker file
14:11
and you know what what about that every
14:14
command that you run might
14:16
as a side effect and sometimes as a
14:18
transitive side effect
14:20
bring different dependencies and change
14:23
your environments
14:25
under your fingerprints and under your
14:28
fingertips and then
14:29
what is happening is that while you
14:32
build
14:33
every time you get a slightly different
14:37
container
14:38
and you remember how i mentioned we
14:40
don’t really trust docker we have this
14:42
feeling that
14:43
hey i’m going with it in the production
14:45
but i’m kind of not 100 sure what is
14:48
being deployed
14:49
this is why
14:52
now how do we fix it there is really
14:55
only one way
14:57
we instead of rebuilding we need to
15:00
promote
15:02
immutable and stable binaries we need
15:05
to build once and then as in the first
15:09
diagram
15:10
go through quality gates
15:13
with the same binary
15:16
now i i keep mentioning those gates
15:19
what’s
15:20
what’s up with those gates why those
15:21
gates are so important
15:23
those gates are so important because
15:26
those are the gates
15:27
that won’t allow qa to test dev images
15:31
or staged images which weren’t tested
15:35
or obviously going to production with
15:38
images
15:39
that shouldn’t go to production you
15:42
understand that
15:43
if how can we guarantee that
15:47
whatever we have in production are only
15:51
the artifacts that
15:54
went through the entire probe of the
15:57
entire pipeline
15:58
through each and every quality gate
16:02
now let’s see how we do it in order to
16:05
set up those gates
16:07
we need to build a system
16:11
that will be separated for
16:14
different environments we want to
16:16
separate dev
16:17
from prod to a level that a mistake
16:21
is almost not possible
16:24
there are different ways to achieve it
16:26
yeah sure absolutely
16:27
um so again uh the the the
16:31
in in one of the slides previous slide
16:33
you said that i cannot trust
16:36
i have to use like a hash code of
16:38
the
16:39
of the of the image to make sure that
16:41
it’s uh
16:42
consistent right to make sure that i’m
16:44
getting what i’m supposed to
16:46
um so again that makes sense when you’re
16:50
getting something
16:52
outside uh when you’re getting some when
16:55
you’re using the internal ones
16:56
right um you like again
17:00
the this hash code again while it’s very
17:02
predictable
17:03
you’re getting predictable results it’s
17:05
also now there’s no semantic versioning
17:07
right so
17:08
how do you exactly yeah
17:11
i mean what you asked about the outside
17:14
kind of implying that it’s different
17:16
when you use your internal registry this
17:19
is where the money is
17:20
you’re 100 correct the only way to go
17:24
around it and we’re going to mention it
17:26
soon enough is
17:27
having control of your base images if
17:30
you know
17:31
exactly which version of ubuntu you have
17:34
in your internal registry you can just
17:38
use latest you can use 19.04 you can use
17:40
whatever you like
17:42
and you will still get a repeatable
17:45
result but look how here
17:47
in this docker image that i found online
17:49
they
17:50
actually use ubuntu without the host
17:53
name
17:53
they use ubuntu from docker hub they use
17:56
the latest version
17:58
or the latest dynamic tag which is 19.04
18:01
this is what it is it’s a dynamic tag it
18:04
can change
18:05
without any guarantees that they will
18:08
repeatedly get the same binary
18:11
okay thank you that makes it all right
18:14
okay so um let’s go back to the question
18:18
how do i separate
18:19
development for production and there are
18:21
a lot of options that you can do it
18:23
the first and this is kind of what
18:26
implied by docker documentation
18:28
is using docker built-in metadata docker
18:31
has metadata
18:32
labels and labels are key value pairs
18:35
that you can put on basically everything
18:38
including
18:39
obviously images and you can say for
18:42
example
18:43
label status equals
18:47
staging or um
18:51
label maturity
18:54
equals qa something like that and
18:57
and then when you deploy to the right
19:00
environment
19:01
let’s say now you have a kubernetes
19:03
cluster in your qa
19:05
you write an admission controller
19:08
that will check those labels and if
19:11
those labels
19:12
are not status
19:15
equals qa or maturity equals
19:19
staging you will not let
19:22
this um
19:26
this uh image to be deployed in this
19:29
container in in in this cluster
19:31
which is great but i don’t know if you
19:34
noticed
19:35
uh just over those two sentences and two
19:38
simple examples that i said
19:40
i managed to confuse what is the name of
19:42
the label
19:43
what is the value of the label one time
19:45
i used the name maturity the other
19:47
status
19:48
and one time i said the the status was
19:51
testing and the other qa
19:53
those are free text
19:56
strings obviously it’s very hard to make
20:00
sure
20:01
that they will always match and
20:03
obviously this
20:04
is how disasters happen
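As a sketch, the label approach he describes looks like this — the label name and value are illustrative, and the fact that nothing standardizes them is exactly the weakness he calls out:

```shell
# Attach a maturity label at build time
docker build --label maturity=qa -t myapp:1.0 .

# An admission controller (or any gatekeeping script) can read it back
docker inspect --format '{{ index .Config.Labels "maturity" }}' myapp:1.0
```

If one team writes `maturity=qa` and another writes `status=testing`, the gate silently stops gating.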
20:07
so this is okay but we can do better
20:11
what else we can use we can use what is
20:14
called
20:14
docker repositories repositories
20:18
are spaces within the registry
20:21
which can be rbac controlled and this
20:24
is interesting because this is what
20:26
would
20:26
eventually we want to do we want to give
20:29
each and every cluster
20:31
username and password or
20:33
rotatable keys or whatever
20:35
and just say this is
20:38
the space that you go to get your images
20:41
this is your username and password if
20:44
you can log in
20:45
and get the images those are the right
20:47
images
20:48
if you cannot log in and get the images
20:50
obviously there is a bug
20:52
but no harm done because you won’t be
20:54
able to get those images
20:56
and run them in the wrong environment
20:59
and this is great
21:00
the problem with repositories in docker
21:04
is that they are copy paste from
21:07
github repositories and as you know
21:10
github are sources and github
21:13
don’t have the notion of maturity
21:16
repositories for obviously a very very
21:19
good reason
21:20
because sources don’t have different
21:23
maturities
21:23
you remember where the sources are in
21:26
our promotion pipeline
21:27
they are always be before the pipeline
21:30
even begins
21:31
we first take the sources convert them
21:34
to binaries
21:35
and then we retire the sources part we
21:37
don’t need it anymore
21:38
and instead we start working with
21:40
binaries
21:41
so the sources github don’t even have
21:44
the notion of
21:46
repository for staging and repository
21:49
for qa
21:50
because we don’t stage and we don’t qa
21:52
sources
21:53
we do it with binaries so we might
21:57
try and use those repositories but
22:00
really the
22:00
repositories are intended for what
22:03
repositories in github were
22:05
intended which are different
22:09
um different projects so this is kind of
22:12
a mess
22:13
now what we really want to do is exactly
22:17
that we want to have registry
22:20
per environment we want to have
22:23
integration registry
22:25
system testing registry staging registry
22:28
and production registry
22:30
each and every one of them completely
22:32
isolated
22:33
from the point of view of the cluster
22:36
that consumes the images
22:38
so when a staging cluster goes to
22:41
tries to pull an image to create a
22:44
container out of it
22:46
it will always be from the right phase
22:49
in the pipeline just because the staging
22:52
cluster
22:53
doesn’t know of the existence of
22:56
any other registries he it has one
22:59
registry
23:00
that it gets all the images from
23:04
which you say great okay thank you we
23:07
now we know let’s go and start doing it
23:10
the problem is not so easy it’s docker
23:15
it has quirks
23:18
i’m sorry go back and go for this one
23:21
more time
23:22
you’ve lost me for a second and don’t
23:24
worry still
23:25
yeah so the separate registries per
23:27
environment
23:29
yes but okay
23:32
does it make sense well the fact that
23:35
it’s a lot of registries
23:38
the fact that the best quality gate is a
23:41
separate registry
23:42
the environment doesn’t know about the
23:44
existence of anything else
23:45
yeah it’s extreme but it’s definitely
23:48
the most
23:50
effective all right right so the thing
23:53
is let me convince you that we don’t
23:55
care that it’s extreme
23:57
and it will take me a couple of uh
24:00
slides to get to that because i
24:02
understand why
24:03
are you saying that it’s extreme and it
24:06
looks like an extreme
24:07
because docker
24:11
kind of implies that you will have one
24:14
registry it implies that you have one
24:18
registry
24:19
on a very basic level by
24:22
the format of the docker tag
24:27
docker tag says look you have
24:31
the name sorry you have the host you
24:34
have the port
24:35
and then you have the image that’s all
24:38
there is no
24:39
additional uh
24:43
token for expressing
24:46
which registry in this host
24:49
do i want to use there is no way
24:52
to say hey under this registry under
24:55
this host
24:56
i have registries for dev for qa
25:00
for staging from prod and this is so
25:03
fundamental that now we really have a
25:06
problem
25:07
if we want to use multiple registries
25:10
there is no
25:12
simple way and this is where max
25:15
tells us hey it’s extreme it’s extreme
25:18
because it’s not built in
25:19
it’s it’s hard i don’t even know where
25:21
to start maybe
25:23
i can use crazy like um url
25:26
rewriting
25:27
like virtual hosts and ports um when
25:31
someone actually do docker tag
25:34
for a host on a port i actually want
25:38
to rewrite it and go to
25:41
a different url when i have the right
25:44
registry right and and what we want to
25:47
rewrite is this simple
25:49
host port busy box to actually
25:53
docker dev busybox how can i do it
25:57
well this is example of nginx you can do
26:00
the same with our
26:01
with apache httpd with um
26:05
haproxy and with every
26:08
service mesh basically you say
26:12
you have this port 5001 which is like a
26:15
physical port
26:16
that’s like you listen on that there are
26:20
requests that go there but when there is
26:23
a request
26:24
that comes to the port 5001 what
26:27
they really mean is going to
26:30
the same registry but to the registry
26:34
same host but and same port but to the
26:38
docker dev registry
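A minimal nginx sketch of this port-mapping idea — backend and registry names are illustrative, and a real setup (for example the reverse-proxy configurations registry vendors document) has more moving parts such as TLS and auth headers:

```nginx
# Anything arriving on port 5001 is silently routed to the docker-dev registry
server {
    listen 5001;
    location /v2/ {
        # Rewrite the standard Docker Registry API path onto one named registry
        proxy_pass http://registry-backend/api/docker/docker-dev/v2/;
    }
}
```

One physical port per registry, so the Docker client never needs to know that multiple registries share a host.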
26:40
now i understand that it looks
26:42
cumbersome
26:43
and as you mentioned much extreme
26:46
but there is a better way and the better
26:49
way
26:50
is when we spoke about this
26:53
um this um
26:57
format of docker tag there is another
27:01
token that we can plainly abuse
27:04
outside of docker hub when we are
27:07
talking about
27:08
the registry that you host there is
27:11
really
27:12
no use for the username
27:15
which is usually the name of the company
27:18
just because it’s your internal
27:20
registry everybody in your internal
27:23
registry
27:24
are from the same organization you don’t
27:26
need to specify it every time
27:28
instead you can use this token to
27:31
specify
27:33
the registry you can say docker tag
27:38
httpd goes to staging
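As a sketch (hostname and repository names hypothetical), the trick is to reuse the slot where the Docker Hub username normally goes as the registry name:

```shell
# On Docker Hub the first path segment is a username/organization:
#   docker tag httpd:2.4 mycompany/httpd:2.4
# On an internal registry that supports it, the same slot can name the
# maturity stage instead:
docker tag httpd:2.4 registry.example.com/docker-staging/httpd:2.4
docker push registry.example.com/docker-staging/httpd:2.4
```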
27:42
brilliant easy and all we need
27:45
is a tool that supports it and we’re
27:46
going to show
27:48
how we do it in a sec now once we
27:52
overcame this stupid limitation
27:55
we are now facing a completely new prod
27:59
problem and this is how
28:03
do we promote images
28:06
from one registry to another now we
28:10
have two registries in the same host
28:14
but there is no promote there is no move
28:17
there is no any command in docker
28:20
that allows moving images
28:24
from one registry to another just
28:26
because as we just spoke
28:28
it never was a scenario
28:31
because when registries are a single
28:34
registry per host
28:36
moving between the registries always
28:38
involves
28:39
pulling re-tagging pushing and it’s fine
28:42
it’s
28:43
it’s over network anyway but now we
28:45
eliminated this
28:47
the network now we have two images
28:50
which are in two registries in the same
28:53
host
28:54
how can i take this image and move it to
28:57
the other registry which is exactly
28:59
there without
29:01
pulling re-tagging and pushing
29:05
let me show you the answer this comes
29:08
with the same tool that actually allows
29:11
you
29:12
having multiple registries per host
29:15
there are a couple of those tools on the
29:18
market
29:19
obviously i’m here because jfrog pays my
29:21
salary
29:22
the example there i show you and that’s
29:24
just an example
29:26
is of the jfrog container registry
29:30
which is a free tool that you can
29:32
download and use
29:33
or the jfrog free tier which is a
29:36
free cloud service that you can open the
29:40
account and use
29:41
no payment no credit card required
29:43
nothing you just use it
29:45
the way it works is that we have this
29:49
notion
29:50
of multiple registries within the tool
29:54
and you can see how they are all stacked
29:56
here
29:57
dev local testing staging and prod
29:59
local here they are
30:01
now on top of that we also have two more
30:04
types of registries
30:06
one is the proxy for
30:09
remote registry and that’s our docker
30:12
hub remote
30:13
what it does is it proxies docker hub
30:16
for you
30:17
and this is doing a number of things
30:19
first
30:20
it caches your images now docker hub
30:24
have limitations for how many pulls
30:27
you can do for free during the day
30:32
once you cached your docker image you
30:35
won’t have to pull them again and again
30:38
from docker hub
30:39
and then you won’t hit those limits more
30:42
so
30:43
if you use the cloud service that i
30:44
mentioned you actually
30:46
got rid of the limits at all
30:50
it doesn’t matter how many times you hit
30:53
the docker remote or docker remote
30:56
hits docker hub it’s all free for you
30:59
right so now we have an entire zoo
31:03
of registries we have a bunch of local
31:06
registries
31:06
we have those remote registries but the
31:09
developer
31:10
don’t want to know about any of that all
31:12
they want
31:13
is pull base image play with
31:16
it and then push their uh
31:20
the end result not them but their ci
31:23
into the right place
31:24
of the pipeline automatically and you
31:27
remember what is the right place
31:29
the right place is always the entry
31:32
to the pipeline because we have to
31:35
ensure
31:36
that all the process works
31:39
now how does it work we have a virtual
31:42
registry the docker virtual
31:44
the virtual registry actually we can say
31:46
it rules
31:47
we can say okay when someone resolves
31:50
from it go and search in docker remote
31:54
and search in all of those registries
31:58
because for resolution we have no
32:00
limitation it’s for developer
32:02
they need to know about everything now
32:04
when your ci server deploys
32:07
it will always and only deploy to the
32:10
docker dev local
32:12
because you remember this is your
32:16
entry to the pipeline now you start the
32:18
process of promotion
32:20
after you check your image
32:23
it um it kind of satisfies all the
32:26
quality requirements
32:28
and now you want to promote it this is
32:31
done
32:31
by an api call by your pipeline
32:35
management
32:36
software which might be your ci server
32:39
your cd
32:40
tool or your shell script that that runs
32:44
deploys run tests promotes deploys run
32:47
test promotes whatever you use
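With Artifactory-family tools, for example, that API call is a single HTTP request to the Docker promotion endpoint — the host and repository names below are hypothetical:

```shell
# Promote myapp:1.0 from the dev registry to staging without pull/tag/push.
# "copy": false moves the image; true would leave it in the source repo too.
curl -u "$USER:$PASS" -X POST \
  "https://registry.example.com/artifactory/api/docker/docker-dev-local/v2/promote" \
  -H "Content-Type: application/json" \
  -d '{"targetRepo": "docker-staging-local", "dockerRepository": "myapp", "tag": "1.0", "copy": false}'
```

Because it is a server-side operation, no image bytes cross the network, which is what makes the promotion effectively instant.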
32:49
now this is this promotion is within the
32:52
same tool
32:53
now this tool has a deduplicated
32:56
storage
32:57
it means that it stores all the layers
33:00
only once and since it stores all the
33:03
layers only once
33:05
those promotions are immediate
33:08
and free in terms of networks there is
33:12
no network
33:12
but also in terms of io even files are
33:16
not copied
33:17
the only thing changes is the visibility
33:21
how the images are are shown in which
33:24
registry they show now the beauty of it
33:28
that we fool the
33:31
clusters into thinking they are
33:34
different registries
33:35
and there are no other registries so when
33:38
we look at for a production cluster for
33:40
example
33:41
the production cluster has no idea about
33:44
the existence
33:45
of any other layers or images
33:48
in any other maturity level
33:51
because it only knows that there is one
33:55
registry
33:56
that it always takes the images from and
33:59
this gives us the win-win-win we desire
34:03
so yes we have a single point of access
34:05
for multiple registries when needed
34:07
and that’s the virtual thing that we
34:09
spoke about
34:11
we have completely isolated environments
34:14
because um as i mentioned the images
34:17
live
34:18
in a certain registry and the
34:21
environment
34:21
only knows about this registry and only
34:24
sees what’s going on there
34:26
and the promotions between them are
34:28
immediate
34:29
and actually free no files are moved
34:32
over the network on the disk or
34:34
otherwise
34:36
so this is the
34:39
ironclad quality gates and the true
34:43
promotion
34:44
pipeline for your docker images as i
34:48
already mentioned
34:49
there are number of tools that do that
34:51
and i encourage you to do the research
34:54
and find the one that works best for you
34:58
obviously um i’m i will be more than
35:01
happy if you give
35:02
um our software a try while you’re on
35:05
the
35:06
journey of uh picking the right tool
35:09
that manages to do promotion with docker
35:12
images
35:13
there is one more aspect that we need to
35:15
remember
35:16
and this well while we know how to
35:20
manage
35:21
our own images now and how to successfully
35:24
promote them
35:25
the question is what’s going on with the
35:28
rest
35:29
of our dependencies how can we make sure
35:33
that our docker base image
35:36
never changes you remember that’s the
35:38
question that max
35:40
asked and he kind of already gave away
35:42
the answer
35:43
spoiling me the entire intrigue so the
35:46
way you do it
35:47
is with the proxy registry
35:50
the proxy registry it’s right here you
35:52
can see how
35:54
the ubuntu version the one that we that
35:57
we spoke about
35:58
is now living in docker
36:02
remote cache and docker remote cache
36:05
is our in-house caching registry
36:09
for docker hub no matter what
36:11
limitations docker hub imposes on you
36:13
you are safe to use the same version
36:16
over and over again because it is
36:20
cached in your tool
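One way to get such an in-house caching registry (shown here with the open-source Docker registry in pull-through-cache mode rather than any particular vendor's product; the port and paths are placeholders) is:

```shell
# Minimal sketch: an in-house registry that caches Docker Hub, so a
# version you pulled once stays available no matter what upstream does.
# Paths and the port are placeholders.
cat > config.yml <<'EOF'
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  remoteurl: https://registry-1.docker.io
EOF

# Run it, then pull through it instead of straight from Docker Hub
# (printed as a dry run):
echo 'docker run -d -p 5000:5000 -v $PWD/config.yml:/etc/docker/registry/config.yml registry:2'
echo 'docker pull localhost:5000/library/ubuntu:20.04'
```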
36:24
now there are other uh
36:27
other dependencies that you have for
36:29
example
36:30
you might have system dependencies for
36:33
example your jdk
36:35
or infrastructure
36:38
such as your tomcat or your application
36:42
itself
36:43
you can host all those in
36:46
jfrog container registry and obviously
36:48
jfrog platform
36:50
and then have them safely stored
36:53
and managed for you now
36:57
there is more you have the same pipeline
37:00
for each and every application component
37:05
inside your application
37:08
right so you have let’s say what do you
37:11
put in your
37:11
um docker images you put java built
37:14
with maven
37:15
or you put javascript built with
37:18
npm
37:19
or you put your go applications
37:22
whatever you use you can build the same
37:25
pipeline
37:27
as now we just did with docker using
37:29
again
37:30
free um jfrog tools
37:34
so again there are other tools what i
37:38
want you to remember
37:39
is this own your own dependencies
37:44
own your base image don’t redownload it
37:47
from docker hub every time
37:49
own your infrastructure don’t redownload
37:52
it every time
37:53
and own your application files do not
37:56
rebuild it every time
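"Own your dependencies" shows up directly in the Dockerfile: resolve the base image, and ideally the infrastructure too, through your own registries. A sketch, with invented registry and image names:

```shell
# Sketch: a Dockerfile that never touches Docker Hub directly.
# All registry and image names here are placeholders.
cat > Dockerfile <<'EOF'
# base image resolved through our own caching proxy of Docker Hub:
FROM registry.example.com/docker-remote/ubuntu:20.04
# infrastructure (tomcat here) copied from an in-house image rather
# than downloaded from the open internet at build time:
COPY --from=registry.example.com/docker-infra/tomcat:9 /usr/local/tomcat /usr/local/tomcat
EOF
```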
38:00
and now back to again yeah go ahead
38:03
sorry uh
38:04
i just want to get your input on this
38:08
um when you have
38:11
multiple uh application images in your
38:13
registry
38:15
uh is there a naming convention that you
38:17
recommend
38:18
for those base images
38:23
oh so like a project name
38:27
slash service yeah yeah yeah yeah so
38:30
yeah so obviously
38:32
there are naming conventions firstly for
38:35
registries themselves
38:36
right and and uh it makes sense uh
38:39
having as you saw there is really no
38:42
reason now
38:43
not to create separate registries for
38:46
everything
38:46
just because they are free the promotion
38:48
between them is free
38:50
and even if you use base images from
38:53
other registries
38:54
you actually don’t copy them over just
38:58
because it’s that duplicated storage you
38:59
can reuse it
39:01
like virtually
39:04
uh limitless and that means that you
39:07
might consider having different
39:09
registries for different projects
39:10
because why not it’s clean you come to
39:14
a registry and all you see are docker
39:16
images which are related to this project
39:19
in this project you will have different
39:22
registries
39:23
based on your pipeline in some of them
39:26
you will have dev qa prod
39:29
in others you will have dev
39:31
qa staging prod
39:33
and then obviously there is a naming
39:35
convention for
39:37
a for for those things now for
39:40
images themselves well again you can
39:43
embrace
39:44
naming convention but even better and
39:47
i’m going to show you in a sec
39:49
you can use metadata to express
39:52
information
39:52
about the images and they don’t have to
39:56
be in the name
39:57
because at some point in time you just
39:59
go ahead and it’s just like too much
40:02
you put in the you put in the name of
40:04
the project
40:05
the name of the application the version
40:07
the commit sha
40:09
the date and the name become absolutely
40:11
useless
40:12
instead you can use metadata to express
40:16
everything which is more than
40:19
just a name can i make a small
40:23
suggestion
40:24
sure uh the um again you’re saying this
40:28
environment uh based uh
40:31
registries right right maybe it’s not the
40:34
environment-based registries maybe it’s
40:36
environment type based registries
40:38
because you can have more than one dev
40:41
do you want to have multiple registries
40:44
for every dev environment
40:45
or is it a separate registry for each dev
40:48
environment
40:49
yeah no absolutely you want to have
40:51
multiple registries for different dev
40:52
environments
40:53
because if you have different dev
40:56
environments
40:57
you want to limit each and every cluster
41:00
to see
41:00
only its own registry this is how you
41:03
simplify
41:04
the work you know what huh now i thought
41:07
about a nice analogy
41:08
you know how a good object-oriented
41:12
design
41:13
simplifies the flow of the application
41:16
how people say you know what don’t use
41:18
a branched if else
41:22
algorithm because it’s complicated it’s
41:25
hard to follow
41:26
and it’s very easy to make mistakes
41:29
instead
41:30
use encapsulation and inheritance and
41:34
composition
41:35
and polymorphism to achieve
41:38
the same uh the same results here
41:41
it’s exactly the same you can have one
41:44
registry
41:45
with tons of images and then it will be
41:48
up to your deployment logic to
41:52
kind of choose which images should go
41:54
there or shouldn’t
41:56
and mistakes will be made instead
41:59
use multiple registries as many as you
42:04
need and make them
42:05
as narrow as you need
42:08
in order to make sure that the
42:10
deployment
42:11
is as easy as you go to this registry
42:15
whatever is the last image of the
42:18
application
42:19
take it no questions asked i don’t want
42:22
you
42:22
as a cluster to ever have to do some
42:25
kind of regex filtering
42:27
based on the name of the image or some
42:30
kind of flags that
42:31
are in the image name or metadata no
42:35
just go to this registry this is yours
42:38
whatever is there
42:39
just take the newest one and run it
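That "no questions asked" contract is visible in the cluster's own configuration: the deployment references exactly one registry and no filtering logic. A sketch with invented names:

```shell
# Sketch: a production deployment that only knows about the production
# registry (all names are placeholders). Whatever has been promoted
# into docker-prod is, by definition, deployable -- no regex, no flags.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
      - name: my-app
        image: registry.example.com/docker-prod/my-app:1.0.42
EOF
```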
42:45
i’m just i’m still not convinced that
42:48
this is the best way because the amount
42:49
of
42:50
overhead it creates for the users to
42:52
make sure they know which registry they
42:54
grab the image from
42:59
who are the users max when i’m talking
43:02
about the users
43:03
now i really mean your clusters
43:07
and your clusters are they don’t care
43:10
they really don’t whatever is there in
43:13
the yaml configuration file
43:15
as the as the registry name this is
43:18
what’s going to be used
43:20
when you talk about humans then the
43:22
situation is different
43:24
and when you log into your jfrog container
43:26
registry
43:27
and you see 50 different registries you
43:30
are completely overwhelmed you have no
43:32
idea what’s going on
43:33
what’s yours and what’s not there are
43:36
two
43:37
ways to um
43:40
to battle that the first is the virtual
43:43
registries
43:44
the virtual registries you set them up by
43:47
grouping
43:48
those dozens of small dedicated
43:51
registries
43:52
into a bigger space in which user can
43:55
find what they need
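In Artifactory-style tools that grouping is a "virtual" repository that aggregates the narrow dedicated ones; a sketch of such a configuration (repo names and host invented), with the API call printed as a dry run:

```shell
# Sketch: one virtual registry aggregating many narrow ones, so humans
# get a single browsable view while clusters keep their narrow views.
# Repo names and the host are placeholders.
cat > docker-all.json <<'EOF'
{
  "key": "docker-all",
  "rclass": "virtual",
  "packageType": "docker",
  "repositories": ["docker-dev-local", "docker-qa-local",
                   "docker-prod-local", "docker-remote"]
}
EOF
echo 'curl -u admin:token -X PUT https://example.jfrog.io/artifactory/api/repositories/docker-all -H "Content-Type: application/json" -d @docker-all.json'
```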
43:57
and on top of that you control them by
44:00
using
44:00
rbac access if you have a team that
44:04
works on a project
44:05
they don’t have to see the rest of the
44:08
registries
44:08
that have nothing to do with their
44:11
project
44:12
so you just set up the visibility in the
44:15
way that they will only see
44:16
what makes sense for them and then they
44:19
will end up
44:20
with four repos four registries and
44:23
those four registries are very clear
44:25
you have dev staging pre-production
44:26
production done
44:28
so it’s not overwhelming yes in general
44:31
in your jfrog container registry you
44:33
have hundreds
44:34
but for the sake of this particular
44:37
project there are four no harm done
44:41
no but again i i get that but when you
44:44
said that we’re gonna have a like if we
44:46
have a dev one dev two dev three
44:48
and every dev environment has to have
44:50
its own registry
44:52
that part is getting a bit more
44:54
complicated so all right so maybe maybe
44:56
we have a misunderstanding here why do
44:59
you have four dev environments
45:03
we need to
45:06
the client needs it the client needs
45:10
four i mean sorry the question is are
45:13
they the same
45:13
that’s what i wanted to ask yes
45:17
if they are the same and the
45:19
requirements of
45:20
what goes there is the same they can
45:23
obviously use the same registry
45:29
right the question is are the
45:32
requirements
45:33
for the images that they deploy
45:36
differ if they are inserted
45:39
i’m sorry so there’s a comment in the
45:41
chat from simone
45:43
uh that again what if the environments
45:45
are not the same because the dev
45:46
environment is used by
45:48
team a and dev two is used by team
45:52
b right and they just made it slightly
45:54
differently because
45:55
it’s a dev environment right let’s say
45:57
the people are
45:59
uh exploring experimenting
46:02
oh yeah no absolutely that’s fine but
46:04
again the question is
46:06
what are the requirements
46:10
from the images if the requirements are
46:13
the same
46:14
then you can set up a registry that will
46:17
hold
46:18
the images that satisfy those
46:20
requirements
46:21
and whatever environments need the same
46:24
images
46:25
they just go there now if the
46:27
requirements are
46:28
different we don’t want
46:32
this well if it’s that requirement a
46:36
then only admit images
46:39
that have a in their name this logic
46:42
is a way to disaster what we want to do
46:46
is we want to encapsulate this logic in
46:49
the promotion
46:51
and not in the admission
47:02
i’m that’s okay i’m confused but that’s
47:05
okay anybody else has any questions
47:07
yeah so jason had a great question here
47:10
and i just opened the chat so there are a
47:12
bunch of
47:13
um um a bunch of other good comments and
47:18
questions sorry for
47:19
not looking at it previously um um
47:22
here it is um how does the local cache
47:25
of docker hub deal
47:26
with potentially mutable upstream tags
47:29
and that’s a wonderful question let’s
47:31
say i have now ubuntu latest
47:34
that was cached in my docker hub
47:37
in my docker hub proxy
47:40
and and now i do do pull
47:44
i do docker pull ubuntu latest
47:48
should now the proxy say well
47:52
i have docker i have ubuntu latest here
47:54
you go or should it say
47:56
well i have the latest but maybe it’s
47:58
not the latest
47:59
because the latest is mutable and maybe
48:02
already
48:02
changed in docker hub
48:06
and the answer is there is a timeout
48:08
flag
48:09
that you put there and um
48:12
and you just decide how often
48:16
do you want to check for latest in
48:18
docker hub it’s a trade-off
48:20
you check too often you run into docker
48:23
limitations
48:24
and your builds become slower because
48:27
you run there
48:28
and check every time is there is a new
48:29
one but if you check
48:31
too seldom then you might miss the
48:34
changes of the latest in the docker hub
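In Artifactory-style remote repositories that timeout is a repository setting; a sketch of the relevant fields (field names as in Artifactory's repository-configuration JSON, values invented):

```shell
# Sketch: how long the proxy trusts its cached copy of a mutable tag
# before re-checking Docker Hub. Larger values mean fewer upstream
# requests but a staler "latest"; smaller values mean fresher results
# but more rate-limit pressure. The numbers here are placeholders.
cat > docker-remote.json <<'EOF'
{
  "key": "docker-remote",
  "rclass": "remote",
  "packageType": "docker",
  "url": "https://registry-1.docker.io",
  "retrievalCachePeriodSecs": 21600,
  "missedRetrievalCachePeriodSecs": 1800
}
EOF
```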
48:37
now the good news is that you don’t
48:40
want
48:40
the automatic check for latest in docker
48:43
hub
48:44
anyway because you remember we spoke
48:47
about it the last thing you want
48:49
is downloading a new dependency which
48:52
will create an
48:53
unexpected binary that you will drag all
48:55
the way to production
48:57
without knowing that one of the
48:59
dependencies changed
49:01
what you want is a controlled
49:04
check for your images
49:07
so whatever you used in production you
49:10
are going
49:11
to cache forever in one of your
49:15
registries you move it there and it’s
49:17
there forever because you might need it
49:19
in any point of time and then you can
49:22
run
49:23
a test in your test environment give me
49:25
the latest
49:26
let’s see if it broke anything it didn’t
49:29
break anything
49:30
good i’m going to stash this latest i’m
49:33
going to save it as the one that i’m
49:35
going to use for production now
49:37
and run from there
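That "stash what you tested" step can be a few docker commands in the pipeline. A sketch that prints the commands rather than running them (registry names are invented):

```shell
# Sketch: after "latest" passes tests, freeze it under an immutable tag
# in a registry we own, so production never depends on the mutable
# upstream tag. Printed as a dry run; all names are placeholders.
REMOTE=registry.example.com/docker-remote    # caching proxy of Docker Hub
PINNED=registry.example.com/docker-approved  # images cached forever

stash() {
  image="$1"; pinned_tag="$2"
  echo "docker pull $REMOTE/$image:latest"
  echo "docker tag $REMOTE/$image:latest $PINNED/$image:$pinned_tag"
  echo "docker push $PINNED/$image:$pinned_tag"
}

stash ubuntu 20.04-tested-2021-03-11
```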
49:40
um i hope it um
49:44
uh answers jason’s question
49:48
we are almost done so let me finish the
49:50
presentation i have like four more
49:52
slides
49:52
and then we will go into more into more
49:56
questions
49:56
so we spoke about how we own our
49:59
dependencies
50:00
and now and this is very related to what
50:02
i just um
50:03
spoke about with jason well i want
50:07
latest
50:07
right because latest is is
50:09
convenient
50:11
latest is convenient because when
50:13
someone created a new version
50:15
of an internal dependency i don’t want
50:18
to go
50:19
into each and every docker file and
50:21
change something there
50:22
i want to say give me the latest i’m in
50:25
dev mode
50:26
i don’t give a that it changes i
50:28
actually want the changes
50:30
and this is a normal trade-off between
50:33
static tagging and dynamic tagging
50:35
it exists everywhere if you are
50:37
familiar with java and use maven
50:39
you’re familiar with the notion of
50:41
snapshots you can
50:43
create an artifact which will be of a
50:46
version snapshot
50:47
and that means basically dynamic tagging
50:50
the patches in semantic versioning is
50:53
again the same context
50:55
and obviously the latest in docker so
50:57
this
51:00
trade-off of assigned once remains
51:03
unchanged
51:04
and provides us with reproducible builds
51:08
because it is always the same and on the
51:10
other side
51:11
when i say for dynamic tagging give me
51:14
the latest
51:15
good state of image i really don’t care
51:19
what it is
51:19
because you already tested it and i just
51:22
want to use it
51:23
this is very easy to use the way to
51:27
achieve
51:27
both and today i am in the win-win mode
51:30
for you
51:31
is yeah but then you don’t know which
51:33
version
51:34
the way to achieve both is doing
51:37
double tagging you can have
51:40
an image and this is again an example
51:43
from jfrog container registry that you
51:45
don’t have to use
51:46
because you have other means for example
51:48
with docker tags
51:50
to achieve the same is look at that
51:53
i have a latest right here but
51:56
then i have a metadata that describes
52:00
that this latest actually refers
52:03
to an immutable version number 26
52:07
and now i don’t have a problem using
52:10
latest
52:11
in my system within my closed and tested
52:14
environment
52:15
because i can always say aha i use
52:18
latest
52:18
but it actually was 26 i can always
52:21
reproduce it
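A sketch of double tagging with plain docker commands, recording build facts as OCI image labels instead of overloading the tag name (image names and values are invented; the commands are printed as a dry run):

```shell
# Sketch: publish one build under both an immutable version tag and a
# mutable "latest" tag, with build facts kept as labels rather than
# crammed into the name. All names and values here are placeholders.
IMG=registry.example.com/docker-prod/my-app

double_tag() {
  version="$1"; git_sha="$2"
  echo "docker build --label org.opencontainers.image.version=$version --label org.opencontainers.image.revision=$git_sha -t $IMG:$version ."
  echo "docker tag $IMG:$version $IMG:latest"
  echo "docker push $IMG:$version"
  echo "docker push $IMG:latest"
}

double_tag 26 abc1234
```

Anyone using `:latest` can later recover the immutable version from the image's labels, so the convenient tag stays reproducible.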
52:23
so with that just the conclusions to
52:25
remind you what we spoke about
52:27
you build only once you separate the
52:30
environments
52:31
you promote what you have built in terms
52:34
of artifacts
52:35
and you own and manage your dependencies
52:40
i’m jbaruch here for you on twitter
52:43
every day my dms are open
52:45
jfrog.com/shownotes
52:48
you go there for the slides
52:51
already there
52:52
all the links already there the video
52:54
once
52:55
max will share it with me and
52:58
a chance of winning a baby yoda
53:02
lego set which is pretty awesome
53:05
if you ask me with that thank you very
53:08
much
53:09
i think we have like seven minutes for
53:12
questions
53:12
so yeah again we have as much time as
53:15
you’re willing to give
53:16
uh just so you know and that’s yeah
53:20
seven minutes that’s why i’m that’s why
53:23
i said
53:24
sorry about that all right uh i don’t
53:25
know do you guess what i i just
53:27
do you want to go for qa for q a first
53:30
and then do the raffle
53:31
or do you want to do the raffle the other
53:33
is let me let me tell you how the raffle
53:35
works you go there you register
53:37
and then ari who is our meetup manager
53:40
will do the raffle offline i think later
53:43
today or maybe tomorrow
53:44
and we’ll let the the winner know um
53:48
through email that you will that you
53:50
will leave there so the raffle is
53:52
offline
53:52
we can use all this time for q a okay
53:55
excellent we have one more question from
53:57
uh simon
53:58
uh do you have to have a git repo to
54:01
play with this concept
54:03
no absolutely there is i mean you need
54:06
to have a git
54:07
repo to store your application
54:10
sources and your docker image but
54:13
remember all what we spoke about starts
54:17
through the promotion pipeline after
54:19
your sources
54:20
have been converted to binaries the
54:24
way you convert them into binaries is
54:26
kind of out of scope of this discussion
54:29
you can send your files by email and
54:32
then build them with
54:33
a docker file build on your machine
54:37
which we obviously don’t recommend but
54:40
once you have your binaries all that we
54:43
spoke about
54:43
applies
54:49
can i ask a question absolutely okay
54:52
so first i want to throw a small bone to
54:55
baruch and say that the
54:57
idea of taking a binary and moving it
55:00
through these stages i think is
55:02
you know very much widely accepted best
55:06
practice in the
55:07
cicd world and so
55:10
you know you have no idea how many
55:13
people don’t do it
55:14
i i i i deal with the real world every
55:18
day and it’s amazing how far things are
55:20
behind it some places
55:21
but but no i i just you know wanted to
55:26
support that side of things uh for a
55:28
second but my question is this
55:30
so i i’m totally in agreement with like
55:34
i want to have a limited set of things
55:37
for each of my
55:38
you know staging qa production sorts of
55:41
environments
55:42
but for dev i pretty much
55:46
don’t care i want to leave it open to
55:48
the world and i basically want this
55:50
thing to be a cache so that i don’t have
55:51
to deal with the docker hub rate limit
55:53
mess so does this support that
55:56
sort of functionality out of the box or
55:58
does that require
56:00
absolutely let me let me go back here
56:02
for a sec
56:04
here to this slide you can see how the
56:06
developer
56:07
when they resolve they have access to
56:10
everything through this docker virtual
56:12
that we really don’t have this
56:14
separation we want to give
56:16
as much control to the developers
56:18
since we trust them you know we hire the
56:20
best and all this stuff
56:21
and then we don’t really limit any of
56:24
that
56:26
and they stay away from the
56:29
limits of docker hub just by using this
56:31
cache
56:32
of docker hub remote or using our
56:34
agreement with docker hub
56:36
if it’s a cloud deployment then they
56:38
don’t even bother thinking about it at
56:42
all
56:48
and this is something that like we could
56:49
run ourselves or is this totally
56:52
software as a service yeah no no no there
56:54
are two options if you go with
56:56
jfrog container registry it’s on prem
56:58
you download it
56:59
you install it you run it yourself and
57:01
then you get exactly that
57:03
um and then you are protected by using
57:05
the cache
57:06
if you use the cloud service the jfrog
57:10
the jfrog platform the free tier of jfrog
57:12
platform then
57:14
it’s on the cloud as a service so you
57:16
can choose whatever works for you
57:31
i have a question sure absolutely
57:34
i can although i can barely hear you
57:38
yeah that’s better i forgot my
57:40
microphone again
57:42
um yeah i find they work much better in
57:44
front of your mouth not
57:45
up by your ear so yeah one problem i
57:49
don’t see this model solving
57:52
is security issues
57:57
um that arise from
58:00
um i guess security defects in the base
58:03
image
58:04
so i can see how i could use something
58:08
like this to
58:12
model something similar to this even to
58:14
say i’m going to build my own
58:15
my own base image and i’m going to base
58:18
it on
58:18
ubuntu whatever from docker hub and none
58:21
of that matters
58:22
right right i build my own pipeline that
58:24
says okay here is the base image that
58:27
everybody in my company is going to run
58:28
off of
58:30
and i’m going to make sure that it has
58:31
all of its security checks in place and
58:33
all of that stuff
58:35
but then once that’s
58:38
done how do you prevent
58:44
images based on a
58:48
image based on a base image that has a
58:51
security
58:52
flaw that is then patched in a future
58:55
release
58:56
how do you get make sure that doesn’t
58:58
get to prod i’m not sure if i’m
59:00
describing that right no no you’re describing it very well
59:03
you’re great
59:04
so that there are two uh dangers there
59:08
the first danger is i’ve downloaded the
59:11
latest
59:12
and i plan to control it i i take full
59:15
control of it
59:16
i move it to um my
59:20
local registry and i
59:23
build my own base image out of it and
59:26
then i just let all the company run from
59:29
it
59:29
and then i’m protected from changes but i
59:32
didn’t scan it for security
59:34
uh for security problems
59:37
this is done again with again with our
59:40
tools but also with others
59:41
um if you use for example the jfrog
59:44
platform the cloud
59:45
platform it comes with the security
59:48
scanner
59:49
that scans everything that goes into
59:52
your registries and it will say hey you
59:54
downloaded a new
59:56
um a new image
59:59
into your docker hub remote and it has a
60:02
known security vulnerability
60:04
so i won’t let you download it
60:08
to your ci server to build your own base
60:11
image out of it
60:12
or i will fail your build when you try
60:15
to do that
60:16
but i will protect you from this image
60:18
because i know it has problems
60:20
so this solves problem number one there
60:23
is problem number two
60:24
which is i now run on my own base
60:29
image
60:30
and there is a newer version
60:34
of the base image that i based off
60:37
which has security patches
60:41
how do i know that there is a new
60:44
version
60:45
with those important release notes
60:48
and for that unfortunately we don’t have
60:51
anything built in
60:52
what i recommend is just signing up
60:56
for new version announcements in docker
60:59
hub
61:00
or or elsewhere there are services that
61:03
do that
61:04
and then just make sure that you are on
61:07
top
61:08
of when your base image changed in
61:11
docker hub and why
61:13
and you want to push it in order to
61:16
test it and check whether your base
61:19
image
61:20
should migrate to it and on what
61:22
schedule
61:33
yep that makes sense thank you great
61:55
we have a good question uh do you have a
61:58
sample
61:58
pipeline for promotion uh yes yeah so we
62:02
have
62:03
we have a bunch of examples of all this
62:05
stuff
62:06
if you go to um jfrog
62:09
organization on github github.com slash
62:12
jfrog
62:12
you will see um
62:15
repositories that have
62:17
different examples for all these
62:19
different stuff
62:20
and it also has like exactly how how
62:23
how that all works
62:26
yes so the answer is absolutely ah
62:30
finally simon and thunderling and and
62:33
i’ll post it in the
62:34
in the media group
62:37
um folks unfortunately i have to run
62:41
one more reason to um invite me in
62:44
person
62:45
is that then i’m your hostage and not
62:48
only i’m not going anywhere after the
62:50
talk but we are going to have some
62:51
drinks
62:53
and hopefully that will come sooner than
62:56
later
62:56
i miss that a lot but for now
63:00
thank you very much for coming i
63:02
appreciate it great deal
63:03
and my thanks to you is obviously the
63:05
ruffle of the
63:07
baby other don’t miss that and if you
63:09
have any questions at jbaruch on
63:11
twitter
63:12
at your service okay thank you very much
63:15
uh for everything
63:16
again if you can i don’t know where the
63:17
raffle link is
63:20
uh jfrog.com slash shownotes that’s
63:21
the link you go jfrog.com the show
63:24
notes
63:24
you will see an entry for this meetup
63:27
you go there
63:28
and there we it’s right there okay
63:30
excellent thank you very much again
63:32
um i’m gonna post the video on the
63:35
youtube i’m gonna put
63:36
the link on the meetup again and you can
63:38
take it from there
63:40
perfect okay
63:44
uh again thank you everybody um
63:47
have a good evening uh next meeting is
63:50
the second thursday of the month i think
63:52
it’s april 8th
63:53
or something and uh it’s going to be
63:55
testing automation awesome
63:58
thank you very much thank you hi
64:01
thank you very much great presentation
64:03
thank you appreciate it