Software Delivery and The Rube Goldberg Machine @ NGINX Meetup

February 16, 2023

2 min read


Software Delivery and The Rube Goldberg Machine: What Is the Problem We Are Trying to Solve?

The Software Delivery Lifecycle can be complicated. Depending on your architecture and your particular deployment environment, your team may find themselves cobbling together numerous different tools and frameworks. Each tool comes with its own history and is intended to solve a subset of issues. Sometimes teams find creative ways to use tools outside of their original purpose. The end goal is to ease the process: secure, deliver, monitor, reflect, make changes, and repeat. It’s a cycle of continuous improvement. This session will discuss some of the common problems that teams face during the development and delivery process, and how organizations come together to address them. You will leave empowered and with a call to action to become part of the solution.

Melissa McKay, Developer Advocate, JFrog
Melissa’s background and experience as a software engineer spans a slew of technologies and tools used in the development and operation of enterprise products and services. She is a mom, software engineer, Java geek, huge fan of unconferences, and is always on the lookout for ways to grow and learn. She has spoken at Code One and Java Dev Day Mexico and is part of the JCrete and JAlba unconference teams. She is currently a Developer Advocate for JFrog, Inc.
Twitter: @melissajmckay

Video Transcript

First of all, the name of this talk: Software Delivery and the Rube Goldberg Machine: What Is the Problem We Are Trying to Solve? The first time I heard that phrase, "what are you really trying to solve," was when I was an intern coming out of school, a brand new junior developer on a team. We had all of these fantastical ideas on how to do things, how to code what we wanted to do, add features, and so on, but ultimately we always had to answer this question: what is the problem we are trying to solve? Meaning, we had to stay focused. We couldn't go off on all of those tangents; we needed to stay focused on the actual problem.
So, quickly, just a little bit about me and my background. First and foremost, I am a developer and have been for many years, over 20, all the way from intern to principal engineer. During that period of time I started to speak. It was several years before I was able to get into it regularly, whether because of family obligations or work obligations, but ultimately it was something I wanted to do more of, and I was introduced to some amazing folks at what was called an unconference at the time. I don't know if all of you are familiar with what unconferences are; they are not the same thing as a formal conference. It's usually a smaller group of people, and the agenda for the conference isn't decided beforehand; it's decided while you're there, each and every day, and it depends on who is there, who the experts are that have come, and what topics everyone is most interested in talking about. Anyway, during one of those unconferences there were some sessions on how to do public speaking, how to submit to conferences, things like that, and that was something I was really excited about doing.

From there I moved on to JFrog. I'm going on my third year now here at JFrog as a Developer Advocate, and a lot of what we do is speaking to groups like you. I still do some coding, putting together demos, working on open source projects, things like that, but not so much anymore, as it was for me in the past, being directly involved in production code. But over those years I've learned a lot, and it's time for me to share and pass that down. We need to get this next generation up to speed on how we do things and how to do things successfully.

I am a Java Champion; my primary background was Java server-side coding, but I rarely meet a Java developer who isn't doing anything else other than Java, so there's a lot of Python, shell scripting, JavaScript, and other languages in my history as well. I was really fortunate to work for a services company for quite a while, as an intern and in one of my first professional positions, and we worked on whatever came in the door, so that gave me quite a breadth of experience across different types of projects.

I am also a Docker Captain. I do pay attention to what's going on in the industry right now, especially since containers in production are becoming more widely used and there are a lot of cloud native applications being developed out there. I like to pay attention to that, both the Docker side of things and also the Open Container Initiative work that goes on, so alternatives to Docker as well; the whole idea of containers is pretty interesting to me. My Twitter handle is here, and my LinkedIn is here; I can share those again later if you like. If you want to reach out and connect or follow, that would be awesome.
Okay, in reference to the title of this talk, let's just get into it. This is one of my shorter talks; it's going to be more high-level and concept-driven, and I'm hoping that it will generate some discussion about the day-to-day problems that a lot of us have, especially those who are involved in the whole pipeline of delivering your software. One of the most important things: this also includes developers. I like to bring in the developer perspective, since that is the best perspective that I have, so we'll hear a lot from that side as well.
If I'm going to make an analogy like this, I can't help but share a little bit of trivia about Rube Goldberg machines; that needs to happen. Obviously that's a pretty easy internet search, and they're very popular, but for anyone who's not familiar with what one is: a Rube Goldberg machine is just a ridiculously complex and inefficient machine, usually composed of a high number of moving parts, that sets out to achieve a simple goal. I've seen these before, I've played with them before; they're fun to work with and to watch. I think I've seen them in a couple of airports I've been in, and you can almost get hypnotized watching these things and how they work. But here are some things that I did not know: Rube Goldberg was a Pulitzer Prize-winning inventor, innovator, and cartoonist, and he's the one who made these namesake machines popular through his cartoons. The one shown here is called the Self-Operating Napkin, and you can see all the necessary steps labeled; obviously this is a cartoon, not a real machine. I didn't realize Rube Goldberg machines started out this way. Another interesting fact is that Rube Goldberg is the only person whose name is in the Merriam-Webster dictionary as an adjective. That was pretty interesting, so something to look forward to: if you want to get famous, try to get your name into the dictionary like he did.

Another thing that I ran into while doing a little bit of research on Rube Goldberg machines was finding out that there's actually a contest held every year for participants to create these. You can go to rubegoldberg.org to find out more details. Each year has a theme, and this year the focus was on literacy, with the challenge being to create one of these machines to open a book for you. That was pretty clever, I think, and you can find videos on that website of the top ten selections. They're super cute; there are a lot of really clever kids out there making these.
So you can certainly do your own research on Rube Goldberg, but let's talk about software; that's what we're here for. What is this analogy that I'm making between software delivery pipelines and Rube Goldberg machines? I want to expound a little bit more on that. There are three characteristics of Rube Goldberg machines that I think apply very well to software delivery pipelines today.

The first one is that there are a lot of moving parts. I remember when I first got started, more on the Ops side of things; I became part of a DevOps team, so there was a lot of new material for me to absorb, and I just remember being pretty overwhelmed with how many different tools were involved in the whole process. I had a lot of learning to do.

The second one is inefficiency. Rube Goldberg machines are notoriously inefficient, on purpose. Sometimes our software delivery pipelines are as well, just not so much on purpose.

And the third is the tendency of these things to be unreliable and unsafe. If you look at some of the Rube Goldberg cartoons, you'll see some that are pretty funny and that you absolutely wouldn't want to make real, because they are horribly unsafe; someone could get hurt pretty easily. Our software delivery pipelines, too, can be unreliable sometimes, and also unsafe. We will talk a little bit more about that later.
So let's hit on the first characteristic. I want to emphasize that my intent here is not to be disrespectful or to make some broad, sweeping judgment that all software practitioners are behaving in a ridiculous or unintelligent way; that's not what I'm saying. But I can empathize wholeheartedly that software development and delivery is not an easy problem to solve, especially if you're new to today's landscape and all of the components and toolchains that can potentially be involved in your delivery pipelines. This became really apparent to me when I co-authored a book with a few of my colleagues, published earlier this year, called DevOps Tools for Java Developers. My co-authors and I came together with the intent of putting together for readers much of the knowledge and background required to develop and deliver software efficiently and safely. I've listed a lot of the topics that are explored in the book: the basics of DevOps methodology, source control of course, containers, microservices, continuous integration (super important), package management, securing your binaries, and deployment. There are a couple of other things we discuss as well, and the material here gives you a pretty good idea of what you're getting into. It will undoubtedly result in deeper dives on many of these subjects in order to develop a solid development and delivery pipeline. We did discuss some of the most common tools that are used in the industry, and we even touched a bit on Kubernetes, but what becomes abundantly clear is that the entire process can be pretty overwhelming, and significantly more so in a cloud native environment.
I did a quick Google search on the largest Rube Goldberg machine, and it just so happened that on December 10th, 2021, a video was posted of a Rube Goldberg machine that has 427 steps; these kids made the Guinness Book of World Records. Pretty incredible, and I just love how excited they are at the end that it worked. I wonder how many times this was recorded. From beginning to end it's about four minutes and 26 seconds; obviously I'm not going to show the whole video here, you can go look it up yourself, but just the end of it is pretty exciting, and they were pretty thrilled to get it done.

On that note, many of you have probably seen this already: this is the CNCF, the Cloud Native Computing Foundation, cloud native landscape map. I don't expect you to actually be able to see any of the logos here; it's massive, there's a lot here. Just to highlight one of the smaller sections, called Orchestration and Management: it's subdivided into a few other categories. There are 20 tools under scheduling and orchestration, seven more under coordination and service discovery, 10 under the remote procedure call section, 21 under service proxy, 19 under API gateway, and another 17 under service mesh, which I thought was pretty interesting. I've done talks before on service meshes when they were first starting to come out, so I find it interesting that there are now so many tools available that accomplish that.
I don't know how many of you like jigsaw puzzles; they're one of my favorite things to do. I think when I retire I'll probably just do one right after another; they're just a ton of fun. I have a lot, and I like to do them repeatedly. I'm not too good at committing to gluing them together; I like to do them, take them apart, and then put them together in a different way. I just love jigsaw puzzles. So the CNCF landscape became a meme: an earlier version of it made the rounds on Twitter a couple of years ago as a thousand-piece jigsaw puzzle. I thought that was hilarious, and I went searching to see if someone had actually made a jigsaw puzzle like this, and I was thrilled to see that it was available on Etsy. But after 42 of them were sold, it's now sold out. There's one review up there which I thought was cute: "the puzzle to end all puzzles." Anyway, someday maybe I will get a CNCF landscape jigsaw puzzle.

So that checks the box for a lot of moving parts. The fact that there are a lot of moving parts is not necessarily a bad thing in and of itself; really, it's which ones you choose to use for your particular use case, and how you use them, that can lead to a measure of inefficiency, which is the second characteristic we'll talk about.
If you are new to this landscape, it's going to take you a while to find your place, and that is totally reasonable to expect. It will take the expertise of an entire team, from development to operations. Although some people have been involved across the board from beginning to end, it's usually expected to take years of experience to be able to do that. It's a tall order to ask one person to know everything, and this is one of the biggest misconceptions about DevOps methodology that I come across all of the time: the idea that a single person should know, in depth, all of the stages of the pipeline. It's important to have awareness, yes, but it's also more effective to have a team that works and communicates together, all within their areas of expertise. Just as I wouldn't expect someone whose expertise is in operations and deployments to be diving into our Java code, for example, and trying to fix bugs in there, with the same hesitation would I go into deployment configuration and start messing with those levers without communication among the team.

So you don't need to know everything. Again, be aware of the different parts and what your software is doing, but as far as going in depth on everything, it's much more advantageous for you to stay within your area of expertise. And above all, know that you will make mistakes, especially when deploying quickly; there will be problems. But a successful team is going to deal with those problems as they come, and you'll get that churn. The important part is just to learn: continuously evaluate your solutions, make changes, and try again.
I don't know how many of you here remember typewriters or have ever used one. This is going to age me a bit, but I do remember using typewriters as a kid; it was actually my mom's, a mechanical typewriter. I also remember running out of correction tape. This is another Rube Goldberg machine, one that uses a drill to press the space bar on a typewriter, and it reminds me of something pretty specific about software delivery pipelines especially.

One of the more interesting phenomena I encounter most often is that some software or tools are used in ways that were unanticipated, and often in ways that are unintended by the creators and designers. This can be due to a number of factors. One could be a lack of due diligence when evaluating a tool for use, or simply an attachment to a beloved tool and a desire to make it the one-stop shop. I've been there, definitely, in my junior years: you learn something, you get comfortable with it, and you just want it to work for everything.

Another is the development of a pipeline ad hoc. This implies a lack of planning or agreement amongst the team and stakeholders on what actions are required and where, and it can actually lead to several different tools being used that overlap and are unnecessarily redundant. Another factor could be the unavailability of a specific required feature, either because the team doesn't know where to look or isn't allowed to use available solutions because of licensing or other corporate restrictions.

These are all common reasons, but regardless, this can cause an assortment of issues, and a big one is glue code. This can be a maintenance nightmare. An example is glue code that provides compatibility between two different tools: it might make sense at the time, but then an update to the tool on either side can break the glue code. Another is that projects or tools that originally had a specific focus become spaghetti code, unmanageable and bloated, just from an attempt to respond to community requests for features that are outside of their focus; it may be that a different tool altogether would be better suited. There's a lot to unpack there, especially when it comes to effectively meeting the needs of the community in open source projects, but it is always a challenge to push back on things that completely make sense for a specific use case but don't make sense for the project as a whole.
So all of these things I listed can lead to some pretty severe inefficiencies in your pipelines; I think we have that one checked. One example that I experienced personally was on a DevOps team that I started working on several years ago, with a brand new project. It had been brought in from a third party that had built the initial version, and the idea was that we wanted to bring it internal, have a development team maintain it, add some features, things like that. The problem was that we did not have a pipeline developed. I was pretty much the lone developer on the team; it was a very small team and, like I said, a small project, so risk wasn't as high in this regard. But what I found was that in the past this particular project had been manually deployed using a script, and that script was located on various machines, or even on production servers, and used there. So you could have different versions of the script in different places, and when it came time to roll back or deploy a new version, you never knew if you were doing the right thing. So obviously that was something we tackled first: getting all of that coordinated and making sure that our deployments and our environment specifics were all put into source control. That was a pretty interesting experience for me, one that really highlighted the need to communicate with Ops. Like I said, this was a brand new DevOps team, so we were taking baby steps.
One thing that I wanted to say about that too: being on a DevOps team, there's this idea that perhaps developers don't need to be so involved in deployments. But the reason I say it's so important to be aware of what's going on in the pipeline and how your software is deployed is that this is one of the whole reasons microservices came about. When you understand how your software is being deployed, you may make completely different design decisions that go all the way back to the beginning of the project. You may choose to write your application in such a way that it can be broken up and scaled a little bit differently, especially if it's intended for a cloud native environment. So it is important for developers to understand. They also need to understand what needs to be variable: something as simple as ports that need to be different depending on whether you're deploying to a development environment, a staging environment, or a production environment. These are things where you need to build those levers in so that operations can do what needs to be done.
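As a minimal illustration of that kind of lever (my own sketch, not from the talk's slides; the base image name, the SERVER_PORT variable, and the artifact path are assumptions for the example), the listen port can be surfaced as an environment variable with a sensible default, so operations can override it per environment without touching the code:

```dockerfile
# Hypothetical sketch: surface the listen port as a per-environment lever.
# eclipse-temurin and target/app.jar are placeholders for your own base image and artifact.
FROM eclipse-temurin:17-jre

# Default for local development; operations can override it at deploy time,
# e.g. `docker run -e SERVER_PORT=9090 ...` or via staging/production configuration.
ENV SERVER_PORT=8080
EXPOSE 8080

COPY target/app.jar /opt/app/app.jar

# Assumes the application reads SERVER_PORT at startup
# (Spring Boot, for example, maps the SERVER_PORT environment variable to server.port).
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```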
All right, we have one item left, and this one is pretty serious when it comes to pipelines: the tendency to be unreliable and unsafe.
Excuse me. This is a slide that I often see in our corporate deck here at JFrog; it's a detailed, step-by-step view of the software development lifecycle. It's really busy, but I like it because it does a really good job of highlighting how overwhelming DevOps can be when you're presented with everything in the ecosystem. The space is huge and it's easy to get lost in the mix, but this whole process is an attempt to answer the questions: how do I build everything, test everything, and deploy everything? Where do dependencies come from? How do I keep everything secure? How do I include automation to help increase efficiency and reduce human error? As expected, there are quite a few projects, products, and solutions represented here that handle artifact management and storage, distribution, and security scanning of packages, and this setup has some really strong integrations with other components and frameworks out there that you're likely already familiar with and using today: various build and dependency managers like Maven and npm, continuous integration servers like CircleCI and Jenkins, and provisioning tools like Chef and Puppet. But what this particular slide doesn't really address is what needs to happen after deployment: the tools around monitoring and troubleshooting production issues. This is a whole other side of the development process that sometimes gets ignored, and it's one of the most important, because even after you've deployed to production you're not done. There's always improvement, there's monitoring, there's observing your customer behavior. If your application is SaaS or something like that, you want to observe how your customers are using your product, and then all of that feedback goes back to the beginning of the cycle, where we begin again: we add features, we fix bugs, things like that.
Even if you've managed to cobble your system together with all of these parts, there are likely points in here that include that glue code I spoke of earlier to handle various integrations between different tools, and this can lead to a measure of unreliability. As a developer, there's a phrase that I've heard often: "happy path coding," or "happy path programming." Just like with happy path coding, you can easily end up with a happy path pipeline, where everything needs to behave just so, in a certain way, in order to get a successful end result. Reliable pipelines more often than not come from a lot of trial and error, much like I imagine a Rube Goldberg machine would. Remember that largest Rube Goldberg machine: it took them three months to build, and remember how excited they were at the end. I remember the first time I built a pipeline that actually ran through to completion; I was pretty excited too that it worked. Glue code or not, everywhere on this slide where you see a green connecting line, those are the places to start looking for weaknesses and putting serious thought into what to do when there's a break in one of those connections. Is the result that the entire pipeline breaks down, or are there measures you should be taking to handle paths that are not happy?
There's actually a ton involved in putting a full pipeline together, and I'd be remiss if I didn't mention the parts that are terribly unsafe if not addressed. This diagram comes from slsa.dev; SLSA is the acronym for Supply-chain Levels for Software Artifacts. It is a security framework, a checklist of standards to help prevent your pipelines from being tampered with and to keep your packages and your infrastructure as secure as possible. Between your source code and the delivery of your product, or the deployment of your service, there are places where you need to take extra care to protect your supply chain, and each of the red triangles that you see here represents an opportunity for a disruption. As a developer, I focus a lot on the build parts, so the continuous integration and the initial development builds. In a lot of my talks I focus on dependencies in particular: all of the pieces and parts of libraries and other artifacts that your code depends on. This aspect becomes even more important when you start packaging all of these things into containers.

One of the most crucial areas to harden is your build process during continuous integration. You must ensure that the dependencies you're pulling in are exactly what you expect them to be; for example, they need to be an expected version of the software. You must also verify that you're using a trusted dependency and not a malicious package that just so happens to have a similar name. So paying attention to all of these things is important.
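As one hedged illustration of what "exactly what you expect" can look like during a container build (my own sketch, not from the slides; the URL, version, and checksum are placeholders), a build step can pin the dependency to an exact version and refuse to continue unless a known checksum matches:

```dockerfile
# Hypothetical sketch: pin an exact version and verify its checksum before trusting it.
FROM alpine:3.19

ARG TOOL_VERSION=1.2.3
# Placeholder value; in practice, use the checksum the project publishes for that exact version.
ARG TOOL_SHA256=0000000000000000000000000000000000000000000000000000000000000000

# Fail the build if the downloaded artifact does not match the expected checksum.
RUN wget -qO /tmp/tool.tar.gz "https://example.com/tool-${TOOL_VERSION}.tar.gz" \
 && echo "${TOOL_SHA256}  /tmp/tool.tar.gz" | sha256sum -c - \
 && tar -xzf /tmp/tool.tar.gz -C /opt \
 && rm /tmp/tool.tar.gz
```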
This next slide is taken directly from another talk of mine, one about using Docker containers responsibly. I won't get into too much detail here, but since using a Dockerfile is pretty common, there's a lot to talk about as far as dependencies go. This graphic of a glacier, which I'm sure you've seen over and over again, is pretty overused, but the point cannot be overstated: the software we build is potentially made up of a ton of components that a developer might not have firsthand knowledge of. The applications and services being built today are more complex than ever, and developers generally don't want to reinvent the wheel if it isn't necessary; we're trying to be efficient and quick in our work. Containers especially have this problem, because not only do we package the software we've written and put it in the container, we also often build these container images from a parent or base image. And where are those coming from? This is something you should know, even as a developer all the way at the beginning of the pipeline: you should know where your base and parent images are coming from.
And just for fun, let's walk through an example Dockerfile; I think some of this will become even clearer to you. Since building is so important, let's talk about how container images are built. This is just a contrived example, just to highlight a few areas to watch for in your own Dockerfiles, if you have them.
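The slide itself isn't reproduced in this transcript, so here is a hypothetical reconstruction, pieced together only from the problems described below, of roughly what that contrived Dockerfile could look like (names such as untrusted-parent-image, known-vulnerable-package, vendor-install.sh, and start.sh are placeholders; the numbers in the comments refer to the lines as the talk discusses them):

```dockerfile
# Line 1: parent/base image with no tag or SHA digest, from an unknown source
FROM untrusted-parent-image

# Lines 2-4: packages installed with no versions pinned, including one with a known vulnerability
RUN apt-get update
RUN apt-get install -y some-library
RUN apt-get install -y known-vulnerable-package

# Line 6: copies the entire build context into the image
COPY . .

# Line 7: pulls an install script from an external resource the team does not control
RUN curl -sSL https://example.com/vendor-install.sh | sh

# Line 9: the startup script runs as root
ENTRYPOINT ["./start.sh"]
```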
On line 1, right away, we have a parent or base image, and it's called untrusted-parent-image. Obviously real ones aren't going to be named like that, so they're not so easy to spot, but a couple of things about this: there's no tag or SHA identifier on it, and we don't know where this parent image comes from. If you remember the SolarWinds hack, that went back to September of 2019; it was a pretty nasty attack, carried out when a binary was switched out from under the company in the CI system, the continuous integration system. It was particularly nasty because that binary was even signed, which suggested there may have been access to the continuous integration system itself. It's a good idea to make sure you know where your dependencies are coming from; like I said, in this particular case the binary was even signed, so maybe it would be better to use a SHA and confirm that the correct SHA is being used.
Moving on, on lines 2 through 4 we've got some additional problems with not specifying versions of packages and not keeping up with updates. You may remember the Equifax data breach; that was in July of 2017. And then the most recent one, the Log4Shell vulnerability that was discovered. The Equifax data breach was due to the fact that there was a vulnerable package; the vulnerability was known, just like on line 4 here, but it wasn't updated in time, for whatever reasons, and the end result was pretty embarrassing for them. Log4Shell was actually a pretty good example: the vulnerability was discovered, and then there was very quick action by a lot of companies; they did learn their lesson and update this stuff. That was a pretty big one for me, especially being a Java developer, because using Log4j is so common. These libraries were involved in everything; even if you weren't using it directly, likely there was a dependency being pulled in that was using it. So this was a pretty big one, and there was a lot of effort and a lot of money put into making sure that things got updated very quickly.
Line 6 could be an efficiency and performance problem. One point I'd like to make is about that COPY line with the two dots: it's basically copying everything from your local machine, well, everything in the build context that you've sent, and moving it into the container. It would be good to use a .dockerignore file, which is very much like a .gitignore file; use it to make sure you're only copying over the things that you intend. You don't want to be copying secrets, you don't want to be copying configuration that's specific to your machine, and maybe you don't want to copy your .git directory over there. There's a lot of stuff that you wouldn't want to just throw into a container.
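For reference, a minimal .dockerignore along those lines might look something like this (the entries are only common examples, not taken from the talk):

```
# Hypothetical .dockerignore: keep secrets, VCS metadata, and local-only files out of the build context
.git
.env
*.pem
secrets/
config.local.*
*.log
```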
All right, number seven, line 7: I see this a lot, curl statements and wget statements in Dockerfiles. These point to external resources, which in and of itself isn't a bad thing, unless you don't have control of that resource. I have an example: one time I saw a Dockerfile which was actually reaching out, and for a legitimate reason, to an installation script from a proprietary product from another company, using their installation script to install their product into the image so that it could be used in the container. The problem with that is that invariably they're going to change that script someday. Maybe the script moves to a completely different location, maybe it just gets deleted altogether, maybe it gets updated in a way that's incompatible with your current setup. So if you're going to have something like this, it's better to bring that script internal so that you manage it, you version it, and you update it on your schedule, not on someone else's.
And number nine, line 9: that's running as root. That little start shell script that runs when you've launched the container is running as root. Make sure that you're using the principle of least privilege, that you're providing a user and group, so that you're not running as root when you don't need to.

These are just a few problems; I have other talks that go into more depth and detail on these items and more, but I come across them pretty frequently in Dockerfiles. This is definitely not an exhaustive list, but it's a really good place to start. Focus on your build, for sure.
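Pulling those points together, here is a hedged sketch of what the same contrived file might look like after addressing the issues above (image names, package versions, and paths are placeholders; pin to the tags, digests, and versions you have actually verified):

```dockerfile
# Parent image pinned to an explicit tag, from a registry you trust;
# ideally also pin the digest you verified, e.g. FROM <image>:<tag>@sha256:<digest>.
FROM eclipse-temurin:17-jre

# Install packages with explicit versions so updates are deliberate and reviewable.
RUN apt-get update \
 && apt-get install -y --no-install-recommends some-library=1.2.3-1 \
 && rm -rf /var/lib/apt/lists/*

# Copy only what the image needs (and keep a .dockerignore for everything else).
COPY target/app.jar /opt/app/app.jar

# The vendor's install script brought internal: versioned in our own repository,
# updated on our schedule rather than someone else's.
COPY scripts/vendor-install.sh /tmp/vendor-install.sh
RUN sh /tmp/vendor-install.sh && rm /tmp/vendor-install.sh

# Principle of least privilege: create and switch to a non-root user.
RUN groupadd --system app && useradd --system --gid app app
USER app

ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```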
Okay, I think we've satisfied those three characteristics: the many moving parts, yes; inefficiencies, yes; a tendency to be unreliable and unsafe, yes. We've complained a lot during this entire talk, but you're probably wondering now if there are any real solutions to these problems. There are a ton of tools out there to explore that will help you with real-world scenarios, and as overwhelming as it is, the CNCF landscape, for example, at least lists some tools within categories that will be helpful to you. For now, quite a bit of research is required to vet and evaluate the solutions you choose to use; the hardest part is just getting all of the pieces to fit. The industry right now is ripe for real, holistic solutions that solve the basic problems of continuously delivering and updating software, as well as monitoring and troubleshooting once that software is out in production. That's simple to say, but there are a ton of one-off tools out there that have grown out of very specific engineering problems, and that's where you get all of these pieces involved, when one group tries to make all of these pieces fit for their particular use case scenario.
What we really need now is agreement in the industry on best practices, and some organizations are coming together to work on these issues. One important example is an effort between JFrog and NGINX to harden and improve the MARA project. MARA is an acronym for Modern Application Reference Architecture, which was announced last year at NGINX Sprint 2.0. It's basically a framework for deploying any containerized application in a reliable and repeatable way. It's completely open source, but there are definitely some things in there that I spotted while looking through it, ways that we can improve it and make it more usable for more engineering teams. That's just one example of organizations working together to produce open source that's available to everyone and to push forward best practices. But I also want to mention the efforts that are happening right now with the Continuous Delivery Foundation; there are a lot of other projects within that particular foundation, starting their journey to be adopted, that are trying to solve a lot of these issues as well.
The CD Foundation, the Continuous Delivery Foundation, is a sister organization to the CNCF. It also has a landscape map with a number of associated tools; it's not quite as overwhelming as the CNCF map, mainly because there's a limited focus on continuous delivery especially. Now is a really good time to check out what they're doing regarding interoperability and outlining best practices. There are a number of organizations involved, and there are regular meetings around these subjects that are open to the public. It's just a matter of going to their website, signing up for the mailing list, getting involved in the Slack channels, and starting to attend some of these meetings. I'm the co-chair of the interoperability meeting, and we have a list of projects right now that are coming to present. It's been interesting to learn exactly how they work, what they do, what their purpose is, what problem they are trying to solve, and to make sure that they land in the right place and that they're interoperable with a lot of the other tools that we all need to use for our pipelines. One interesting project that came out of the CD Foundation is called CDEvents; definitely check that one out. It spun out of the interoperability special interest group originally. This is the beginning of a specification for receiving and emitting events from various tools, which would better support interoperability versus tight integration, which can be difficult to maintain.
to maintain foreign to consider taking a look at is Persia
this is basically a decentralized package registry and it provides a measure of trust that you are actually
getting the package that you think you’re getting that you want to get and basically it requires the package to be
built from source and there needs to be a network consensus meaning a quorum of
trusted Registries and nodes that agree that this is this version is correct
that it hasn’t been interfered with in any way since it came from source
I'll leave you with a few links to look at. Of course we have the landscape maps that I showed you; those are really good starting points. There's a community page on the CD Foundation site, and JFrog also has a community page with a lot of different information, talks, workshops, and so on. And then, of course, if you're interested in looking at or contributing to the MARA project, there is a blog post announcing that reference architecture on the NGINX website.

That is what I had to share with you today. Do we have any questions? We've got a few minutes.
Hey Melissa, that was really, really interesting. To be honest, this was one of those where I was just sitting here watching and listening to you. I have a question, and, being a co-organizer, I'm going to abuse my powers and ask my questions before looking at the questions section, of course. I love that visual from JFrog, where you showed the ecosystem with the different stages and everything. In that pipeline, or in those stages of the pipeline, is there a section where you can say: this is the most important, this you have to do, or this is how it has to be? Is there anything like that?

Let me go back to that, because I like that slide too; I use it in a lot of places. Oh, did I go too far? I went too far; I went right by it.
Oh, there it is, okay. So, yes, you can see that this goes all the way through from development; it talks about how dependencies are resolved for the developer and for CI servers and things like that. Obviously there's a lot here, and there's some missing; we talked about that already, the monitoring and such that needs to happen in production afterward. But I think the most important part of a delivery pipeline to focus on is planning it in the beginning: figuring out the simplest thing that works, getting your team together and whiteboarding your pipeline, meaning every single step needs to be accounted for so that everyone understands why and what is happening. Then, when you choose the tools that you're going to use, be very clear and understand why: what you need that tool to do for you and why you've chosen it, meaning you need to document this. This is so important. There's so much documentation out there that explains how to do things and how to implement something, but it forgets the why. Why do you do it this way? What that ends in is the next person that joins the team wanting to change it, and the next person that joins the team wanting to change it again, and you could get into a situation where you just go back and forth between a few different tools. So explain why, really narrow that down, and make sure everyone understands.

That's brilliant. And are there any, let's say, specific tools from that entire toolchain that you would recommend for folks who want to just improve their pipelines, you know, do good, do better?
Yes, I love this question, because it's like asking me what's the best language to learn. [Laughter] I definitely have some personal opinions on what I like to use, but what I'd rather say is that every tool has its own strengths and weaknesses. It depends on your situation: for example, whether you use an open source tool or something off the shelf could totally depend on the rules of your organization. It could depend on whether or not you need to be behind a firewall, whether you even have access to the internet, whether you're working for a government or a private agency; that will make a difference. And whether or not you have the funds and resources to maintain a tool that doesn't have any kind of support contract will make a difference too. So it's not even just the tool itself: who's going to manage that tool, who's going to keep it updated, things like that; those are all things to consider. And the most important thing is just to understand each tool. When you're doing the research and evaluating a tool, make sure you understand where it came from and what problem they were trying to solve, so that you can avoid trying to make something do something it wasn't intended for.
Oh, brilliant, I love this answer, because this is exactly what I say when people ask me, "should I use Kubernetes?" I just go back to first principles: ask yourself, what are you trying to do? All right, there are a couple of questions, if you don't mind; we'll probably keep it to two or three. So, Pyrsia here is a blockchain network; this is a question by Amrita: is it only to keep a record of what has been uploaded to the system?

I think that's a pretty good description of the use; it uses it for that record, right.

Brilliant, okay. I hope that answers it; if it doesn't, let us know. There are a few questions in the chat as well, so I'll ask folks to add them to the Questions tab, but I'll ask the one by Sayage. The question is: will CDEvents and interoperability between vendors have an impact on these problems?
Absolutely, absolutely. Even now I know of projects that are making themselves CDEvents compatible so that they aren't behind. One example, and it's fairly new, it's actually coming up to try to be a part of the CD Foundation, is called JReleaser, and one step they took was to make that particular tool compatible with CDEvents. If we see more and more of that happening, we're going to get more companies engaged in and supportive of this, and we might even start seeing some off-the-shelf products, some commercial products, start supporting this as well and moving in that direction. It's to everyone's benefit to do so, and if you take a look at the people that are engaged in the special interest groups running under the CD Foundation, you'll see quite a variety of companies that are all involved, and people attending, sharing their experiences, and giving their input. That's super important.

Brilliant.
So there's a follow-up from Amrita; this follows from the Pyrsia question. So, a record of what? Language bindings and their dependencies in the JFrog database? It was there previously as well; can you elaborate more on this, on how it is used inside JFrog? I don't know, maybe that question makes sense.

Well, JFrog, I mean the platform itself, is written in Java, but that is aside from what it's meant to do for you: it's meant to manage all of your packages and all of your binaries that you use in your code base, which could be pretty much anything. We support a ton of different languages, I think 30 and counting or something, even Rust and Docker containers now, and npm, Maven, Gradle of course (we are going to support all the Java stuff), Python, JavaScript, Go, Ruby, all of those things. So I hope that answered your question, but as far as managing packages and such, the JFrog platform does that.

I think the question is around the fact that all of that data lives inside the JFrog database, like code dependencies and everything.

Yeah, so there is an internal database. For example, when you upload packages to different repositories and such, the way it stores those packages is that it takes the SHA and says this SHA belongs in this repository. If you move things around or copy them to other repositories, it will use that SHA and use that database in order to prevent having multiple copies of the same thing, so the database itself will manage that for you. I honestly don't know which databases are supported right now; I know it's something that you can set up yourself, you can choose the database. I know Postgres is supported, but I know others are supported as well if you choose to use something different.

Cool.
Okay, another one from Amrita, who is busy asking questions: how is blockchain used to keep a record at JFrog, and where do you see a use case inside JFrog?

I think that's the thing: Pyrsia is an open source project. It's supported by JFrog, but it's not part of the JFrog platform. It's meant for open source projects that are able to be built from source. There is some agreement with trusted registries, like Maven Central and Docker Hub, to handle some of the cases where you can't get an exact build from source like you can with some other projects, like Go. But the blockchain itself is just used to verify, to have that history of what is in the system and what is approved.

Cool, I hope that answers the questions; if not, you have the details to ask those questions. You know, we have stuff coming up, especially at KubeCon, where we're going to be having more information about Pyrsia and such. That's a project that is near and dear to our hearts; we will have workshops on that, and Tiger Gardens, and things like that, as we've done in the past. So keep an eye on Pyrsia; I think it's pyrsia.io.
Yeah, no, brilliant. And follow Melissa. And yes, you will have to; thank you, Melissa. If you can, please post your LinkedIn URL in the chat section so that people can add you. If you have any more questions for Melissa, please don't hesitate to reach out to her. We are not keeping Melissa for long, because I believe it is 1 a.m. for her; she is actually based in Denver, Colorado, so she needs to sleep. So, everyone, Melissa has already posted her Twitter account; if you want to post your LinkedIn as well, Melissa, so that people can add you. And again, if you have any more questions for Melissa, please don't hesitate to reach out to her. If there are events, Melissa can give you more information on that, or Arie as well. Arie, if you want to post your LinkedIn or any of your social media accounts in the chat section as well, so that people can reach out to you. But thank you, Melissa, thank you, Arie, and thank you, everyone.
Okay, I do have... okay, this is the last one. Pratik, I think... maybe let's just go ahead with the questions. The question from Amruta... actually, it's not really a question, it's more of a... but yeah, go ahead.

Yeah, no, it's just that it can be integrated, and I think it's more of a thanks.

Yeah, it's more of a thanks. But yeah, that's very good. I think we can wrap it up here. Melissa, really, really, thank you, and apologies: I didn't realize how late it was for you; I knew you were in Denver, but I was just going ahead with the questions. Interesting conversation.