Artifactory User Plugin Pipeline Automation and Integration [swampUP 2020]

Alek Patel, Software Engineer at Capital One

July 7, 2020


User plugins are used for running users’ code in Artifactory: https://www.jfrog.com/confluence/disp…

JFrog Artifactory User Plugins allow administrators to personalize the functionality of Artifactory to cater to their end-users. In this session, we will discuss these main topics: 1) how Capital One tests plugin functionality in a miniature containerized version of Artifactory before deployment; 2) how we deploy plugins to our HA implementation of Artifactory in AWS; 3) how we assess plugin performance after QA deployment; and 4) how we assess plugin functionality after QA and PROD deployments. We will also discuss how we automate every single one of these topics for a large-scale enterprise in our release pipeline for our resilient HA implementation.

Video Transcript

Today we’ll be talking about how the Artifactory team at Capital One deploys, automates, and integrates Artifactory user plugins at an enterprise level.

First of all, I would like to introduce myself: my name is Alek Patel, and I have been on the Artifactory team here at Capital One for about 11 months. I would also like to introduce my coworker, Hank Hudgens, who helped me create the material that I’ll be sharing with you all today. Hank and I will be around during and after the presentation to answer any questions that you may have. If time permits, the Q&A session will be handled via chat, so feel free to raise any questions that come up and we’ll try to answer them as best we can.

Let’s start with the agenda, where I’ll go over the overview and pipeline process. First of all, we have
unit tests: these are short and simple tests to ensure the plugin in question behaves as intended. Next, we have the security scan stage, where we verify that industry and enterprise security standards are met. Third, we have the actual deployment of the user plugin into the cluster, plus some modifications to assist in logging. After a successful deployment, we move on to the functionality test stage; here we want to make sure the behavior of the plugin is correct in the newly deployed environment. And finally, we have performance testing. This is the longest stage in the pipeline in terms of duration, and a very important one, as it verifies that plugin code is not affecting the overall application as a whole. Now that we have an overview, let’s
dive a little deeper into each one of these stages.

Let’s circle back to our first stage: unit tests. Here we actually bring up the same version of Artifactory that we will be deploying, in a Docker container. We wait for the Artifactory application to start up in this Docker container, and shortly after, we deploy the plugins to the appropriate directory. Once the plugins are deployed, we execute all the tests. If all the tests pass, great, we move on; if not, the pipeline will actually fail. Short and simple.
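The unit-test stage could be sketched roughly like this. The image name, port, plugin path, and ping endpoint below are illustrative assumptions based on an Artifactory 6.x layout, not Capital One’s actual setup:

```python
import time
import urllib.request

# Assumed image name; the real pipeline pins the exact version being deployed.
ARTIFACTORY_IMAGE = "docker.bintray.io/jfrog/artifactory-pro"

def docker_run_command(version: str, plugins_dir: str) -> list:
    """Build the `docker run` argv that starts the same Artifactory version
    we will deploy, mounting our plugins into the container's plugin dir."""
    return [
        "docker", "run", "-d", "--name", "rt-unit-test",
        "-p", "8081:8081",
        "-v", f"{plugins_dir}:/var/opt/jfrog/artifactory/etc/plugins",
        f"{ARTIFACTORY_IMAGE}:{version}",
    ]

def wait_until_ready(ping_url: str, timeout: float = 300.0) -> bool:
    """Poll Artifactory's system ping endpoint until it answers 200 OK."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(ping_url) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(5)
    return False

# Usage sketch (requires Docker):
#   subprocess.run(docker_run_command("6.20.0", "./plugins"), check=True)
#   assert wait_until_ready("http://localhost:8081/artifactory/api/system/ping")
#   ...then run the plugin unit tests; any failure fails the stage.
```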
Let’s move on to the next stage. Our security scan stage actually has two elements. First off, we have a code quality and security scan; here we will ensure our unit tests properly cover all blocks of code, and identify any possible bugs, security vulnerabilities, or code smells. Next, we have a source code analysis; this second scan will highlight any possible static code vulnerabilities, as well as identify any open source software vulnerabilities and expose licenses for open source components. Like the previous stage, if this stage fails to meet a set quality gate, the entire pipeline will fail. On to the actual
deployment stage, which is split up into three different tasks. For this stage, we leverage the configuration management tool called Chef. First, we have the deployment, which consists of a Chef recipe code block that moves all the plugins from our development directory into the appropriate directory on the Artifactory primary node, and only on the primary node. Once the plugins are deployed, their permissions are also appropriately modified. Next, we have another Chef recipe code block that actually modifies Artifactory’s logback file, which allows us to aggregate user plugin logs into one file. We also keep a backup of this initial logback file, just in case we need it in the future.
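The recipe blocks themselves are Chef/Ruby, but a rough Python equivalent of what the first two tasks do might look like this. All paths, the 0644 mode, and the appender name are illustrative assumptions:

```python
import shutil
from pathlib import Path

# Assumed Artifactory 6.x paths on the primary node.
PLUGIN_DIR = Path("/var/opt/jfrog/artifactory/etc/plugins")
LOGBACK = Path("/var/opt/jfrog/artifactory/etc/logback.xml")

# Hypothetical appender that aggregates all user-plugin logs into one file.
PLUGIN_APPENDER = """\
<appender name="PLUGINS" class="ch.qos.logback.core.FileAppender">
  <file>${artifactory.home}/logs/user-plugins.log</file>
  <encoder><pattern>%date %-5level [%logger] %message%n</pattern></encoder>
</appender>
"""

def deploy_plugins(src_dir: Path, dest_dir: Path = PLUGIN_DIR) -> list:
    """Copy every .groovy plugin into the plugins directory and make it
    world-readable (0644). Returns the deployed plugin names."""
    deployed = []
    for plugin in sorted(src_dir.glob("*.groovy")):
        target = dest_dir / plugin.name
        shutil.copy2(plugin, target)
        target.chmod(0o644)
        deployed.append(plugin.name)
    return deployed

def patch_logback(xml_text: str) -> str:
    """Insert the shared plugin appender before </configuration>, unless
    it is already present (idempotent, like a Chef resource)."""
    if 'name="PLUGINS"' in xml_text:
        return xml_text
    return xml_text.replace("</configuration>",
                            PLUGIN_APPENDER + "</configuration>")

def patch_logback_file(path: Path = LOGBACK) -> None:
    shutil.copy2(path, path.with_suffix(".xml.bak"))  # keep a backup copy
    path.write_text(patch_logback(path.read_text()))
```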
Lastly, we have a validation process, where we use the Chef InSpec framework to ensure that the plugins are in the right directory and the logback file is correctly modified. This pipeline stage will also fail if, for example, a plugin is not deployed properly or one of the lines in the logback file is missing. Now, after the plugins are
deployed and the Artifactory application is up and fully functional as a whole, we have our integration tests, also known as functionality tests. Like I mentioned earlier, this is just some extra validation that we have in place to make sure that the deployed plugins are actually behaving correctly in their new environment. For the sake of time, all of these tests run in parallel, and they follow a common theme: if they fail, so does the pipeline. And last but
not least, we have performance tests. These tests are crucial to ensure there are no negative performance impacts on the application as a whole. They run in parallel for 15 minutes for each plugin, making continuous requests that execute plugin code, and we do this solely to ensure that our users are not experiencing any latency when deploying new plugins.

Thank you guys for attending this talk, and we hope you’ve gained some valuable insight into how we deploy and test these Artifactory user plugins at an enterprise level. Feel free to ask any questions.
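As a closing illustration, the performance stage described in the talk might be sketched roughly as follows. The latency threshold, endpoint shape, and `send_request` helper are assumptions for illustration, not the team’s actual tooling:

```python
import time
from concurrent.futures import ThreadPoolExecutor

DURATION_SECS = 15 * 60   # 15 minutes per plugin, per the talk
MAX_AVG_LATENCY = 0.5     # assumed SLA, in seconds

def hammer(endpoint: str, send_request, duration: float = DURATION_SECS) -> float:
    """Continuously call send_request(endpoint) for `duration` seconds
    and return the average observed latency."""
    latencies, deadline = [], time.monotonic() + duration
    while time.monotonic() < deadline:
        start = time.monotonic()
        send_request(endpoint)
        latencies.append(time.monotonic() - start)
    return sum(latencies) / len(latencies)

def run_perf_stage(plugin_endpoints, send_request, duration=DURATION_SECS):
    """Run one request loop per plugin in parallel; fail the stage if any
    plugin's average latency exceeds the threshold."""
    with ThreadPoolExecutor(max_workers=len(plugin_endpoints)) as pool:
        futures = {ep: pool.submit(hammer, ep, send_request, duration)
                   for ep in plugin_endpoints}
        results = {ep: f.result() for ep, f in futures.items()}
    if any(avg > MAX_AVG_LATENCY for avg in results.values()):
        raise RuntimeError("plugin latency regression; failing the pipeline")
    return results
```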
