I’m Martin Eggenberger. I’m the chief architect here at Monster, responsible for the overall solution delivery of the Monster ecosystem. And I’ll hand this over to Graham now.
My name’s Graham Bucknell. I’m the CI/CD team lead and an architect. My job is to build tools for developers that help them be the ones who push software into production safely.
What worked 10 years ago may no longer work today. There are examples of that; there may be different ways of doing the work now. CI/CD automation is one of them. We didn’t automate 10 years ago. We didn’t have pull-based deployments or canary testing. It took us three months to deploy a single piece of software into a production environment. Now it’s down to a button click. That is how you really compete. A lot of organizations struggle through that transition, and Monster is actually succeeding at it. So, that’s the good news.
That’s one of the really strong points about Artifactory. I characterize it as a bit of an omnivorous box: you can throw just about any sort of artifact into it, and that’s really key for us, being in that polyglot world where we’re building all sorts of different types of artifacts. Because I’m in the Kubernetes world a lot, the Docker and Helm chart support in Artifactory is just excellent. And we haven’t yet hit a type of thing that we can’t push into Artifactory, so it’s really good that our technology choices aren’t ever limited by it. And back to your question, the parts of JFrog that we use primarily are Artifactory, and then Xray for security scanning.
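As a minimal sketch of what “throwing an artifact into Artifactory” can look like, here is an upload of a packaged Helm chart over Artifactory’s generic deploy REST API. The host, repository name, chart file, and token environment variable are hypothetical placeholders, not Monster’s actual setup.

```python
# Sketch: deploy a packaged Helm chart to an Artifactory repository
# via the generic REST deploy endpoint (PUT <base>/<repo>/<path>).
# Host, repo, chart name, and token are hypothetical placeholders.
import os
import requests

ARTIFACTORY_URL = "https://example.jfrog.io/artifactory"  # hypothetical host
REPO = "helm-local"                                        # hypothetical repo
CHART = "my-service-1.2.3.tgz"                             # hypothetical chart

with open(CHART, "rb") as f:
    resp = requests.put(
        f"{ARTIFACTORY_URL}/{REPO}/{CHART}",
        data=f,
        headers={"Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}"},
    )
resp.raise_for_status()
print("Deployed:", resp.json().get("downloadUri"))
```

The same pattern works for Docker images and most other package types, which is the “omnivorous” quality described above.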
So the basics: obviously we use EC2 for elastic compute, and obviously we use the different storage mechanisms, ranging from S3 to storage attached to EC2, elastic block storage, et cetera. That’s the foundation. On top of that, we use EKS exclusively; our target deployment environment is an EKS cluster, and Kubernetes has made that fairly straightforward. We use multiple storage systems and databases from AWS, including RDS and DynamoDB. We use different message queuing mechanisms, ranging from SNS and SQS to Kinesis streams where SQS doesn’t scale to what we need. Obviously the infrastructure components too: ELBs, ALBs, CloudFront, plus other tools. I would assume that we probably use about 40% of all the AWS service offerings overall. We exclusively use AWS for all of our operational needs, which is important for us, and we use the account landing zones and so on. So pretty much, you name a technology, and we’ve probably either evaluated it or thought about using it.
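To make the SQS-versus-Kinesis point concrete, here is a rough boto3 sketch of the two messaging paths. The queue URL, stream name, and event shape are hypothetical placeholders, not the actual Monster setup.

```python
# Sketch of the SQS vs. Kinesis distinction: a plain queue for most
# workloads, a partitioned stream where the queue stops scaling.
# Queue URL, stream name, and event are hypothetical placeholders.
import json
import boto3

sqs = boto3.client("sqs")
kinesis = boto3.client("kinesis")

event = {"job_id": "12345", "action": "index"}

# Simple queue-based messaging: fine for most workloads.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/jobs-queue",
    MessageBody=json.dumps(event),
)

# Kinesis stream: partitioned and built for much higher throughput,
# which is where it takes over when a plain queue doesn't scale.
kinesis.put_record(
    StreamName="jobs-stream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["job_id"],
)
```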
I’m really into performance: keeping the pipeline moving from a wall-clock point of view. You want developers to be able to maintain that velocity over the day. They’ll check something in, and you want them to get feedback quickly so they don’t get distracted by the next thing. And I know that happens to me a lot. You check something in, then you’re waiting for a build, you go away and get coffee and come back, and what happened to the day? So making sure things are performant is really satisfying, too. I don’t know, I get an endorphin rush from making stuff run faster. We actually changed the instance types that we run our builds on yesterday.
And I think the builds are literally twice as fast as they were. I just wish I’d done it sooner. So yeah, performance. And cost, I guess, is the other thing. That’s a really easy metric to capture, and it has that same satisfaction, just reducing the cost of running your processes. That’s one of the amazing things about cloud: you can see that dollar value and actually make a really quantifiable impact on your company’s bottom line. Back in the on-prem days, that was a lot harder to do, because of the way that systems were built. You could build this great efficiency-boosting thing, but then it was hard to say how much money you’d actually saved the company. It was all sort of guesswork.
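As a sketch of how that “easy to capture” dollar value can be pulled programmatically, here is a boto3 call to AWS Cost Explorer for the monthly cost of tagged build infrastructure. The tag key, tag value, and date range are hypothetical placeholders and not a description of Monster’s actual reporting.

```python
# Sketch: pull the monthly cost of build infrastructure from AWS
# Cost Explorer, filtered by a (hypothetical) resource tag.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "team", "Values": ["ci-cd"]}},
)

for period in resp["ResultsByTime"]:
    amount = period["Total"]["UnblendedCost"]["Amount"]
    print(period["TimePeriod"]["Start"], "build infra cost:", amount, "USD")
```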
Hope to see you all at the next swampUP for Artifactory. And we can chat in person.