Creating & Securing DevOps Multi-Cloud Environments
Learn how the JFrog DevOps Cloud Platform unifies workflows that break down silos and deliver real-time end-to-end transparency. With built-in security, governance, and compliance for your software delivery requirements, JFrog helps DevOps teams maintain business continuity during multi-cloud migrations.
- Why choosing multiple cloud providers is a good idea
- Benefits & challenges of multi-cloud
- Methods and tools for an ideal cloud configuration
- Access Federation
JFrog’s DevOps Cloud Platform is also available on the Google Cloud, Azure & AWS marketplaces. Get started today!
The Guide to Creating & Securing a DevOps Multi Cloud Environment
Good morning and good afternoon, everyone. My name is Courtney Gold, and I will be your moderator today. I also have Bill Manning with me, who will be your speaker today. Just a few housekeeping tips before we start. We are recording this webinar, and the recording will be sent to you post-event. Also, please ask any questions in the Q&A. We’ll be here to answer those, whether it’s during or after the webinar is done. Other than that, I will let Bill kick it off, because I’m probably the last person that you want to hear from today. So Bill, take it away.
Hey everybody, how you doing? This is Bill Manning. I’m going to discuss today what’s called the Singular Approach to DevOps Across Multiple Clouds. I’m one of the solution architects here, and I do a lot with our customer base. I obviously do a bunch of webinars, public speaking, all the usual kind of stuff. But the main focus of today is that a lot of companies these days are choosing multiple clouds, right? We’re going to discuss some of the benefits and why you would do that. We’re also going to talk about how you can use our platform to unify everything together, so that no matter how you host it, even in a hybrid model, our platform can help you through the entire process end-to-end and give you a unified experience, while still having all the various providers out there as the underpinnings.
Now of course I mentioned Google Cloud, Azure and AWS, the major three that everybody knows about. There are other ones too, like Oracle Cloud and other providers out there that offer infrastructure like this. I’m not going to go too heavily into them; I’m just using the big three as a primer to say we understand there are a lot of offerings out there, and everything here applies to those others as well. So understanding that, the first thing we wanted to discuss, just as an introduction so people understand: when I say unified and I talk about our platform, it truly is an end-to-end platform. And I’m going to touch on some of the components today.
Just as a quick overview: Artifactory, the universal binary repository manager, is a place to manage all those third-party transitive dependencies you use as a software development team. 85 to 95% of your software is someone else’s. But it’s also an exceptional build management tool, and a software life cycle management tool in addition. It’s a place to store your builds. It’s a place where you can emulate your SDLC, your software development life cycle. It’s a place for accountability, understanding how things were built, and adding as much information around how things are built so they become more relevant to you as an organization.
To the right of that, we have Xray, our security, vulnerability, and license compliance tool. It ensures that those 85 to 95% of binaries that you’re bringing in and consuming as an organization are safe, secure, and compliant. It can be integrated everywhere. You hear the term shift left all the time: we have integrations into IDEs, and we have a CLI tool that you can utilize at the desktop. You can integrate it into your CI process. And you can even use it in your release process and your distribution, which is to the right of that.
Our Distribution product, which I’m going to talk about in depth as part of this, is for cases where you might want other cloud providers to just be hosting providers, or where you want your software spread across multiple regions and multiple cloud providers. To do that, you can use our Distribution platform. Distribution is the hub, and then we have lightweight immutable versions of Artifactory called Edge nodes.
These Edge nodes can be placed in these other regions, and they’re a way for you to push things out and deploy them. Say you’re deploying a web service and you want a multi-regional approach: you can package up the things that you do with our Distribution hub into a thing called a release bundle, which I’ll talk about. That’s a digitally signed, immutable release that might contain things like Helm charts and Docker images for a web service. Then you can publish those out to the Edge nodes as easily consumable objects, which lets you distribute your web services more succinctly, even across multiple cloud providers.
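To make the release-bundle idea concrete, here is a minimal sketch of the kind of descriptor you might build for a web service release. The field names approximate JFrog Distribution's REST payload but are not the exact schema, and the repository names and AQL queries are hypothetical; check the Distribution API docs for your version.

```python
# Illustrative sketch only: a simplified release-bundle descriptor for a web
# service (Helm chart + Docker image). Field names approximate the kind of
# payload Distribution accepts; repo names and queries are made up.
import json

release_bundle = {
    "name": "webapp",                 # hypothetical bundle name
    "version": "1.2.0",
    "sign_immediately": True,         # release bundles are digitally signed
    "spec": {
        "queries": [
            # Select the Helm chart and the Docker image files for this release.
            {"aql": 'items.find({"repo": "helm-local", "name": "webapp-1.2.0.tgz"})'},
            {"aql": 'items.find({"repo": "docker-local", "path": {"$match": "webapp/1.2.0/*"}})'},
        ]
    },
}

payload = json.dumps(release_bundle, indent=2)
print(payload)
```

Once created and signed, a bundle like this is what gets published out to the Edge nodes as one immutable unit.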
Below that, we have Pipelines. This is our CI/CD and orchestration tool. You can use it as a CI tool, or you can use it as a CI extension. So if you already have CI processes in place and you want to add additional functionality such as blue-green testing or approval processes, you can; you even have signed pipelines, so you have a blockchain-style ledger of everything that you’re doing. You can also use it strictly for distribution. And if you have more than one CI tool or more than one CI environment, you can use it to orchestrate them together so you understand the interdependencies between those builds.
And then at the very top of this we have Mission Control and Insight, which become a key factor when you are doing things like multiple cloud providers. Because to us, we don’t care, and I’m going to talk about that too: we don’t care where you’re hosting us, we just care that you have connectivity. One of the major things about the way we approach things, like I said before, is that it’s an end-to-end platform. It’s made to scale infinitely, because we have customers that are pumping petabytes of data daily through our products.
It’s also radically universal, meaning that we have over 30 package types you can integrate with. You can integrate into any extensible system, such as CI environments and other CD environments; it’s all API-able, and you can use our CLI. So it’s extensive. You can use us where you see fit, while at the same time having things like continuous security, as I mentioned before, with our Xray product. Everything from shift left at the developer level, where the ROI is greatest, all the way down to shift right, where you’re actually deploying the things you’ve built. And the main focus today is going to be hybrid and multi-cloud, right? So the idea is that we provide a base-level foundation for you to do everything on, non-dependent on how you’re hosting us. This is essential to what we do, and it’s going to be one of the core things we get into. We have a lot of discussions about this with some of our customers who are looking to do it. And we’ll talk about some of the benefits of using multi-cloud, and also some of the caveats and how we can use our products to help you overcome those.
And then lastly, the ecosystem is huge. We have built our product to be adopted by everyone, so we open our interfaces for interactions. There are a lot of tools in the market that have integrations into us. We’re made to fit where we need to fit. But at the same time, that base-level functionality is very, very essential. So one of the things we’ll talk about is the optimized architecture of what we’re doing today. The idea here is that, as I stated, we don’t care where you install us. We can be everything from self-hosted on-premise [inaudible 00:07:08] SaaS offerings, and you can use those in combination together in a hybrid approach, which is to the right of this. We’re even multi-cloud: our SaaS offering is available in AWS, Azure, and GCP.
But in addition to that, if you’re one of our lucky government customers, we’re also in Azure Government and AWS GovCloud; you can go to their respective marketplaces and utilize us. But what it really comes down to is this: keep in mind that we are the base for this. We are the common connective tissue that gives you the ability to do hybrid models or go strictly multi-cloud. And I’ll talk today about the various approaches to how you can utilize us to help expedite that. So what we’ve found recently is that cloud is much more than the technology. When companies choose to go to the cloud, there are usually a bunch of different reasons, and we’ll discuss some of the reasons behind that.
But the thing is that whenever you look at companies that say, “Hey, we’re looking at moving to the cloud,” and the cloud has become such a generalized term these days, basically when we say cloud, almost every company we talk to means cloud hosting, right? We’re going to host our applications. But usually the determination comes in various forms. Whether it’s a mandate from up above, a relationship the company might have with one of the providers, or a certain set of tools that are only available from some of these cloud providers. Somebody might want to use AWS CodeBuild, others might want to use Azure DevOps. It depends. And it really comes down to what the company is comfortable with.
But there’s a lot of information that you have to take into account, and this can be kind of staggering sometimes for a lot of organizations. So as we talk about this: why would you want to consider multiple clouds? This has become a really common thing. In the past you would just go with a single cloud provider; you’d form a bond and relationship with them. The problem with that is there are some caveats, which we’ll discuss in a bit. But what we see, and the industry sees right now, is that most companies, 70% of them, plan to increase their cloud budget going forward. This has just become the standard. There’s a lower TCO, total cost of ownership, associated with it, because you don’t have to have the infrastructure. You’ve got to remember there’s OpEx you’ve got to think about, and budgets.
61% of businesses migrated their workload to the cloud in 2020 alone. The pandemic had a lot to do with it; it was decentralization in its purest form. And this allowed a lot of companies to take a step back and say, “Hey, we’re doing a lot more remote work, and we’re also looking to reduce some of our costs. This is one way to do it.” So in 2020, 50% of organizations moved the majority of what they do to the cloud for this one reason, because of things like TCO.
Now when we also think about cloud adoption, the numbers say that 90% of companies are utilizing the cloud, and 87% of those have gone the hybrid route. For some companies, and we’ll talk about the caveats, there are still some misconceptions, but there are also some genuinely good reasons why a hybrid model would work best. I’ll talk about that, especially in terms of development and software delivery, and also just in general about utilizing both a hybrid model and a multi-cloud model to provide a super robust way for you to get things like security, cost savings, and backup strategies: the whole idea of not putting your virtualized eggs in one cloud basket. There are a lot of reasons why companies are starting to embrace this.
The thing is that yes, one of the main motivations we’ve seen out there is modernization, right? It’s a huge industry trend. You’ll see a lot of CIOs and CSOs start talking about, “You know what? We need to budget for these things, because we read an article somewhere.” Sometimes it’s strategic in nature. Also, in a lot of cases, having your infrastructure hosted by somebody else allows you to focus on things that are more relevant to what you’re doing than trying to maintain a physical environment. Operationally, data centers and regulations are one thing; infrastructure refresh is another. These are all the standard things we hear from customers. Operational costs of scaling out. CapEx versus OpEx. Maybe you want to start using CDNs and other network optimizations to make it more robust.
And what’s funny is the CDN and network optimization portion really started to get more traction. We hear more of this, especially if you combine it with things like footprint and geographical expansion, especially with COVID over the past 20-plus months. The main thing there was that teams were working from home and working more remotely, and it made more sense to start thinking about these things, because we became a more distributed workforce than the typical office model.
Now saying that… Oh, come on. Switch. There we go. There are benefits to having multiple cloud providers instead of just one. It’s vendor flexibility. And when I say vendor flexibility, I don’t just mean playing one cloud provider against another. What I mean is, once again going back to the idea of tool sets, there are certain tools that are only available in certain cloud providers. It also improves disaster recovery. We’ve seen it less over time as these cloud infrastructures have become more robust, but people will still have this anxiety of saying, “I host everything in AWS us-east-1,” the most used AWS region, “what if that whole thing goes down? Or what if AWS suddenly has a DNS problem?” Well, you know what? If I have something in Azure, I can quickly move all my operations over to a different cloud provider.
It also helps decrease some capital expenditures when you do this, because you’re diversifying. Some cloud providers will actually discuss terms and usage and things like that, but that’s not stuff I deal with; that’s more the managerial side, budgetary constraints and whatnot. We all know that moving to the cloud is, in most cases, a way for you to lower costs. But it does come at a cost, and that’s the thing: there are challenges. So when you look at a cloud provider, it’s about assessing it and identifying which applications and systems need to go there. Understanding different cost models: is it infrastructure as a service? Is it platform as a service? Is it SaaS? There are a lot of things like that.
Also, when you start looking at deployment models: public versus private? Is it a hybrid cloud model? Is it a multi-cloud model? Then there’s choosing the right providers, vendors, and consultants; you are going to have to take on a whole bunch of new skill sets when you’re dealing with this. And when you’re doing the migration, how do you maintain business continuity? This is a huge one. We see this all the time. I’ve had customers that peel the bandaid off and do it over a weekend. I have other ones that are like, “We have an 18-month plan in which we’re going to be migrating everything we do, because we just don’t have the capability to bring down any of our services during that time. It’ll affect revenue.”
The other thing, too, when you start looking at this, is coming up with the proper routes for creating these kinds of models and understanding what your cloud migration paths are. Is it rehosting? Are you replatforming? Are you repurchasing? Are you rearchitecting? There’s a lot you need to take into account when you start thinking about this, and for a lot of corporations this is a huge herculean task. And the big thing is that when you do face the challenges, training and certification become a thing. You know what, just because I know how to use AWS doesn’t mean I know how to use Azure or GCP or one of the other cloud providers. You need to bring in talent like cloud architects, cloud network engineers, automation engineers, security engineers, and cloud developers.
But there are also questions about adequate security: “How secure is my stuff that I have there?” For AWS you have things like AWS Direct Connect, which is basically like a VPN directly to your instance, and you can have more homogenized control around this. But other providers might not have that. And there’s always planning for minimal disruption, and there are additional costs. Cost always comes up as a major factor, along with budget. There is a portion where you need to invest heavily up front to gain the actual benefits down the road. But these are just things you need to keep in mind.
And one of the things we’re going to discuss here is how the JFrog platform can really help you get through this. That’s one of the key factors, because we handle things like your production side, we handle the development side, it could be your QA: all these things that you rely on to get your software to where it needs to be. During this time period, you want to have a foundation that you can build upon that will allow you to do these things, where maybe you start on-premise, or you’ve already started on-premise, and then you slowly migrate your operational characteristics.
Because I see in some cases, and a little aside on this, I see a lot of customers that are like, “What we’re going to do is we want to keep all our developers running locally in our offices in regions. And so we want to have a local instance of your Artifactory platform, the JFrog platform that we use already, but we want to start pushing all of our CI tooling, all of our testing tooling, all of our distribution components into the cloud because it allows us to have a broader reach.” This is a very typical scenario that we hear. Or, “We’re actually closing down our data centers. We’re getting rid of all the costs that we have. We are going to migrate everything into the cloud. We do want to have multiple clouds because we had multiple data center providers before and we feel very safe and satisfied knowing that we have more than one organization watching what we do because this is how we make money and the money we make allows us to pay you.”
So saying that, when we talk about a hybrid architecture, we see this approach with a lot of companies, like I said. “We’ve been doing things in our DCs forever. You know what, though? Every time we have to do a refresh of everything in our DC, the operational cost is terrible. It takes things down, it’s unpredictable. So we want to start migrating some of the essential parts of our organization up.” And if you notice here, if you’re using our whole JFrog platform, we have things like Artifactory, our Xray product, Distribution, and Pipelines. You can have that same exact architecture, in terms of products, in the cloud, without all the heavy operational load of owning your own equipment. So all these things are there.
Then when you get to these providers, choosing how you want to deploy them is always a big discussion. “We’re exploring Kubernetes; we want to start virtualizing and orchestrating all the things that we do.” Well, if you’re using our products, once again, we don’t care how you deploy them, as long as they’re deployed properly. You can deploy us into an orchestrated environment like Kubernetes or OpenShift. You can deploy us as Docker images and have them running as individual Docker services. You can install us on actual virtual machines like EC2s and things like that. And if you’re using our high-availability setup, you can utilize external services: say you’re in AWS, you can use S3 for storage and RDS for the database. If you’re in Azure, you can use Azure Blob Storage. So there’s a very flexible approach as you move from your self-hosted DC environment, which you could still retain, where you can emulate all those other factors and utilize the services those cloud providers offer to make your infrastructure in the cloud more robust.
So when you start thinking about that, the typical thing we see is: we have a data center, maybe we’re doing things in a cloud provider, and, like I said, we don’t want to put our eggs in the virtualized basket. But in this case, you’re given a mandate. Say you’re part of the infrastructure team, and your CIO comes down and says, “Hey guys, by the way, we’re going multi-cloud. I know all the development stuff we do right now is hosted in our data centers; we want to move it out to one of the cloud providers. Can you provide me with a strategy that will make things easy?” Well, if you’re already an Artifactory customer, the nice thing is that you can keep your current Artifactory in your self-hosted solution, install us into your multi-cloud environments, and suddenly you have options for how you can utilize our platform to ease this approach and give you that base-level foundation once you’ve actually gone ahead and done it.
Then you can start bringing in all the other ancillary technology stacks, such as your CI servers, your distribution servers, testing tools, whatever. You can utilize us as the common thread between all these providers that allows you to share the things you do: those third-party transitive dependencies, the builds you produce, the production-level components that you want to deploy into your environment. And you can utilize this as a way to make sure that they’re all in sync, non-dependent on where they are. Because we don’t care. It could be in Azure, it could be in AWS, it could be in Google Cloud, it could be in your data center. The snozzberries taste like snozzberries; they’re all the same thing across the board. There’s no variance between them, because to our platform it all looks the same. We don’t really care about the underpinnings underneath.
So before you begin and start looking at maybe moving, if you’re already an existing customer, these are some things I want you to keep in mind. Always plan your migration and your replication strategy, because we’ll talk about that today. Always clean up your instance; you don’t want to replicate garbage. We have plenty of information out there on how, if you design your repositories properly, you can go in and clean those up so you’re not moving garbage. Also, have backup strategies in place. But at least first sit down when you plan this, look at all the pieces you currently have, and then organize them so you can figure out how you want to migrate the various aspects of your organization going forward.
So the thing is, as a platform we offer a whole bunch of different tools that help normalize the experience of moving from self-hosted to a cloud, or expanding to multi-cloud, or whatever. This is applicable non-dependent on strategy and non-dependent on the actual infrastructure in place. This is where we help the most: we help lower that barrier to entry by providing the same experience no matter where you have us. I’m going to accentuate that a lot, so if you get sick of hearing it, I’m sorry, but that’s one of the major things that we do.
So the first thing you want to know is: how do I get all my users, groups, and permissions synchronized across multiple instantiations in different locations? I want to make sure that my permission models do not vary between my self-hosted instances or between cloud providers. If Bob logs into an instance, and you may be using a load balancer that might be geographically based, he should have the same experience no matter where he goes, and he shouldn’t even care what provider he’s on. So for those permission models, we have Access Federation; this is part of our Enterprise+ offering as part of the platform. What this does is, the more instances you have… And if you notice here, by the way, I didn’t put down cloud providers. Because I don’t care.
And the thing is that once I have these set up, we have a circle of trust that we enable between these instances of the platform, which allows me to synchronize users, groups, permissions, and access tokens across multiple instances. So Bob isn’t suddenly contacting IT wondering why he can’t connect. He doesn’t know he’s connecting to GCP, he doesn’t know he’s connecting to Azure, he doesn’t know he’s connecting to AWS or Oracle Cloud or one of those. He shouldn’t care. He should be able to just go in, be redirected, log in, get all the things he needs, and get his work done.
So we have star-based federation and we have mesh-based. If you want to have one centralized instance, maybe your San Francisco self-hosted instance is the center point of your DevOps universe, and then maybe New York is in AWS, Paris is in GCP, and Bangalore is running Azure. It doesn’t matter. You can have it so that San Francisco dictates to the others. If you want them all on equal footing, you can set up a mesh-based topology. And by the way, this extends down to the Edge nodes, which we’ll discuss in a little bit too: I can control the access-level controls at the edge itself. So this is one way of handling things like permissions and users.
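The difference between the two topologies is easy to see if you model them as directed sync edges. This is a toy illustration only; the site names are hypothetical, and real Access Federation is configured through the JFrog Platform itself, not code like this.

```python
# Toy model of star vs. mesh Access Federation topologies.
# Site names are hypothetical examples matching the talk.
from itertools import permutations

sites = ["san-francisco", "new-york", "paris", "bangalore"]

def star_edges(hub, members):
    """Star: one central instance pushes users/groups/permissions to the rest."""
    return [(hub, m) for m in members if m != hub]

def mesh_edges(members):
    """Full mesh: every instance synchronizes with every other instance."""
    return list(permutations(members, 2))

print(star_edges("san-francisco", sites))   # 3 directed sync edges from the hub
print(len(mesh_edges(sites)))               # n*(n-1) directed edges for n sites
```

The takeaway: a star keeps administration centralized (one source of truth dictating outward), while a mesh puts every instance on equal footing at the cost of many more sync relationships as you add sites.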
The next thing I want to mention is that this handles entitlements; you’re also leveraging the access synchronization, and this will build that global model for you. You can also use it for your CI tooling, so you can share those access tokens you might use for your CI tooling itself. The next thing we want to talk about is replication. Replication is one of the core components of what we do with our platform. It started with Artifactory five, six, seven years ago, I lose track of time, but replication is a way for you to make sure that consistency is paramount. When you’re an organization using our replication strategy, once again, I don’t care where I go when I’m connecting, say through a geo load balancer in this day and age of distributed workers; the major thing is that my stuff is there.
So this allows you to do this even across multiple teams. You might have a team in India, one in Eastern Europe, one on the East Coast of the US, one in South America; it doesn’t make a difference. It provides consistency across the teams. So my team in Bangalore, when they’re downloading dependencies and constructing a build, should be using the same third-party transitive dependencies being utilized in, say, my San Francisco office or my Buenos Aires office. It also reduces the risk of that inconsistency as well.
You can also set it up so that it’s not done at a whole-instance level: it’s not instance-to-instance, it’s actually more of an à la carte method. You can choose what you want to share between locations, or between cloud providers if you wanted to. It can be push or pull; we’ll discuss that. It has many different use cases it can be applied to. It doesn’t matter what the network topology is; we don’t care. Once again, where you host it doesn’t make a difference. You just have to make sure you have the right rules in place. Oh, that was weird. It went back. Hold on. It’s going forward now.
Well, let’s talk about optimization. Our platform has had this built in for a while, and it includes optimization behind the scenes on the network. If you’re not familiar with the way Artifactory stores its artifacts, we store them using a checksum-based approach. I’ll explain in a bit how we use checksum-based replication, so that when you are transferring objects between locations, one of the nice factors is that you’re not transferring all the artifacts; you’re transferring the deltas between versions. This is just one of the major things behind it. We also use that same checksum-based replication when we publish things out to the Edge nodes.
When I mentioned the distribution side before, and I’m going to talk about that in a bit, that’s all checksum-based. Say you have five terabytes of a build, right? Five terabytes, woo, that’s a big build. Okay, five gigabytes of a build. And the next build is only a slight variant, a 10% deviation from the previous version. When you’re replicating these out, we’ll only be replicating that 10%, or if you have a release bundle, pushing that 10%. So it’s highly effective and efficient. So we have different types of replication, right? Oh, this thing made it a little weird.
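The checksum-based delta idea can be sketched in a few lines: if content is addressed by its hash, then replicating a new build version only requires sending the blobs whose checksums the target doesn't already have. This is a conceptual illustration, not Artifactory's actual implementation; the "chunks" and contents here are made up.

```python
# Sketch of checksum-based replication: content is addressed by its SHA-256,
# so only blobs the target doesn't already have need to travel.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A made-up "build" of 10 chunks; v2 changes roughly 10% of it.
build_v1 = {f"chunk-{i}": bytes([i]) * 1024 for i in range(10)}
build_v2 = dict(build_v1, **{"chunk-9": b"changed" * 100})

# The target site already holds all of v1's content, keyed by checksum.
target_store = {sha256(data) for data in build_v1.values()}

# Replicating v2 only transfers blobs whose checksum the target lacks.
to_transfer = [name for name, data in build_v2.items()
               if sha256(data) not in target_store]
print(to_transfer)
```

Only the single changed chunk ends up in the transfer list, which is why a 10% deviation between builds means roughly 10% of the bytes on the wire.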
We have push-based replication. Push-based replication means Location A pushes an artifact to Location B. We also have pull-based replication, where maybe Location B might be pulling a binary from Location A. Well yeah, sorry about the formatting here; I guess it uploaded weird. Either mode, by the way, can be scheduled or event-based. It could be real-time, triggered when a new binary gets uploaded, or it could be cron-based; maybe every 12 hours you replicate between things. What happens in push-based replication is the source gets an event that says, “I need to replicate this out,” and it pushes automatically to the destinations. It could be more than one destination, too, by the way.
Then you have the other side, where it’s an event-based pull or a cron-based pull: the source sends an event to the target, and the target says, “I’m going to download.” Now pull-based replication is great for places that might have latency or low bandwidth; these are ways to alleviate that. Remember, downloading is usually faster than uploading, so this is one way to mitigate it. You can also do things with multi-site replication, where we have star- and mesh-based topologies. You might have one central source of truth, maybe in this case Location A on the left-hand side, which can do either pull- or push-based star replication outward. Or, on the right-hand side, you have two locations, giving you mesh-based topology replication between them, so that you might have two groups in constant synchronization with each other. These are all open-ended options for how you can do things.
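As a concrete sketch, here is the kind of configuration payload a push replication involves. The field names follow Artifactory's documented replication REST payload, but treat them as an approximation to check against the API docs for your version; the target URL and credentials are placeholders.

```python
# Sketch of a push-replication config for an Artifactory repository.
# Field names approximate the replication REST payload; URL and
# credentials are placeholders, not real endpoints.
import json

push_replication = {
    "url": "https://nyc.example.com/artifactory/libs-release-local",  # target repo
    "username": "replicator",            # hypothetical service account
    "password": "<token>",               # placeholder, deliberately elided
    "cronExp": "0 0 */12 * * ?",         # cron-based: replicate every 12 hours
    "enableEventReplication": True,      # also replicate on upload events
    "enabled": True,
    "syncDeletes": False,                # don't propagate deletions to the target
    "syncProperties": True,              # keep artifact properties in sync
}

# A real setup would send this as, roughly, PUT /api/replications/<repo-key>.
print(json.dumps(push_replication, indent=2))
```

Note how the one payload captures both modes the talk describes: `cronExp` covers the scheduled case, and `enableEventReplication` covers the real-time, event-driven case.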
One of the other things we just introduced, and this is actually one of my new favorite features, is federated repositories. Federated repositories are amazing. And instead of just discussing them, I’m going to show you what they look like, because I’m super excited about the feature. So let me connect to my instance here and bring it up. Oh, why am I not sharing? Here, let me share. I’m going to share a Chrome tab here. Actually, I’m going to share a window. So here we go. Here is Artifactory, this is my Artifactory. And as you can see, by the way, I have instances globally around the world; I’ve got four instances. I can have them on various cloud providers, I don’t care. This is our Mission Control product, which will let you know the current state of what you’re doing.
But one of the coolest new things we have: if you look at our repositories, say local and remote repositories, local is where you store your binaries, and remote is where you proxy a third-party [inaudible 00:31:38] source through. Any of these repositories, if I just select any of them at all, can be converted into a thing called a federated repository. What a federated repository is, simply, is this: I can select a repository I have here and say, “Hey, by the way, I have two other instances I want to share this with.” I select the repository and say, “I want to share it with my New York instance and with my Bangalore instance.” In the past, I would have to go to each location, create the repositories in each, and set up the replication between them. It’s kind of an arduous process.
Now, if you have two locations, that’s easy. Three is fairly straightforward. Anything above that can be really daunting. So now, when you have multiple locations, I can select a repository and say, “I want to create this repository in my New York instance and in my Bangalore instance.” I hit done, and then I hit save. When I hit save, it goes to those two other instances, creates those repositories, and immediately starts replicating the binaries from this instance out to the others.
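Under the hood, a federated repository is essentially a repository definition that lists its member instances. A hedged sketch of that configuration shape in Python follows; the member URLs are made up, and on a real instance a body like this would be PUT to the repositories endpoint:

```python
def federated_repo_config(package_type, member_urls):
    """Build a federated repository config: one repository definition
    plus the list of Artifactory instances that should mirror it
    bidirectionally."""
    return {
        "rclass": "federated",        # instead of "local" or "remote"
        "packageType": package_type,  # e.g. "generic", "docker", "maven"
        "members": [
            {"url": url, "enabled": True} for url in member_urls
        ],
    }

# One repo, federated across two hypothetical instances — the full-mesh
# setup described in the demo, provider-agnostic.
fed_cfg = federated_repo_config("generic", [
    "https://nyc.example.com/artifactory/test-fed-generic",
    "https://bangalore.example.com/artifactory/test-fed-generic",
])
```

The key point is that the mesh is declared once, on one instance, rather than configured pairwise at every location.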
I just set up full mesh replication, dependent only on the location of the actual Artifactory instances and not on the provider. Those could be hosted in self-hosted data centers, or in Azure while I’m in AWS and GCP. Don’t care. We’ve abstracted that away for you, giving you the ability to quickly distribute those binaries to wherever they need to be. I’m super excited about this feature and it makes me really happy. What it gives you is bidirectional mirroring, and it’s event-based pull, so it pulls the binaries down very expediently. On top of that, it uses a checksum-based approach, so it’s only bringing down the changes that it needs to bring down.
The next thing I want to talk about, just for time purposes, is JFrog Distribution. The beauty of this is for when you span cloud providers: maybe you do all your build services inside of, say, AWS, and then you say, “You know what? We’re hosting in AWS, but we’re also hosting in Azure and GCP. We want the best global footprint we can have.” Now, you could set up Artifactory instances in each one of those locations, and that’s fine. Maybe you have multiple regions, so you have other instances of Artifactory and you deploy the platform in each one. You use federated repositories to get the builds where they need to be, and then maybe in those instances you have your centralized US-West instances in GCP and AWS, whatever.
But then you have other regions where you want to deploy this application. Now, you could install more Artifactory instances and use replication again. Or you can use our distribution method and set up these Edge nodes. Edge nodes are lightweight, immutable versions of Artifactory: they’re read-only and they’re secure. You can create things called release bundles, which I’ll talk about in a minute, and distribute them out to these Edge nodes. You can also use a thing called a smart repository, which is like a remote repository except that the source is another Artifactory instance; you can browse it from the Edge node and pull from it, and it’s a request-based cache.
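A smart repository is configured like an ordinary remote repository, except that its upstream URL points at another Artifactory instance and content synchronization is switched on. A hedged sketch of that configuration, with a hypothetical source URL; exact field names may differ slightly between versions:

```python
def smart_remote_repo_config(source_artifactory_repo_url):
    """Remote repository config whose upstream is another Artifactory
    instance rather than a public registry, making it a 'smart'
    repository with request-based caching."""
    return {
        "rclass": "remote",
        "packageType": "docker",
        "url": source_artifactory_repo_url,  # another Artifactory instance
        "contentSynchronisation": {          # the 'smart' part
            "enabled": True,
            "statistics": {"enabled": True},  # report download stats upstream
            "properties": {"enabled": True},  # sync artifact properties
        },
    }

smart_cfg = smart_remote_repo_config(
    "https://central.example.com/artifactory/docker-prod-local"
)
```

Artifacts are only cached locally when a client actually requests them, which is what makes this cheaper than full replication for low-traffic sites.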
You can also use it for optimization: we have an optimization component to make sure these are distributed more rapidly and more efficiently. It also helps if you’re trying to get your software into, say, Ali Cloud inside of China, which a lot of companies are doing. You can deploy Edge nodes in Ali Cloud and push your binaries in there, using it purely as a distribution point without exposing everything else that you’re doing. But really, what it comes down to is that these Edge nodes are cheaper and more efficient, and you can use them just for deployment purposes. So if you’re deploying your web service, or whatever you’re doing, in multiple locations, multiple regions, multiple cloud providers, you can create this gigantic star-based or mesh-based infrastructure.
In addition to that, I mentioned release bundles. Release bundles are great. You can package things up, like I said, a Helm chart and Docker images, and place them together into a digitally signed release. They have their own bill of materials behind them, and when we distribute them out, they’re validated to make sure they weren’t tampered with. You have RBAC control at the edges, so you can say who or what has access to them. You can use our Xray product to scan them one last time before they go where they need to go. And you can use our API, our CLI, or our JFrog Pipelines product, or integrate with your own CI or CD process, to create these release bundles and distribute them out.
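A release bundle is defined by a name, a version, and a query selecting the artifacts to include, signed at creation so the edges can verify integrity. Here is a hedged sketch of that payload shape in Python; the bundle name, repository, and AQL query are hypothetical, and the exact schema belongs to the Distribution REST API:

```python
def release_bundle_payload(name, version, aql_query, notes_md=""):
    """Build a release-bundle definition: a signed, versioned collection
    of artifacts selected by an AQL query, with optional release notes."""
    return {
        "name": name,
        "version": version,
        "sign_immediately": True,  # sign at creation so edges can validate it
        "release_notes": {"syntax": "markdown", "content": notes_md},
        "spec": {
            "queries": [
                {"aql": aql_query, "query_name": "app-artifacts"},
            ],
        },
    }

# Hypothetical bundle: everything under 1.0.0/ in a production repo.
bundle = release_bundle_payload(
    "webapp",
    "1.0.0",
    'items.find({"repo":"webapp-prod-local","path":{"$match":"1.0.0/*"}})',
    "First GA release.",
)
```

Distribution to a chosen set of Edge nodes is then a separate call referencing the bundle’s name and version, which is what the demo shows through the UI.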
It’s highly flexible in its approach, and we have a solid CD model with the platform. With our platform, it’s end-to-end, as I mentioned before. The great thing is the level of consistency from developer to deployment: use the entire platform and you have complete accountability. What was used? How was it used? Where is it located? And in the future we’ll have some more exciting things, even around security at the edges themselves, that we’ll probably be announcing in 2022. It’s super exciting; it’s on the tip of my tongue and I wish I could say more, but I can’t.
The next thing about this distribution model, though, is that, like I said, we don’t care where you run: you can have your on-premise data center and publish these artifacts out. These are your end products, your production-level artifacts, the things that need to be downloaded. You might even use them in another location where you have geographically dispersed individuals, and this gives them a way to just pull things. You can set up an instance somewhere in a region so that some guy who might be, I don’t know, in Kalamazoo can download what he needs locally. Maybe it’s all the third-party transitive dependencies; he uses a smart repository to bring down everything he needs as a developer. Because maybe you’re building everything inside of AWS or GCP or Azure or your own data center; it doesn’t matter. The Edge nodes give you the ability to distribute your production-level artifacts into various cloud providers, in multiple regions, in multiple availability zones if you need to.
So, saying that, I’m going to show a couple of the pieces here, because we only have a few minutes left, and then if there’s any Q&A, I’d like to take some questions. Let me share with you some of these components and how easy they are to use. If you look at what I’m doing, you can see, as I stated before, that our entire platform is designed to be a global, infrastructure-agnostic entity. So what you see here is actually… What? Oh, am I not sharing?
No, share your screen.
Oh, I am sharing.
It should be shared. Okay, it took a minute to share. Sorry, I thought I was sharing it. Anyway, just so you’re aware, having all these various cloud providers or infrastructures doesn’t make a difference. Here’s my main location, here’s my New York instance, here’s my Amsterdam instance, and it actually shows there’s a problem with that one; I know there’s an issue and I can go investigate. This one is in Bangkok. You can see I have Edge nodes evenly distributed in here. And if I wanted to, by the way, I could click on any of these Edge nodes, like this, and show you that when I go into an Edge node, it connects to that provider. I’m going to log in; in this case, it’s going to have me enter my password. Hold on. There we go.
When I look at this Edge node, you can see that I have a whole bunch of packages available, and you can see the artifacts for what they are. I can even show you things like smart repositories, where this instance is connected to a remote instance; in this case, it’s connected to a SaaS instance I have. So I’m even able to run a hybrid model, with a SaaS instance and a self-hosted instance, and proxy the binaries from my SaaS instance down to my localized instance. You can see I can go in and look at the files, and you can see these aren’t cached yet, but I can download them at any point.
Under here, I think I have a couple that have some binaries I’ve pulled down into the smart cache; I might have gotten rid of them. But the idea is that this is just like Artifactory, right? I can go in and use all the Set Me Up instructions and grab the code snippets I need to connect to it. But these are read-only; all I need to do is consume binaries. You can also see, in Artifactory, all the builds, all the packages, all the binaries I produce. And here’s our distribution hub. In the distribution hub, you can see a release I have, 1.0 of this release, where I’ve distributed it out, and at some point there was a failure while I was distributing and I had an error. Huh, that’s interesting. Here are all the contents: I pulled this out of the build, and I have some release notes here that represent what I’m doing. If I had the Xray component enabled, I could show you the Xray information, and I can also show whether it was produced by one of our pipelines. When I look at these, I could build out these releases in an automated style. And say I wanted to distribute this out again: I can check whether the target is available in a remote instance, select the Edge nodes I want, say Edge level one, and simply do that. Now it’s going to distribute the artifact down to the Edge node where it needs to be.
Since it’s checksum-based, if you notice here, it didn’t even bother to push; it just went. In this case it said: I already checked the remote instance, it already has the binary, I’m just going to mark it completed. If there were a new variant in there, it would only push down what it needs to push down. So that’s the distribution side. But the big thing is that I have these artifacts, and I can show you any of the binaries I have in more than one location. Let’s see, I’ll just go connect to this instance here.
Here’s my Bangkok instance, where I’m doing something. I’ll show you that we can look at some of the binaries I have and set up binary replication. Like I showed before with one of the repositories, I set up the federated model. So let’s say, “All right, you know what? I’m going to create a federated repository.” Let’s just do it right now; I might as well. I’m going to create a generic one, because I just want to upload a file of some sort. I love doing live demos. So I’m just going to call it test fed generic, and it’s a local repository here. I don’t care about type, and I’m not going to add any security right now. I’m just going to create the federated repository. And that’s it.
So let’s say “test,” and “generic.” I created my test fed generic right here. Let me copy the actual key, because I want to make sure I’m typing it correctly. And I brought up the other instance, so let’s log in to that one. Now this is my Bang… oh, hold on. Love it when I don’t have the right password. All right, I’m logged in. Let’s look at the artifact registry here; as you can see, there are a couple of generic repos and things like that.
But let’s go in, and I’m going to create this federated repo right here, test fed generic local, and say I want to federate it out. I’m going to save, and it lets me know that it’s created that repository. Let’s look inside: “Okay, here’s my generic local repo.” There’s nothing in it right now; if I expand it out, I’ve got nothing. But let’s go over to the other side and take a look. Oh look, there it is, that same repository I created. Now let’s do something: let’s deploy, I don’t know, some sort of file. I’m going to select a file from my local desktop. I’ll just do an image, why not? A screenshot. There we go. I’m going to upload a screenshot and deploy it, and you can see here it’s been deployed into my primary Artifactory instance.
Let’s do a quick refresh on this side, and if I expand it out, what do you know, I instantly shared between the two. I created a repository in my primary, created the same federated repository in another location, uploaded a file, and there it is, directly. That was it. I can now span two instances of Artifactory across two different cloud providers, or a hybrid model, and instantly share the binaries between them. That’s how easy it is, even with that approach. And there are multitudes of things you can do going forward to enhance the experience.
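The upload in the demo is just an HTTP PUT to the repository path, and the checksum-based behavior shows up even at deploy time: if the server already holds a binary with the same SHA-1, a “checksum deploy” can link it without re-sending the bytes. A hedged sketch of the headers involved; the instance URL is hypothetical:

```python
import hashlib

def deploy_headers(content: bytes):
    """Headers for an Artifactory deploy. With X-Checksum-Deploy set, the
    server can link an existing binary by checksum instead of receiving
    the same bytes again."""
    sha1 = hashlib.sha1(content).hexdigest()
    return {
        "X-Checksum-Sha1": sha1,
        "X-Checksum-Deploy": "true",  # attempt a checksum-only deploy first
    }

content = b"screenshot bytes here"
headers = deploy_headers(content)
# A real call would look roughly like:
#   requests.put("https://primary.example.com/artifactory/test-fed-generic/shot.png",
#                headers=headers, data=content)
```

This is the same deduplication idea that makes federated replication only move the deltas between instances.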
But those are the main things I wanted to discuss today. I just want to say thank you for your time. If there are any questions, now is the time. I can’t see the question [inaudible 00:47:02]. Oh, there it is.
We did have some questions come in, so I will read them out loud and then you can respond, Bill. Sounds great. Our first question is: what do you see as best practices in terms of trade-offs when leveraging multi-cloud providers while minimizing complexity and overhead cost? And in addition to that: where’s the right balance, in your view or your personal opinion?
So yes, I am one of those people who, when it comes to multiple cloud providers, is all for it. Let me explain my background a little bit. Before I joined JFrog over five years ago, I was the CTO and founder of multiple companies, and understanding infrastructure costs was always in the back of my mind; we always had meetings about how much we were spending. In terms of multiple cloud providers, there are a lot of benefits in each case. Some have better storage pricing, others have better transfer rates, some have better tools than others, and some charge for services that others don’t. And then there’s reliability, consistency, and the ability to have global coverage: some cloud providers don’t have regions that others cover.
The thing is, I wish I’d had the tools I’m showing here in Artifactory back then, over five and a half years ago. Providing that level of consistency and lowering my cost of getting stuff between locations, just the effort of figuring out how to do it, finding the right tool, doing all the deep-level analysis; if I’d had a base foundation like our platform, it would’ve made my life easier. It’s just there. Now, in terms of using multiple cloud providers, there are a lot of benefits, but I’m not the guy to talk about it in those terms. I think about it in terms of deployment. In my old life, like I said, cost was always the thing. But say I’m a release manager or a product manager: all I care about is getting my stuff where it needs to be, as fast and as rapidly as possible, in the same consistent manner. That, to me, is true TCO, right?
The other cost leading up to it, of course, is education and things like that: learning the platforms, understanding them, deploying to them. We see this all the time. There are variances between the platforms; there are basics you can carry from one cloud provider to another, but actually doing it is a whole other story. So for me personally, I’m a big fan. I like diversification, I like decentralization of objects, being able to have things wherever I need them; I don’t care where. I just care: is my customer getting the best value? Am I releasing as expediently as possible? And is it as consistent as I want it to be? Because if it’s not, that’s when TCO goes through the roof.
Great. Looks like we have one more question; this might be our last one. If you have any additional questions beyond this webinar, please don’t hesitate to reach out to firstname.lastname@example.org and we will answer those post-event. So not to worry if your question gets missed or if you think of something later. Alrighty, the next question is: are you [inaudible 00:50:36] defining different policies for different geographies to cater to a diverse regulatory landscape for multi-site, multi-geography development teams? Now, that was a handful of a question, so hopefully you can handle it.
Yes. [inaudible 00:50:55] Actually, you can. With the Access Federation model, you can have common users, groups, and permissions, and you can also mix and match. You’re not relegated to just one IDP, one identity provider; you can have multiple providers and then have basic rules based on that. But one of the cool things, and I’ll just show it, is this. It’s one of the new features we’ve introduced, and it’s my new favorite feature, I am not going to lie. Just let me know when it’s sharing. It should be sharing by now, I hope. Come on. Okay.
And it’s called Projects. Projects is a feature we introduced a couple of releases ago. One of our biggest complaints was that before, you had the super user and then granular users and granted permissions. Those are fine, and they were super strong, but the problem is that administering them was kind of hard; we put a lot of the burden on administration and the rights behind administration. With Projects, you can have not only super users but also project-level admins, and project-level admins allow a little more group autonomy. Inside a project, you can do things like set a storage quota, with a hard ceiling and a soft ceiling. And when you’re building these projects as a super admin, you can say what the project admins can do: can they control their own repositories? Can they manage their own members? Their own Xray resources? Their own pipelines? Things like that.
But the main thing, for what you’ve asked, is that we have global roles that apply across all projects and Artifactory in general, and you also have the ability to create roles that are specific to a project. That’s great, because then you can define what’s relevant to that project and its level of access control. And you can have these projects in different locations globally, where each team might have different roles and responsibilities, but use replication to get the binaries into those projects in those regions and have the teams interact with them. And I’m going to show you a cool feature with projects that is, in my opinion, awesome.
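The project setup described here (a key, a storage quota, and delegated admin privileges) can also be expressed as a payload to the platform’s Projects API. The sketch below is a hedged approximation of that shape in Python; the project key, quota, and exact field names are assumptions and should be checked against your platform version:

```python
def project_payload(key, display_name, quota_gb):
    """Build a Project definition: a key, a storage quota (the hard
    ceiling mentioned in the talk), and the privileges delegated to
    project-level admins."""
    return {
        "project_key": key,
        "display_name": display_name,
        "storage_quota_bytes": quota_gb * 1024**3,
        "admin_privileges": {
            "manage_members": True,    # project admins manage their own users
            "manage_resources": True,  # ...and create their own repositories
            "index_resources": True,   # ...and manage Xray indexing
        },
    }

# Hypothetical project for one regional team, capped at 50 GB.
proj = project_payload("webteam", "Web Team", 50)
```

Per-geography policy then comes from defining project-scoped roles inside each regional project rather than granting everything globally.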
One of the things, too, is that cross-team dependencies are a very real thing. One team produces something for another.
When you’re doing this with our project model, what’s awesome is that you can just share and say, “I want to share this repository as read-only with maybe these two other projects I have here,” and then they only have read-only access to it. It’s awesome. Or you can say, “I want to share this with every project.” Maybe you have a common set of binaries used for authentication across all your products, and one team produces them. The other teams can include it in their virtual repositories, because if I shared this out to another team…
And I’ll just show you: if I select another team where I know there’s a shared repository, you can see where it’s locked here, right? It’s a read-only repository, so I can look at it, but I can’t really do anything with it other than interact with the components in there.
But if I had a virtual repository, like this Ivy one I created, you can see that I can include that shared repository in it, and it’s read-only. So I’m always assured of getting the latest version from the other team without having to hunt them down; I’m automating it and expediting the process with a couple of button clicks. And that’s the thing: don’t think of this just as a product for getting and controlling binaries. Think about how you utilize that infrastructure to make your organization more relevant and efficient, and keep the velocity growing, as opposed to pulling back and coming up with old, antiquated strategies. Do it all behind the scenes. I always joke, “Automate, automate, automate.” This helps with that: the more you automate, the more you can control other things. Any other questions?
That looks like it, and we are out of time, so that ended perfectly. Those were great questions. As I mentioned, if you forgot to ask something or something comes up post-event, you can email us at email@example.com, and myself, Bill, or somebody will help get to those questions for you. Other than that, thank you, Bill, for joining us today.
Thank you very much.
And thank you to everyone else on the webinar. Thank you very much. All right, see you later, guys. Bye.
Thanks guys. Be safe. Be wonderful. Be well.