Data Centric Workloads with HPE Ezmeral Container Platform [swampUP 2021]

Dejay Noy, Ezmeral Field CTO for the Americas, HPE

June 30, 2021

< 1 min read

See how HPE’s Ezmeral Container Platform can help you with your data-centric workloads, and follow along with this demo of harvesting live public data and making it available to your users. Get started with your instance today: https://jfrog.co/35OKwXW

We will be using a Jupyter notebook, Kubernetes, Docker, and other open-source tools to show the platform’s power.

Video Transcript

Hello, everyone.
This is Dejay Noy with the HPE Ezmeral container platform team
and thank you for attending this session,
we’re going to go over a couple of whiteboards to introduce you
to the container platform.
Then we’ll jump over and take a tour
of the interface and create an application
and then delete that application
to show you the power and flexibility of the platform.
So let’s get started.
Let’s pretend for a second this cylinder represents your data.
Your data sits at the edge, in a data center, or in a colo.
These represent your applications that are accessing your data,
and all of it sits inside your secure four walls.
Over here we’ve got the cloud provider.
Typically we see two types of strategy when it comes to
a cloud transformation journey
that a customer is on.
Now, this is what we’ve been seeing for the last few years.
These numbers may not be 100% accurate,
but they’re in line with what we see on average.
So let’s take a look at a couple of these strategies that most of our customers have out there,
and we’ll call them workstreams.
Workstream number one says,
any new application is going to be cloud native
and we’ll go ahead and develop them over here in the public cloud.
Now, part of workstream number one also says,
I want to migrate some applications over from my on premise environment
to the cloud as well.
Depending on the maturity of the customers,
this could be anywhere from 20% to 80%.
But on average, we’re seeing about 40%.
Let’s migrate 4 out of 10 applications over to the cloud.
Now workstream number two says,
hey, I know there’s certain applications that I can’t migrate.
So what am I going to do with those applications?
Well, I’m going to go ahead and put them here on premise in this bucket
and there are a couple of strategies I can use
to deal with the applications
that I can’t migrate.
The first solution is to look for some type of SaaS provider.
Maybe someone out there is hosting a solution that we can rent
as a service, and then we can switch over to that SaaS solution.
The second one is,
maybe we can retire these applications because they’re not being utilized
or they might be replaced by the new cloud native application,
or by the SaaS provider.
Now, here’s what happens when some of our customers migrate applications over to the cloud.
They find out that all of these applications
are interconnected in some way, some form.
So when you migrate one to the cloud,
you start getting performance issues.
Whether that is because of the interconnectivity,
or because the application has to stay close to the data,
right? So, because of some sort of data gravity issue,
you start getting latency issues and poor performance.
So most of our customers are thinking that workstream number two
is a relatively small piece,
somewhere around 10%.
In reality, this could be 10%, all the way up to 90%
because of that data gravity issue and the interconnectivity issue.
So instead of 40% of the applications being moved from on premise
into the cloud, it’s more like 5% to 10% that would be successful.
So how do you solve this problem?
Well, one solution is to have cloud native services here on premise
that can span your private environment, as well as your public cloud environment.
This is where the HPE Ezmeral container platform can come in and help you.
Now let’s explore the components of the HPE container platform.
At the very heart of it is the Docker container runtime engine.
There will be future support for other container runtime engines
such as CRI-O, rkt, and Singularity,
but right now it is the Docker runtime engine, along with containerd.
We are a control plane for container orchestration.
Right now, there are two container orchestrators built into the platform,
and those are Kubernetes and BlueData EPIC.
Slurm is on the roadmap; that is the scheduler and orchestrator
for the Singularity container runtime engine.
Out of the box, we enable multi-tenancy;
this means you can carve up resources
and assign them to different sets of users.
So that way, they don’t interfere with each other’s resources.
Resources such as CPUs, RAM, GPUs, and storage.
Out of the box, our platform will allow you to run multiple versions of Kubernetes,
you can run the latest versions of Kubernetes
and you can run up to three different versions on the same cluster.
The platform will allow you to run stateless, cloud-native, microservices-focused
applications with no data on the platform.
In addition to that, you can run stateful applications with data.
These are your monolithic legacy applications,
one big, gigantic set of code;
you can run those on our platform in addition to the stateless applications.
And the reason why you can do that is because of our global data fabric.
This came from the MapR acquisition,
and it has been renamed the HPE Ezmeral Data Fabric.
So this allows you to host stateful applications
and to ingest data and store it on the platform.
So let’s explore that a little bit more here in the next whiteboard.
Let’s pretend you’ve got HPE Ezmeral container platform
here deployed across your edge and your core environment.
Each one of these will have their own Kubernetes clusters.
Here at the core, you’ve got an application that might be using Spark,
the data scientists might be using Jupyter Notebook and JupyterHub,
and the data is stored on a MySQL database.
Here at the edge, you might have an application that is utilizing Kafka,
and its data is stored on a Cassandra database.
Over here, you’ve got the cloud environment.
Here, you’ve got an application that’s sitting on its own Kubernetes cluster.
This application is in charge of harnessing data
from your core and your edge environment and providing it to your users.
So how does this application access
the data that is sitting inside the edge and the core
without you copying that data into the cloud?
This is where the HPE Ezmeral Data Fabric comes into play.
So now it can span the edge, the cloud, and the core,
so the applications can live anywhere and access that data.
But it doesn’t stop there.
Because of our CSI driver,
the application now has access to other HPE storage solutions,
as well as third-party storage providers.
And this is because of our DataTap technology
that allows you to tap into each one of those data solutions
and make it part of the data fabric.
Now what if the application sitting at the core needs access
to cold storage sitting inside of one of the cloud providers?
Can it access that storage or that data that’s inside that storage solution?
The answer is yes, through our S3 driver as well.
Now, there’s another environment that is getting really popular out there,
and that is the far edge environment.
We’re going to have a factory here, connected via Wi-Fi
or via satellite, represent our far edge environment.
Here we’ve got the HPE Ezmeral container platform deployed as well,
and we’ve got its own Kubernetes cluster
with an application and its own set of data.
This environment, because of the Data Fabric,
will be able to process data at the source
and make real-time decisions without
needing a connection to the rest of the data fabric.
As soon as there’s a connection,
it will sync up and the platform is intelligent enough
to know that, and to participate in the data fabric.
As you can see, the data fabric
is what keeps everything together
to allow you to access data from multiple sources,
and allows you to decouple your application from the data,
so your application can live anywhere on any platform,
and be able to have secure and reliable access to the data that it needs.
Because of its ability to host stateless applications and stateful applications,
the HPE Ezmeral container platform is the one true unified solution
where you can run both cloud-native and non-cloud-native applications.
So your data engineers and your data scientists can use the same platform
as your app developers and your DevOps teams,
because of this ability to run stateful and stateless applications
on a CNCF-certified Kubernetes environment with the HPE Ezmeral Data Fabric.
Now let’s jump into our demo environment
and create an application to harness live data from a Twitter feed
and delete it once we are done.
Okay, let’s jump into our HPE Ezmeral container platform demo environment.
Let’s sign in.
And right away, we get brought to the dashboard.
So here I can see what I’m consuming,
the amount of resources I have left and what’s been assigned to me.
So far, I’ve got eight out of 16 cores being used,
and I’m consuming 43% of my storage.
I’ve got no GPUs assigned to me,
but if you need it, the platform will allow you
to assign GPUs to any users out there.
Let’s jump into our app store and see what’s available to us.
So I’ve got five images that my admin has allotted to me,
and we are going to want to deploy this image
to create our application.
So to create our application,
let’s deploy this container image
and upload some of our files.
To do that, we’ll come up here to the cluster section,
and then we’ll click on create cluster.
From there, we enter in the details, we’ll call this…
we’ll call it test app, give it a description, if you want,
we’ll change the cluster to an Intel cluster
and we want the Intel distribution for Python.
And that’s our only option anyways.
Okay, so our admin has allowed us to change the size of our node.
If we click on the down arrow, here, we can see a small, medium or large configuration.
And we’ll keep it at small for now,
and our admin is allowing us to create only one instance.
If you need to do some advanced settings,
you can expand this
and you can get into the debug mode, the isolated mode, the two phase delete mode,
or the bootstrap action.
Bootstrap action comes in very handy if you need to
alter the image by adding other packages or typing other commands and things like that.
So once we’re done with our detail here,
we’ll create cluster.
And once you create the cluster, it’ll show up here,
and it’ll show us the status of starting.
Once it’s ready it’ll turn green and it’ll say ready.
We already have an environment here ready to go.
So let’s jump into that.
And we can see that our JupyterHub
and our Jupyter Notebook are ready to go.
So if we click on this,
our Jupyter Notebook comes up.
Now I already uploaded some files that I created.
Let’s jump into it and create our application here.
Okay, so here’s my code.
So basically, what I’m doing here is I am
mining live tweets from Twitter that have the word Bitcoin in them.
And what I’m going to do is I’m going to place what these people are saying about Bitcoin
on a world map and we can explore that some more.
So let’s run through our code here.
And we’ve got our tweets coming in.
Let’s wait for it to finish.
I went ahead and limited it to 50, so it doesn’t take all day.
A little warning here, you may see some foul language on some of this
because people say bad things out there.
Okay, it looks like we are done gathering these tweets.
So let’s continue with our code.
Looks like we collected about 87 tweets about Bitcoin, from around 50 locations.
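For readers who want to follow along, here is a minimal sketch of what this tweet-collection cell might look like, assuming a Tweepy 3.x client and placeholder Twitter API credentials. The variable names and the 50-tweet cutoff are illustrative, not the exact code from the demo:

```python
import tweepy

# Placeholder credentials -- substitute your own Twitter API keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

tweets = []

class BitcoinListener(tweepy.StreamListener):
    """Collects live tweets, keeping only those whose author set a profile location."""
    def on_status(self, status):
        if status.user.location:
            tweets.append({"text": status.text, "location": status.user.location})
        # Returning False disconnects the stream once we have 50 tweets,
        # so the demo doesn't take all day.
        return len(tweets) < 50

stream = tweepy.Stream(auth=auth, listener=BitcoinListener())
stream.filter(track=["Bitcoin"])  # blocks until the listener stops the stream
```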
And let’s continue here and get the geocoding to put on our map.
Let’s save our map,
and let’s view it.
And there we go.
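The geocoding and mapping step could look roughly like the sketch below, assuming the geopy and folium packages and the `tweets` list from the previous snippet; the output file name and user agent string are arbitrary:

```python
import folium
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="bitcoin-tweet-map")  # arbitrary user agent string
world_map = folium.Map(location=[20, 0], zoom_start=2)  # start zoomed out to a world view

for tweet in tweets:
    place = geolocator.geocode(tweet["location"])
    if place is None:
        continue  # skip locations the geocoder can't resolve
    folium.Marker(
        location=[place.latitude, place.longitude],
        popup=tweet["text"],  # clicking a marker shows what the person said
    ).add_to(world_map)

world_map.save("bitcoin_tweets.html")  # open this file to explore the map
```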
So we can click on one of these,
and we can see what people are saying.
Now it is a world map,
so if we zoom out a bit, we can see what other folks are saying
all over the planet here.
Okay. And if we need to make any changes,
we can come to the kernel, we can restart and clear all output,
and we can adjust our code to do whatever it is we need to do.
Now, if you need to import some type of module,
the platform will allow you to open up a terminal
and do a pip install.
So let’s take a look at that real quick.
If we go to new, and we go to terminal,
a terminal screen will pop up
and if your administrator allows you to,
you can go ahead and install other modules.
So we’ll just do an example here with Tweepy, but Tweepy is already installed,
and you can see that it is already there.
But if you need to install other modules,
your administrator can give you access to the terminal.
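In this environment, that terminal step is just an ordinary pip command; for example, re-running the install for Tweepy (already present in this image) would simply report that the requirement is satisfied:

```
pip install tweepy
```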
So let’s go ahead and close this
and go to running and we’ll see we’ve got a couple of things running.
Let’s shut it down. So that way, it doesn’t take up our resources.
And just like that we are done, we can close this guy out.
And we can close this out.
And we are right back to our environment here.
Now as quickly as we created the environment,
we can highlight it,
and we can go up to action,
and we can delete the cluster.
Click OK on it
and within a few minutes, if not seconds,
the cluster will be deleted.
Okay, our application has been deleted.
Let’s go back to the dashboard and make sure we’re not consuming everything that’s been assigned to us.
Perfect.
Now, you may have a user who likes to do this via command line
and not necessarily use the app store that they have access to.
And that’s okay, if you switch over to our Kubernetes environment,
you’ll have the ability to get into the terminal
and you’ll be able to kubectl your way to creating any type of application.
So let’s expand our terminal.
And just like that we are in there.
Let’s type in kubectl get nodes to see our cluster.
Now type kubectl get deployments to see how many deployments we have.
Now let’s deploy our nginx server.
Let’s do a kubectl get deployment.
And we’ll pull up the deployment details as well to see what deployments are out there.
As you can see, we’ve got detailed information now about our nginx deployment.
Let’s add a service to our nginx deployment
and we’ll choose TCP Port 80.
Okay, now we’ve got Port 80 assigned to our nginx server there.
Let’s get our node information again here.
And we will do a curl against one of the controllers to make sure everything is working.
Looks like everything is working for our nginx server.
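For reference, the command sequence narrated above would look something like the following; the deployment name, the NodePort service type, and the placeholder addresses are assumptions based on the narration rather than the exact commands from the demo:

```
kubectl get nodes                                           # see the cluster's nodes
kubectl get deployments                                     # check existing deployments
kubectl create deployment nginx --image=nginx               # deploy the nginx server
kubectl get deployments                                     # confirm the new deployment
kubectl describe deployment nginx                           # detailed deployment information
kubectl expose deployment nginx --port=80 --type=NodePort   # add a service on TCP port 80
kubectl get service nginx                                   # note the assigned node port
curl http://<controller-address>:<node-port>                # verify nginx responds
```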
Okay, so this will conclude
our demonstration of creating an application inside of the platform.
Now there are a few ways you can get access to the HPE Ezmeral container platform.
You can buy it as a standalone software environment,
or you can hit the easy button and let HPE bring you the solution in a box.
So whether you need an AI and ML big data solution in a box,
we can do that for you by providing the hardware in addition to the software,
security, and services, built in a box for you.
You can buy this as a CapEx solution,
or you can have it as part of your HPE GreenLake solution,
where you pay as you go and pay as you grow.
Thank you everyone for attending this demo on the HPE Ezmeral container platform.
Let me know if you have any questions.