Application Deployment Using Kubernetes Operator @ DevOps India

JFrog is a proud Community Sponsor for DevOps India

June 18, 2022

< 1 min read

Application Deployment Using Kubernetes Operator

Kubernetes Operators let us extend the functionality of the Kubernetes API, enabling it to configure, create, and manage instances of applications automatically using a structured process. In this talk, we will take a deep dive into the capabilities of Kubernetes controllers, security risks, and lifecycle management, including backup, recovery, and automatic configuration tuning. We will also explore Kubernetes Operator use cases, writing controller logic, and its unique way of managing application deployments.
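To make the pattern concrete: an operator typically watches a custom resource and reconciles the cluster toward the declared state. The sketch below is purely illustrative; the API group, kind, and fields are hypothetical and not JFrog's actual operator API.

```yaml
# Hypothetical custom resource: the operator's controller watches
# objects of this kind and creates/updates the underlying
# Deployments, Services, backup jobs, etc. to match the spec.
apiVersion: apps.example.com/v1alpha1
kind: ArtifactoryInstance        # illustrative kind, not a real CRD
metadata:
  name: demo
spec:
  replicas: 3                    # desired instance count
  version: "7.38.10"             # app version the operator should roll out
  backup:
    schedule: "0 2 * * *"        # cron schedule for automated backups
    retention: 7                 # keep the last 7 backups
```

The controller's reconcile loop compares this declared state against what actually exists in the cluster and issues create/update/delete calls until they match; backup, recovery, and configuration tuning are handled in the same declarative way.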

Presenter’s Bio:
Hey! I’m Oum Kale, a Software Engineer (R&D team) at JFrog, working on the Installer team to build Kubernetes Operators for deploying and managing applications. I’m also a core contributor and member of the CNCF incubating project LitmusChaos, a sporty guy, and an internationally rated chess player.

View Slides Here


Oum Kale

Software Engineer, R & D

Oum Kale is a Software Engineer (R&D team) at JFrog, working on the Installer team to build Kubernetes Operators for deploying and managing applications. He is also a core contributor and member of the CNCF incubating project LitmusChaos, a sporty guy, and an internationally rated chess player.

Video Transcript

...for instance, understanding and things like that. While working in industry, one of the questions... so, my favorite part... so this is... and this is to do with technology, right, over the years... which talks about how... right, when I was younger... so how... With driving, for instance, because I worked in the automotive industry for a while: there is so much change happening there, things like autonomous driving, and how is this going to take over? So, I started off...
[Music] So when I started my career in 2006, I started off as a software developer, and then we were working on things like C, C++, and Visual Basic, and I don't know how many of you have heard of these things. But now that I've come back into this space, I don't recognize the tools and frameworks that currently exist. It's crazy to think of how developers keep up with all of this new technology and all of these new tools, and I think Jean Yang, who's also a software developer, sums this up nicely. She says that the new wave of tools has helped developers build bigger, better systems, and also build them faster; it turns out that developers are now spending their time worrying about an unintended consequence, which is how exactly they are supposed to manage and coordinate these fast-moving, heterogeneous systems and free themselves up to build.

So if at all we want great technology and great products in the future, we don't want developers to be spending their time figuring out how to use a new tool or how to learn a new framework; what we want them to do is build good products. So where can design play a role in all of this? Where can other folks play a role in creating a better experience for developers? This is where developer experience comes into play: how do we create a better experience for developers? I'll talk about what developer experience is.
Okay, before we go into this: I don't know how many of you have access to the internet right now, probably either Wi-Fi or on your phones, but I would love to know, just from your perspective, what would you say your biggest pain point is when it comes to dev tooling today? You don't have to name specific tools; you can just scan this code, or go to this link and enter the code, and your answer should pop up here. I'd just love to know what it is that you find difficult when it comes to dev tooling.
Hopefully this works... yeah, okay, it says participants are typing. Okay, we have our first one in, which is deciding the technology, exactly. Yeah, these are really good ones. A tool fitting into another set of tools: how do we make all of these tools work with one another? Upgrading of tools. Yeah, documentation is an interesting one; in the dev tools space I feel it's a given that dev tools need a lot of documentation. Cross-platform compatibility, very much so; I guess Docker helps to a certain extent there. Overlap of tools, yeah: which one do you pick? Which sort of goes back to this initial point, right, deciding the technology and the tools that you want to use; a lot of these tools probably do a lot of similar things, so which one do you pick? Choosing a tool to solve a client issue: yeah, exactly, this is exactly what we are also looking at at Docker, and some of the things we're trying to do also address this idea of having to use multiple tools. Cross-platform compatibility is also a good problem for Docker as well. Okay, oh, convincing your manager: yes, that's a different skill set altogether. So tool management, as you can see, is a really huge issue, right? Same tools, different abstraction: yeah, that's also a good one.

Now, on the other hand, I would love to know: what are the most delightful aspects of dev tooling today? What do you like about the tools that you use today? Again, you don't have to name specific tools if you don't want to, but I would love to know what it is that you like about them.
Oh, that's great: considering it's a Docker meetup, it's nice to see containers up there. Standardization is a tricky one, because every time you come up with a standard, I feel that with new technology and things like that, it changes way too quickly to standardize things. New tools and ecosystem support; a single file for all configuration, yeah. Willing to adapt: yeah, I think there is some joy to be taken in exploring new technology every day as well; it's nice to learn new things, as long as it's not taking away time from what you actually want to do. It being platform agnostic; open source technology and community support, yes. Okay, great, I'll move on, and I'll tie back into our answers from today.
Yes, GitHub is a good tool, for sure. All right, so I want to talk a little bit about the role of design in all of this, and also tie it into some of the answers that you were giving. But before that, since I'm talking to developers, maybe I'll give you a little bit of insight into what designers do, because there is some misunderstanding of what designers do generally. The term "user experience design" is not a very old term: it was introduced only in the 1990s, whereas if you look at programming, the first program was written by Ada Lovelace in 1843 or so. Which means that design as a career path has been quite a new introduction, so design is always playing a little bit of catch-up to technology, and we still haven't reached a stage where everyone has a good understanding of what designers do, of what design is. Just a little bit of history of where design started: the term was introduced by Donald Norman, who was working at Apple, and he came up with this term, user experience design.
Now, if you've been to any presentation by a designer, they've probably shown you this graphic, which talks about what we do as designers, because design is almost always perceived as this thing where we come in, make things pretty, and then hand it over to developers to implement. But there's a lot more that we do as designers. Like I was mentioning earlier, we think about these bigger problems and try to come up with ideas and solutions to solve them, and the way we get there is by doing a ton of user research: trying to understand the actual problems users are facing, then coming up with concepts, then screen flows, wireframes, etc., and ultimately visual design. But visual design is just one part of what we do as designers; it is not about just making things look good, it's about trying to find the best solution to the problem by speaking closely with our users.

The official definition of user experience design is that user experience is the holistic journey users travel as they use the product: not only does it include the direct interactions with the product, but also how it fits into their overall task-completion process. The famous example that people always use is Apple, right? Apple does not start the design process with just the products that we use; they start right from how they sell them in their stores, how they train their employees in the stores, then their products, be it the hardware or the software, but also how their servicing is done. So when you think about open source products like Docker, it's not just about the product, like Docker Desktop or the Docker CLI or Docker Hub; it's also about encouraging the community, which is why all of these meetups are held, and not only encouraging but also ensuring we are listening to the community, so that the product reflects the community's needs. It's just not about designing and selling these products in a silo. So: developer experience.
To me, I didn't know that there was a term "developer experience," because when I look at any product, any user of a product is user experience to me. But there is a term called developer experience, which is inspired by user experience practice, and it sees developers as a special case of users. Developer experience design is the practice of understanding how developers get their work done and optimizing that experience. There are not too many designers in this space currently, but I think that's probably one of the best parts about it: there's so much opportunity for good design. Essentially what it means is that we as designers are trying to understand developers' needs, which is challenging if you don't have much of a technical background, and then trying to build better products for developers. Now, with developer experience, I don't believe that it's only designers who need to play a role. Over the years I've also seen the DevOps role starting to play a very significant part; more and more, with all of these tools and all of this technology, you see a lot of folks getting into the DevOps space who tend to manage all of this.
Some of the biggest challenges I've come across so far: I haven't been working at Docker for too long, only since October last year, so the whole dev tools space is very new to me, and I'm trying to understand what the challenges in this space are. Some of the biggest challenges that I've at least come across, and I sourced this from developer media, are: one, like we spoke about, getting started with new tools, managers, and build-and-deploy processes; two, wrapping your head around a new way of completing old, familiar tasks; three, potentially learning new languages in order to use new tech; four, having to constantly refer to docs instead of having it all in your head; five, understanding what functionality is and is not available to you; six, learning new design patterns; and seven, finding time to practice.

Now, if you look at a lot of these challenges, I genuinely believe that design can play a huge role in helping solve some of these problems, to a certain extent at least. For instance, finding a new way to complete old and familiar tasks: I think good, user-friendly developer tools can really help there. Potentially learning new languages: I think the onboarding of developer tools isn't the best, necessarily; they can definitely do a better job, and design can play a role in this. Constantly having to refer to docs: this is also an interesting thing I've seen with developer tools, that the documentation is a lot more vast than for other products, but I don't see as much contextual help within developer tools as I would imagine there would be. Understanding what functionality is and is not available to you is also something design can solve, and of course learning new design patterns is in our forte as well.
Now, some of the design challenges we have in this space: how do we design for all these different types of developers? How do we design for front-end developers, back-end developers, full-stack developers, experienced developers, not-so-experienced developers, and also for the huge possible number of use cases? How do we come up with good solutions that work for all of these different types of developers and use cases? It is an interesting problem to have; it seems quite challenging and intimidating, but nevertheless.

When you look at a normal product, any product I worked on until I reached the dev tools space, it's quite a linear flow. Look at an e-commerce product, look at Amazon for instance: it's still a very, very linear flow. Or look at anything to do with the education sector: again, a very linear flow. Whereas with dev tools it's a very chaotic flow, because there are so many things happening simultaneously, so how can we build a good design for it? That's one of the challenges we're looking into. But there are also all the different contexts: for instance, there's an immediate context, let's say you're just debugging something; there's an intermediate context, where you're working at the app level, seeing what microservices are connected to this app, what data is coming in, what's going on; and then there's a larger scope, the overall scope and beyond, where maybe you're deploying something to your Kubernetes cluster. So there are also these different contexts that we need to think about when it comes to the dev tools space. Okay: design and Docker.
Now I'm going to talk a little bit about my experience so far at Docker. Docker comes out as one of the more popular tools in the Stack Overflow surveys, and this was news to me: I'm very familiar with Git from back when I was a developer, but Docker, its communities, and all of these things were quite new to me. Docker seems to come up as one of the favorite tools, so when I got into this space, one of the things I tried to do was find out what it is that people like about this tool, and why it is so popular. And when I joined the company, basically, if you look at Docker's values, one of the core values is developer obsession, which means that they're constantly trying to understand the needs of the developer; they seek that feedback and then accordingly try to build products. This, to me, says a lot about a company in the dev tools space. Anyway, now I'm going to talk a little bit about Docker extensions and tie it back to all of this. I guess I just have a question: how many of you have heard of Docker extensions, and how many of you have tried them? You can just raise your hand. Okay.
Okay, by the end of this talk I'm hoping that I can get more of you to try this. The first thing I started working on when I joined Docker was Docker extensions; at this point there were absolutely no extensions. The brief was: we want to introduce extensions, how do we go about it? So the first thing I did when I joined as a designer was to start asking the stakeholders, everyone involved in making this decision within the company: why does Docker want to build extensions? What do we want to achieve by building extensions?

Now, some of the pain points you mentioned when I asked were understanding all of these different tools and keeping up with them. Developers work in really complex environments and use multiple tools for different tasks, so why not bring them into one place? Again, the things that you mentioned you like are things like standardization, so that you don't have to learn new design patterns and you don't have to keep ten windows open to work on different things. We aim to do something like this with Docker extensions: create consistent experiences across these tools and introduce the right level of abstraction. This, as I understand it, is also a problem with dev tooling: making sure that we don't abstract things too much, and figuring out what the right balance is.

The second question I asked is: what value do extensions currently bring to other products? The concept of extensions and plugins is not new at all; so many other products use extensions and plugins. So the second thing I started doing was benchmarking: which other products use extensions, how they're using them, what works well for them, and what does not. I did a bit of a competitor study and tried to understand a little more about extensions and what we would want to do with them within Docker.

Okay, the third question I asked: what extensions would developers like to see as part of Docker Desktop? There are so many tools out there, so which would make sense to introduce as part of Docker Desktop? In order to answer this, as designers we did a bunch of user interviews; we spoke to a lot of developers, experienced and not so experienced, and asked them what their favorite tools are, why they use them, what they like about each tool, and what they dislike about it. Then we tried to come up with a set of tools which we thought would make sense to introduce with the initial version. Docker extensions is currently in its beta. And the next thing was: how can our extension partners deliver the best possible experience to developers? Here my role was essentially working as a consultant to partners to help improve their UX, so that you have the best possible experience with extensions within Docker Desktop. So, going back to the what-if questions:
When I work in a sector, I try to think about these what-if questions; it helps me frame the problem well and accordingly come up with solutions. The what-if question that I framed (this is specific to extensions, by the way) was: what if developers could access all their favorite tools through one interface, through a seamless experience? Because this is one of the problems in the dev tools space: there are too many tools, you don't know what to choose, and the experience is inconsistent across these different tools. So maybe these could be the problems that we try to solve with Docker extensions.

We just introduced Docker extensions at DockerCon, which happened in May, and so far it seems to be going well: users seem to like the extensions in the marketplace, and we want to introduce more and more of these extensions as we go on, adding more tools that developers would find interesting. With extensions, the next things we're working on are: improved discoverability of extensions, meaning how we can help users determine which extensions would be useful to them, and how best we can personalize your extension experience, so that if you're using this, this, and this technology, we can tell you which extensions could be useful to you; supporting the community in building extensions, because like I was saying, user experience is not just about designing these screens and putting them out there, we also want to figure out how to engage the community to build extensions; and understanding what extensions would be a useful part of Docker Desktop, because we don't want to just introduce any and every tool, we want to make sure it makes sense to have it within Docker Desktop.

Okay, so my next question to you: if at all you would like to see an extension in Docker Desktop, what would it be? Any kind of tool that you would like to see as part of Docker Desktop, I would love to know. Or anything that you feel is missing in Docker Desktop or the Docker CLI that you would just like to see; it need not be an extension.
Yes, we did release that recently. Okay, build tools: can someone tell me a little bit more about build tools? What is it that you would like to see? It's okay if you want to remain anonymous, but if you'd like to tell me more, I'd love to hear about it. APM, okay; something tells me all of these are topics for today. Yeah, monitoring tools is also an interesting area: we have some scanning extensions currently, we don't necessarily have monitoring ones, but I think there's a lot of scope for monitoring tools. Container security tools, yes: we have a bunch of scanning tools currently, so if you haven't tried extensions, you should try those out.

Docker logs: we have an internal extension that Docker built called Logs Explorer, where we have tried to improve the logs experience, because currently you can kind of just see the logs of one particular container; with Logs Explorer you can see the logs of multiple containers, for example, and there are advanced queries that you can run. I would love for you to test that out, and there's also a feedback section on each page in Docker Desktop: if you'd like to see additional features, please give us feedback. Database UI management, like RedisInsight: okay, that's an interesting one. Analyzing your Dockerfile and ways to optimize it: we have an extension for this as well, called Slim.AI, which kind of analyzes your Dockerfile, so I would recommend testing that out. A better way to push images to the repo with an extension: yes, we have a long way to go with this, with building consistency between Docker Hub and Docker Desktop, but it's definitely on our agenda and we'll get there. Docker co-pilot: okay, I'm not too sure what that one is. Security and scanning: a bunch of them have been introduced. Something that works like a central view, looking into multiple containers as one: okay.

Nice, thank you so much for all the suggestions. One of the things I'm going to do is take the suggestions from today to my team and see what tools we want to introduce, because we have about 15 extensions now, but we're also releasing new extensions every month, or as and when extensions come in. And also, to let you know, extensions are not just partner extensions: if you are interested in building an extension for Docker Desktop, you're most welcome to. We have the documentation on docs.docker.com, and there is a link for the Extension SDK, so you can build your own extension if you would like.
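For anyone curious about the Extension SDK route mentioned here: an extension is packaged as a Docker image whose metadata describes its UI and any backend services. The sketch below is from my recollection of the SDK docs, so treat the exact field names as assumptions and verify against docs.docker.com before relying on them.

```json
{
  "icon": "icon.svg",
  "ui": {
    "dashboard-tab": {
      "title": "My Extension",
      "root": "/ui",
      "src": "index.html"
    }
  },
  "vm": {
    "composefile": "docker-compose.yaml"
  }
}
```

Roughly: `ui.dashboard-tab` points Docker Desktop at the HTML/JS frontend bundled in the image, and `vm.composefile` declares an optional backend that Desktop runs alongside it.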
So yeah, this has been my experience with Docker, my experience with design and dev tooling. Thank you so much for your time; I hope there was something useful for you. I know it was very much design oriented, but I hope there were some takeaways. Thank you. [Applause]
Do we have time for Q&A, or should we move on? Okay, we have five minutes of Q&A; if you have some questions, we can take those before we go on to the next speaker. It's an open forum, so you can ask any question, anything related to Docker as well.

Sorry, I can't hear you... right. So do you mean like a network... sorry... okay, so do you mean an improvement to how the events are generated in the logs, or how the logs themselves are presented? Okay, so the debugging of these events, and understanding the correlation between these events. Okay, yeah, that's an interesting one. Like I said, we have a long way to go in improving the way we show our logs and how we can do this, but I would love to get a lot more detail. We have a feedback form in Docker Desktop: if you go to Logs Explorer, which is this extension we have for logs (have you tried that, by any chance?), and write your detailed feedback, that you would like to see this, this, and this, we will definitely take it into consideration, because we're working on improvements to the extensions that we have built anyway. Right, yeah, it's a very valid problem, so we can look into it for sure.

I have no idea; I don't have a technical background, so I don't know whether you can. I know about Docker Compose, but I don't really know whether there's a better way to do it, sorry. I'm sure Ajeet or someone who's more technical will have more of an idea about these things.
You get what images you put, what you are using; you get all the details. So, I am still trying to understand: is it because you have different environments and different Compose files? Yes? Okay, so there was a product called Docker App that was a kind of solution for that: for a different environment you have a different Compose file, and based on your requirement you can bring those up.
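The per-environment Compose pattern being discussed here is usually handled with override files; a minimal sketch (service and image names below are made up for illustration):

```yaml
# docker-compose.yml — base definition shared by every environment
services:
  web:
    image: example/web:latest
    ports:
      - "8080:80"

# docker-compose.prod.yml — production-only overrides, merged on top
# of the base file by listing both files:
#   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  web:
    image: example/web:1.4.2   # pin an exact release in production
    environment:
      - LOG_LEVEL=warn
```

Compose merges later files over earlier ones, so each environment only has to state what differs from the base.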
There are tools, if you look at the Docker ecosystem. That was an idea, actually. Have you heard about Docker Tools? In the Collabnix community, which I am going to show you in the next presentation, we have started something called Docker Tools: there are around 60 to 70 tools which you can go ahead and get added, and these tools cover all the different categories and can help you as a developer. This is the problem statement which you are raising, right? These tools will help you out. Thank you. Right, thank you so much, everyone.
Oh yeah, sorry. So for each of the speakers we have some swag, and you'll be getting the code in the Slack channel, so please collect those. Thank you, Melanie, for a great presentation, and thank you once again. With that, we go to our next speaker: we have Ajeet Singh Raina. I don't think he needs much of an introduction, but let me introduce him anyway. It's my pleasure to introduce Ajeet: he's a Docker Captain and Docker community leader working for Docker. Since, you know, 2016 or 2017 we have had this channel, and he holds around 15 years of experience. Collabnix has, I guess, more than 350 blogs, so you can explore each of the blogs on our Collabnix channel. And today we have some great announcements and news for Docker Hub which Ajeet is going to introduce to us. With that, a warm welcome to Ajeet.
[Applause]

[Audio check between speakers]
...applications that we sort of thought would be a good listing. So, moving on, moving closer to the topic of the day: Docker Desktop. I just happened to speak to a college student last week; both of us had just graduated out of college, so we happened to visit our college, and everyone started asking us what we work on. We started telling them that we use tools like Docker and Kubernetes at work, and people still have this feeling that Docker is something that's very hard to learn. I think Docker Desktop makes it really easy for someone who's just starting or has no idea about Docker itself. Docker Desktop is a GUI version of Docker: it gives you some of the features of the Docker CLI, and it's got your basics covered. Like Ajeet mentioned in his talk, he went on a deep dive into what Docker Desktop can do. As you can see on the screen, you can see all the containers, images, and volumes listed on your local machine; that's what Docker Desktop looks like. We also have extensions, which have recently come up as part of Docker Desktop, so I thought it would be a great idea to show Docker Desktop to people who are new to Docker.

This is what Docker Desktop looks like when you open it up. Of course, it asks you to sign in to your Docker Hub account; that's something I've already done. Apart from that, on your home screen, you can spin up a container by clicking one of the buttons, just one button, and you'll be good to go. You can also see all the different containers that are present locally on your system, and the images that you've pulled from Docker Hub; and with a click of a button you can spin one of these up as a container. This is very easy, right? For anyone who's actually learned the CLI version of Docker, something like this saves a lot of time, and I think for someone who's just stepped into the world of Docker, it's a very nice way to get started.

So I thought I would show an example of how I can spin up Elasticsearch on my machine, just like that, without much to do from my end, just a click of this button. Yeah, so Elasticsearch is now running. Of course, this might not be the right way to run Elastic on my local machine, but it's up and running with a click of a button. You can also use Kibana; it can be spun up just like that: you click this button, copy it, and run it, and within, I mean, less than a minute you have Kibana also up and running.
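The one-click Elasticsearch and Kibana demo maps onto a small Compose file like the following. The 7.17 image tags and the single-node setting are my assumptions about a reasonable local setup, not necessarily what the presenter's one-click button ran behind the scenes.

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node   # skip cluster bootstrap checks locally
    ports:
      - "9200:9200"                  # Elasticsearch REST API
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"                  # the dashboard opened at localhost:5601
    depends_on:
      - elasticsearch
```

With this saved as `docker-compose.yml`, `docker compose up -d` brings up the same stack the demo launches with a click.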
So that's what Docker Desktop is: it's a very nice place to get started, and it gets things done. And with Docker Desktop becoming more powerful, with new features every day, I think it's going to be a very powerful tool in the days to come. So, I think it's up... five six... is that 5601? I can't read it from here, okay, just give me a minute. So I go to localhost:5601 and the whole Kibana dashboard is ready for you. I was just trying to show you how powerful Docker can be, and how easy it can be to launch applications like Elastic. So that's what I wanted to cover, and the next half of this presentation will be done by my friend Aditya; he'll be talking about Docker extensions and Meshery. Thank you guys for being a wonderful audience. [Applause]
let me just
okay uh good afternoon everyone so uh docker desktop uh and docker extension
specifically okay i just missed that
yeah so uh how many of you guys are iron man fans here uh i guess most of uh you all so i
would say uh docker extension is something like the hulk buster armor and
it just extends the capability capabilities of the iron man suit and then
coming to uh the main docker extensions itself so
so uh here is one of the uh docker extensions which was uh released
along with the docker extensions as well so disk usage it’s by the docker company
itself and it helps you to optimize uh and clean up your existing disk space which
is used by uh your objects from docker desktop
Coming to Meshery: it's open source, cloud native software. Let's consider a service mesh, Istio for example. A service mesh is a way to control how the different parts of your application communicate and share data with one another. I would say it acts as a dedicated infrastructure layer built right alongside the app: it abstracts service-to-service communication out into a separate layer of infrastructure.
Meshery's powerful performance management is accomplished through its implementation of Service Mesh Performance (SMP), and it leverages the Service Mesh Interface (SMI) as well. On top of all this, Meshery integrates with the Open Application Model. So, coming to
the Meshery Docker extension: Meshery is one of the Docker extension partners, and they developed this extension. With the Meshery extension, let's say you need to spin up an application and you have a Docker Compose file: you can just import the Compose file with that button and directly get service mesh management capabilities, along with Kubernetes as well.
It can help you develop and learn deployment strategies such as canary deployments, and of course it can help with performance testing and behavioral benchmarking too. Along with that, Meshery and its Docker extension can help you understand observability, learn the capabilities of a service mesh, and understand and test your applications.
so um okay let’s just go to the demo
so coming to the demo uh i’ll just create a sample application uh composed of four separate microservices
and uh which demonstrate istio’s basic features so if i go to
okay yeah a docker desktop
we have mystery here so just click on login
so it will actually take me to this page when i’ll just sign in twitter
Okay, now this is the homepage once you log into the Meshery Docker extension. Another main feature of the Meshery extension is that you can enable or deploy a service mesh with the click of a single button. Let's say I want to deploy the Linkerd service mesh, which is by the company Buoyant; and if I want to deploy any other mesh as well, like Cilium or Istio, I just click on that button and it spins it up. Then I'll launch the Meshery application.
This is the home page of the Meshery application, and if I click on this, it pings the Kubernetes cluster to confirm that Kubernetes and Meshery are connected. Just before opening this dashboard I enabled Linkerd, Istio, and Cilium, so if I click on them you'll see a notification like this saying it's successfully connected.
Before all of that, let me just go to Designs. Under that I'll enable Grafana and Prometheus, again with a toggle switch.
That's not supposed to happen, but it's still spinning up. I'll just pick the default.
Let me just check if the dashboard is running... okay, nope.
Again, coming to MeshMap, which is like a visual designer for your application: you can drag and drop different nodes. Let me just upload a design pattern file; I have the Bookinfo application here. You'll see that the design has been successfully uploaded, and if you go into it you can see the various design components, like the Kubernetes resources.
In this visual designer you can arrange the nodes in different ways as well, like a star layout, or in a hierarchical way if you prefer. So let me just put this here, put this here, and let that one be there; now, using edges, I'll connect them together. From all of that we'll go directly to ratings. And if I do something like this, you can see different arrangements for your design pattern.
I'll just come here, and you can also expose different ports, let's say you want to do that... I don't know why this is not popping up right now. If I go back, it's okay; let me just try enabling Grafana now. Again, that was not supposed to happen.
okay now now it’s running uh so with the click of a button i just enable graphene and
graphing enabled but from which is give up so if i go to
let okay just do one thing i just go back to here
executing now
okay again and never mind uh so yesterday when we were checking it out everything was running fine but
yes so yeah once once uh what happens is
when you when you graphene and prometheus it just pin
you can you will be able to see that address of the graphana’s aurora along with primitive one as well
and all of that all of it will be viewable in the same dashboard itself so this is brief about the
Meshery Docker extension. If you want to try the Meshery Docker extension out, you can just scan this QR code, go to the link, and download it. And if you want to join the Layer5 community and contribute to Meshery, we are actually looking for new contributors in Golang, React, and more; you can just go to this link and join the community. Thank you everyone, and sorry that Grafana and Prometheus gave up.
Yes, Layer5 is the organization; we maintain and work on Meshery and the Meshery Docker extension there. Thank you.
Next we have our speaker Sangam. Sangam is working as a Principal Security Advocate at Tenable, with a focus on cyber resilience. He has also been an active community member, co-authored a book on lightweight Kubernetes, contributed to Packt publications, and runs the independent Kubedaily community. With that, we have a quick guide to pod security: everything a Kubernetes developer needs to know. A warm welcome.
[Audio and screen-sharing setup.] Okay, hello, hello. By the way, there's a coupon code you can check on your Slack channels.
Hello everyone! How many of you use Kubernetes day-to-day? Okay, that's good.
I work at Tenable, a cyber exposure company, as a Principal Security Advocate. I also write a blog that you can contribute to; it's open for all. I'm also a Docker community leader and an AWS community builder. So today's topic is more around the security part; you may have heard about Docker extensions and a lot of other stuff, but I'm going to talk a little bit about security context here. If you look at cloud native security
strategies four c’s are very important five strategies forces means your
data center laptops whatever wherever you are running your things above that you have cluster kubernetes
clusters on top of that you have the containers and then your code and every face
security is very important maybe it’s code you need to secure your code container you need to have best
practices um you need to create a user in their docker files maybe
don’t express your socket files there are best practices even the cluster level every level need a security
and next thing is that how to make your more kind of strategy
when you move forward with the cncf landscape is growing more tools are
coming in the picture people are using multi-cloud people are using so many things like hybrid things
How can you deal with that? There are five strategies. One is the shared responsibility model for security: at every layer, from the physical layer up to your application, everybody needs to understand what their responsibility is. The code and the application are the developer's responsibility; the cloud's responsibility is different; so you need to understand the responsibilities.
The next thing is cloud-agnostic security, and you need to achieve that in some way. Containers are good, Kubernetes is good, but how can we run everything as code? That's infrastructure as code: you can deploy easily with one piece of code, create replicas, and make the developer's life easy. This infrastructure code is immutable and declarative, so we can scan it and apply best practices to it; I think infrastructure as code will bring a kind of cloud-agnostic security.
The next thing is shift left: you need to start thinking about security on the first day you start coding. Everything shifts left now; you should scan your code and check best practices from the start, so you don't get exposed later. What often happens on the ops side is that people deploy things and only then scan, so it's better to shift everything left and hit vulnerabilities early.
Another approach is defense in depth, with auto-resilience and auto-remediation: let's say I find a vulnerability or some issue, how can it be auto-remediated? And are we doing chaos engineering? That's also important.
Another thing is supply chain security. Workflows now have so many moving parts; the pipeline has changed, and every link in the chain carries a security threat: you're exposing environment secrets and many other things. You may have come across Log4Shell-type vulnerabilities, and one of the hard things in security is finding out whether your environment is vulnerable at all, and which resources are affected. That's where SBOM and CBOM come into the picture: SBOM is a software bill of materials, and CBOM is a cybersecurity bill of materials. Note that they are different: vulnerability management and risk assessment work with CVEs and the like, while an SBOM gives you an inventory of each resource you have. You can just search for it on Google. I know a lot of people have opinions on this, but I think you need to think about both.
Yep, we will discuss that, I will come to it. So, to the topic of pod security: how many people are using PSP, which is deprecated?
Pod Security is a new feature which is the successor to Pod Security Policies. The problem with Pod Security Policies was that you needed roles and bindings to use them, and Kubernetes didn't have a built-in security feature for this, so the SIG decided to create an admission controller. In an upcoming release PSP is being removed, and that's why we need Pod Security. So, next: what is admission control?
Pod Security is an admission controller. When you run kubectl apply, your manifest first goes through authentication and authorization, and then through the built-in admission controllers; I will show you how to find out which ones you have, because some are pre-enabled and some you need to enable yourself. There are two types of admission controllers: validating and mutating. Mutating controllers make changes to the object, and there are third-party mutating webhooks as well.
The Pod Security controller is a validating controller, and this is very important from the security perspective. You can also enforce policies with OPA, the Open Policy Agent: whatever your security best practices are, you can write your own policies and add policy enforcement as an admission controller. There are many of these; OPA Gatekeeper is one, and we have the Terrascan validating admission controller for Kubernetes, with 500+ policies, which you can check out if you find it useful. Then whatever state results from all this is stored in etcd, the key-value database.
You will find a full blog around this that I'm going to publish, covering RBAC, which is role-based access control, and ABAC, attribute-based access control. This is the workflow that happens when you run kubectl: the manifest gets applied through that chain.
If you want to use Terrascan separately as an admission controller, it's a separate component: you can just clone the repo and use it, for example if you are on an older version. And if you are using 1.23, you can configure Pod Security at the server level through the kube-apiserver, or via an AdmissionConfiguration as well; there are multiple ways you can
use it. So, there are three levels in the Pod Security Standards: privileged, baseline, and restricted. Privileged means it's open: even if there are security threats, everything runs unrestricted. Then there is baseline, and then restricted; they decided to divide it into three parts so you can have hard restrictions as well as warning alerts.
So what happens is: let's say a developer defines a pod and sets something like privileged: true in the security context, and you have three namespaces. You apply the Pod Security Admission levels using labels, so you put a label on a particular namespace, and whenever you run or try to deploy a pod there, it will give you a warning or it will be rejected. In this case the pod only runs in the privileged namespace because the flag is true; if it's false, it can go somewhere else. So you can do this kind of control at the namespace level. Moving forward: how can you apply it?
You can do it when you create the namespace: just take a YAML file like this, create the namespace, and add a label. You can create three namespaces and try the policies out against whatever pods you deploy. You can even combine two levels, like restricted and baseline, on one and the same namespace, using different modes.
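Sketched as YAML, that namespace-level setup looks like this. The namespace names here are hypothetical, but the pod-security.kubernetes.io labels and the privileged/baseline/restricted values are the real ones defined by Pod Security Admission:

```shell
# Generate the manifests; applying them needs a v1.23+ cluster with PodSecurity enabled.
cat > psa-namespaces.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: team-restricted          # hypothetical name
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject violating pods
    pod-security.kubernetes.io/warn: restricted      # also warn on kubectl apply
    pod-security.kubernetes.io/audit: restricted     # and record audit annotations
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-baseline            # hypothetical name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted      # combine two levels on one namespace
EOF
# kubectl apply -f psa-namespaces.yaml
```

The second namespace shows the "combine two levels" idea: enforce at baseline while warning against the stricter restricted standard.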
Now, I have something called bad pods: a repo containing YAML files for these practices, some named good pods and some bad pods, so when you run them against these labels you can see which best practices they violate. I have created multiple YAML files, and you can try them on Killercoda, where you get free scenarios. If you have a laptop with you right now and want to try it out along with me, you can. Lastly I'll talk about Terrascan and how you can use its predefined policies. If you go to this Killercoda link, you'll get two labs where you can create scenarios.
Now, if you see here, this is the command line. If I check on my control plane, or whatever node you are running on, you get the list of plugins: the built-in admission controllers you have. You can see the new PodSecurity admission controller here, and you can always check whether any deprecation has happened.
Then you can create namespaces; I'll create two namespaces here. Just copy-paste this line; you don't need to set up Kubernetes, the setup is already pre-configured, so you can just try it out. It creates the two namespaces for me. Now what I'll do is take a sample Kubernetes manifest from the docs and run it in one of the namespaces, and we'll see whether we get a restriction warning or not.
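To pass the restricted level, a pod has to declare a fairly strict security context. A minimal sketch follows; the pod name and image are illustrative, and a real image may additionally need a non-root user configured to actually start:

```shell
cat > restricted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx:1.23                  # illustrative image
    securityContext:
      runAsNonRoot: true               # required by 'restricted'
      allowPrivilegeEscalation: false  # required by 'restricted'
      capabilities:
        drop: ["ALL"]                  # drop every Linux capability
      seccompProfile:
        type: RuntimeDefault           # required seccomp profile
EOF
# kubectl apply -f restricted-pod.yaml -n <restricted-namespace>
```

Without those securityContext fields, a namespace labeled enforce=restricted rejects the pod at admission, which is exactly the warning shown in the demo.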
You can see the restriction message you get right away: it will not deploy, so you need to specify the right settings. If you go to the Kubernetes documentation, it gives you more guidance on what to allow and what not to allow, and this is one of the updated pages; I think everybody should take note, because a lot has changed.
Moving on to the bad pods: I created two folders based on the Pod Security Admission levels, baseline and restricted, which contain a lot of practice files you can just copy, paste, and use. If you go inside, there's an MD file which tells you how to solve the problem if you hit a vulnerability, so you have the remediation guidelines right there. I have also created resource files you can apply to a labeled Kubernetes namespace: a bad pod will fail, while a good pod gets deployed without any warning. This way you can just practice. Don't run these files in a production environment; everything I'm showing is for learning purposes.
You can also learn from the scenarios I've created here. In this one I've made everything allowed, with everything set to true. There's a project called Bad Pods on GitHub; my content is actually adapted from it and updated for the new Pod Security Standards, and I think its scenarios are very good. In this lab-based, everything-admitted scenario, you can get access to your etcd database directly, which means anybody can read your secrets. So play with it a little, get familiar with the pod security context and how to use it, and if you find vulnerabilities, apply the specific fixes.
Now, how you identify these things is very important. Let's say you are writing a Kubernetes manifest and you don't know whether it's secure or not; and second, you need remediation for it, so you want a warning that says: this is happening, here is the remediation, now apply that security context to your file. This is very important, and that's why at Tenable we created something called Terrascan. So if you go to the Terrascan repo... let me check what's happening, I think it's my Magic Keyboard.
So this is the Terrascan repo. It's a static scanning tool: you can scan your Helm charts and your Kubernetes manifest files. When you scan, it checks against open policies we have written in the Rego language, which comes from OPA. You can see a lot of policies here; if you look at the pod policies, those are what it scans against, and it will show you the remediation. You scan your YAML file and it shows: this is the finding, on this line number; go and add a security context. You get the message on your terminal, apply the remediation, and scan again: next time it won't show any security violation, and you can go and deploy.
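A sketch of that flow, assuming Terrascan is installed: the manifest below is a deliberately weak example, and the scan commands are shown commented because they need the terrascan binary:

```shell
cat > insecure-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: insecure
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    securityContext:
      privileged: true      # the kind of setting the pod policies flag
EOF
# terrascan scan -i k8s -f insecure-pod.yaml   # scan a single manifest
# terrascan scan -i helm -d ./charts           # or scan a Helm chart directory
```

The scan output lists each violated policy with the file, the line number, and remediation guidance; after you remove privileged: true and add a proper security context, a re-scan comes back clean.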
One more thing: the Pod Security Admission controller is currently in beta, and it will hit GA in version 1.25. There is good official documentation you can always go to and learn from if anything changes. And this isn't just a pod thing: you can apply it to DaemonSets, CronJobs, StatefulSets, and other resources, and get those remediations scanned as well.
And let me share this: the admission controller documentation, which you can always refer to. You can apply labels to your namespaces; if you are using version 1.23, you can apply those labels. I think that's it from my side. If you have any questions about anything specific, there's the repo; I think everybody should go and check this documentation and try scanning these YAML files. It's not possible for me to go through every scenario here, but it's well documented: go and scan, you'll find some vulnerabilities, and try to fix them. I'm going to write a blog around it; it's hard to cover all the security contexts because it's very lengthy, but it's good to use that admission controller, or any third-party tool, to check your vulnerabilities right at the start.
This one is for you to scan and join: you can scan the code and join the Collabnix community if you haven't already. If you have any questions, I'll be happy to take them.
If you want to try something with this admission controller: add the node name of your worker node or control plane, then try to get the path of your etcd and read the whole database out of your etcd cluster. If you expose things this way, it exposes your entire etcd database, so don't do that.
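The attack he's warning about can be sketched as a pod spec like the one below. Do not deploy it; the node name and paths are illustrative. A baseline or restricted namespace would reject it at admission:

```shell
cat > dont-deploy-this.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: everything-allowed
spec:
  nodeName: control-plane-1        # pin the pod to the control-plane node
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /                      # host root: /host/var/lib/etcd becomes readable
EOF
# This is exactly the class of pod that Pod Security Admission exists to block.
```

A privileged pod with the host filesystem mounted can read etcd's data directory, and with it every secret in the cluster, which is why the privileged level should never be the namespace default.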
Does anybody have a question?
Hi all! I think we have a lunch break coming up that Ajeet has organized. Okay, so: this is Ashish, I'm a Developer Advocate working at Elastic, and I take care of the community in India. I do meetups, conferences, BBL sessions, and sometimes help with architecture and system design. I've been a startup guy: my background started as a software engineer, especially in the email industry, where I was an email engineer. After that I started my own startup in the same industry, and last December I joined Elastic as a Developer Advocate. So that's a quick overview.
In today's session we're going to look at how you can run an Elastic cluster in Docker containers, and on top of that we'll do a demo of APM, application performance monitoring, which is again offered by Elastic.
Cool. Docker; I think we're talking about Docker today, so let me share my story as well. In my startup I used to get custom requirements from clients: if one user wanted some customization, we had to set it up on the server and do the customization in the code. Is anyone an old-school developer here?
I had some projects where I wrote code in Perl, and eventually, when we went to start a server on a fresh installation, the repo had been deprecated: the dependencies we used to pull from the Perl repository were gone. We got very worried about how to keep serving clients as things started to break. That's where Docker came into the picture. I remember my first project: I thought, let's put everything into a Docker image and run it; at least it will save our lives. So we installed everything into one base image, and whenever a client came to us we just kept reusing that image, with the whole environment ready-made and available. That's the power of Docker. There are lots of use cases; I think people have already talked about how you can use Docker. We also have Docker Desktop with its single-click installation; you've already seen that demo, so I won't go into that depth. So that was a bit about Docker; now let's
move to an Elastic Stack overview. Anyone who knows the Elastic Stack? Elasticsearch? Familiar with Kibana? Kibana, Beats, and so on. Okay.
Elasticsearch is backed by the Elastic company; it's a search engine, and some people use it as a primary database as well. Whenever you search on a food app, or you want to book tickets and you see that search bar in almost any application in the world, there's a very good chance it's powered by Elasticsearch. When you book a cab on Uber and see the cab on the map, that's powered by Elasticsearch. On Tinder, when you swipe left or swipe right on that pretty trendy app, that is also powered by Elasticsearch.
We have many different use cases and user stories; you can go look at the official Elastic page and explore. It's free and open, so you can just get Elasticsearch on your machine, spin it up, and play around with it. So how is this kind of solution
let’s say uh possible this is this is because of these three stacks i think of kibana elastic search and the
integration so kibana is nothing but it is a visualization tool uh on the top of elasticsearch so whatever data you have
in elasticsearch you can see with the help of kibana you can create a charts dashboards
visualizations your custom dashboards like this like this and in the integration part we have a
beats we have a elastic agents which you can use as per your requirement
like if you want to measure a system metrics you can use metric beat or you
want to get read from some lock file in a real time use of user i’m
file beat yeah for monitoring the network you can use the package so there is a lots of option available and use
accordingly so yeah this is i think a quick overview of elastic stack
and uh i think yeah docker desktop is giving a single click install a single
click installation of elasticsearch which already shown by uh last to last
uh talk so i think you can just click on the run and your elasticsearch will start so i think this demo has been done so i
think i’m not going to repeat okay so
let’s talk about some pain points i think all folks are text so what is your pain point guys as a
developer as a devops if i may ask
which point some must tell their life
okay debugging right logging it was gonna work
my pinpoint is i used to get call on weekends here from my manager
so cool uh yeah one of the pain point i have this uh internal server error yeah
you must have familiar with this page so if something went wrong your user used to see this page and they
used to put a ticket and it’s showing this and yep it’s a delay loading right
so if something is happen we don’t put message we simply show loader some process is going on and uh
yeah so i think in such it’s this is a server some of the cases where you are not the first person to uh get
to know about this right your client and your user is the first person uh
and as a tech you start investigating right so uh you have to you become
detective you do like this and start performing some command like uh
okay checking locks checking services is it up or not or
it’s killed checking resources whether memory is full or hardware is full or disk is full
whatever uh this is my favorite
it’s always find some errors so and developer likes me do this do
this they go this is last this is the last resort they go and they do eco print every
variable whether we used to get response or not in particular variables right so uh i
used to do this and find a time limit it’s take a time but yeah end of the day
you will find it so you will end up with this by opening
multiple screen in your terminal and by logging to your multiple servers performing the
same command on the multiple servers like but it’s okay if you if you have if
you have to do all this operation on a one server but what if there is a 50 server or like this so you cannot
perform right it’s it’s uh very hard to do a manually this is where apm comes in a picture
it’s a application performance monitoring call it’s a very general term uh it is nothing but
uh it’s it’s a simply it’s uh it’s add a monitoring or it’s it’s a clear collect
the traces from your code bases so uh apm is nothing but it’s allowed to
instrument to your code so what is the meaning of instrumentation or if i say in a lemon language like
adding a few more code in your existing code base to monitor right so
once you edit that apm in your program or in your code base it will
automatically start collecting the data okay it will start collecting the data like uh what kind of the transaction is
going what kind of the queries are going and uh how much time it’s taking
okay so uh yeah so elastic apm is come with this functionality it’s provide the apm and
how it works so we have a apm agents for a different kind of language we have a
node java php most of the famous language we have covered and you can just hook that particular
agent with your code base so suppose your application written in a node.js you can just have
that javascript client of elastic apm you can just hook with your node.js similarly for java go and like this
and what apm agent does they collect the all information and it pass it to bus to the apm server like
this here at its passing and apm server used to take all the data from your all the n
number of agents and it push the same data to the elastic search in the formatted way
okay now uh once that icon once data has been arrived to the elastic search it’s available to visualize from the kibana
We're going to have a quick demo of this. I have prepared a small Golang app; any Go developers? No issue, you can relate it to any language. I've prepared a small application where I'm going to add Elastic APM, and I'll try to show you how Elastic APM works.
Before that, let's spin up the Elastic cluster with the help of Docker Compose. I've prepared a quick docker-compose file; there's a blog post I've written that you can refer to, or there are official docker-compose steps in the documentation you can look at. So I'm just going to quickly copy-paste it, nothing rocket science; okay, already pasted. I have a .env file here where I've specified the Elastic password, the Kibana password, whatever you want, the stack version, the cluster name, and which ports to expose for Elasticsearch and Kibana. And we have the docker-compose file itself.
It does some setup first, because Elasticsearch ships with security enabled out of the box: whatever communication happens between Kibana and Elasticsearch, or between Elasticsearch and the agents, happens securely, so a setup step generates the certificates and takes care of that. Then es01 is our Elasticsearch instance, there's the APM server which I showed in the last slides, and there's Kibana. So there are three important containers we're going to run: Elasticsearch, Kibana, and the APM server. If we look here, these are the three containers we run with the help of Docker Compose. My docker-compose file is ready, and I'm just quickly going to bring it up.
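A trimmed sketch of what such a compose setup looks like. All values are illustrative, and the official Elastic compose file additionally runs a certificate-setup container and healthchecks that are omitted here:

```shell
# Illustrative values only
cat > .env <<'EOF'
ELASTIC_PASSWORD=elasticpass123
KIBANA_PASSWORD=kibanapass123
STACK_VERSION=8.5.1
CLUSTER_NAME=demo-cluster
ES_PORT=9200
KIBANA_PORT=5601
EOF
cat > docker-compose.yml <<'EOF'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
    ports:
      - "${ES_PORT}:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    ports:
      - "${KIBANA_PORT}:5601"
    depends_on:
      - es01
  apm-server:
    image: docker.elastic.co/apm/apm-server:${STACK_VERSION}
    ports:
      - "8200:8200"
    depends_on:
      - es01
EOF
# docker compose up -d
```

The three services match the three containers from the talk: es01 (Elasticsearch), kibana, and apm-server on port 8200, with the version and credentials pulled in from .env.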
okay so uh let’s let’s jump to next slide and
uh till it’s pulling the image and spin our container
agent agent used to collect the all data let’s say transactional or errors or
span any third party you are calling third party api these kind of the all data is
used to collect and push to apm server and apm server used to push to elastic search
so these are all the languages and the framework in which we have a agent available so if you are developing
any application in one of these language you can integrate elastic apm
so yeah in the demo i have written four services nothing but the http services uh in http service one
is the router service which will just simply going to route to the particular microservice and uh
but the these services is going to call some database uh query so
let’s get start with this i think yeah
okay all continuous up just let me confirm once
so i can go 5601 where kibana runs
yep it’s working and the password is pass added one two three
or this is the same which we have put here okay
here we have put elastic pass 123 and kibana
so i'm logging in. this is the home dashboard; it comes with more options like enterprise search, observability, security and analytics. we can talk some other day about those components, and you can explore all these options here as well. today we are going to focus on this section, observability, and more specifically apm, so i'm going to apm here. here you need to add the apm integration first, so i'm just going to click elastic apm.
adding elastic apm here yeah
so you can just fill in this information, but i will keep the defaults, and save and continue. okay, so it's going to take some moments. till now, any questions, guys?
yeah, so kibana is just for visualization; right now i'm enabling elastic apm, that's the first step if you are going to integrate apm. in kibana we are only going to visualize each and every transaction; i'll show you once it's done.
all transactions. yeah, so you have to add it in the code; i'm going to add it in the code now, and i'll give you the code, yes. okay, i'll do the agent part later.
now i'm going again to apm, and i just click on add data. in add data, first check whether the apm server is running; yeah, it's running. after that, these are the languages and frameworks in
which agents are available. okay, so if you are developing in php and you are using laravel, we already have an agent for laravel: you just pull it and start using it, and it will start tracking all the transactions happening in the laravel framework itself. for go you can just `go get` the elastic apm module; this is how you pull the dependency or library, and these are some of the environment variables we need to set, so i'm just going to paste them in my shell profile.
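the variables pasted into the profile look roughly like this; the `ELASTIC_APM_*` names are the standard ones the agents read, but the values here are placeholders, not the ones from the demo:

```shell
# appended to the shell profile so every service picks them up
export ELASTIC_APM_SERVER_URL=http://localhost:8200   # apm server from the compose setup
export ELASTIC_APM_SECRET_TOKEN=changeme              # assumed; may be empty if security is off
export ELASTIC_APM_ENVIRONMENT=dev
```

after editing the profile you `source` it, as the speaker does next, so the running shell picks up the values.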
okay, so i have pasted it, and i'll just source the profile so that the values get reflected. there is also an example given here of how you can instrument your application, but let's go to our demo first. so in the demo,
yeah, here we are. there were four services as i told you, router, users, projects, tasks, and i have made four files: router, projects, tasks and users. so let's run them one by one. i'm going to run router.go; they will run on different ports, and the router just routes the request. i am going to run users and projects and tasks. yep, so all four services are running right now.
okay it’s running so let’s make some call let’s let’s do some api call so i have prepared api call as
well just a minute these are some api calls you can just
copy paste
so this is the first api call we have made: a request to /users through port 8000. okay, it made it to users on port 8000; the router is running on port 8000, and if the path is /users it will simply forward to the users microservice. so if we go to the users service, we must have got one request, the same request. now that this has happened, let's go to the apm dashboard and let's see whether we
receive some data or not
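the router's dispatch rule, match a path prefix and forward to the right backend port, can be sketched like this; the function and the port numbers are hypothetical, only the path prefixes and the router's port 8000 come from the demo:

```python
# hypothetical sketch of the router service's dispatch rule
SERVICES = {
    "/users": 8001,     # users microservice (port is an assumption)
    "/projects": 8002,  # projects microservice
    "/tasks": 8003,     # tasks microservice
}

def route(path: str) -> str:
    """Return the backend URL a request path should be forwarded to."""
    for prefix, port in SERVICES.items():
        if path.startswith(prefix):
            return f"http://localhost:{port}{path}"
    raise ValueError(f"no service for {path}")
```

in the actual demo the router is a go service, but the forwarding logic is this same prefix lookup.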
yes we have received this uh yeah it’s users
let’s go to users it was a post call internally
it was a dependency but wait let me again check
task okay so i think
i’m going to call more uh like projects and like the other
call as well
ah what is happening here
just a minute something problem
yeah okay
and i'm going to simply refresh it. here we get projects and users, but we are not getting one, so let's go to projects. in the projects service we did one transaction, a post to /projects, and in that transaction we got one query. my whole transaction took 1.6 milliseconds, and that particular query took 1.3 milliseconds. so this is how you can enable tracing for your whole code base; you can just monitor what's happening in your applications, and sometimes you have to figure out which exact segment, which exact query, is taking the time; in such cases
yeah so let me give you a look
so here in the router i have used this go elastic apm module, apmgin. gin is a framework for building http apis, and elastic provides framework-based libraries as well: if you are using java there is spring boot support available, if you are using php there is laravel support available, so i have just pulled that library and i am using it here. for plain http there is the elastic apm http wrapper: just include it and use it the same way you use the actual original library, and it will start instrumenting your code. so that is framework-specific instrumentation. now let's say you have some custom requirement, and you want to monitor some segment of code or some lines of code. here is the custom example, where i just start the transaction like this, and i can give it any name, here 'request'; i start the span, 'select star from ...', and again you can name it anything; and then i just close the span and close the transaction. so let me run this as well
and let's check whether we receive it... i have not received it; something's wrong. yeah, so here i'm using the elastic apm library, and with the apm library i'm connecting, starting this transaction and starting the span, so i think the data should have come here.
i don't know, i need to check what's wrong. so what happens is, when you run this, it pushes that particular transaction to the apm server, the apm server pushes it to elasticsearch, and you are able to see that transaction here as well. it's not coming through and i don't know why, i need to check, but yes, custom instrumentation is possible. okay, so this is how you can use elastic apm, and this is what i wanted to show in this demo.
some of the features are restricted; they are only allowed with an elastic cloud subscription. in elastic cloud you can see this service map as well: the router forwards the request to the projects service, and projects connects to redis and mysql. you don't need to create fancy charts or anything like that; the service map builds automatically as transactions happen, and it gives you a bird's-eye view of your architecture. this is possible in elastic cloud; again it's paid, but a 14-day trial is available from elastic cloud, so you guys can explore.
so how does distributed tracing work? it's not that you can only trace on one server: if you have n servers, and your transaction hops from one server to another, you can trace the whole transaction across your distributed architecture. it happens because of the traceparent: there is a traceparent id, unique per transaction, a unique id for every transaction, and it just gets carried forward with all the spans. so yeah, this is how elastic apm works.
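the traceparent id mentioned here is the w3c trace-context header; a minimal sketch of generating one and carrying the trace id forward across hops, using only the standard library:

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C trace-context header: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 16 bytes hex = 32 chars, unique per transaction
    span_id = secrets.token_hex(8)    # 8 bytes hex = 16 chars, unique per span
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent: str) -> str:
    """Forward the same trace id with a fresh span id, as each hop does."""
    version, trace_id, _span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

# the trace id survives across hops, so every span joins one transaction
root = make_traceparent()
hop = child_traceparent(root)
```

the apm agents do this for you automatically; the sketch just shows why the id is "unique per transaction and carried forward with all the spans".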
if you have any doubts about this or anything, you can just post it in the community discussion slack, and there is a contributor program as well that you can be a part of. yeah, so this is it from my side. this is the official bangalore chapter group we have; we do meetups and such, so you can join it and we will be happy to have you. so this is all
from my side guys so i am open for question yes
reporting? what kind? yeah, so this is what i've shown you here, right; let's go to, just a minute, if i go to projects... oh yes, you can always go to discover and have a look. yes, this is open source: what i demoed using docker containers is the open-source tooling, and reporting, yes, all the data can be visualized in kibana itself, so it's open source.
an html report? of what kind? okay, you mean a pdf or something like that. i'm not sure if you can export this, because this dashboard comes with some calculations done by kibana, but you can make your own dashboard as well. here is the raw data, the raw data available in elasticsearch, and the apm dashboard comes with some kibana calculations, it performs aggregation queries and so on. you can build your own dashboard from this raw data, and from that dashboard you can export a pdf report and things like that. i think there is no option to directly pull a report like a pdf from the apm dashboard; you have to build a custom one, and from there you can pull the reports.
no i think you can just stop
i'm not sure about datadog, but you can just try with docker compose like this; while recreating the cluster, don't delete the drives and volumes. yes, i have mounted the volume as well; just mount the volume as persistent storage. okay.
anyone else
yeah we have an integration for open telemetry uh just let me show you quickly
here it is you can just go through this documentation how you can integrate
okay, see, apm is a very generic thing: new relic provides it, signoz also provides the same, and so on; you can do the same kind of tracing with them. but elastic apm is built on top of a different stack: elasticsearch has search-engine capability as well, and apm is one of the solutions on top of that. it's not that elastic focuses only on apm; we are good at search, and apm is one of the solutions available from elastic.
yes, that's what i showed you, the traceparent id: it is unique for each transaction, and in each transaction there can be multiple spans and errors you can collect, so yeah, the concept is the same. yes, so the transaction is your starting and end point, the thread, and in that particular thread there can be multiple spans: a span can be a third-party http call, a span can be a database query, the multiple hops inside it. a transaction is a collection of spans.
i have shown you here just a minute
let me show you in slides just work be quick uh i know i know just two minutes yeah yeah
yeah, so this is how a transaction looks. you have products; 'products' is nothing but an arbitrary name for the transaction, and within that products transaction it's hitting a get api call, it's hitting another third-party api, it's doing multiple queries. so the transaction is the parent, and the spans you can visualize as children, like this.
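the parent/child relation, a transaction as a collection of spans, can be modeled in a few lines; this is a conceptual sketch, not the elastic agent's api, and it reuses the 1.6 ms / 1.3 ms numbers from the demo:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str            # e.g. a db query or a third-party http call
    duration_ms: float

@dataclass
class Transaction:
    name: str                                   # e.g. "POST /projects"
    duration_ms: float
    spans: list = field(default_factory=list)   # a transaction is a collection of spans

    def self_time(self) -> float:
        """Time spent outside all child spans."""
        return self.duration_ms - sum(s.duration_ms for s in self.spans)

# mirrors the demo numbers: 1.6 ms total, 1.3 ms spent in one query
tx = Transaction("POST /projects", 1.6, [Span("db query", 1.3)])
```

this is exactly the breakdown the apm dashboard draws: the transaction bar on top, the span bars underneath, and the gap between them is time spent in your own code.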
i can get back to you, i have an answer, but i think we are running out of time, so please, guys, if you have any questions just let me know after this session. okay, yeah, we have a packed schedule, so thank you guys. thanks, thanks a lot for having me here, thank you.
hi everyone, i'm arduin, i work in a startup, and it's a stealth startup so there's not much information out there, but basically we do network as a service. you must have heard of database as a service or compute as a service; this is the same idea, network as a service, where we provide network facilities for enterprises. like today you joined the guest network here: somebody creates all these networks and maintains them. we have a product that actually does all this, and it's easily manageable from the cloud: they're all dumb routers, and all the configuration is picked up from the cloud; it uses iot, and it's also auto-healing and things like that. so if you want to know more i can definitely tell you; there's not much information present
right now on the internet, but it's called nile global; you can look it up, though i think there's only one page on the website right now. so today i'll be speaking about running shell commands with docker. why would you want to do that?
somebody talked about security earlier, right? so consider that you want to run a shell command in a secure environment, like a production environment, where you can't write files and don't have write permissions. or you want to share your commands, maybe on github, but you don't want to share the configuration, and you don't want a makefile with a make step or any setup. you want everything integrated: when you run the shell command it should also set itself up automatically and then run, with no separate process, so it's simple to run,
basically. so that's what i'm aiming at: we create a shell command which uses docker to set up all the configuration. docker provides a file system, so we can set up whatever files we need: we write a docker file, copy all the required files into the docker container, create the image, and run the image. the command will, on the fly, create a docker file, build the image, and run it. it does this every time, and we want that, because the configuration can change on each run: based on the environment variables the configuration file may change, so it needs to be copied each time; each time your shell command runs it needs to recreate the image, copy in the file with the environment variables, and then run. so we create it each time.
i’ll so anyway yeah so i’ll just go to the demo hopefully uh
the demo works okay
should i make it bigger you can see
okay, so below i have mysql running. to show this i'll introduce a new tool: some people use python, and in python you have the alembic tool for db migrations; many people may use java tools like liquibase or flyway. alembic is the tool we use for our db migrations, so i'll try to run this alembic tool first, and then see how we can integrate it and run it within a docker command.
so let me just create a
okay, so this is an empty folder. just to start, alembic has an init script, so it created, let me increase the size, it's created a few files,
and the alembic.ini contains all the variables, the connection properties, and env.py will pick up the variables and build the database url from it.
so if you look at it, it has certain variables defined here, and the thing that interests us is this url here. it just has default values right now, so let me change it: i have a demo user, the password is password, localhost is fine, and for the db name i have a demo db.
right, so all the rest can remain the same. okay, now, i think i've already logged in to the demo database, and there are no tables right now. so let's try to create a simple employee/department sort of schema. to create a revision you just run this command, alembic revision, and then specify any name you like.
so this will just create a file, which we need to edit; right now it doesn't contain anything. upgrade is what runs when a new revision runs, when you run the migration, and when you do a downgrade it reverts whatever the migration did. so i already have those,
let me
let me just copy it, because i don't want to sit typing it out; i have some sample ddl statements. okay, so i've created a departments table and an employees table and added some data, and employees has a foreign key relation with departments, so it's simple. and in the downgrade it will just drop the tables that were created.
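what the upgrade/downgrade pair does can be mirrored with plain sql; here is a sketch using sqlite3 from the standard library in place of alembic's `op` helpers, with table names from the demo and invented sample rows:

```python
import sqlite3

def upgrade(conn: sqlite3.Connection) -> None:
    """Mirror of the revision's upgrade(): create tables and seed data."""
    conn.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT,"
        " dept_id INTEGER REFERENCES departments(id))"  # foreign key to departments
    )
    conn.execute("INSERT INTO departments VALUES (1, 'engineering')")
    conn.executemany(
        "INSERT INTO employees VALUES (?, ?, ?)",
        [(1, "john", 1), (2, "jane", 1), (3, "joe", 1)],  # three sample rows
    )

def downgrade(conn: sqlite3.Connection) -> None:
    """Mirror of downgrade(): drop what upgrade created."""
    conn.execute("DROP TABLE employees")
    conn.execute("DROP TABLE departments")

conn = sqlite3.connect(":memory:")
upgrade(conn)
count = conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0]
downgrade(conn)
```

in a real alembic revision file the same statements go through `op.create_table` / `op.drop_table`, and alembic records the revision id in its `alembic_version` table.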
now to run it
okay, i think we need to change the driver name; it should be specified here. it should be mysql, let me just check it out: okay, mysql+pymysql. let's see if it runs. okay, it ran,
we can see uh show tables
so it's created three tables: one is alembic_version, which stores your current migration state, and the departments and employees tables. and if i just switch here and check the data, it says there are three rows in employees, so there are three employees. so let me go back here. now that this is running, say suppose we want to add a new column, like hod, head of department. it's a similar process: you go here and run alembic revision to create a new script.
i'll come to the docker part; i just want to introduce alembic first and get a fair understanding of it. so it's created that file; you can just go to that file and copy these things in. this creates a column called hod in departments, and it's a foreign key relation to the employees table, because the hod, the head of the department, is also an employee, right? so,
okay, let me run the upgrade. so now if you see, it's added an hod column, and employee e1 is the head of the engineering department, and if you want to see who the hod is, it's john. so that's the data we have right now, and all this is fine. but now the thing is, if i want to change the db, i was using demo and i want to connect to a new db, demo1, what i need to do is go to the alembic.ini and change it here, and this is cumbersome, to go change this file each time. so let me switch to demo1: show tables, it's empty right now; running alembic upgrade head, and it shows that the tables have been created, so we have the required tables here.
so now we see that every time we need to go and make this change; how can we improve this? let us make a file which takes these parameters.
okay fine let me create a new
and also i'll copy the existing alembic scripts from the previous folder.
so now we have uh in the in this folder right i just created a docker demo folder and it has certain files within
it so it has
this hang on so maybe i’ll just move all of this into create a new folder
let’s hang on
okay yeah this is fine so let me come back to this folder
okay so now let’s create a docker file so uh
i’ll just uh copy this for now but i’ll explain what i’m doing
right, so i'm creating an image using an alpine base image; you could use a python base image too, but i need to install certain other libraries, which is why i used alpine, and these are the dependencies for alembic. then i'm creating a directory, /opt/alembic, copying all the files from the alembic folder into /opt/alembic, and setting the work directory. okay, let's
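the dockerfile described here looks roughly like this; the exact alpine package set and driver are assumptions:

```dockerfile
FROM alpine:3.16
# build deps for alembic and the mysql driver (assumed package set)
RUN apk add --no-cache python3 py3-pip gcc musl-dev python3-dev \
 && pip3 install alembic pymysql
RUN mkdir -p /opt/alembic
COPY alembic/ /opt/alembic/
WORKDIR /opt/alembic
```

the point is just that the image carries the tool plus the copied migration files, so the container is self-contained.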
so now let’s run it
okay, i missed the -t flag. okay, so we've entered the container, and if you see the current contents, it has the same contents as the alembic folder; whatever was in the previous folder has just been copied, which is what we want. say now i'll run alembic from here: alembic upgrade head. so right now i'm in the container,
and there is an error saying it can't connect to the mysql server on localhost. this is because on mac a container can't reach the host machine as localhost; it refers to it as host.docker.internal, so we need to change that. let me go to that particular line: instead of localhost, host.docker.internal.
okay, so now it ran. if you want, we can drop the tables and run it again to see that the migrations actually run. okay, so now it's deleted; let's run it again.
show tables: it's created them, so it's running. this is fine, but i still made a change by going into the docker container, which is not what we want. so next step: let's make the hostname a variable, so we can set it. let me just copy the contents: i've copied the contents of the alembic.ini file, the properties, and deleted that file. so now alembic doesn't have the alembic.ini which contains the database url; we'll create this file on the fly. so now let me edit the docker file.
so here we are going to create the file with the specific contents i just copied; let's say, create alembic.ini. okay, let me come back to this.
okay, so these are the contents that were in that file, and i've made all of them variables: the password, the username, the host, the db, the port, everything is set as a variable. so how do we add this file within the docker file? there is something called here-docs, that's newly added in docker, and we just set it like this: like the unix here-doc, where you can create files on the fly, docker also provides this. and we need to set a property which says this is an experimental, beta sort of dockerfile syntax; just set this, and i think we are done, and set the end-of-file marker.
okay, now that it's done, we'll try to run it, and it should add the file automatically based on the properties. we have to pass the properties, so i'm passing them through the docker build: i'm passing the host name here, host.docker.internal, which is what the problem was, and the username and password and the other things. you also see that i've set DOCKER_BUILDKIT at the beginning; that's something that's required when you're using here-docs.
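putting the here-doc and the build args together, the dockerfile fragment and build command look roughly like this; variable names and paths are illustrative, and the syntax directive on the first line is what enables here-doc support (older buildkit versions used the `1.3-labs` experimental tag the speaker alludes to):

```dockerfile
# syntax=docker/dockerfile:1.4
ARG DB_USER
ARG DB_PASS
ARG DB_HOST
ARG DB_NAME
# create alembic.ini on the fly; ${...} is expanded from the build args
COPY <<EOF /opt/alembic/alembic.ini
[alembic]
script_location = .
sqlalchemy.url = mysql+pymysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}
EOF
```

built with something like:

```shell
DOCKER_BUILDKIT=1 docker build \
  --build-arg DB_USER=demo --build-arg DB_PASS=password \
  --build-arg DB_HOST=host.docker.internal --build-arg DB_NAME=demo \
  -t alembic-demo .
```

note that, exactly as in the demo, the `${...}` references only expand once the matching `ARG` lines are declared.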
now let's look at that alembic.ini file: this is what it's created. however, it hasn't substituted the variables. but anyway we see that it's added the file: i deleted it from my folder, and it's created through the variables. ah, right, good catch: let me define the arg instructions. yeah, now it's come through, thanks for that. so now let's run alembic and see if it works.
it ran successfully, great. so now that we have this, the next step is: why don't we make a shell command, a script, out of this, so we don't need a docker file at all? this entire docker file i created can be generated on the fly, similar to how i'm creating the alembic.ini file on the fly; i can create the whole docker file on the fly, which in turn creates the alembic file on the fly. so, okay,
okay let’s create a
okay so i’m creating a new script and
i’ll just copy it for now but i’ll explain the contents of it
okay, so the first line just prints whatever i'm running. second, i'm deleting the image, because each time i want to recreate the image: i delete it in the first line and then create it. the next step is the same: we run the build with buildkit, passing the arguments, but the only difference is that when building i'm also passing the dockerfile on stdin. that's another thing we can do: we don't need a dockerfile on disk, we can pipe its contents directly to docker build, like you do with cat, passing the contents on the command line. the contents here are everything we wrote: setting the arguments and creating the file. the only thing to take into consideration is that we need to escape all the dollars, because this is a shell script, and the shell would interpret them as shell variables; but these are docker variables, not shell variables, so we escape them so the shell doesn't try to interpret the docker variables. so that's about it; most of this is
the same as what we had added earlier, but some other files are required as well: we saw there were four files initially when we did the init the first time, so these are the other files i also added, env.py and so on. and i've set the entry point here: the entry point is alembic, and it will take alembic.ini, so this needs to be passed each time so that it picks up the commands.
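the wrapper script's shape, delete the old image, rebuild from a dockerfile piped on stdin with the dollars escaped, then run, is roughly as follows; the image name, base image, and argument set are illustrative, not the speaker's exact script:

```shell
#!/bin/sh
echo "running alembic $@"
docker rmi -f alembic-demo 2>/dev/null          # rebuild fresh each time
DOCKER_BUILDKIT=1 docker build -t alembic-demo \
  --build-arg DB_HOST="${DB_HOST:-host.docker.internal}" \
  --build-arg DB_USER="${DB_USER:-demo}" \
  -f - . <<EOF
# syntax=docker/dockerfile:1.4
FROM alembic-base
ARG DB_HOST
ARG DB_USER
COPY <<INI /opt/alembic/alembic.ini
sqlalchemy.url = mysql+pymysql://\${DB_USER}@\${DB_HOST}/demo
INI
EOF
docker run --rm alembic-demo "$@"               # entrypoint is alembic
```

the `\$` escapes are the detail called out in the talk: the outer here-doc is unquoted, so without the backslash the shell would expand `${DB_USER}` itself instead of leaving it for docker build.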
now we can run it right
so let's try running it here; right now it's set to localhost as the db host. okay, so it picked it up and it ran this particular thing. so if you change it here... okay, it failed, actually. here i had hardcoded the values, the db host, so that's why it was passing; but suppose we make them environment variables, so let me just define an environment variable here. it failed because it's localhost: like i told you, from docker you cannot connect to the host machine using localhost; we need to specify host.docker.internal.
great, it's working. it's picking up the values you're passing as parameters to the alembic command. similarly you can write your own shell commands using docker and pass environment variables as required: i can specify a different username or host or whatever, so you get the idea; that's the basic idea behind this. other than that, we can also add meaningful defaults: right now there are no defaults that i've set, but on mac, as we know, the host is host.docker.internal, while on linux it's just localhost to connect to your host machine, so you can set default values based on whether the underlying platform is darwin or linux.
and yeah, the code is available here, alembic-docker, you can go and check it out. i'm done, any questions? sorry, this is just on a mac. which docker command do you mean? oh, mkcdir? that's just an alias, i think; i've added it, and it'll create the folder and cd into it, instead of creating and then cd-ing separately, just one line.
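the mkcdir helper the questioner asked about is just a tiny shell function, something like:

```shell
# make a directory and cd into it in one step
mkcdir() {
  mkdir -p "$1" && cd "$1"
}
```

it's not a docker command, just local shell convenience.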
anything else okay okay
tool in place, it helps us decide how much request and limits to set for the containers, and to detect and resolve issues faster; this can happen in staging or production as well, using various signals that we see. sorry, the previous one, right, done, right. okay, so this is about why monitoring is important, and next we have what to monitor. if we go in depth there are a lot of things to monitor, but on the surface it's application metrics and traces; node-level metrics, which could be for your physical or virtual machines; container-level metrics; and pod-level metrics in the case of kubernetes. next we have what observability is, and the signals, which ankit will be explaining.
how many of you guys like to monitor your applications, or have been monitoring your applications? right, almost all; it's a basic need. so i hope you must have heard the terms apm and observability and what they constitute. okay, i'll dive deeper into it. with a brief show of hands, i would like to understand how many of you have heard about opentelemetry. yeah, and what about otel? comparatively. so i'll give you a brief about what opentelemetry is and why we should know about that term
going forward; it will be beneficial for you in the long run. so observability came from the word observe in control systems: we should be able to observe or understand our systems, having a basic understanding of what is happening internally inside your applications, hosts, and the different state flows. if you get that sort of understanding, then you can say your systems are observable. it's a very simple thing, like how your body reacts to different kinds of illness: suppose you have a high fever or dengue, there are a few parameters you check; you put a thermometer in and try to figure out where the temperature is going, is it bad or good; you try to understand the state of your body right now; then you diagnose it and treat it. a very similar analogy can be put into our devops life cycle: there is an error rate, there is latency; we try to debug it, we put probes to find out what is happening and where, and then we debug it,
right. so this is a picture i usually show people to make it interesting: what is the importance of observability? when we had a monolith application, it was just one simple application and we knew where to put debug statements. but with the advent of microservices there are many services, and when they start failing it cascades, and you don't know what to measure, where to measure, where to put the thermometer. that's why this image became very prominent to me: where do you put the thermometer in this set of scenarios? out of that, i would say, distributed tracing was born, where you propagate the context you have from one service down to the database call and pass the result back. we'll talk more on those terms; first, the basics: the three
pillars of observability. we have known about metrics and logs for a long time now; we are comfortable with the dashboards and the tools there. with metrics we usually see the aggregate, overall level, how the system is performing overall, but when you have to debug you have to go down to the log lines to see what the statements were actually printing, what exception was being thrown. now, with tracing and microservices, we also have to propagate context, like a correlation id: when the request first hits your load balancer and goes down to the database calls via services a, b, c, d and e, each of these events and spans can be added to the context, and eventually you will know that this series of events happened, and that's where the bug is.
the open-source tools we have known till now: we have been using prometheus for metrics, elastic for logs, and for tracing we have a tool from uber called jaeger.
so now, i gave you an example of what happens in a microservices architecture. for three or four microservices and applications it's still okay and under your control, but think about when a company operates at the scale of uber. this is something like the data flow diagram of uber, and one request usually goes through hundreds of microservices before the response gets back to the user, so it becomes very difficult to debug. with more microservices, horizontally scaled with multiple instances of the same service running: which upstream or downstream service or dependency is failing? you deploy different versions of different microservices in different environments: where is the new version failing, is the error rate going higher or lower with the new deployment? all these different questions come to your mind.
what OpenTelemetry is and what we are trying to do at SigNoz. The whole system of debugging usually consists of a few parts: first you generate the data that is needed and can be used later to understand the system, and then you store, analyze, and visualize it. So the first part is data generation, and the second part is data storage, analysis, and visualization. OpenTelemetry comes in at the first part: it generates the data. It gives you a set of libraries, toolkits, and SDKs so that you can instrument your application and send the data to a common, platform-agnostic destination. OpenTelemetry has libraries for different languages, whether it be Java, Golang, .NET, or anything else, and it covers metrics, traces, and logs.
So what are the key features I want you to take away from the OpenTelemetry world? If there is only one phrase to take away from today's talk, it would be OpenTelemetry, and open source. First, it enables all three verticals: traces, metrics, and logs. Second, it is close to a drop-in replacement: you can just include the packages and it will automatically auto-instrument your application, so all the endpoints and the different client libraries get instrumented for you. Third, it is open source and does not depend on any vendor. The vendor lock-in problem is very common; I'll give an example. Say a company started with a vendor like Datadog, and as the company scaled, developers added more and more Datadog SDK and library code into their applications. The company grows from a team of 10-12 engineers to 1,000 engineers, and the code base is flooded with Datadog libraries everywhere. Now suppose one day your VP of engineering comes and says: we have to move off Datadog because it is very costly, try New Relic or Lightstep. Then removing all the Datadog code from your whole code base becomes very tiresome, and it will delay the company's growth. That's why OpenTelemetry is very important:
it standardizes the data-generation part, and on the backend you can use any vendor you like, whether that is Datadog or New Relic, by just switching the config that points to Datadog or New Relic. A brief word about the OpenTelemetry Collector:
the Collector is a component provided by OpenTelemetry itself that collects the first set of data. It has receivers, processors, and exporters. The receivers can be different sources; the processors let you change attributes or otherwise transform the data; and the exporters can write in parallel to different storage systems, so you can export to Datadog, New Relic, or any other tool. That's the power of the Collector: it is so modular that it can receive data from a Jaeger receiver, a StatsD receiver, a Zipkin receiver, or other protocol receivers, and then forward it on to those exporters.
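A minimal Collector config showing this receiver → processor → exporter flow, including the many-to-many pipeline wiring discussed later in the talk, might look roughly like the following. This is an illustrative sketch; SigNoz ships its own ClickHouse exporters in place of the `logging` stand-in used here.

```yaml
# Illustrative otel-collector config: data flows receivers -> processors -> exporters.
receivers:
  otlp:                  # accepts OTLP over gRPC/HTTP from instrumented apps
    protocols:
      grpc:
      http:
  jaeger:
    protocols:
      thrift_http:
processors:
  batch: {}              # batch telemetry to reduce export calls
exporters:
  logging: {}            # stand-in; swap for a vendor or ClickHouse exporter
service:
  pipelines:
    traces:              # many receivers -> many exporters, like a build matrix
      receivers: [otlp, jaeger]
      processors: [batch]
      exporters: [logging]
```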
Now, at SigNoz we are trying to be a full-stack open source backend for OpenTelemetry. OpenTelemetry generates the data, and then it is up to you where to send it and how to analyze it. We are trying to be an open source native backend: we collect the data, store it, process it, and give you a visualization where you can look at it and debug your stuff. So we are trying to work natively on top of OpenTelemetry.
Our architecture is something like this: your applications, app one and app two, you instrument with OpenTelemetry, and they write to the OpenTelemetry Collector. In the Collector we have written our own exporters that write the data to ClickHouse, the database we use in our backend. It's a slightly different kind of database from what we use in normal day-to-day life: an OLAP database, so it has analytical capabilities, a very high ingestion rate, and you can perform aggregations on top of it at a blazing fast rate. So we get the data, push it to the OpenTelemetry Collector, the Collector writes to ClickHouse, and we have a query service written in Golang that reads the data from ClickHouse and exposes APIs, which are consumed by a frontend written in React.js to display the different components we have. That's a rough idea. You can also use hot and cold storage: S3 for longer-term retention, and the disk in ClickHouse for the most recent data, to get faster responses. To sum it up, we have metrics plus
traces already, in a single pane, and we'll have logs too in some time. Thanks to the database, you can do some powerful aggregations, like: what is the p99 latency for customers on a bronze, silver, or gold plan? If you're using multiple payment channels, which payment channel is giving you the highest error rate? All these kinds of things you can do, and since it's open source, you can install it on your own infra and get started.
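The kind of aggregation being described, p99 latency sliced by a customer attribute, is conceptually a query like the following. Table and column names are made up for illustration; this is not SigNoz's actual schema, just the ClickHouse-style quantile syntax.

```sql
-- Hypothetical ClickHouse-style query: p99 latency per plan tier.
SELECT
    plan_tier,
    quantile(0.99)(duration_ms) AS p99_latency_ms
FROM spans
WHERE service_name = 'payment-service'
GROUP BY plan_tier
ORDER BY p99_latency_ms DESC;
```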
Yes, and I think Prashant will give you more of an idea of what we are going to do today.
So yeah, we'll be doing a live demo: we'll show how we can monitor the underlying environment in which containers run, and then we'll move on to how the stats of applications running in Kubernetes can be seen using SigNoz.
So, as Ankit mentioned, OpenTelemetry has receivers; there are more than a dozen receivers, exporters, and processors, and each of them functions differently. The first one we have is host metrics: whichever environment you're running the OTel Collector in, it will get the information from the host system and pass it to the agent, which gets added to our ClickHouse and used by SigNoz. The host metrics are basically host CPU usage, memory usage, network I/O, and disk I/O. Next we have kubelet stats, which pulls in pod metrics via the API server. How it works internally is a different story: it basically uses a metrics server, which extracts these metrics out of pods using cAdvisor, which runs on each node. But at a high level, it pulls pod metrics using the API server. Next we have the Kubernetes cluster receiver, which again fetches cluster-level metrics from the API server. And next we have Docker stats, which gets all the container stats from the Docker daemon; for this we have to volume-mount the Docker socket file so it can fetch them.
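A hedged sketch of how those four receivers might appear in a collector config; the intervals and options shown are illustrative, not SigNoz's shipped defaults.

```yaml
receivers:
  hostmetrics:              # CPU, memory, disk and network I/O of the host
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      network:
  kubeletstats:             # pod/container metrics via the kubelet API
    auth_type: serviceAccount
  k8s_cluster: {}           # cluster-level metrics from the API server
  docker_stats:             # container stats from the Docker daemon
    endpoint: unix:///var/run/docker.sock   # requires mounting the socket
```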
So, let's go ahead with the demo then.
All right, so we have our instance in AWS. Okay, let me just bring it down once; this is a separate one I was experimenting with. All right, so this is our repo; sorry, that one was our website, this is our repo. We are close to 6.9k stars,
so we're close to hitting seven thousand now, I think. The first thing I do is clone the repo to my instance; I have it already cloned. I go to the deploy folder, and to install you just run the install script, and it does the rest. At the end it asks for an email, so you provide your email ID there. Yep, that's how easy it is to install SigNoz using the install script. It also takes care of the case where your instance doesn't have Docker installed already.
I can see that it's already running; okay, let me stop it so I can follow through the process again. All right, so for Docker stats, let's do one thing.
So we have OTel Collector agents running in SigNoz, two of them: one is for getting the data and passing it to ClickHouse; the other one is for metrics, which we use internally to collect internal metrics, such as, as you can see on the screen, host metrics, and we have Jaeger and OTLP receivers as well.
If we go to the UI after deploying, at the endpoint of the instance on port 3301, you will see the UI. We already have a sample app running, which shows you metrics about the application: we have the frontend, customer, driver, and route services. Take the frontend application, say: you can see all these metrics, p99, p95, p50 latency, requests, and we also have external call counts. This happens by default: you just have to instrument your application and send that data to SigNoz, and the rest happens on its own. Apart from this, what we have is
the host metrics. Whichever instance you install SigNoz on, by default we already capture the host metrics of that instance. If you click on this and import the dashboard, just by importing it you generate a dashboard with all these widgets, where you can monitor disk usage, CPU usage, average CPU load, network I/O, disk I/O, all of these things. So this is how easy it is to monitor the underlying node or host.
Next, let's get the stats of the containers running on the instance. For that we will run cAdvisor; this is the command I got from the cAdvisor docs (cAdvisor is a project by Google). So I run this, and once cAdvisor is running, all the container stats are exposed on the machine's 8080 port at /metrics. Now, these are Prometheus metrics, and as previously mentioned, SigNoz
supports whatever OpenTelemetry supports, so we support Jaeger, OTLP, and Prometheus. What we'll do is capture the metrics exposed by cAdvisor and plug them into SigNoz. To do that we just copy this scrape config, open the metrics collector config, and include it here; I've included it here. These four lines are all I have to include, and with them we can monitor the cAdvisor metrics. This can be replaced with any other exporter you might have:
it could be your MongoDB exporter, your node exporter, or, say, a proxy. A lot of the metrics exposed by various applications you can plug in here, and then just start using the SigNoz dashboards and plotting charts.
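The "four lines" being pasted in are a Prometheus scrape job added under the collector's prometheus receiver; roughly like this (the job name and target are examples, not the exact values from the demo):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: cadvisor               # example job name
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:8080"]  # cAdvisor's /metrics endpoint
```

Any other Prometheus-format exporter (node exporter, MongoDB exporter, and so on) plugs in the same way, as an additional scrape job.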
So now that it's exposed, let's go and import a container metrics dashboard; it should be this one. After integrating cAdvisor, what we have is container memory usage, container CPU usage, total memory used, storage used, all of these things. These are basically all of your container metrics, and you can see which containers are there: if you hover over, you'll see, for example, cAdvisor. This is cache memory usage, and if you go to memory usage you can see which application uses the most, just from the UI. We haven't done anything extra here: we just imported the dashboard, that's it. It's that simple.
All right, so let me actually show you a little demo around metrics and traces. As I previously mentioned, we have the HotROD application integrated, which sends the data. If you go to the "Instrument Application" overview, you can see we already have all these blogs and documentation for instrumenting your application in any of the major programming languages and frameworks. And there's actually more than this, because OpenTelemetry is mainly a community, a big community maintaining various SDKs that support the OpenTelemetry specification, so you can actually use a lot more than what's listed here.
Right, so what we used is basically the first one I showed you, host metrics, which is captured from the config I showed you here. Here you can see the receivers; I'll show you the flow of how it works. We have a list of receivers and a list of processors. Receivers are where we capture the incoming data. In between we have processors: a batch processor, so that we operate in batches and don't cause throttling, and a spanmetrics processor, with which we generate the metrics panel you saw on the home page. And we have exporters, two of them here (actually three, but the Prometheus one we use internally): the ClickHouse traces and ClickHouse metrics exporters are what capture the data and store it in ClickHouse. These are exporters we wrote ourselves, and we'll eventually be merging them back into the main repo.
All right. Yes, so not this one; we have two of them. Which one, the three exporters, you mean? Yes. So for that, we have something called pipelines; here you can see the pipelines, and you can have several of them. We have one traces pipeline, which receives from these sources. How many of you have used GitHub Actions? There you have something called a matrix strategy; this is exactly similar to that. Any data that comes in from these two sources is processed through these processors and passed on to the exporters, so it's many-to-many, with multiple processors in between. Is it clear? Yep. So we have one pipeline for traces, another pipeline for metrics, and another one for span metrics, which is used internally. This is the main OTel Collector; as I told you, we have two instances of the collector: this is the main one, and the other one is for metrics.
Especially if you're coming from the Java world, there are a few points which I have seen any number of times on the internet. When you go and read a blog, the first thing is: okay, you have to learn Docker containers. The second point would be: automate all the tasks; you don't have to run those Maven builds outside the container, you build them inside the container. Now, there was one interesting piece I was reading which said that for ninety percent of Java applications, you need to attach observability when you build that kind of container. So one question I have, especially for developers who are building such applications: SigNoz would be one of the solutions, but how are you bundling it? Is it a separate container you run, or what do you recommend? Because I need to mention that application observability is really important, but I am trying to understand how Java developers are going to bundle it.
Yeah, so I'll give you an example. As I told you, data creation and data processing have been separated: SigNoz is the backend, and data creation, inside the application, should be managed by OpenTelemetry. Now the real question is: how does OpenTelemetry do it? For most of the automatic metrics and traces that we usually use, you need to add a Java agent to your run command, that's all. OpenTelemetry provides a JAR file that you attach as an agent alongside the application JAR the developer already has, so there is no touching of the JAR that is already there. That's the first step. But if an application developer decides to go deeper, to generate kinds of telemetry data that aren't there by default and wants something custom, then they will have to include the OpenTelemetry libraries, instrument the code, and go through the usual build process. The first approach gives you all the default traces and default metrics, like RED metrics; everything is automatic, you just need to add the OpenTelemetry Java agent.
Yeah, I think, since I'm now working very closely with developers, whenever we present this kind of talk, usually we start with SigNoz or OpenTelemetry. I'd say it would be better to start with some best practices, say for Java developers: here are the best practices. There is very little of this in talks; if you look at the resources available on the internet, they talk about multi-stage builds, they talk about which base image to pick, Eclipse, say, but one thing they ignore is observability. I have seen a few white papers where this is captured, and we need to go further, because sometimes, you know, I was working on one application, a to-do list, and when you shut it down it's not a graceful shutdown, and you never know what crashed inside the Java process. So that's one of the best practices where I think SigNoz fits in: you attach observability to your application, and then you monitor what is happening. So maybe the flow is: first you talk about the pain point, and then you bring in SigNoz, rather than jumping directly into SigNoz; then the developer will understand the value. So I'd say give examples of the common problems developers face in debugging, go through the scenarios, and then show how it helps. Yes, you can debug it using SigNoz and OpenTelemetry. Right, that's a nice way to do it; we should be doing it from the next talk. I think that's because I look at this not as a DevOps tool but as a developer tool, and when you connect that story, when you solve the pain point of the developer, that is the way to attack that particular solution. That's interesting; we also thought of doing some RCA case studies, root cause analysis, when something is going wrong somewhere. Right, so pick any programming language, talk about the best practices and the problems you are trying to solve, and then connect the story of SigNoz; I think that will make sense.
Thank you; yeah, please consider that. We actually have a blog tutorial on the JVM; if someone is interested, they can try it as well. Okay, coming back to this: as I was mentioning about including a new Prometheus scrape config for cAdvisor, we do it in the metrics component, which is another instance of the OTel Collector, and how it works is similar to the previous ones: we have a metrics pipeline which sends data to the ClickHouse metrics exporter. All right, so that basically covers monitoring Docker containers and the host metrics. Yeah, so let me actually show you the demo.
Most of the tools out there in the market have a separate tool, integration, and UI for metrics, a separate one for traces, and a separate one for logs. With SigNoz, what happens is you have correlated metrics and traces. For example, we have the frontend here, and you can see the application latency went high here; so you can click here and go to traces. From the metrics view you are instantly taken to the traces view for that window of high application latency, so you can monitor it better; you can open any of the traces and evaluate what might have gone wrong. We also have support for OTel resource attributes: if you pass a resource attribute like environment, you can access it and filter by it. So I think this overall covers Docker container monitoring and the host metrics, but what about Kubernetes?
For Kubernetes, I think we can quickly see how to go about that. I have a local kind cluster running, so what I do is basically a helm install. Okay, I have to create the namespace first, then helm install. Okay, let me go back to this.
What we can monitor is pod-level metrics, which should be new, and we also have the container metrics related to Kubernetes: for example, how many resources were supposed to be allocated, what the resource requests and limits are, and what the containers actually use. Those things we can monitor using this dashboard that we can import. This is already covered in detail in the Kubernetes infra metrics documentation, so if any one of you is interested, you can actually go through that; and we also just recently released a blog on the same topic, "Kubernetes monitoring with open source tools", which is very in-depth if you guys are into it.
This should take around a minute or two. Yes, actually, yeah. So by default, in the Docker case, we install the HotROD application, which is by Jaeger, one of their example applications. What OpenTelemetry supports is OTLP, Prometheus, and so on, so we can actually plug in from any source: you can have any of the tools you already have in place and send the data through. In this example we send the data to the OpenTelemetry agent, and from there we send it to ClickHouse.
I think you're more interested in what to do in your application so that you can generate that kind of data. It's just a few changes to the run command and some downloaded packages. If you're a Python developer, you will have to add a few packages to your requirements.txt, and you just have to change your run command: you prepend opentelemetry-instrument and set the endpoint where you want to send the data via environment variables, that's all. In Node.js it's also a very similar configuration: three or four lines of change inside your package.json, plus changing the run command; that's all you need.
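For Python, the change being described is roughly the following. The package names and the opentelemetry-instrument wrapper are the real OpenTelemetry ones; the endpoint is a placeholder for wherever your collector runs.

```shell
# add the OpenTelemetry packages (e.g. via requirements.txt):
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap --action=install   # pulls auto-instrumentation packages

# prepend opentelemetry-instrument to the existing run command:
OTEL_RESOURCE_ATTRIBUTES=service.name=my-app \
OTEL_EXPORTER_OTLP_ENDPOINT="http://<signoz-host>:4317" \
opentelemetry-instrument python app.py
```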
For Java, you can see here: you just have to download the JAR and set the environment variables saying where exactly you want to send the data. We generate OTLP-format data, so you change the endpoint to the IP of SigNoz, and whatever OTLP data is generated gets sent there. You also have resource attributes, which are used in the UI for filtering, and you can use namespaces to separate your dev, staging, and production environments; all those things go under resource attributes, like tags. Yes, this is the Java agent I was talking about: download the OpenTelemetry JAR, and that's all you need for it.
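The Java agent attach being described looks roughly like this; the JAR path, service name, and endpoint are placeholders.

```shell
# download the OpenTelemetry Java agent jar, then attach it at startup:
OTEL_RESOURCE_ATTRIBUTES=service.name=my-java-app,deployment.environment=staging \
OTEL_EXPORTER_OTLP_ENDPOINT="http://<signoz-host>:4317" \
java -javaagent:/path/to/opentelemetry-javaagent.jar -jar my-app.jar
```

The existing application JAR is untouched; the agent instruments it at load time.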
The question is: okay, you build something using the Dockerfile itself, so rather than just copying an artifact in via the Dockerfile, you are building it inside a container; do you have any examples? I don't think so. People build on their machine or something, where the build environment is different, and that's wrong, okay, because they say... Inside your Dockerfile you can run a series of RUN commands, so you can download the agent there as part of the build step; it will only add a couple of lines to the Dockerfile. The JAR is there; you either copy it from your laptop or download it during the build, by the application build itself rather than separately for OpenTelemetry. Right, yeah, we're not building it ourselves, so we have to copy it, or download it: copy or download. But for the application, like you said, we can do a multi-stage build inside; I think we can have a next blog or some docs around that.
So our Kubernetes cluster is ready, so we can go about port-forwarding, and we should be able to see the UI. Okay, let's create an account; let's give it this. By the way, you have access controls now; you can add your team members with permissions. Yeah, we recently introduced all of that: we have support for various access levels, viewer, editor, and admin, and you can invite more people here. So yeah, this instance is actually powered by the Kubernetes cluster; let's import some of the dashboards that make sense for Kubernetes. Let's go down.
Okay, let's go through this. We have another repo for the OTel Collector, which I mentioned previously, the k8s-infra collector. In that repo I basically modified the endpoint so it points to the cluster endpoint, which is the release name of the service, dot platform, dot svc. So what I do is I just go here and click apply. What this basically does is monitor a couple of things. For each Kubernetes node that's running, it monitors the host metrics and the kubelet stats, and it detects where the data is coming from: we have resource detectors in place, from the env, from EKS or EC2, so all the labels present on the instance get attached by the resourcedetection processor. Then it sends everything over OTLP, to the OTLP receiver of another collector, the SigNoz OTel Collector: we have two instances of the collector in place, so from the source collector it goes to the SigNoz collector, and from there it goes to ClickHouse. All right, so yeah: host metrics, kubelet stats, two processors, and an OTLP exporter here. The other one we have is for the Kubernetes cluster receiver, which covers node conditions like Ready or MemoryPressure, i.e. node-level
metrics. So I think we can go ahead and import a metrics dashboard here. Okay, this localhost I'm able to access because I have port-forwarded here; so let's go and import the dashboard.
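The per-node pipeline just walked through, hostmetrics and kubeletstats in, resource detection and batching in the middle, OTLP out to the central collector, might be sketched like this. The exporter endpoint is a placeholder for the SigNoz collector service address.

```yaml
receivers:
  hostmetrics: {}
  kubeletstats:
    auth_type: serviceAccount
processors:
  resourcedetection:        # attach cloud/instance labels (env, EC2, EKS, ...)
    detectors: [env, ec2, eks]
  batch: {}
exporters:
  otlp:
    endpoint: "<signoz-otel-collector>.platform.svc:4317"  # placeholder
service:
  pipelines:
    metrics:
      receivers: [hostmetrics, kubeletstats]
      processors: [resourcedetection, batch]
      exporters: [otlp]
```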
Okay, it's basically a query; how it works is it takes the average, so we should be able to see it in some time. Yeah, there's no data yet; it's going to take some time to show up. Do you have any already-running clusters? I don't, actually. So basically, using this pre-built dashboard, what you can monitor is the CPU and memory requests and limits assigned, and how much is actually used. It's loading slowly because the CPU legend takes some time: we average over 5 minutes, let me change that to 2, I think. Yeah, so you can see how much is used: 143 MB is used by the ClickHouse container, and you can see how much was assigned to it using the limits and requests. So I think that covers monitoring Docker as well as Kubernetes.
Yes, they are; we actually had to go through extra steps to do that. You can see on our documentation site we have documentation specific to this: we have a different YAML for ARM, which I think will change down the line. The reason is that what we mainly rely on is ClickHouse, and ClickHouse previously didn't have an official ARM image. They have now moved out of Yandex (ClickHouse started inside Yandex and recently became a separate company), and they have started building multi-arch images for ClickHouse. So from the next ClickHouse upgrade, which we'll be doing in a couple of weeks or so, it will change: there will be a single documentation path that supports both ARM, like Amazon's Graviton chips, and AMD64. I think that will be important, for instance for the Docker Desktop crowd, those trying it on a laptop. And as a best practice, what we recommend is: whenever you build any Docker image, build it for multiple
platforms. Yes, right, that is the reason; this is one problem I hit, as I said, while working in one of my previous roles. I think recently we had a similar issue with Docker Hub: we had built some example images for people to try out in the docs, and we had said "use this image", but it had been built only for AMD, and some users were trying it on ARM, which it didn't support. So changes were made to the Dockerfile to support both architectures, and now you can see we have AMD and ARM for the same image. For the rest of the components, we still have a couple that we have to migrate; after that we can officially be an ARM-supported monitoring platform. Raspberry Pi? Not yet: it's small enough to get started on a Raspberry Pi, but we have not tested it, though it should work.
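The multi-platform build practice recommended here is what `docker buildx` provides; a typical invocation looks like the following (the image name is a placeholder).

```shell
# build and push one image manifest covering both architectures
docker buildx create --use                 # one-time builder setup
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myorg/myimage:latest \
  --push .
```

Docker then serves the right architecture automatically when users pull the tag.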
So you might be wondering how to get involved. We have an open source community involved, and we get pull requests from a lot of people. You can join the Slack using this link; these are the official repos. You can pick any issue you're interested in contributing to; you'll see "good first issue" labels, which are very beginner-friendly for you guys to try out. We primarily have React, TypeScript, and Go: our backend is in Go, we have a React and TypeScript frontend, and if you guys are DevOps engineers and want to contribute to the deployment files, we also have a deploy folder in the repo and a Helm chart in the charts repo. This is one ask we usually make when we come and talk to the community, as a team building an infrastructure tool from India: it would be a great help from all of you if you can help by finding issues, doing performance testing, or giving feedback on the product. Most of you must be dealing with monitoring and observability needs, so any sort of help in those regards will help us move faster. So yeah, that's how we'll be wrapping up; thank you guys for being a wonderful audience.
so that's a bit about my background
so, just with a show of hands: how many of you have made an attempt to make two containers interact with each other? okay, can i have a show of hands, how many of you did try it, tried to connect two containers? right, so you have done it, great. what about the others, have you tried it? yeah, you tried, okay.
so in docker we have an interesting thing called network drivers. if you go into the network drivers we have a lot of options: we have null, basically 'none', then we have macvlan, then we have host, then we have bridge. i actually wanted to run the slides, but later. these are the five different types of network drivers available in docker. so, by default: suppose any person enters this room. that person can communicate with any person here, he can just talk to anyone. such is a container. if you start a container, by default what happens is that, if in this room we all assume ourselves to be containers, any container can talk with any other container. this is the default mode, bridge mode, in docker. and this is the first way containers talk to each
other. okay. so the question is: can each container talk to other containers without any port mapping? i am talking about the default: by default, docker provides a default bridge network to a container. so by default the bridge is provided, and through this bridge, if you just run a simple docker run command, all these containers can talk to each other by default. so in the first attempt, i am going to
try it: there are two containers here, and we are going to try to make them communicate with each other. so yeah, we have this container who cannot talk to his friend. moving ahead, there are actually two ways to make networking happen in docker. the first way is obviously the networking side, which we know, and the second way is sharing files. you might be aware of bind mounts and volumes, right? yes. with those, we can have shared files: the containers can read from and write to the same files. that is one way containers can talk to each other, but it is not usually recommended, because we are already advanced on the networking side, so why use files just to talk between containers? so yeah.
so these are a few networking terms i wanted to introduce, since we will be using them. first, the network namespace: these are used for setting up containers, and each network namespace has its own ip addresses, routing table, and so on. then we have the veth pair, and the veth pair here is very important: it acts as a tunnel, creating a bridge from one network namespace to another. this veth pair is what actually connects our two containers. and then we have iptables, which is used for port mapping, okay.
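a minimal sketch of what these three pieces do together, using plain linux commands rather than docker itself (requires root; all names and addresses are illustrative):

```shell
# a network namespace: an isolated stack with its own interfaces,
# ip addresses, and routing table, like a container gets
sudo ip netns add demo-ns

# a veth pair: one end stays on the host, the other is moved into
# the namespace, acting as a tunnel between the two stacks
sudo ip link add veth-host type veth peer name veth-ctr
sudo ip link set veth-ctr netns demo-ns

# address both ends and bring them up
sudo ip addr add 10.200.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo ip netns exec demo-ns ip addr add 10.200.0.2/24 dev veth-ctr
sudo ip netns exec demo-ns ip link set veth-ctr up

# the namespace can now reach the host end of the pair
sudo ip netns exec demo-ns ping -c 1 10.200.0.1
```

iptables then adds NAT rules on top of this for published ports, which is what `docker run -p` configures for you.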
so, as i was saying about how containers talk: this is the other way i was explaining, where you have a volume and talk through files. as we know, if you want persistent data, in the sense that you want your container to talk to files, you can either use the default directory provided by docker or use a docker volume.
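file sharing between containers via a named volume can be sketched like this (the volume and file names are illustrative):

```shell
# create a named volume managed by docker
docker volume create shared-data

# one container writes a file into the volume...
docker run --rm -v shared-data:/data alpine \
  sh -c 'echo "hello from writer" > /data/msg.txt'

# ...and a second container reads the same file back
docker run --rm -v shared-data:/data alpine cat /data/msg.txt
```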
moving ahead: i just want to explain why there is a need for containers, and what the difference is between a virtual machine and a container. can anyone tell me why we are using containers, and what was available in my virtual machine that is not there in the container? anyone? the virtual machine, yeah, definitely, so
that's the thing. with the virtual machine, when you go to the networking side, there is one more important point: virtual machines simulate a hardware interface. the design of the container is such that it is used just for running our application; on its own it does not know about the networking side. but the virtual machine has the ability to simulate physical hardware: it has a virtual network interface which can be used to
talk to other machines. so, as we were progressing: if you saw the first slide, we had some servers, and those servers were communicating with each other through switches and routing. that was the traditional way, back then, and i guess all the software developers here are aware of how communication happened back then and how we evolved. then came the virtual machine era, where we used the simulated hardware interface. so there was a need to introduce a container networking standard, okay. so when i talk about this,
there is a thing, okay, we will come to it. so this is the container network model. if you look here, we have a container; the container has its endpoint, and this endpoint is connected to the networks here, and that's how it goes. there are a lot of technical terms here, but what i wanted to say is that the
networking part of docker was not included in the container runtime, in the engine, and there was a particular reason for that: there were policy requirements. some people wanted to connect from one docker host to another docker host; some people wanted only a certain kind of security in their networking. so there was a need for that flexibility, and that is how the container network model evolved. now there is a lot of argument going on about why docker adopted the CNM model rather than CNI; you can do your own research on that. everyone has a point there, and i am not here to take sides, but yeah.
so, a little bit here: there is a lot happening on the network side of docker. the network controller here provides the entry point. just a bit of information: libnetwork is the actual implementation of the container network model that docker uses. it provides an entry point with simple apis for the user to allocate and manage networks, okay. the driver is the actual network implementation. now the sandbox, if you see here: the sandbox represents the container's network configuration, its ip addresses, routing table, and dns entries. and the ipam drivers provide private ip addresses: we will talk about the bridge network here, and what these ipam drivers do is provide each container with its own private ip address.
right, so i will come back to this slide later. i think we get why there is a need for a container network model; is everyone getting the point of why we need it? or what do you think: should it have been included with the container runtime engine, or should it have been separated? what is your thought on it? i just want a quick opinion, anyone. separating it would be better, right, because if there are any issues it will be easier to remove those flaws. yeah, that is a very good point; any other? security-specific, yeah. so these were the reasons the networking was not part of the engine, and docker went with the container network model, and that is how it
started. so, i have already done this on a terminal; i am not going to go through the whole demonstration here. what i have done on the terminal, let me make it visible, yeah: first of all, i created a bridge, using the bridge driver, and i created a network called web-server. second, okay: i created the bridge and it got a network id assigned to it. when i inspect web-server here, i see i created a network with the name web-server and the bridge driver, and right now, when i first created it,
it has no containers attached to it. okay, so first i will pull two images: i will pull nginx and i will pull alpine. next, i will make them join the network, which is web-server here. and when i make them join the network, what we can see, after i inspect it with docker inspect web-server, is the containers attached to it: you can see a container 'client' and a container 'server'. these two containers are now attached to web-server. earlier you could see there was no container attached to web-server, but after running this command, after adding them to web-server, they are attached.
now the thing is, yeah, by default, and i will come back to this later: when to use the default bridge and when to use a user-defined bridge network. so, as i mentioned, there are two things. as i said, if a person enters this room, he will be able to hear me, and if these two enter now, they will be able to hear my conversation. do you think it is good practice that anyone who enters this room can hear my conversation? i think it is not a good security practice to allow that. it is nothing personal, i am just taking an example, okay. so it is not a good practice to allow, okay.
if anyone can join this room, what security is there, right? so then comes the user-defined bridge. with the user-defined bridge, what we do is create our own bridge, similar to the default one, but we make it very specific: we do not allow just anyone in. again, not blaming you guys. for example, i wanted only these two containers, my client and my server, to be in the network; i do not want any other container to join. so i think it is good practice to use a user-defined bridge
instead of the default one. i will talk about the benefits, but you know, when we talk about docker networking it is very confusing, okay. you can get stuck: which network should i use, when should i use host, when should i use the default bridge, when the user-defined one, when macvlan? these are the different terms. again, the purpose of this talk is just to get you hyped up that these things exist in docker, so you can make your applications communicate with each other better; even if you are heading towards the load balancing side, this is the perfect start to your docker networking journey. so, as we have seen here, i have created a user-defined bridge network called web-server, and after this i go inside the client container and make a curl request to the server. as you can see, we can definitely reach nginx: we have a perfect output here from nginx.
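the demo above corresponds roughly to these commands (the network and container names follow the talk; the exact flags used on stage may differ):

```shell
# create a user-defined bridge network
docker network create --driver bridge web-server

# attach an nginx "server" and an alpine "client" to it
docker run -d --name server --network web-server nginx
docker run -d --name client --network web-server alpine sleep infinity

# user-defined bridges provide built-in DNS, so the client can
# reach the server by container name
docker exec client wget -qO- http://server

# inspecting the network shows both containers attached
docker network inspect web-server
```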
so yeah, that was that. going ahead, just to give you a brief overview of what this is: this is the bridge, and when you run a container, docker bridges it automatically, as i said. the 'none' network is like a black hole: i call it that because the container just sits there with no access to any other container outside it, and no one can access it. and the host one, i will talk about host now. have you used the -p flag in docker? how many of you have used it? that is pretty common, right, you must have done port mapping: say, port 80 of the container will listen on some host port. with the host network, what happens is that you do not have to do that: when you run the container, port 5000 gets attached directly to the host's network port. this host driver was introduced later; previously it was done differently,
but there is one fault with it, okay. what if there are two containers with the same port? what do you think would happen with the host network? for example, let us assume there is another container which also runs on port 5000. what do you think will happen: will it run, or will there be a conflict? any answers? there will be a conflict, yes. so that is the thing: there will be a conflict, so we cannot use host networking in that case. but then, you might ask, what is the benefit of using host? the first benefit is speed: traffic does not have to go through the port-mapping layer, it is directly connected to the host, so speed is definitely one thing you gain. the con is that there can be a port conflict, and you have to take care of that. okay, so i guess you are pretty much getting an idea about bridge, none, and host. yeah, everyone good? okay.
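the host-network trade-off can be sketched as follows; `my-app` is a hypothetical image whose process listens on port 5000:

```shell
# default bridge: you publish ports explicitly (host:container)
docker run -d -p 5000:5000 my-app

# host network: no mapping layer, the process binds directly to
# the host's port 5000, which avoids the NAT hop (faster)
docker run -d --network host my-app

# a second host-network container binding the same port will fail
# to listen, because port 5000 is already taken on the host
docker run -d --network host my-app   # port conflict
```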
so yeah, those were the three, but let us go ahead: the bridge network, as i said. sure, which one exactly? so, this ip address thing is taken care of as i explained in the networking terms: whenever a container is created, a network namespace is created and a veth pair is added. and in all this process, the ipam drivers provide the container with a private ip address; as i mentioned here, the ipam driver provides the container with its own ip address, with which it can communicate, okay.
so that is more on the networking side, mostly. now, talking about the bridge: do you think two containers on different hosts, on different bridges, can communicate with each other? can the b container communicate with the c container, or the a container? no, right, because containers on different bridges cannot communicate with each other: a and b cannot talk to c and d. so that is the problem we are facing here: containers on different bridges cannot communicate with each other. so, you see, the container network
model, okay: initially, let us say it started with exposing the port of a single container. imagine you are developing a container runtime, and you are thinking about what possibilities can exist, what the different user requirements out there in the industry are. it was a very good idea that they separated the networking out, because via that we can even have third-party plugins. so now there is another need here: these are two different hosts, and they need to talk to each other. how will we make this happen? yeah, but this is on a different host; i am coming to that part later, there is another thing called overlay networks, and i will come to that. so, it is just an example;
let us skip this, okay. so yeah, just a bit more: as i explained, the bridge is created by the bridge driver for the pre-built default network, and every container is connected to the bridge via a veth pair. this veth pair, which i explained earlier, is used to connect two different network namespaces: it creates a tunnel between this one and that one. and iptables provides the external access rules required for all the port mappings. so this veth pair is very important; it is a linux concept, and much of what docker adopted on the networking side came from linux, so for the linux developers it will be like, yeah, they can understand it. so okay, let us skip; i think host is very clear
to you all. i will try to wind up, so now i will explain the overlay networks, okay; i think i need to do this one. okay, now imagine there are two castles, and these two castles are separated from each other. how can they communicate? i think you already have the answer, but still, we can have some new ideas. so imagine there are two castles, separated by a moat full of water, and there are crocodiles in the water. this is the east castle and this is the west castle: how can these two castles communicate with each other? anyone? we can have new ideas; i know you saw the image of a pigeon, but we can have some new ideas as well. sign language? definitely, we can have some sign-language kind of stuff, yeah.
so, as you remember, you saw the slides where there were two different docker hosts. imagine this is the same thing: we have docker host one and we have docker host two, and we want them to communicate. sign language is one thing, but we can also have a pigeon which flies, taking a message from one castle to the other: okay king, here is your message, take it. so yeah, that is the thing: the overlay network implements a similar kind of idea, okay. so, moving ahead: what the overlay network does is, suppose we have different hosts, it creates another bridge inside, and this bridge is responsible for communication. now every container, okay, every container which is connected to this bridge can communicate with a container on the other host, like this. so right now we have this overlay network here, and
we have these ip addresses assigned. so this overlay network, we will go into depth on how it works, but the overlay network assigns a unique private address. you already know that a container already gets a private ip address: if you went through the terminal, you saw that every container gets its own ip address. but with the overlay network, what happens is that there is another ip address assigned, which is used for the internal, private conversation happening between the two different docker hosts, and the overlay network is responsible for that. okay, so another thing
over here: we have VXLAN. there are different technologies implemented to make this overlay network work. if you want to go deeper, it is more networking terminology: the overlay network has VTEPs, and a VTEP is the tunnel endpoint on each side; there is an endpoint here and an endpoint there, and the communication happens between them. the overlay network uses VXLAN to build the network, okay, and the VXLAN is created over an underlay network. it is more on the networking side; i wanted to go into the underlay network, but that is another 40 minutes of lecture and i do not want to put you through that. but i hope you are getting the point of why we use
the overlay network: let us understand that we have two different docker hosts and we want them to communicate with each other. again, the goal of this talk is to make you more excited about the networking side. i want you to explore different possibilities, to build different applications on different hosts and then try to make them communicate with each other, because then you are actually doing something. let us say that i can build a website with a database, but i am not able to scale it, not able to meet the demand: you need to know how these pieces interact with each other, and only then will you understand the complexity and how the networking happens. earlier we used to have servers, and all those servers were responsible for the connections; then we moved to the vm era, where, as i explained, they already had this. so as we move to docker, why forget the networking? as we move, we need to understand that networking is a part of docker, and as a docker developer you need to make sure you have at least a good idea of how the networking is happening. so, as i explained, this VTEP performs encapsulation and decapsulation, and it exists in the docker host's space. okay, just a couple of minutes.
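a sketch of getting two docker hosts talking over an overlay network using swarm mode, which is the built-in way to provide the shared state an overlay needs (addresses, names, and the token placeholder are illustrative):

```shell
# on host 1: initialise a swarm; this prints a worker join token
docker swarm init --advertise-addr 192.168.1.10

# on host 2: join using the token printed above
docker swarm join --token <worker-token> 192.168.1.10:2377

# on the manager: create an attachable overlay network
docker network create --driver overlay --attachable my-overlay

# containers attached to my-overlay on either host can now reach
# each other by name over the VXLAN tunnel
docker run -d --name svc-a --network my-overlay nginx            # host 1
docker run -d --name svc-b --network my-overlay alpine sleep 1d  # host 2
docker exec svc-b ping -c 1 svc-a                                # host 2
```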
so yeah, the ingress network, okay, i wanted to explain this. is everyone clear on the overlay networks? at least you got the idea that we can have two different docker hosts talk to each other. so, in docker, do you know that we also have the concept of load balancing? how many of you had the idea that docker has the concept of load balancing? okay, so we do have the concept of load balancing, and it comes by default with overlay networks: you already get this load balancer, and you do not have to do anything. the load balancer, as the name suggests, balances the load: if there is a huge load on node one, it can shift traffic to the other node. so by default this comes with docker.
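the built-in load balancing rides on the ingress overlay network in swarm mode; a sketch (service name, replica count, and ports are illustrative):

```shell
# create a replicated service; the ingress network publishes
# port 8080 on every node in the swarm
docker service create --name web --replicas 3 -p 8080:80 nginx

# requests to port 8080 on any node are spread across the three
# replicas by the swarm routing mesh
curl http://localhost:8080
```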
you might want to give it a try. so yeah, okay, lastly there is the macvlan network. you will mostly not encounter it; most of the networks you will encounter while working with docker are host, the default bridge, and the user-defined bridge, plus the ingress network, which is very important. if you want to know more about the load balancing side of docker, i would definitely say: please go through the overlay networks and try to understand how that technology works internally. i just want to explain it, but it is again deep in networking terms: when i talk about overlay networks i have to talk about underlay networks, and then about VTEPs, encapsulation, decapsulation, and so on, to get the idea of what happens on the networking side. so, just to give you a starting point: when people ask how to start their docker networking journey, the right start would be to go back through the history, what was the problem we faced in earlier times, how the virtual machine solved it, and how docker solves it. so this is the thing.
and macvlan: macvlan does one simple thing. we know that every container gets an ip address, but with virtual machines we had the concept of a mac address: there was a virtual network interface card, and this card was responsible for providing a mac address to the virtual machine. with docker, macvlan does nothing but give a mac address to your docker container, making it look like it could be a vm or a physical machine.
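a macvlan network can be sketched like this; the parent interface, subnet, and addresses are illustrative and must match your actual LAN:

```shell
# create a macvlan network bound to the host's physical interface
docker network create -d macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 lan-net

# this container gets its own MAC address and a LAN-routable IP,
# appearing on the network like a separate physical machine
docker run -d --name vm-like --network lan-net \
  --ip 192.168.1.50 nginx
```

note that, by design, the host itself usually cannot reach macvlan containers directly over the parent interface.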
so yeah, that is it. so, do we have any questions? no? okay, that is done i guess, so thank you everyone. [Applause] so, i guess, any questions do we have?
i know you all might be tired, but just to give you a roadmap: i suggest everyone start their docker networking journey by getting familiar with the linux networking terms. you can start with iptables, routing, and ip route; it is really good for you to understand this basic terminology: how veth works, how ethernet works, how a switch works. if you are venturing into the networking side you need to understand these basics, and then go ahead. okay, so thank you everyone.
i know it is five o'clock, so thank you everyone, we had a wonderful session today; it is a great honour, because i think this would be the first offline meetup for our docker community after so much time. usually, what happens when we interact online is that we do not know if people are paying attention or not; we are not engaging as lively as we are talking here right now. i know there are so many ears listening to me right now, so thank you, and i appreciate you travelling from mumbai. i wanted to explore; it is my first time in bangalore, so if anyone has a good suggestion of where i should explore, please give me something. no, no, we have traffic in mumbai as well, okay, thank you.