Using GoLang Concurrency & Asynchronous Processing with CI/CD @ GoSF Meetup

November 10, 2021


Using GoLang Concurrency & Asynchronous Processing with CI/CD

Concurrency is one of Go's most prominent features. So how did we incorporate it when we needed to significantly reduce the 8-hour compile time of our complex Kubernetes platform, which consisted of 25 individual components? What about the multiple CI/CD pipelines we were triggering in sequence during this process? In this talk, Sudhindra will share the process and approach his team took to work through these uncertainties, and how he incorporated asynchronous processing while leveraging Golang concurrency. He will also showcase concepts in Go concurrency, from WaitGroups to concurrency pipelines.

View Slides Here

Speakers

    Sudhindra Rao

    Development Manager

    Sudhindra Rao currently works at JFrog as a Development Manager, helping build communities and partnerships to provide visibility into JFrog's liquid software mission. He has worked as a developer/architect for critical business applications, developing in multiple languages including Go, Ruby, and Java. After working in traditional application development, Sudhindra became part of the Pivotal team and built their Kubernetes (K8s) platform offering. Sudhindra's diverse project experience includes building an application for the largest publishing company in Chicago, a large datacenter automation effort, a large auctioning system, and a voter campaigning application for US national elections.

    Video Transcript

    Oh great, the meeting is now being recorded. Awesome. So thank you, and Go is 12 years old, that's really fantastic, so happy birthday to Golang.

    Real quick about my employer, and then I promise this talk is really focused on JSON web tokens, not about FusionAuth at all. We are a drop-in user data store and authentication service, so competitors are Auth0 or Okta, things like that. We have different editions, including a community edition which is free as in beer, and that supports multiple different grants. It supports SAML, it has other user management features like registration and passwordless, and a lot more. How that's relevant to this talk is that we do produce a whole lot of JSON web tokens.

    Real quick about me: who cares? My Twitter handle is down there if you want to learn more about what I tweet about, and you can Google me if you want. If you have questions, go ahead and drop them in the chat; I do have the chat window open. I also have a little bit of a swag giveaway in addition to what Ari mentioned: I can ship anybody in the US or Canada a shirt, so if you want that, I'm going to let Ari handle giving that away at the end, probably at the same time as he does the JetBrains stuff, but we'll need a shirt size and mailing address, obviously.
    01:36
    All right, let's get into the content. JSON web tokens are an IETF standard. They're actually pronounced "jot" instead of "J-W-T", or I guess you could say "JSON web token", that's precise too. The spec is about 12 or 14 pages, and it was standardized, I want to say, in 2016 or 2017.

    There are two kinds of JSON web tokens. There are tokens that are signed, which means you can cryptographically determine the veracity of the content, but the content is not secret. And there are JSON web tokens that are encrypted, where the content, the payload, is actually hidden, and it's encrypted in a way that you have to have the key to view it. I'm going to talk mostly tonight about signed JSON web tokens, because those are far more common in my experience, but you should be aware that encrypted JSON web tokens exist out there too.
    02:39
    In the context of FusionAuth, and in a very common context generally, JSON web tokens are used as stateless, portable tokens of identity. Let me break that apart, because that's a mouthful.

    They're stateless in that, as I alluded to earlier, you can determine the provenance of the JSON web token, whether it's been modified or anything like that, without communicating with an additional server. That means that if you're holding a JSON web token as a client, or as some service that's consuming a jot, you can know that it is a valid jot without ever reaching out to a server to ask, "hey, is it valid or not?"

    They're portable because they can be turned into URL-friendly strings, and we'll see an example of that in a bit.

    And then they're often used to represent identity. They're basically JSON objects, hence the name JSON web token, but they're often used, especially by identity providers, to say "the holder of this JSON web token is this entity, or is this person." That makes them great for APIs: they can be used as bearer tokens. One of the nice things about jots is that a lot of identity providers, FusionAuth and others, produce them. So you can have the identity provider authenticate the user and hand out this token, and this token can then be passed around to other services, which can rest assured that the holder of that token, except in certain circumstances, is that entity.
    04:22
    All right, so here's the application we're going to be walking through in this meetup presentation. We have a to-do application, and this is probably an overly complicated architecture, but there's a user service, or API, and a to-do API, and then you have clients over there on the left that communicate with those services. On the right you have the data stores for these different services. The users' data store maps users and roles, and the to-dos' data store just has a list of to-dos. If this is stored in a Postgres database or another relational database, that to-dos table might look a little bit like this. What's important to acknowledge is that user_id is not a foreign key, because these are separate services; you're not going to be able to have that referential integrity.
    05:23
    So let's walk through this application. The user posts some credentials to the user API, and the user API verifies those credentials. Typically that's a username and password, but it could be a lot of other things; it doesn't really matter. The point is that the user API magically does its verification of who this user is, gets the user's data from the data store, and then passes it back to the client as JSON. That JSON might look a little bit like this: it can have an identifier, name, email, roles, and other information that's useful to this application.
    06:07
    Then we're going to want to get our to-dos, because people don't care about logging in; people care about changing their to-dos, adding new to-dos, and marking them off. That's the best part, in my opinion, of any to-do list.

    So the client is going to have to request the user's to-dos. In this very naive implementation, which you should not implement, the client takes the id from the JSON that was returned to it and passes that to the to-do API, which then looks up that id in the to-dos database and hands back all of that user's to-dos. That gets sent back to the client, and the client can then render that JSON as it sees fit.

    So why is this a bad idea? Well, as I'm sure some of you noticed, a malicious client, or a misconfigured client, can pass in the wrong user id and therefore get back the wrong to-dos. We need to keep things like "hey Dan, go get milk after this meetup presentation" very safe and secure, so we don't want people to be able to look at my to-dos, and I shouldn't be able to look at anybody else's to-dos.
    07:20
    Hopefully we made the diagnosis of that fundamental security issue before we started building anything out. So let's look at two alternatives that actually solve this problem.

    The first is what I call the opaque token solution. In this case we have a token, which is just a string of random characters, that is generated after the user logs in, and it's passed down as part of the user data to the client. The client then passes that token to the to-do API. Now, this token represents a valid user somehow, but it's not a user id, because if it were a user id, then again it could be enumerated, or people could try to guess valid tokens that pointed into the to-dos database. Instead, this is just an ephemeral string that is representative of the session of the user.

    The to-do API doesn't have any idea what to do with this token, but it knows something that does. So it can call into the user API and say, "hey, I got this token from this client; is this a real token, is this valid?" If it is, the user API will hand back JSON; if it isn't, the user API hands back some other status code, and the to-do API basically tells the client, "I'm not going to give you any to-dos, because you gave me a bogus token." But if it does get back the user object, then it can pull off that id, get the to-dos for that user, and pass them back as JSON to the client.

    The difference here is that the token is out of the control of the client. The client can try to manipulate it, but if it's a string with a lot of entropy, chances are high it's not going to be able to come up with another valid token in a reasonable period of time.

    This has a couple of problems. It is a valid solution, but it does couple the user API to everything, and as you add more and more APIs, if they need any user data, they're going to have to keep talking to the user API. And it's not just a one-off; it's every time that token is presented. So you have this coupling, and you also have a lot of communication with the user API. In addition, the user API now has to store this token somewhere and keep an idea of the state of that user.
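The opaque token described above is just a random string with enough entropy that guessing a valid one is impractical. A minimal stdlib-only sketch of generating such a token might look like this; the function name and entropy size are illustrative, not from the talk:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newOpaqueToken returns a random, URL-safe string built from n bytes of
// entropy. The server stores it (e.g. keyed to the user's session) and
// clients treat it as meaningless, which is what makes it "opaque".
func newOpaqueToken(n int) (string, error) {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func main() {
	tok, err := newOpaqueToken(32) // 256 bits of entropy
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}
```

With 32 random bytes, an attacker enumerating tokens has a 2^-256 chance per guess, which is the "lot of entropy" property the talk relies on.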
    10:07
    Jots are an alternative implementation, and anyone who's an engineer knows there are trade-offs with each type of implementation; jots have their strengths and weaknesses. So let's walk through a jot implementation.

    In this case, the user API creates a JSON web token, a jot, and passes it down, and that jot is presented to the to-do API. The to-do API at this point needs to validate the jot, which it can do by checking the signature, and it also needs to look at what's inside the jot. But the difference here is that it can do that validation without ever talking to the user API. A jot contains things like an identifier for the user, and because the to-do API checks the signature, it can be sure that while the client might be able to read the jot, because it's a signed jot the client won't be able to modify it. That gives the to-do API the assurance that it doesn't actually have to go talk to the user API.
    11:16
    So let's talk about some benefits. As I mentioned earlier, a jot can be signed, either by a public/private asymmetric key pair or by a symmetric key, and we'll talk a little bit later about how you do each of those. The stateless validation is really critical; that's the stateless aspect of jots. And the other nice thing is that because it's a JSON object, it can contain other interesting information, and that can make it something that's valuable for your application, because you can have more things in there than you might with an alternative solution like the opaque token, or even an API key or something like that.
    12:00
    So let's look at a jot. This is a signed jot; we have three components: a green header, a blue payload, and a tan signature. The first two, the header and the payload, are actually just base64url-encoded, URL-friendly JSON. So you can take them apart: you can cut and paste this into a base64 decoder and you'll see the JSON object below.

    The header contains metadata, such as what algorithm was used to sign the jot and other information like that. The payload is where things get really interesting. The keys of this JSON object are called claims, and that's because they make claims about the entity the jot was created for.

    There are some claims that are standardized and some claims that are not. The four top claims here are standards, so they're in that RFC I mentioned way back at the beginning of the talk. iss is the issuer. exp is the expiration time: because jots are stateless and you don't refer back to a central server, you need some way for them to become invalid, and so, based on time, they're invalid past a certain span. The aud claim is the audience, that is, who this jot was intended for. So in our previous architecture diagram, the issuer was the user API; that is the entity that issued the jot. The aud claim would be the to-do API; that is who should consume the jot. The sub claim is who the jot is about; in this particular case, it's about me.

    And then the name and roles claims are just there to show that you can put other data in here; you aren't limited to the standardized claims. The roles claim shows that it can actually be a relatively rich JSON object, so you can have an array of arrays or whatever else you need. Anything that can be represented by JSON can be put into a jot, as long as you're comfortable with it being visible to anybody who gets hold of the jot.
    14:26
    The signature is what gets checked to determine that the contents of the jot haven't been tampered with. How that works, at a very high level: when you're creating a jot, you take the header and the payload, you concatenate them together, you run them through a cryptographic algorithm with a secret of some kind, and then you turn the result of that algorithm into a string. The person consuming the JSON web token can then do the same thing, if it's a symmetric key: they can take that header and that payload, run the same algorithm, and see if the signature string comes out the same. If it does, awesome: neither the client nor anybody else tampered with it. If it doesn't, then you immediately shouldn't do anything else with this jot, because you aren't sure what's happened to it.

    The end goal, of course, is for the to-do API to be able to run this query against the to-dos data store and get that JSON back to the client.
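The symmetric concatenate-sign-compare flow described above can be sketched with Go's stdlib HMAC support. This is a simplified illustration of HS256 over a `header.payload` signing input (the secret and input strings are demo values, not from the talk):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signHS256 reproduces the HS256 signing step: HMAC-SHA256 over the string
// "<base64url(header)>.<base64url(payload)>", with the MAC base64url-encoded.
func signHS256(signingInput string, secret []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// verifyHS256 recomputes the signature and compares in constant time,
// exactly the "run the same algorithm and compare" check from the talk.
func verifyHS256(signingInput, signature string, secret []byte) bool {
	return hmac.Equal([]byte(signHS256(signingInput, secret)), []byte(signature))
}

func main() {
	secret := []byte("a-long-random-demo-secret") // demo value only
	input := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"HS256","typ":"JWT"}`)) +
		"." + base64.RawURLEncoding.EncodeToString([]byte(`{"sub":"dan"}`))

	sig := signHS256(input, secret)
	fmt.Println(verifyHS256(input, sig, secret))          // true
	fmt.Println(verifyHS256(input, sig, []byte("wrong"))) // false
}
```

`hmac.Equal` is used instead of `==` so the comparison time doesn't leak how many bytes matched.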
    15:51
    Jots are often used as bearer tokens, and a bearer token is very similar to a car key. If I hand my car key to Ari and he's next to my car, he can hop in and turn my car on. My key doesn't know anything about who is holding it; access to the key is what determines access, not anything else. That means that if someone steals your JSON web token, because you left it in a log file, or because you're storing it in local storage, or for other reasons, then they can present themselves as you for the lifetime of that token. The upshot is: be careful where you store your tokens, and keep them short-lived.
    16:42
    As for what we recommend at FusionAuth for token storage: we recommend storing the token on the server side if possible, and then just using a session to identify the client. The client presents the session cookie, and then your server-side code can take that jot and present it wherever it needs to go, based on what's in that session. That doesn't work for everybody, we understand. So if you're going to send it down to the browser, send it as an HttpOnly, Secure cookie; this keeps it out of JavaScript's meddling hands. And if you're sending it down to a mobile app, put it in secure storage.

    There are some standards out there that can tie a token to a particular client: DPoP or mTLS are the two standards I know about. So if you're interested in that level of security, where you're no longer using a jot as a bearer token but as a more secure proof of identity, one that can't be picked up and used by anybody who finds it, those are the standards you want to look at.
    18:02
    There are some footguns with JSON web tokens; they're a very powerful specification, so I'm going to run through some of those. Definitely use a library if you can. Go has a pretty nice library that was maintained for a number of years by one person and just got taken over by the community, I think in April. And you want to verify your claims, and I'm going to run through some code real quick about that, so let me switch over to sharing my terminal.
    19:00
    So, this code, which is available on GitHub and is Apache-licensed, so you can pull it down and play with it if you want, basically encapsulates the user API and the to-do API in the same place, in the same Go program. You're not going to do this typically, but it's good for a demo.

    Lines 11 through 37 are where I am creating the JSON web token. I'm signing it with that signing key on line 11, and I'm going to print it out. And then I can decode the token: I can receive the token string and decode it using this code. The interesting part of this code, to me, is the decoding section, because again, you're probably not going to be creating JSON web tokens very often; you're probably going to use an IdP or something else to do that.

    Lines 43 through 50 are pretty much using the library to check the signature, to make sure the token wasn't tampered with over the wire. And then we also look at our expected claims, and that's what I mean by "verify your claims." You want to make sure that you are checking the audience, to make sure that you are the intended consumer of the JSON web token, because if you're not the intended consumer, then the contents of that JSON web token might not mean very much to you. The roles for a piece of forum software are probably different than the roles for a custom application or a piece of banking software, so even if they're all signed by the same IdP, you don't want to be consuming a JSON web token that wasn't created for you.

    And then you also want to make sure that you're verifying the issuer, so you're not consuming a JSON web token that was created by somebody you don't expect. That's pretty unlikely, because the keys would probably be hard to match up, but it's definitely better to be safe than sorry.
    21:23
    So let's go back to the slides. As I said, that code is available, and there are several other bits of JWT-related code in that repo.

    JSON web tokens can contain arbitrary JSON data, but if it's a signed JSON web token, don't put any secrets in there; don't put any social security numbers or anything like that, because anyone who finds that token can decode it. It looks like encoded data, but it's really just base64. I just want to drive this home: if you take one thing away from this talk, it would be that if you have a signed JSON web token, anybody who gets it can see its contents.
    22:12
    If you use the HMAC algorithm, which is the symmetric signing algorithm supported for jots, you want to make sure your secret is nice and long, because there are programs out there, and this is one of them, that will take just the JSON web token and try to brute-force it to recover the key. And if it can brute-force the key, then it will be able to sign JSON web tokens with that key, which is obviously a negative situation.
    22:45
    another foot gun that always comes up
    22:47
    when i post on hacker news about json
    22:49
    web tokens is there’s an algorithm
    22:52
    and that means that no signature is
    22:54
    required
    22:55
    and that turns this json web token
    22:57
    into this one
    22:59
    or something similar to it right so
    23:01
    there’s still three clauses but there’s
    23:03
    no last uh signature clause
    23:06
    the header changes a little bit
    23:08
    to al nunn
    23:10
    and
    23:11
    that means that anybody who finds a
    23:14
    service that accepts a json web token a
    23:16
    jot that doesn’t check the signature
    23:19
    can
    23:20
    craft any payload they want
    23:22
    and put it into an and your on base64
    23:25
    encoded and pass it to that service
    23:28
    that’s obviously
    23:30
    not great right you’re not just talking
    23:31
    about unsanitized input which we’re all
    23:33
    supposed to be careful of as developers
    23:35
    you’re talking about unsanitized
    23:36
    credentials right this is
    23:38
    the jot represents who i am
    23:41
    if you use if you accept al nunn
    23:44
    signatures
    23:45
    then
    23:47
    i can be whoever i want to be right i
    23:49
    could be the
    23:50
    uh
    23:52
    i don’t know who who would i want to be
    23:55
    um
    23:56
    wizard of oz i could be the wizard of oz
    23:58
    if i wanted to i could create those
    23:59
    credentials which obviously
    24:01
    is a really bad idea so you want to make
    24:03
    sure that you as a consumer of a json
    24:05
    with token
    24:07
    don’t ever
    24:08
    accept anything that’s signed with
    24:10
    algebra that doesn’t have a signature
    24:14
    the spec allows this and the simple fix
    24:16
    is as i mentioned earlier don’t allow
    24:19
    none
    24:20
    and the reason why it is in the
    24:22
    specification is actually because there
    24:24
    are other ways to verify a job is
    24:26
    unchanged
    24:28
    and
    24:29
    you know it could be a private network
    24:30
    it could be client certificates
    24:34
    and if you
    24:35
    are in
    24:36
    a situation that there are two criteria
    24:39
    that you should meet before you ever
    24:41
    allow alg none the first is that you
    24:43
    have this other way to verify the jobs
    24:46
    are unchanged
    24:47
    and that you trust your clients
    24:49
    and the second is that you have
    24:51
    benchmarked it to determine that the
    24:53
    signature of a json web token
    24:55
    is
    24:57
enough of an overhead that
24:58
it impacts your system negatively
    25:02
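The check described above can be sketched with only the Go standard library: decode the header segment of the token and refuse it when it declares "none" as its algorithm, before any signature work happens. The helper name `rejectUnsignedAlg` is illustrative, not from any particular library.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// rejectUnsignedAlg decodes only the header segment of a jot and refuses
// the token when it declares "none" as its algorithm, before any
// signature verification is attempted.
func rejectUnsignedAlg(token string) error {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return errors.New("malformed token")
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[0])
	if err != nil {
		return err
	}
	var header struct {
		Alg string `json:"alg"`
	}
	if err := json.Unmarshal(raw, &header); err != nil {
		return err
	}
	if strings.EqualFold(header.Alg, "none") {
		return errors.New("alg none is not accepted")
	}
	return nil
}

func main() {
	// A forged token: header {"alg":"none"}, arbitrary claims, empty signature.
	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"none"}`))
	claims := base64.RawURLEncoding.EncodeToString([]byte(`{"sub":"wizard-of-oz"}`))
	fmt.Println(rejectUnsignedAlg(header + "." + claims + ".")) // rejected
}
```

A real validator would do this check as part of verifying the signature with an allow-list of expected algorithms, but the key point matches the talk: the decision not to accept alg none is made by the consumer, not the token.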
    all right let’s talk about signing with
    25:04
    asymmetric key pairs
    25:06
    so some of the examples before were
    25:08
    signed with
    25:09
hmac which is a symmetric algorithm and
    25:11
    that means that both
    25:13
the user api and the todo api need to
    25:16
    have access to that shared secret the
    25:19
    first to sign it second to verify it
    25:23
    so that means that if you are trying to
    25:26
    make sure that you are securing this
    25:27
    thing that is used to sign
    25:30
    um
    25:31
    basically credentials right distributed
    25:34
    credentials which is what a jot
    25:35
    essentially is then you need to make
    25:37
sure that you protect both the todo api
    25:38
    and the user api because if someone gets
    25:40
    a hold of that shared secret again they
    25:43
    can sign jots so they could like put
    25:45
    whatever they want into that jot
    25:48
    give themselves the role of super admin
    25:50
    and then sign it and then pass it to
    25:52
    your system which is a negative
    25:54
    situation
    25:57
so if you use
26:00
an asymmetric key then i’ll
26:03
take a step back here if you use an
    26:05
    asymmetric key the private key stays in
    26:07
    the user api and the public key is used
    26:10
    to
    26:11
    verify the signature
    26:13
    and
    26:14
    that obviously reduces the amount of
    26:18
    surface area that you need to secure to
    26:20
    keep that key and the ability to create
    26:23
arbitrary jots safe
    26:26
    it also scales organizationally
    26:28
    and what i mean by that i mean that the
    26:31
    user api can be run by one department
    26:34
    and
    26:34
    many other departments can
    26:37
    um
    26:38
    consume the jots that are created by it
    26:40
    and if the
    26:42
    user api folks decide they need to
    26:44
    rotate keys they can do so
    26:47
    and they don’t have to
    26:49
    find some way to share a secret with you
    26:51
    know the 10 other departments that are
    26:52
    using it in fact
    26:54
    the user api that the identity provider
    26:57
    and the
    26:58
    apis that are consuming the json web
    27:00
    token don’t even have to belong to the
    27:01
    same company
    27:03
    as i mentioned earlier it also changes
    27:06
    your security radius
    27:08
and then key rotation is easier
    27:11
    27:13
    so and we’ll see a little bit about that
    27:15
    in a second
    27:17
    it is
    27:18
    a more complicated system and it’s
    27:22
    slower and i’ve run some benchmarks not
    27:25
    in go it was in ruby but it was between
    27:28
    two and ten times slower
    27:31
    so that’s a trade-off
    27:35
    so when the
    27:37
json web token is presented to the todo
27:39
api the todo api needs that public key to
    27:43
    verify it
    27:44
    how can it get that public key
    27:47
well you can deploy the todo api with
27:49
the public key on its own file
    27:52
    system
    27:53
    and this has a benefit of
    27:55
    basically really severing that
    27:57
    connection between the user api and the
    27:58
todo api they can live on totally
    28:00
    different networks there could be no
    28:02
    network path
    28:04
between the todo api and the user api
    28:06
if you deploy the public key this
28:09
way
    28:10
    it has some downsides too because
    28:11
    rotation is harder
    28:13
    if on the other hand and this is more
    28:15
    common
    28:16
the todo api can connect to the user api
    28:19
    over the network
    28:21
    it can pull down a list of the public
    28:23
    keys because again they’re public keys
    28:26
    the user api can publish them
    28:28
    and
    28:29
    um
    28:30
    it doesn’t really care who knows them so
    28:32
it can publish those and the todo api can
    28:34
    once in a while pull down those public
    28:36
    keys and then use those to validate the
    28:40
signature how does a key match between
    28:42
    them
    28:43
    well oh here’s um
    28:45
    a
    28:48
    thing that is probably pretty hard to
    28:50
    read this is a sample output from that
    28:52
    jwks
    28:54
    json file which basically just lists the
    28:57
    public keys
    28:59
    that’s another standard
    29:00
i think rfc 7517
29:03
don’t don’t quote me on the number but
29:04
jwks is a standard
    29:06
    so
    29:07
    you have
    29:08
this information the algorithm that
    29:11
    this key corresponds to a key identifier
    29:14
    and then the actual pem
    29:17
    file
    29:18
    or the pem content of the public key
    29:21
    if you have multiple keys which you
    29:23
    probably will
    29:24
    then a jot can be signed with a key
    29:27
    identifier so this is that header that
    29:30
    we talked about earlier
    29:31
    the key identifier is going to map to
    29:35
    one of the public keys that is in
    29:38
that jwks.json doc
    29:42
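A sketch of consuming such a document: parse the key list and look up the entry whose kid matches the one in the jot's header. The shape here is deliberately trimmed down, and the `pem` field name is illustrative; real JWKS entries describe key material with fields like `kty`, `n`, and `e`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed-down jwks document: each entry carries the algorithm, a key
// identifier, and the key material.
type jwk struct {
	Alg string `json:"alg"`
	Kid string `json:"kid"`
	Pem string `json:"pem"` // illustrative field name, not from the spec
}

type jwks struct {
	Keys []jwk `json:"keys"`
}

// findKey looks up the public key whose identifier matches the kid from
// a jot's header; a jot whose kid is absent is treated as invalid.
func findKey(doc jwks, kid string) (jwk, bool) {
	for _, k := range doc.Keys {
		if k.Kid == kid {
			return k, true
		}
	}
	return jwk{}, false
}

func main() {
	raw := `{"keys":[{"alg":"RS256","kid":"uk0","pem":"..."},{"alg":"RS256","kid":"uk1","pem":"..."}]}`
	var doc jwks
	if err := json.Unmarshal([]byte(raw), &doc); err != nil {
		panic(err)
	}
	_, ok := findKey(doc, "uk1")
	fmt.Println(ok) // true
	// Rotation: once every jot signed with uk0 has expired, drop uk0 from
	// the doc; lookups for it then fail and those jots become invalid.
	_, ok = findKey(doc, "uk2")
	fmt.Println(ok) // false
}
```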
    so yeah so you can see that we have n
    29:44
    keys
    29:45
and so you can see how rotation works
    29:47
    really easily in this situation because
    29:49
    basically what you do is
    29:50
    you want to rotate out
    29:52
    the um key that starts with uk zero well
    29:56
    what you do is you create another key
    29:58
    you might start with uk one
    30:00
    and then you start signing jots with
    30:02
    that key
    30:03
    and then after
    30:05
    a
    30:06
    period of time is expired right however
    30:08
long the jots that you signed
30:11
however long your jots are signed for
    30:14
    like five minutes ten minutes after that
    30:16
    time period of time is expired you can
    30:18
    actually
    30:19
    delete
    30:20
    the key that starts with uk 0. so that’s
    30:23
    how rotation would work
    30:25
refresh tokens are a key part of any jot
    30:27
    based system no pun intended
    30:30
    and
    30:31
    what they do is
    30:33
    jots are supposed to be short-lived
    30:35
    refresh tokens can live for a long time
    30:38
    and refresh tokens are basically used to
    30:40
mint new jots
    30:42
and you use refresh tokens
30:45
because
30:46
jots are stateless you have
    30:49
    kind of two unappealing alternatives if
    30:52
    you don’t have refresh tokens in the
    30:53
    picture
    30:54
    the first alternative is that you have
    30:56
very short-lived jots which is very
30:59
secure but any time a jot expires the
    31:01
    user has to re-authenticate and go back
    31:04
    to that user api and say hey i’m
    31:06
    actually dan here’s my username and
    31:08
    password
    31:09
    that’s not great ux
    31:11
    the alternative is i’m going to have
    31:13
    long lived jots days months years
    31:16
    well now if anybody finds that jot
    31:21
    then they can use it for a very long
    31:23
    period of time
    31:24
    and for whatever sinister purposes they
    31:26
    may have
    31:28
    so refresh tokens basically let us
    31:31
    31:32
    have the best of both worlds so how does
    31:35
    this work in practice
    31:37
    well what we have is
    31:39
    the
    31:41
    basically the user api creates a json
    31:43
    web token and a refresh token at the
    31:45
    same time
    31:46
    and passes those back to the client the
    31:48
    client now presents that json web token
    31:51
to the todo api
    31:52
    and let’s say our token’s good for five
    31:54
    minutes
    31:55
    so for five minutes we’re making all
    31:57
    these requests everything’s hunky-dory
    32:00
    then
    32:02
at the end of five minutes the todo api
    32:05
    when it’s validating the json web token
    32:07
    notices it is expired and therefore the
    32:10
    access is not granted
    32:12
    the
    32:13
    client
    32:14
    can notice that request
    32:16
    failure
    32:17
    and
    32:18
    say hmm well i have a refresh token now
    32:21
    so
    32:22
    i know my job is no good but i’m going
    32:24
    to present my refresh token to the user
    32:26
api the user api can look at that refresh
32:29
token determine that the user is
    32:32
    still a valid user in the system and
    32:34
    create a new json web token
    32:37
that gets passed back to the client
    32:39
and then the
32:42
new jot is good for another five minutes
    32:44
    and that’s what’s presented as the drew
    32:45
    api
    32:46
    and the user has been basically silently
    32:48
    re-authenticated
    32:50
    without
    32:51
    ever
    32:52
    being bugged for the username and
    32:53
    password
    32:56
    by the way that means that refresh
    32:57
    tokens are very powerful and you should
    32:59
    secure them at least as well as you
    33:01
    secure your jots
    33:04
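The silent re-authentication loop described above can be sketched as pure logic, with the HTTP calls to the todo api and the user api abstracted away as functions. All names here (`callWithRefresh`, `errExpired`, the token strings) are illustrative, not from any library.

```go
package main

import (
	"errors"
	"fmt"
)

var errExpired = errors.New("jot expired")

// callWithRefresh models the client's retry loop: try the request with
// the current jot; if it failed because the jot expired, exchange the
// refresh token for a new jot and retry once. The do and refresh funcs
// stand in for the real HTTP calls to the todo api and user api.
func callWithRefresh(
	do func(jot string) error,
	refresh func(refreshToken string) (string, error),
	jot, refreshToken string,
) error {
	err := do(jot)
	if !errors.Is(err, errExpired) {
		return err // success, or a failure a new jot would not fix
	}
	newJot, err := refresh(refreshToken)
	if err != nil {
		return err // refresh token revoked: the user must log in again
	}
	return do(newJot)
}

func main() {
	do := func(jot string) error {
		if jot == "fresh" {
			return nil
		}
		return errExpired
	}
	refresh := func(refreshToken string) (string, error) { return "fresh", nil }
	// The stale jot fails, the refresh token mints a fresh one, the retry
	// succeeds: the user was silently re-authenticated.
	fmt.Println(callWithRefresh(do, refresh, "stale", "rt-1"))
}
```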
    all right uh talking about distributed
    33:06
    systems a little bit if you’re going to
    33:07
    be deleting your users you don’t have
    33:10
on delete cascade anymore
    33:13
    so
    33:14
    what are you going to do
    33:16
    you can actually set up webhooks so that
    33:19
    interested services
    33:21
    can be notified
    33:23
    when a
    33:26
    user is deleted
    33:28
    and then they can do that cleanup right
    33:29
    so
    33:30
    the client sends a delete request the
    33:32
    user api the user api
    33:35
    fires off web hooks to all the different
    33:37
    apis that are part of the system that
    33:39
    have user data
    33:41
    those
    33:42
    uh apis or services can then do the
    33:44
    deletion of of user data from their data
    33:47
    stores
    33:49
    when you’re talking about logging out
    33:52
    it’s really about revoking the refresh
    33:54
    token it’s not about revoking jots we’re
    33:56
    talking a little bit about revoking jots
    33:58
    but
    34:00
    really what you’re doing is you’re
    34:01
    revoking the refresh token in a
    34:03
jot-based system
    34:05
so what does that look like
    34:07
    the user clicks log out
    34:09
    the
    34:11
    client fires off a request the user api
    34:14
    says hey this refresh token is no longer
    34:16
    good because the user’s logged out
    34:19
    and
    34:20
    then they delete the jot and the refresh
    34:22
    token from
    34:23
    their
    34:25
    storage system
    34:28
so can you revoke jots the answer is
    34:30
    sort of
    34:31
    you have a couple options you can if a
    34:34
    user’s job has been stolen or you just
    34:35
    want to make sure the job is invalid
    34:38
    you can rotate your keys
    34:39
    and if you rotate those keys without
    34:42
    leaving the public
    34:44
    key
    34:45
    the corresponding public key available
    34:47
in that jwks doc
    34:49
    then
    34:50
if somebody comes in with
34:53
a jot signed with a certain key
34:55
identifier and the corresponding
    34:57
    public key can’t be found
    34:59
    it’s going to be invalid jot
    35:01
    so that is one option it does have the
    35:04
    effect of basically invalidating an
    35:07
    entire class of json web tokens
    35:09
every json web token that was signed with
    35:11
    that key
    35:13
    you can wait it out which isn’t really
    35:17
    a way to revoke things but it is a valid
    35:19
    choice you can also make jot lifetimes
    35:21
    pretty short
    35:24
    so that is one choice to deal with this
    35:25
    situation
    35:27
    the kind of
    35:30
    middle ground is to maintain a deny list
    35:33
    and so what this does is actually pushes
    35:35
    state back into the api so they’re now
    35:38
    responsible for things like
    35:40
keeping track of a list of invalid
35:42
jots
    35:45
    so
    35:46
    every time
    35:47
    a
    35:48
    refresh token is revoked which basically
    35:50
    means the user’s logged out
    35:52
    there’s a web hook fired off and the zoo
    35:54
    api can keep a list of those web hooks
    35:57
    so when it’s doing its job validation
    36:00
    it doesn’t just check the audience
    36:02
    it doesn’t just check the expiration
    36:04
    time it doesn’t just check the issuer
    36:06
    and the signature it actually will check
    36:09
    this stateful list of jots that have
    36:13
    been
    36:14
    invalidated and if it finds a jot
    36:18
    identifier on that list
    36:20
    then the
    36:21
todo api should treat that just the same
36:23
as if the jot has expired
    36:27
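A deny list of the kind described, keyed by a jot identifier such as the jti claim, could look like this sketch (the type and method names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// denyList holds identifiers (e.g. the jti claim) of jots whose refresh
// token was revoked, fed by the logout webhook. Jot validation consults
// it in addition to checking signature, issuer, audience, and expiry.
type denyList struct {
	mu  sync.RWMutex
	ids map[string]struct{}
}

func newDenyList() *denyList {
	return &denyList{ids: make(map[string]struct{})}
}

// Revoke records a jot identifier when the revocation webhook fires.
func (d *denyList) Revoke(jti string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.ids[jti] = struct{}{}
}

// Revoked reports whether a jot identifier is on the list; if so, the
// jot is treated exactly as if it had expired.
func (d *denyList) Revoked(jti string) bool {
	d.mu.RLock()
	defer d.mu.RUnlock()
	_, ok := d.ids[jti]
	return ok
}

func main() {
	dl := newDenyList()
	dl.Revoke("jot-123") // the user behind jot-123 logged out
	fmt.Println(dl.Revoked("jot-123"), dl.Revoked("jot-456")) // true false
}
```

Entries only need to live as long as a jot's maximum lifetime, since after that the expiry check rejects the token anyway; that keeps the reintroduced state small.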
so as far as jots and golang
36:29
i’m wrapping things up i hope that
    36:31
    this has given you
    36:33
    a good idea of when it makes sense to
    36:35
    use jots and what the tradeoffs are with
    36:38
    using them
    36:39
    if you’re thinking about using them in
    36:41
your golang program
    36:43
    you know i would just say
    36:45
    look at the jot
    36:48
library right look at the jot module and
36:52
well it’s actually a library sorry um
    36:55
    the library is as i mentioned relatively
    36:58
    new in terms of where it lives on github
    37:01
    but it is
    37:02
    old in terms of the code right so it’s
    37:04
really
37:06
well written battle tested code
    37:08
make sure you check your claims
    37:09
    like i showed you in that sample code
    37:12
    if you’re checking it if you’re doing it
    37:14
    kind of at the authorization level then
    37:17
depending on how your golang
    37:19
    code is written you could use
    37:22
    a filter if it’s coming community or
    37:24
    http you also can
    37:26
    leverage there a lot of gateways out
    37:29
    there
    37:30
    they can plug in and do some jot checks
    37:33
    so i know kong does that i’m working
    37:36
    with
    37:37
haproxy right now
    37:39
    so
    37:41
actually you can basically check
37:44
json web token claims before
    37:46
    your services ever see them if you want
    37:49
    to kind of externalize that level of
    37:51
validation
    37:53
    so a few more tools
    37:55
    there’s that code that i
    37:57
    said was available i’ve written a couple
    38:00
    thousand words on building a secure json
    38:02
    with token and i’ll make these slides
    38:04
    available i’ll send them to ari
    38:07
    after or maybe i’ll just post them on
    38:09
    the meetup after
    38:11
    probably tomorrow morning
    38:13
    so
    38:14
    just want to say thank you that’s in the
    38:16
    presentation if you want to learn more
    38:18
    about fusion auth and how we think about
    38:20
    oauth there’s a free ebook it’s about 80
    38:23
pages talks about jots and
    38:26
    oauth scenarios we’ve seen in the real
    38:28
    world
    38:29
    and then
    38:30
    i will expect to hear from ari about the
    38:33
    t-shirt uh so if you want a t-shirt make
    38:36
    sure you dm ari and say hey i’m
    38:39
    interested in being um in the drawing
    38:41
for the fusionauth t-shirt
    38:43
    and thank you and i will take a couple
    38:45
    of minutes of questions if we have time
    38:48
    thanks dan well actually i’m gonna i’m
    38:50
    gonna i’m gonna make my life easy here
    38:52
    dan what i’m gonna do is i’m actually
    38:53
    just just send me your email address and
    38:55
    what i’m gonna do is i’m gonna forward
    38:57
    uh your email information to dan and
    38:59
    then you’ll you’ll exchange personal
    39:01
    information there as opposed to me uh
    39:03
    taking addresses and shirt size you can
    39:06
    do work with dan on that directly but
    39:08
    what i’ll do is i will be happy to uh
    39:10
    put your name on uh i’m gonna put
    39:12
    everyone that sent in for the jet brains
    39:15
    into the contest already unless you just
    39:16
    refuse to take a t-shirt from them um
    39:19
    and how many how many pieces of swag are
    39:21
    you giving out then
    39:22
    just a t-shirt
    39:24
    just one t-shirt one t-shirt okay you
    39:26
    got it so one person will win a t-shirt
    39:28
    and dan will send it to them and we’ll
    39:30
    do a wheel spin at the end for that so
    39:32
    just uh send me um send me if you uh
    39:36
    would like to be in that
    39:38
    um as well thanks
    39:39
    and any questions for dan that you want
    39:42
    to either put in the chat or uh feel
    39:44
free to take yourself off mute and ask
    39:45
    directly
    39:52
    awesome i must uh i must have killed it
    39:55
    well
    39:57
    uh thank you all
    39:59
    i i really appreciate the chance to talk
    40:01
    to you and um congratulations on the
    40:03
    renaissance of your of your
    40:05
    uh meetup group so
    40:07
    thanks dan we really appreciate you
    40:08
    taking the time and coming out and
    40:10
    sharing with us
    40:11
    um
    40:12
    so
    40:13
    before we go on to our next speaker um i
    40:16
    want to
    40:17
    uh two things one is i’m gonna put the
    40:20
    uh the jfrog raffle i mentioned before i
    40:22
    just dropped that in this in the i call
    40:24
    the slack with the chat
    40:26
    for those who want to enter that
    40:28
    um and uh also i already told you to
    40:32
    send me the uh your email address if
    40:35
    you’d like to be in a spin for the
    40:37
    t-shirt
    40:38
    and also i’d just like to introduce our
    40:40
    next speaker it’s not like i’m using a
    40:41
    teleprompter or anything but i want to
    40:44
introduce to you uh sudhindra rao
    40:46
    currently works for jfrog as a
    40:48
    development manager to help build
    40:50
    communities and partnership to provide
    40:51
    visibility into jfrog’s liquid software
    40:54
    mission he’s been working as a developer
    40:56
    and architect for critical business
    40:58
    applications developing in multiple
    41:00
    languages including go ruby and java and
    41:04
    after work after having worked in a
    41:06
    traditional application development
    41:08
environment sudhindra became part of
    41:10
    the pivotal team and built their
    41:12
    kubernetes platform offering
    41:14
    uh he also has diverse project
    41:16
    experience including building an
    41:17
    application for the largest publishing
    41:19
    company in chicago a large data center
    41:21
    automation effort a large auctioning
    41:24
    system and a voter campaigning
    41:26
    application for the us national election
    41:28
    so i guess you’ve been around a little
    41:29
bit sudhindra huh
    41:33
    yeah a few projects that’s great without
    41:35
    without any further ado thanks for
    41:36
    coming out tonight and we’re looking
    41:38
    forward to uh your talk as well so uh
    41:40
    please take it away
    41:42
    um thank you ari thank you for that
    41:44
    introduction
    41:45
    come on let me start sharing my screen
    41:53
    i just want to make sure that the right
    41:54
    slides are up
    41:55
    yeah
    41:56
    okay
    41:59
    are you able to see my screen and i’ll
    42:01
    move to presenter mode
    42:03
    yeah
    42:04
    are you able to see my screen given give
    42:06
    me a thumbs up or
    42:08
    a thumbs down if you can’t see it great
    42:10
    great
    42:11
    um thank you guys for having me um and
    42:14
    i’m glad uh to be here this is my first
    42:17
    local meetup i’m i’m based in the bay
    42:19
    area uh and uh this is the third time
    42:22
    i’m doing this um so i’m i’m hoping that
    42:25
    this is much more refined than the first
    42:26
    time that uh that i did this and i was
    42:29
    much more nervous at the time
    42:31
    um so this is uh
    42:33
    a talk i wrote up about my experience
    42:36
    with golang and dabbling into real
    42:38
    concurrency
    42:40
    and just understanding how it how it can
    42:42
    be used
    42:43
and this by no means builds
42:45
a real application but it builds the
42:48
system that builds the real application
    42:50
    um so there’s a little bit meta stuff
    42:52
    going on there
    42:58
    yeah i think okay so this is a little
    43:00
    bit about me um i
    43:02
    currently uh do more management than uh
    43:05
    than writing code i have done multiple
    43:07
    languages and i came to golang
    43:10
    as a skeptic uh because i i wrote ruby
    43:12
for a bunch of years and i found go to
    43:14
    be
    43:15
    not so natural to my thinking and i i
    43:19
    really took took my time to to get used
    43:21
    to it and get familiar with what it can
    43:23
    do really well
    43:27
    so when when i was writing this talk i
    43:30
    felt that the talk was
    43:32
    becoming quite technical and i wanted to
    43:34
    find a good metaphor to sort of describe
    43:38
    the problems that we were facing and and
    43:41
    explain explain them you know in a plain
    43:43
    and simple way and that’s when i thought
    43:45
    about this movie that i had i had seen i
    43:48
    don’t know if
    43:49
    any of you have had a chance to see it
    43:51
    but i would totally recommend it and
    43:53
    here is why i thought you know it’s a
    43:55
    good metaphor for uh when talking about
    43:57
    concurrency
    43:58
as in concurrency in this
    44:01
    movie events happen out of order
    44:04
    uh they uh the
    44:06
    they’re looking for an antagonist who is
    44:08
    causing pain which is what our team was
    44:11
    going through
    44:12
    uh and as with concurrency all things
    44:14
    fall in place uh in the end
    44:17
or as with most concurrency
    44:20
    applications where things fall in place
    44:22
    same thing happens in this movie uh so i
    44:25
    thought that there is a good metaphor so
    44:26
    you’ll see some images that are
    44:28
    representative of
    44:29
    what what i felt was appropriate uh
    44:33
    as what happens in the movie versus what
    44:35
    we were experiencing
    44:38
uh so as things happen out of order
    44:41
    this talk is also out of order uh but as
    44:44
    i promised things will come
    44:46
    come
    44:46
    together at the end so thank you for
    44:48
    being here thank you for you know
    44:51
    encouraging me to give this talk um
    44:52
    let’s see where we go
    44:55
a few references just off the bat uh
    44:57
    you will uh
    44:59
if you go through this talk and want to
    45:00
    implement something uh that i talked
    45:02
    about uh look look for the go
    45:04
    programming language book the effective
    45:06
    go book is really good uh every time i
    45:09
    had a problem figuring out how some
    45:11
things work uh the go
    45:14
    playground was really useful uh because
    45:17
    it was able to reproduce
    45:19
    the issue that i could face in in real
    45:21
    code
    45:22
    in the playground uh
    45:24
    context as well uh we used a couple of
    45:26
    technologies that were
    45:29
    that were part of the cloud foundry
    45:30
ecosystem uh bosh is one of them so
45:33
we’ll talk a little bit about bosh
    45:35
    and we were building a kubernetes
    45:36
    platform and we were building it by
    45:38
    using uh while using the ci cd system
    45:41
    called pivotal concourse so if you’re
    45:43
    looking for that the links are in the
    45:45
    presentation
    45:47
    uh so
    45:49
i should tell you off the bat that we
    45:51
    did have success we actually were
    45:53
    fighting a build time that
    45:56
    that that took six hours to complete
    45:59
    at the end of this effort it went down
    46:01
to two hours uh which was a 400
    46:04
    improvement i know you’re calculating in
    46:06
    your head it doesn’t actually add up to
    46:08
    400 improvement but i’ll explain when we
    46:11
    when we go through this
    46:12
how it actually became 400
    46:16
    uh we were able to trace uh what was
    46:19
    going wrong during a failed build which
    46:20
    was not
    46:22
    possible before
    46:23
    and given that we were able to fail fast
    46:26
    that means we were able to fix things
    46:27
    faster so this was overall the the uh
    46:31
    uh the impact was positive
    46:34
    like uh like the character in this movie
    46:36
    he finds
    46:37
    his wife’s killer
    46:40
    right so what were the main characters
    46:41
    in our story uh just to reflect
    46:45
    we were building a kubernetes platform
    46:47
    uh one
    46:49
    and this was meant to be you know you
    46:51
    you drop install it and then this
    46:54
    platform manages the virtual machines
    46:55
    and then you can bring up a kubernetes
    46:57
    platform on it
    46:58
    if you wanted to update that all you
    47:00
    would have to do is update this platform
    47:03
    and uh all the related libraries and
    47:06
    dependencies would be updated so if
    47:08
    there was a
    47:09
    a cve in one of the networking
    47:11
    components all you have to do is update
    47:14
    our platform and the cv we will be fixed
    47:17
    so you don’t have to worry about the
    47:19
    hundreds of binaries that came with it
    47:21
    and you don’t have to patch it yourself
    47:23
    so we really
    47:24
    were proud of the day two operation
    47:26
    support right day one is the easy one
    47:28
    where you you give them a good install
    47:31
    path but continuing to give them uh
    47:34
    forward compatibility to uh to new new
    47:37
    bugs fixes and patches
    47:39
    was was where we um
    47:42
    where we’re really good at
    47:44
    and uh during this time when we were
    47:46
    working on this we were starting to
    47:49
    support windows and if you have used
    47:51
    windows in any uh cloud uh
    47:54
    cloud scalable kubernetes way uh you you
    47:57
    would know that it is quite complicated
    48:00
    to support windows itself right
    48:02
    so that added an extra uh you know
    48:04
    complexity in in terms of how we were
    48:06
    building this platform
    48:08
    uh another character was that all the
    48:11
whole platform was driven by bosh uh
48:14
bosh
    48:15
    is a service that uh that helps you
    48:19
    deploy a cloud maintain it scale that
    48:21
    cloud
    48:22
    and diagnose issues with it and repair
    48:25
    it as well
    48:26
and bosh also allows you to you know
    48:29
    deploy things that are compatible with
    48:31
with each other you can
48:33
ship
48:34
source code to it and bosh will
    48:36
    actually build it to the target
    48:38
    operating system so there are some
    48:39
niceties that bosh does
    48:41
    so that you don’t have to do all the
    48:42
    hard work all the time
    48:44
right and when bosh does any of these
    48:47
    it also maintains the cache so that if
    48:50
    you are going to do that again and again
    48:51
    like if you’re going to spin a similar
    48:53
    vm it it doesn’t create the image it
    48:56
    just uses the cache and publishes that
    48:58
    vm and so that you get faster scaling so
    49:00
there are some nice things about bosh
    49:02
    and it was it was uh it was good to have
    49:04
    and leverage its uh
    49:06
    properties
    49:08
    and then uh to bring this to to put it
    49:11
    all together like
    49:13
    we had a continuous delivery system
    49:15
    where we had everything automated
    49:17
    imagine building such a system that
    49:19
    works on azure aws vmware vsphere red
    49:23
    hat
    49:25
    and
    49:26
    then and gcp
    49:28
    and growing and we were being asked to
    49:31
    publish this for uh more cloud systems
    49:34
    and uh one of the things that we were we
    49:36
    were doing we were supporting n n minus
    49:38
    one n minus two so that people had an
    49:40
    upgrade path uh so that if there are two
    49:43
    two versions behind we would still
    49:44
    support them and they would they would
    49:46
    have an upgrade for path from that to
    49:47
    the latest and then get the latest cv
    49:50
    fixes and keep their
    49:51
    system up to date
    49:53
    uh at the time we started this project
    49:55
    we had about 49 different acceptance
    49:58
    environments or variations like oh you
    50:00
    want to you want to build uh this system
    50:03
    with this type of networking on this
    50:05
    type of cloud
    50:07
    and and networking is one uh one
    50:10
    variation storage type is another
    50:12
    variation etcetera etcetera right so we
    50:13
    had 49 of them at the time we were
    50:15
    building this so that’s quite complex
    50:18
    and every single thing
    50:20
    every single variation takes a number of
    50:22
    hours to just bring up to just
    50:25
    invoke and then once it is invoked then
    50:27
    we are going to run tests on it so that
    50:29
    sort of adds up and when we have run all
    50:33
    these systems even in parallel it takes
    50:35
    multiple hours
    50:39
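Running variations in parallel like this is the classic sync.WaitGroup pattern the talk's abstract mentions; a minimal sketch, with made-up variation names and a stand-in `runVariation` function in place of triggering a real CI pipeline:

```go
package main

import (
	"fmt"
	"sync"
)

// runAll fires every acceptance variation in its own goroutine and waits
// for all of them with a sync.WaitGroup, so total wall time is the
// slowest variation rather than the sum of all of them.
func runAll(variations []string, runVariation func(string) string) []string {
	results := make([]string, len(variations))
	var wg sync.WaitGroup
	for i, v := range variations {
		wg.Add(1)
		go func(i int, v string) {
			defer wg.Done()
			results[i] = runVariation(v) // each goroutine writes its own slot: no data race
		}(i, v)
	}
	wg.Wait()
	return results
}

func main() {
	// Hypothetical cloud/networking variations, not the team's real 49.
	variations := []string{"gcp/networking-a", "aws/networking-b", "vsphere/storage-c"}
	out := runAll(variations, func(v string) string { return v + ": passed" })
	fmt.Println(out)
}
```

Capturing each result in its own indexed slot (rather than appending to a shared slice) is what keeps the concurrent writes safe without a mutex.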
    uh so what was happening what was going
    50:40
    on
    50:42
    every morning
    50:44
    when we were talking at stand-up this is
    50:46
    how it felt it felt like we were running
    50:48
    really really fast
    50:49
    uh this character wakes up in the movie
    50:51
    uh and realizes that
    50:53
    he’s running
    50:55
    and first he thinks that he’s running
    50:57
    after somebody
    50:59
    you know in a bit he realizes that
    51:01
    somebody’s running running for him with
    51:04
    a gun in their hand
    51:05
    and and that’s the kind of uh you know
    51:08
    feeling i got when i was on this team
    51:09
    and we were running with with somebody
    51:11
    else pushing us
    51:12
    and that was really not
    51:14
    not good for the morale not good for how
    51:16
    how you build software
    51:18
    and uh
    51:19
    many of the issues were because the
    51:22
    build time was so long
    51:24
    even after we had gone through
    51:26
    the build we would get one binary which
    51:28
    is not not necessarily the good binary
    51:30
    because it would
    51:32
    fail multiple times many times
    51:34
    and then
    51:37
    we we ca we sort of calculated and it
    51:39
    took about 70 70 hours minimum to test a
    51:42
    tile tile was a combination of these
    51:44
    these binaries as we called it and every
    51:47
    single failure caused this uh 70-hour
    51:50
    cycle which was very very painful right
    51:52
    and uh given that it was six plus hours
    51:55
    sometimes six hours sometimes seven
    51:56
    hours so you started a build in the
    51:58
    morning and then by the time it’s time
    52:00
    for you to go home
    52:01
    you have received the result but you
    52:03
    don’t have enough time to fix it
    52:05
    and
    52:05
    the the first gut response is okay i’m
    52:08
    just going to rerun it i’ll check check
    52:10
    on it next morning and that’s the cycle
    52:13
    that we were we were going through and
    52:14
    then next morning again we would either
    52:16
    face the same failure or we would have
    52:19
    an um you know
    52:21
    unknown green we would just have a green
    52:22
    and we wouldn’t know why what went wrong
    52:24
    in the previous build why is it not
    52:25
    reproducible so we had those issues we
    52:27
    didn’t have the transparency
    52:31
    and this is what we are doing
    52:32
    we had a list of dependencies we had a
    52:34
    like i think of think of that tile
    52:36
    containing all these features and and
    52:38
    that being combined uh to to work with
    52:41
BOSH uh to create this kubernetes uh
    52:43
    platform right uh so for each of these
    52:47
    binary we had to check whether it’s
    52:48
    already been pre-compiled then we can
    52:50
    just use it
    52:51
    and make the installer tile
    52:53
    if it is pre-compiled find it in the
    52:55
    cache there’s some time to find in the
    52:56
    cache
    52:57
    sometimes we also have it pre-compiled
    52:59
    but it is not in the cache it is stored
    53:01
    in the cloud storage so we pull it from
    53:03
    there
    53:03
    if all of this fails then we compile it
    53:06
    and the compilation takes takes a while
    53:08
    because when you’re compiling against a
    53:09
    linux version you are pulling a large
    53:12
    linux stem cell
    53:14
    and then building it on top and then
    53:16
    that that adds up so overall it took
    53:18
    about two hours to just build the binary
    53:21
    and then then you would throw the binary
    53:23
    on to different variations and
    53:25
    that process would take four plus hours
    53:28
    so that is that is quite long right
    53:32
    uh and
    53:33
    and in in all of these places i think we
    53:36
    are so what would happen was we we used
    53:39
    to go through these in um in a
    53:41
    sequential manner and we did not and we
    53:44
    did not incorporate uh you know the
    53:46
    optimizations that we could for these
    53:48
    different types of latencies uh so we
    53:50
    are pulling
    53:51
    images or binaries from uh from the
    53:55
    cloud and different clouds had different
    53:57
    latencies uh we were pulling images from
    54:00
    our network itself the internal network
    54:01
    that had latency compilation took its
    54:04
    own time uh sometimes just moving from
    54:06
    one uh
    54:08
    hard disk to another took some time and
    54:10
    then when we were building something
    54:12
    specifically for windows because of how
    54:15
    it was configured there was some os
    54:17
    specific latency right so there are
    54:19
    there are all these latencies that are
    54:20
    actually hurting us but we have not we
    54:22
    have not addressed them right and we are
    54:24
trying to find who our John G. is
    54:28
    we had some tools that you know
    54:31
    were at our disposal
    54:35
    and something that i was learning at the
    54:37
    time and something that we were
    54:38
    discussing uh was how could we leverage
    54:41
    concurrency uh to
    54:43
    to make this to
    54:45
    uh to attack all these all these
    54:47
    different types of concurrency uh issues
    54:49
    that we were having right
    54:51
    uh i’ll go through a couple of
    54:53
    slides of on just the basics of
    54:55
    concurrency and the one as it applies to
    54:57
    uh what we were doing
    55:00
    uh
    55:01
    golang actually has a uh first first
    55:04
class primitive called goroutines and
    55:06
    they support concurrency which is not
    55:09
    which is not like
    55:11
    it’s supported in other languages uh
    55:13
    they are very lightweight threads uh as
    55:16
    if you can think of think of it that way
    55:18
    they are easier to reschedule they have
    55:20
    a very light footprint uh and and uh
    55:23
    they are supported natively by the uh by
    55:25
    the language um
    55:27
    so it is it is easier to manage and and
    55:30
    go go has some nice uh
    55:33
    semantics around it
    55:35
    and by default they don’t
    55:37
    they don’t offer any thread local
    55:39
    storage so then a lot of you know race
    55:41
    conditions are eliminated just by the
    55:43
    design
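A minimal sketch of what that makes cheap: one lightweight goroutine per dependency, with results collected over a channel. The task and names here are illustrative, not the project's real code:

```go
package main

import "fmt"

// fetchBinary stands in for one latency-bound task from the talk
// (pulling a pre-built binary, say); names are hypothetical.
func fetchBinary(name string, results chan<- string) {
	results <- "fetched " + name
}

func main() {
	deps := []string{"networking", "storage", "kubernetes-core"}
	// Buffered so every goroutine can send without waiting on a reader.
	results := make(chan string, len(deps))

	for _, d := range deps {
		go fetchBinary(d, results) // one lightweight goroutine per dependency
	}
	// Collect exactly one result per dependency.
	for range deps {
		fmt.Println(<-results)
	}
}
```

Spawning thousands of these is routine in Go, which is what makes per-dependency concurrency practical here.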
    55:46
    then and the other concept that we
    55:48
    wanted to leverage was channels and this
    55:50
    is something i learned freshly when when
    55:53
    i started doing this
    55:54
    uh so go uh encourages uh
    55:57
    you to
    55:59
    share uh
    56:01
    share memory by communicating instead of
    56:03
    sharing memory uh itself so that that
    56:06
    eliminates all the all the mutexes and
    56:08
    and lock handling that you have to do uh
    56:10
    there are a couple of different types of
    56:12
    channels um that
    56:14
    that are available uh buffered and
    56:16
    unbuffered and will uh
    56:17
    uh
    56:18
    and uh buffered obviously means you have
    56:20
    you you specify size and you can keep on
    56:22
    adding stuff stuff to the channel or
    56:25
messages to the channel unbuffered means
56:27
the capacity is zero and every send will
56:29
block until a receiver is ready to take
56:31
the message off the channel
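The difference fits in a few lines; this is a generic illustration, not code from the project:

```go
package main

import "fmt"

func main() {
	// Buffered: capacity 2, so two sends succeed with no receiver ready.
	buf := make(chan int, 2)
	buf <- 1
	buf <- 2 // a third send here would block until someone received

	// Unbuffered: capacity zero, so every send blocks until a receive;
	// the send must run in another goroutine or main would deadlock.
	unbuf := make(chan int)
	go func() { unbuf <- 42 }()

	fmt.Println(<-buf, <-buf, <-unbuf) // 1 2 42
}
```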
    56:35
    so
    56:37
    with go routines and channels there is
    56:39
    uh
    56:40
    as we were brainstorming we were talking
    56:42
    about what is the other concept that we
    56:45
    could we could use to to build the
    56:47
    entire system and that is when we found
    56:50
    um
    56:51
    uh
    56:52
    and i think one of the people on our
    56:53
    teams knew about it but had never never
    56:55
    really implemented it
    56:57
    this concept that go defines as
    56:59
    pipelines pipelines is a combination of
    57:02
    go routines
    57:03
    connected through these channels
    57:05
    where
    57:06
coordination
57:09
between stages
57:10
is achieved not by using
    57:13
    locks or not by using any any gating
    57:15
    mechanisms uh but just by improving the
    57:18
    flow across the across the system
    57:21
    right so you don’t have to you don’t
    57:22
    have to bookkeep uh bookkeep your state
    57:25
    you don’t have to have anything that
    57:27
    would cause a race condition and that’s
    57:29
    that’s the aim that that pipelines has
    57:32
    when when you build that system
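A minimal pipeline in that shape: stages connected by channels, each stage closing its output so termination propagates downstream with no locks or shared state. This is a generic sketch, not the project's code:

```go
package main

import "fmt"

// gen is the first stage: emit each input on a channel and close it;
// the close is what lets downstream stages finish cleanly.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is a middle stage: read until the input closes, then close
// the output, propagating termination without any bookkeeping.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// The channel sends are the only synchronization between stages.
	for v := range square(gen(1, 2, 3)) {
		fmt.Println(v) // 1, then 4, then 9
	}
}
```

Adding another stage (a downloader, a publisher) is just another function of the same shape wired in between.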
    57:36
    so and and it was odd for me at the time
    57:39
    when i was learning uh in the go
    57:41
    programming language book there’s just
    57:43
    one page or one and a half page which
    57:44
    which has a short description and one
    57:47
    example i think of five lines but i
    57:49
    learned more by building one on the go
    57:52
playground than
57:55
reading that page and i think uh
    57:58
    there is there is more to be said about
    58:00
    it uh than just that
    58:03
so now people will ask what about
58:05
WaitGroups or other synchronization
    58:07
    mechanisms why wouldn’t you just use
    58:09
    them
    58:10
    because they’re all those are also
    58:12
    available in golang
    58:14
    so this is something that we learned by
    58:16
    doing we already had a couple of places
    58:20
    where go routines were used with
    58:22
    async
    58:23
await kind of things with WaitGroups
    58:25
    and
    58:26
    building synchronization around that
    58:28
    and
    58:29
    we we actually start sometimes we ran
    58:32
    into deadlocks and
    58:34
    the often the culprit was that we forgot
    58:37
uh to call uh WaitGroup Done or we
58:39
did not put the defer in the right
    58:41
    place and you know a thread was hanging
    58:43
    and waiting for for something else to
    58:45
    fix some other go routine to finish and
    58:47
    uh it became a mess
    58:49
    and to avoid this i think pipelines uh
    58:52
    the pipelines technique helps a lot
    58:56
    um
    58:57
    and when we when we set out to actually
    59:00
    build this build this pipeline to solve
    59:03
    all these issues
    59:04
    we decided that we don’t want to we
    59:06
    don’t
    59:07
    we don’t want to forget what we learned
    59:10
    about writing good code writing it has
    59:12
    we will continue to write test-driven
    59:13
    development we’ll continue to refactor
    59:15
    things we’ll continue to focus on
    59:17
    encapsulation and you know separation of
    59:20
    concerns and responsibility
    59:23
    um how does
    59:24
    so how did the new code look like and
    59:27
    let’s look at the new code because you
    59:29
    know again we are going out of order
    59:32
    so again going back to what the basic
    59:34
    algorithm was that
    59:36
    we needed to combine a bunch of binaries
    59:39
    and we needed to go through
    59:42
    different locations that such a binary
    59:44
    might be available if not we would need
    59:46
    to build it and uh if we needed to build
    59:48
    it then we would need to start from a
    59:50
    source uh find it in cache if not then
    59:52
    compile and then store it and store it
    59:54
    in the cache and then store it on the
    59:57
    cloud and then from the installer right
    60:00
    and this was complicated enough for one
    60:02
    target os and that’s when we threw it
    60:04
    through at it another os uh windows and
    60:07
    this became twice
    60:09
    if not
    60:10
    yeah it twice more difficult right
    60:12
    because we have to do the same thing
    60:13
    over and over and the algorithm that we
    60:15
    had
    60:16
    was was not able to handle that because
    60:18
    it did not have did not consider all the
    60:20
    different latencies that were in the
    60:22
    system
    60:24
    uh so how how does the pipeline look
    60:27
    so then once we
    60:30
    once we identified these different
    60:31
    latencies we actually built a pipeline
    60:34
    that went from one stage to the other
    60:36
    and uh when when it sent
    60:39
    a candidate from one stage to the other
    60:42
    it use channels and since channels are
    60:45
    already a blocking mechanism
    60:48
    they will block until until messages
    60:50
    have been read on them and you don’t
    60:52
    need any other synchronization mechanism
    60:55
    so how did it look we read the number of
    60:58
    uh releases uh or what we called as
    61:00
    releases or binaries uh those
    61:02
    dependencies we read them and we
    61:05
    we pre uh created a bunch of
    61:09
    parallel uh
    61:11
parallelizable downloaders
61:13
compilers or publishers because those
    61:15
    were the activities those were the
    61:16
    blocks that would form the pipeline uh
    61:19
    once we once we created that
    61:22
    we we identified what is that object
    61:24
    that is going to go through this process
    61:26
    so for each dependency we called it a
    61:28
    release candidate
    61:30
    if if that has to be included in the
    61:32
tile then we need to know the following
    61:35
    we need to know where the binary is a
    61:37
    url to where it is actually stored
    61:40
    whether building that or going through
    61:42
    that system or going through the flow
    61:45
    was a success or a failure so that if
    61:47
    any of this is failure we can just trip
    61:49
    and have fast feedback
    61:51
    and then an embedded result object which
    61:54
    contained the path where the source came
    61:56
    from where this the artifact is and if
    61:59
    there like if there was a failure then
    62:01
    the errors would be embedded in this so
    62:03
    so this was our encapsulated release
    62:05
    candidate object which maintained the
    62:07
state about that release candidate i think
    62:09
    this was
    62:10
    this was one of the key revelations we
    62:12
    had once we had had this design in place
    62:15
    that encapsulation really helped
    62:17
    nail down the state to where it belonged
    62:20
    and not scattered through the system not
    62:22
scattered in in WaitGroups and
    62:24
    figuring out if this is done or that is
    62:26
    done
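A rough sketch of such an encapsulated candidate, carrying the binary URL, success flag, and embedded result described above; every field name here is a guess for illustration, not the project's actual structs:

```go
package main

import "fmt"

// Result holds the outcome of pushing one dependency through the
// pipeline; errors stay attached to the candidate they belong to.
type Result struct {
	SourcePath   string  // where the source came from
	ArtifactPath string  // where the built artifact landed
	Errs         []error // failures, kept with this candidate
}

// ReleaseCandidate encapsulates all state for one dependency, so
// nothing about it is scattered across WaitGroups or globals.
type ReleaseCandidate struct {
	Name      string
	BinaryURL string // URL of the stored binary
	Succeeded bool   // false lets the pipeline trip early
	Result    Result
}

func main() {
	rc := ReleaseCandidate{
		Name:      "networking",
		BinaryURL: "gs://binaries/networking",
		Succeeded: true,
		Result:    Result{SourcePath: "src/networking", ArtifactPath: "out/networking"},
	}
	fmt.Printf("%s ok=%v errors=%d\n", rc.Name, rc.Succeeded, len(rc.Result.Errs))
}
```

Because the candidate itself flows through the channels, each stage reads and updates one self-contained object instead of shared state.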
    62:29
    and we were very aggressive about
    62:32
    breaking down things like we broke down
    62:34
the candidate we scanned it we figured out
62:36
how to handle it how to interact
62:38
with the BOSH client
    62:40
    how are things going to be stored on the
    62:42
    gcs bucket
    62:44
    how do we how do we
    62:46
    write the result down what things need
    62:49
    to be uh
    62:50
    uh need to be part of it and then we
    62:52
    composed uh the uh uh release uh so the
    62:56
candidate release included the struct
    62:58
    for the result and so on right so we
    63:00
    composed these objects to form
    63:02
    the objects that we needed
    63:06
    and the most important thing was to fi
    63:08
    to identify when will we be done
    63:11
    once
    63:12
    once the releases have flown through
    63:14
    this pipeline because we need a
    63:15
    termination condition otherwise since
    63:17
    the channels are blocking uh the they
    63:20
    will keep on waiting for the next
    63:21
    message right so we need to close this
    63:23
    channel some somewhere
    63:24
    and given that we had a predefined list
    63:27
    of uh
    63:28
    dependencies as input we knew that so
    63:31
    many releases or candidates have to come
    63:33
    through the pipeline uh so we were able
    63:35
    to uh uh identify the termination
    63:37
    condition right and and at the
    63:39
    termination condition we were able to
    63:41
    identify if there are errors and then
    63:43
    trip and uh signal failure uh and once
    63:46
    once we had processed all this we were
    63:48
    able to close all the um
    63:51
all the channels that we
    63:53
    we were using so so we had a
    63:56
    safe termination so there was no
    63:57
    deadlock
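A sketch of that counting-based termination: with a predefined input list, read exactly that many results, then finish and report failures. All names here are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// process launches one goroutine per dependency and reports exactly
// one result each on a shared channel.
func process(deps []string) <-chan error {
	out := make(chan error, len(deps))
	for _, d := range deps {
		go func(name string) {
			if name == "broken" {
				out <- errors.New(name + ": build failed")
				return
			}
			out <- nil
		}(d)
	}
	return out
}

func main() {
	deps := []string{"networking", "storage", "broken"}
	results := process(deps)

	// The input list is predefined, so exactly len(deps) results will
	// arrive -- that count is the termination condition that lets us
	// stop reading and exit cleanly, with no deadlocked receivers.
	var failures int
	for range deps {
		if err := <-results; err != nil {
			failures++
		}
	}
	fmt.Println("failures:", failures) // failures: 1
}
```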
    64:00
    so why did we do all this why did we
    64:02
    spend all the time building this
    64:05
    it was because
    64:08
    the old code had the following right
    64:11
    every time we had a release or a
    64:13
    dependency this is what happened we went
    64:16
    through each of the different dependency
    64:17
    and uh look and you can see that it was
    64:20
    already a go routine that handled each
    64:21
    of the dependencies separately so that
    64:23
    that’s there was a little bit of
    64:24
    concurrency there
    64:25
    but what happened when we called process
    64:27
    release it called process so processor
    64:30
    is called download
    64:32
    download call compile
    64:34
    compile call publish so there was it was
    64:36
    basically
    64:38
    a bunch of a bunch of concurrent
    64:40
    routines but then
    64:42
    then it became uh deeply sequential and
    64:45
    it did not address the problem of
    64:47
    uh download has its own dependency
    64:49
    compile has a separate dependency and
    64:50
    publishing has a different dependency so
    64:52
    it did not handle many many concurrency
    64:54
    opportunities that that were in the
    64:56
    system so that’s why that’s what was
    64:58
    hurting about
    65:01
    about this old way of doing concurrency
    65:04
    also the errors were never recorded or
    65:08
    they were not they were recorded as a
    65:09
    bunch like so we wouldn’t be able to
    65:11
    identify that this error happened
    65:13
    because of this binary they were just
    65:15
    added into a single object uh so it made
    65:18
    sense to actually attach the error to
    65:21
    the to the binary uh that was that we
    65:23
    were building instead of just collecting
    65:25
    all the errors so that was another
    65:26
    transparency issue which we were able to
    65:28
    fix during this uh and it became obvious
    65:30
    when we did the new design or we were
    65:32
    working on the new design that this this
    65:35
    was an issue
    65:39
    and
    65:40
    so so these were some of the some of the
    65:42
    issues like i mentioned that they were
    65:44
using WaitGroups so we had to wait on
    65:46
    the synchronization mechanisms it was
    65:48
    mostly sequential and it did not do the
    65:51
    right thing in terms of encapsulation
    65:53
    and separating its responsibilities and
    65:55
    uh it didn’t have a good it had very
    65:58
    loose uh don’t didn’t have good coupling
    66:00
    based on where the data needs to be
    66:04
    um and this
    66:06
    started breaking as soon as we
    66:08
    introduced the new os which had which
    66:10
    had its own complexities right and and
    66:13
    it would have ballooned our build time
    66:16
    uh
    66:17
    by by a factor of two at least
    66:19
    and and that that is the problem that we
    66:21
    were able to find that we have we found
    66:24
our John G and we were able to fix
    66:25
    that uh with the new code uh as i
    66:27
    explained
    66:29
    uh
    66:30
    and so when we reflected on it how what
    66:32
    did we change we uh we we made it um so
    66:36
    that it’s easier to read it’s easier to
    66:38
    understand there’s no chain of
    66:40
    events that is happening and
    66:42
    we don’t go down the rabbit hole
    66:44
    we were able to address different types
    66:45
    of latencies we were able to uh quickly
    66:49
    look at the code and see what was going
    66:50
    wrong and uh this this project involved
    66:53
    multiple teams like imagine uh there’s a
    66:55
    networking team that builds a networking
    66:57
    component storage team that builds a
    66:59
    storage component core kubernetes team
    67:01
    that builds the core kubernetes
    67:02
    controllers etc etc right so we
    67:04
    our job was to go and give them feedback
    67:06
    if and any of the build was broken
    67:08
    because of the code they committed so
    67:10
    that made it this kind of reorganization
    67:13
    made it super easy super transparent
    67:15
    when we can just point to the team
    67:18
    where we saw the issue
    67:21
    so how did we measure success so
    67:23
    previously for one os and for 10 plus
    67:26
    dependencies and 40 plus environments
    67:29
    we were able to build one tile a day
    67:31
    like i said six plus hours which took
    67:34
    our whole working day
    67:35
    when when we did this refactoring or and
    67:38
    added the new os
    67:39
    we almost doubled the number of binaries
    67:41
    because you know we had to build it for
    67:42
    two different things
    67:44
    and uh the number of environment group
    67:47
    environments grew to 100 or more than
    67:49
    double uh because we also added another
    67:51
    cloud provider provider in between
    67:54
    we were able to produce four tiles a day
    67:56
    uh because we were able to handle these
    67:58
    different concurrency uh sorry we didn’t
    68:00
    have the sequential uh processing
    68:02
    and that
    68:04
    that means that
    68:06
    our our efficiency went from one x to
    68:08
    four x so that’s that’s how we got four
    68:10
    400 times better
    68:12
    right and the fact that it failed in two
    68:15
    hours means i could do something that
    68:17
    same day when i had the context i had
    68:19
    the team sitting right there with me um
    68:21
    instead of you know leaving on a friday
    68:23
    disappointed dejected that my last build
    68:26
    failed and then coming back on monday
    68:28
    and saying oh where was i i don’t know
    68:30
    what we were doing because last build
    68:31
    had failed and we don’t know who to talk
    68:33
    to right so that saved a bunch of time
    68:35
    that improved morale quite a bit we
    68:38
    actually had started having confidence
    68:39
    in our system we were able to give
    68:41
    feedback to the people who
    68:43
    uh we were integrating with and and they
    68:45
    were happy that they were getting you
    68:47
    know uh
    68:49
    like errors that they could actually
    68:50
    work on
    68:53
    so
    68:54
    here are some things that i learned
    68:57
    to not be afraid
    68:59
    be you know take that opportunity to
    69:01
    refactor
    69:03
    do do remember to go back to basics um
    69:06
    test test driving this helped us a lot
    69:10
    learning via go playground helped us a
    69:12
    lot we were actually
    69:13
    able to reproduce some deadlocks that
    69:16
    would have happened in our code
    69:18
    by running by writing tests um which was
    69:20
    amazing i had not done that before
    69:23
    and uh sometimes this kind of tech debt
    69:26
    gets uh de-prioritized in the beginning
    69:28
    and there’ll be a time when you know
    69:30
    there is no choice and it will get
    69:32
    prioritized so wait for that moment as
    69:34
    soon as you get that moment seize it and
    69:36
    run run with it right
    69:38
    that is the time for you know to solve
    69:40
    these kinds of tricky problems uh and
    69:43
    that that is what makes engineers happy
    69:46
    uh also don’t be afraid to ask for
    69:48
    support when i actually started working
    69:50
    on this i had about six months of golang
    69:53
    programming experience and the only go
    69:55
    routine i had written
    69:57
    was a minor refactoring to the to the
    69:59
    old code that you saw
    70:01
    and i had i had only read and you know
    70:03
    practiced go but never really written uh
    70:06
    production level code
    70:08
    uh but i had a couple of people on my
    70:10
    team kevin and adrian who encouraged me
    70:12
    to do this and they were there to
    70:14
    support you know
    70:15
    uh support me support me in making the
    70:17
    shift from from my ruby thinking to go
    70:20
    things uh which which i’m really
    70:22
    appreciative of so find such people i
    70:25
    think
    70:26
    there will be some on your team
    70:28
    uh
    70:29
    and uh don’t be afraid to try new things
    70:31
    um
    70:32
    uh so if you if you want to take
    70:33
    anything away from this uh
    70:35
talk then learn about goroutines learn
    70:37
    about how you can
    70:39
    do the
    70:40
do the concurrency using WaitGroups
70:43
try it build it for yourself
    70:46
    uh learn about channels learn about
    70:48
    pipelines convert that same thing that
    70:50
    you built uh
    70:51
with WaitGroups into channels
    70:52
    and pipelines and see
    70:54
    uh see for for yourself
    70:56
    right and as a challenge is you know
    70:59
    build build a real application uh with
    71:01
    without using uh the old school uh
    71:04
    synchronization uh constructs uh it it
    71:07
    will
    71:08
    it will feel different it will it will
    71:10
    it will feel like you you have been
    71:11
    liberated in some sense of of managing
    71:14
    these things
    71:19
    that’s all that’s all i had uh and i
    71:21
    hope you enjoyed the talk
    71:23
    any questions
    71:25
well sudhindra thank you very much if
    71:27
    anyone has some questions please uh if
    71:29
    you want to either put them in the chat
    71:30
    or if you want to go ahead and uh take
    71:32
yourself off mute but sudhindra would you
    71:34
    like to get a drink of water
    71:36
    yes i will get
    71:38
    yes before you answer questions i think
    71:39
    that’s more than uh you just say
    71:42
    i was i was impressed that you could go
    71:43
    that long without the water but uh
    71:46
    let’s he’ll do that but does we does
    71:48
anyone have any questions for sudhindra
    71:49
    this evening
    72:03
    hey and um if you don’t have any
    72:04
    questions now you can see you can reach
    72:07
    out to me uh via twitter i think there’s
    72:09
    somebody raise their hand yeah go ahead
    72:12
    hey sir thanks for the talk uh my
    72:14
    question really relates to the amount of
    72:16
    concurrency
    72:18
    uh that you’re processing at any one
    72:20
    time
    72:21
    so you know you can launch a bunch of go
    72:23
    routines and they all run concurrently
    72:25
    they can block you know they can do some
    72:27
    work
    72:28
    uh it kind of depends on what you’re
    72:29
    doing right it sounds like the amount of
    72:31
    concurrency that you have in the system
    72:33
    is maybe not that great uh you sort of
    72:35
    had it on that slide where you had the
    72:37
    the triangle going down i think you had
    72:39
    some figures
    72:42
    the question really is is uh in your
    72:45
    situation
    72:46
    did you
    72:47
    not have that much concurrency where you
    72:49
    had to worry about the number of go
    72:51
    routines and the number of things that
    72:54
    you’re doing in terms of you know system
    72:56
    resource memory utilization or something
    72:58
    like that and if you did
    73:00
    what did you do to limit the amount of
    73:02
    concurrency so you don’t blow up in
    73:04
    terms of resource use
    73:06
    that is an excellent question and um
    73:10
    we
    73:11
    didn’t we did by failing uh so yes we
    73:14
    had issue resource issues
    73:16
    at least in the beginning
    73:18
    so i’ll go back to one slide where i’ll
    73:21
    show you
    73:23
    yeah
    73:24
    so um but there is concurrency that you
    73:26
    can port where uh so
    73:29
    one thing i want to mention is all the
    73:30
    concurrency opportunities that i had in
    73:33
    my system were not related to moving
    73:36
    data locally or handling data
    73:38
    locally it was about getting it from the
    73:40
    network getting it from this resource
    73:42
    and that resource right and one of them
    73:45
    was posh for example and if you don’t
    73:47
have enough BOSH directors
    73:49
    you are limited in how many uh
    73:52
    how many concurrent routines you can run
    73:54
    or
    73:55
    effective so that is the first roadblock
    73:57
we had we had one BOSH director and we
    74:00
    started throwing the routines at it
    74:02
    so that became the blocker uh so we
    74:04
scaled that we scaled uh BOSH and
74:07
started bringing bringing up multiple
74:09
BOSH directors
    74:10
    so that helped
    74:13
    with the um um
    74:16
    yes and one the other thing that we
    74:18
    tuned was the vm uh or the container
    74:21
    that actually brought up uh that
    74:22
    concourse brought up uh that had to be
    74:25
    tuned so that it gets much more many
    74:27
    more cores
    74:29
    than we had before
    74:30
    so i don’t remember how many it finally
    74:32
    got i think it was more than more than
    74:34
    four because i think we had
    74:36
    one with two cores and one with four
    74:38
    cores but then we had to go up to eight
    74:41
cores at least when i was tuning it
    74:45
    so you didn’t have to limit concurrency
    74:47
    in your case you kind of had unlimited
    74:48
    resource you could just you know
    74:51
    go as parallel as possible uh did you
    74:54
    ever have to you know constrain how much
    74:56
    parallelism you did uh used and and how
    74:59
    did you do that in that case
    75:00
    yeah so we given that we were only
    75:02
    handling so like see how we were
    75:04
    handling 10 binaries and you know 10 by
    75:06
    so 10 routines right
    75:09
    so we didn’t have to limit that much or
    75:11
    we didn’t hit the boundaries i think
    75:13
    okay so and going to 20 binaries didn’t
    75:16
    give us
    75:18
    we didn’t make us hit the boundaries
    75:20
    uh so the constraints were like i
    75:22
mentioned they were in the BOSH
    75:23
    director or anytime you hit the network
    75:26
    and there were there were some things we
    75:27
    could do about it and some things we
    75:29
    couldn’t
    75:30
    i say thank you very you’re very lucky
    75:32
    you didn’t have to constrain yourself it
    75:34
    sounds yeah thank you
    75:37
    [Music]
    75:39
any other questions for sudhindra
    75:46
    okay
    75:47
    well thank you so much i really
    75:48
    appreciate that you coming out
    75:50
    and sharing with us
    75:52
    it was really a great talk
    75:53
    and uh
    75:55
    um if you know and student you’re on
    75:58
    you’re you’re on social media if people
    75:59
    want to reach out to you on linkedin and
    76:01
    twitter if there’s any additional
    76:02
    questions after
