Advanced Binary Management for C/C++ with JFrog Artifactory

Jonathan Roquelaure

July 27, 2020


Webinar Description:

Learn how to use Artifactory not only for Conan but for many other package types, get tips and tricks for improving developer productivity, and pick up best practices for using Artifactory as part of your CI/CD pipelines to accelerate application delivery for C and C++.

At the heart of the JFrog DevOps Platform, Artifactory provides universal management for all binary artifacts, container images and Helm charts used throughout your SDLC.

Who should attend:

C/C++ developers and those who work with C/C++ package managers (like Conan) who are looking for an overview of JFrog Artifactory and its common usage scenarios.

The Agenda

What is JFrog Artifactory and why is it needed?
Resolving dependencies
Managing deployment of artifacts as a system of record for CI/CD
Bill of Materials
Artifactory Query Language (AQL)
Using Artifactory for C/C++ applications and more
Product demo with a Conan package

Speakers

Jonathan Roquelaure


    Solution Engineer

    Jonathan Roquelaure (@roquelaurej) is a Solution Engineer at JFrog. Based in France, he is a multi-skilled developer and Agilist specialised in continuous integration and delivery. He is focused on making developers' lives easier by streamlining and automating processes.

    Video Transcript

    Hi, good morning everyone. I'm Jonathan Roquelaure, and I will be your speaker and presenter today.

    Quickly, the agenda for today. I will start with a short introduction to JFrog, for the people who don't know us yet; I'm sure you know us, but just in case. I will also take a few minutes to give you a quick overview of the JFrog Platform: what it is, what its different services and components are, and where and how we want to help you in your journey. After this high-level overview, I would like to go back to basics and talk a bit about the development lifecycle, with a quick focus on the challenges of the C/C++ development lifecycle. Then we'll talk about binary repositories: what they are exactly, why we need them, what JFrog Artifactory is in this area, and how it can be used for C/C++ and for the other package types within your company, within your organization. After explaining some concepts, I will take a few minutes to show you a quick demo of all the things I will have explained, and we should have some additional minutes left to address questions. If I don't have time to answer all the questions, or if I'm not sure I can give you a clear answer, no worries: we'll address those questions offline in a follow-up email. So feel free, as I said, to ask any question in the question box.
    Let's start with a quick introduction to JFrog. JFrog, as you may know, is an Israeli company founded in 2008. We now have offices all over the world: in the US, in India, in China, in Japan, in Europe and, of course, in Israel, with more than 500 employees across those offices. We have more than 5,600 paying customers today; paying customers, because we also have community and open-source editions of our products. What is interesting is who those customers are: we have almost every kind of company, and we are proud to count around 70 percent of the Fortune 100 among them. We have strong growth, with almost 150 new enterprise customers coming to use our products every month, and the usage statistics are growing too: today, two billion dependencies are downloaded per month from our Bintray distribution platform, and around three million enterprise developers use our products on a daily basis.

    We are also community champions, heavily involved with different communities. If you're here, I guess you're familiar with Conan, and you know that Conan is part of JFrog; you might also know that we host and manage ConanCenter at JFrog, for the C/C++ and Conan communities. We are involved with the Go community as well: we manage and host GoCenter, a central repository for Go modules. Last month, I think, we released ChartCenter, the first central repository for Helm charts, with security analysis. And we have long been involved with the Java community and other package ecosystems through Bintray. Last but not least, we are considered a DevOps unicorn, because DevOps is really the area we work in. This slide shows a short list of our customers.
    What is interesting here is not so much the names, even if it's nice to see famous ones like Google, Oracle, NASA and so on, but the fact that we have customers in almost every industry. What is fascinating to me is to see how DevOps, and this software development revolution, or evolution, is being embraced across all industries. You have the cool kids you would expect in this space, like Google and Amazon, shown here in the DevOps world, but it's interesting to see that even pure retail companies like Walgreens and Target are involved in this DevOps transformation, trying to modernize their software delivery lifecycle.
    Why are we so successful at JFrog? I believe it's because of our approach, which has been unified and universal from the beginning. First of all, all our products are built with the goal of providing universal support. We started with Artifactory as a universal binary manager, which means it supports many package types, not only Java or Docker: we support more than 25 package types today. We are also universal in the way you can integrate Artifactory with your ecosystem: we have integrations with more than 50 technology partners, so you can connect our products to most of the tools you already have in your toolchain, such as CI servers, code analysis tools, and anything else you can find in this kind of toolchain. We provide hybrid and multi-cloud support, which means all our products can be installed on premises or on any kind of cloud; you can also go with the SaaS offering provided by JFrog, running your JFrog Platform on our SaaS, or with a hybrid solution combining your own private cloud, on-premises installations and SaaS. We provide continuous security along the toolchain: we have a dedicated tool for that, called JFrog Xray, which scans binaries and gives you analysis and reports, from the beginning of your toolchain, when you start to consume external packages, to the end of the toolchain, when you distribute your releases to the runtime. And all our products are built in a fashion that can be highly available, allowing horizontal or vertical scaling to match whatever needs your business has.
    So, what do we have in the JFrog Platform today? I guess some of you are familiar with the first product, Artifactory, which you can see on the left of this diagram. Artifactory is our flagship product at JFrog: it's the one we built first, and all the other products rely on it for access, authentication and other integrations. It's really the base of the JFrog Platform. Artifactory is our universal package manager, and the idea is to be able to manage your artifacts, both the artifacts you consume from third-party repositories and the artifacts you produce, and to track them with all their metadata. This is really where you start to manage binaries, and not only code.

    The second product in the platform is JFrog Xray, a security and license analysis tool. The idea with Xray is to be able to scan any binary stored in Artifactory and give you results: first of all a recursive analysis, showing you all the binaries embedded within your own components, but also reports and analysis on known CVEs and license information. Xray also lets you automate things through policies you want applied when it detects something wrong, like a new CVE or a license you don't want in a product.
    On the right side of JFrog Xray you can see JFrog Distribution. This is a service we released two years ago now, and the main idea with Distribution is to be able to ship your releases in a secure, atomic and immutable way to production sites. What I mean by production site is shipping your binaries, your releases, as close as possible to the final consumer, to what we call the edges. For that we also released the JFrog Edge nodes, components that are a kind of very light, read-only Artifactory used to provision edges. An edge can be a Kubernetes cluster, if you are managing your own runtime, but it can also be your final consumers, like your own customers, for delivery purposes, or your data center: any place where you want to consume final releases and want to make sure those releases, this set of packages, are shipped together in an atomic and immutable way.

    At the bottom of the diagram you can see JFrog Pipelines. This is the latest kid at JFrog; it came from the acquisition of Shippable, and it's a declarative CI/CD tool that covers, and allows you to automate, the entire toolchain, from the code repository to the runtime. It integrates, obviously, with all the JFrog products, but also with most DevOps tools in general, so you can integrate JFrog Pipelines with your existing CI servers such as Jenkins, with your Jira, with a Kubernetes cluster, or with any kind of tool you already have in your organization.

    Last but not least, at the top of my diagram you can see Mission Control and Insight. Mission Control is a central dashboard to control and monitor all the JFrog tools along your toolchain. Insight, which is part of Mission Control, is in charge of gathering metrics and giving you trends across the different tools in your toolchain, and the main idea with it is to enable what we call data-driven DevOps and to get a feedback flow from the toolchain.

    As you can see with this platform, and this is the main goal we have at JFrog, we want to cover the entire space, the gap between the code repository, let's say your Git, and the runtime. With this set of tools we cover the build, testing, release and deployment phases; we are not yet managing the runtime or the code repository themselves.
    So let's go back for a minute to something more basic, the day-to-day work, and talk a bit about the development lifecycle and what it means. The main purpose when you are doing development and building software is to release this software at some point and to run it in some runtime. With the evolution of software management during the past decades, let's say with agility, then DevOps and all these nice practices we built, we all agree that we need iterative and continuous improvement in our processes. This is why this kind of diagram is quite common, and the idea is always the same: when you are building a toolchain, you start by coding, coding in your own language; you build your code to generate some artifacts, binaries that will be tested, released, distributed to the edge and deployed to a runtime environment; and based on monitoring and metrics you will learn, gain some knowledge, code again and make your software better. This is the main idea.

    If you look at the timeline of this iterative process, it's always the same: you start from source code, and from consuming dependencies, because source code relies on what you use as dependencies; there are only a few projects today that start fully from scratch. So the first thing a developer writing software in a company does is write their own code and choose the dependencies they want to use; then, at build time, some artifacts are generated and deployed somewhere. This is basically what a developer focuses on, and it's pretty simple, pretty straightforward.
    A few weeks ago we gave our first Conan training, for C/C++ people who want to use Conan. It was called Conan Days, and it included a full day of training. It happened remotely, but I think it was really interesting, and I guess some of you were part of those training days. If you were not, the good news is that we will have other training days, other training classes, in the future; we will send you some information in the follow-up email, I guess, if you are interested. In a nutshell, what we did during this training day was first of all to show that in the C and C++ world, the development lifecycle is not so trivial and not so easy. This is the kind of diagram we had at the end of the training for managing the full project. I will try to give a quick summary of what we did during the training, but I really encourage you to look out for the next Conan Days, to go through the hands-on exercises and to build this setup step by step.
    During the training we built a very simple application, with the dependency graph you can see on the left of the diagram: some libraries A, B, C and D, and two different applications, each with a different component graph. The use case was the following: we had a developer working on a particular library, library B on this graph, working on a new feature within a feature branch. We showed how you can automate the toolchain on a commit to that feature branch, and how to test it; then, after merging the pull request, how Conan was used to first get the component graph and know that this change to library B would require rebuilding a set of libraries, in my case library D and the application one product package; and how, based on this component graph, we were able to run those rebuilds in parallel. We also showed how, after creating the product package, we stored it in a repository manager, and moreover how we generated the lock file for this particular package, for this particular build. This lock file is then used to guarantee the immutability of this particular build; I will show you the lock file during the demo, for the people who missed it. The main idea here is that during the training we stored this particular file in a Conan metadata repository, and this file gives you the exact component graph that was used at build time for a particular Conan package.
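    For readers who want to try this at home, here is a minimal sketch of that lockfile flow, assuming a Conan 1.x client from that era; exact commands and flags vary between Conan releases, and the package references are illustrative:

        # Capture the exact dependency graph resolved at build time (Conan 1.x syntax)
        conan graph lock . --lockfile=conan.lock
        # Build the package against that frozen graph
        conan create . demo/stable --lockfile=conan.lock
        # Later, on any machine, reproduce exactly the same graph
        conan install libB/1.1@demo/stable --lockfile=conan.lock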
    Last but not least, what we did during the training was to use the product package and the lock file to install the application, deploy it, and generate a Debian package that we stored in Artifactory. This is more or less where I will start my demo later on. So during this training we used Artifactory a bit in the background, and we explained a little about it, but we focused mainly on Conan itself and on C/C++.
    In terms of binary management, if you are talking about Conan, you have to understand that Conan relies on a package type, Conan is a package type, and at some point you need to store those packages. What can you do to store those packages, to share them within your organization? You can use a shared file space, so basically an FTP server. This is what people used to do in the past, not with Conan but with tools like Maven, where you can rely on a pure layout, and it can work to some extent. But when you have a specific protocol, as with Conan or with other technologies like Docker or NuGet, once you have this protocol, an API, how will you handle it? That's the first question: how will you expose the API, and all the logic needed to fetch the right dependencies and so on?
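    That protocol handling is exactly what a Conan-aware repository manager provides. As a hedged illustration (the host, repository key and user are made up), pointing a Conan client at an Artifactory Conan repository looks like this:

        # Artifactory exposes the Conan protocol under /api/conan/<repo-key>
        conan remote add company-conan https://artifactory.example.com/artifactory/api/conan/conan-local
        conan user jonathan -r company-conan -p "$API_KEY"          # authenticate against the remote
        conan upload libB/1.1@demo/stable -r company-conan --all    # push recipe and binary packages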
    Another alternative, and I'm sure many people here did it in the past or are still doing it (I did it in the past), is to use your VCS, your Git repo, your SVN or whatever, to store your binaries, the results of your builds. The thing is, first of all, a VCS is not built for that. VCS means source code management: it's for code, not for binaries. Why? First of all because the main idea of a VCS is to be able to diff text, to merge it, to compare different versions of a file, which is not doable at the binary level. There are other reasons why you don't want to do that, indexing issues for example, but the main thing is that a VCS is not built for it. If you look at the underlying mechanism of a VCS like Git, the versioning is not built to store binaries: things are stored by content, not by name. And the idea with binaries is different from code: with code you want to check the diff, you want to be able to go back in time, while with binaries you want to be able to track and version them and to keep them immutable. So the conclusion is quite obvious, and this is why we built binary management tools. Today it may be obvious to say that, but believe me, ten years ago it was not.
    So what you can do to store binaries is use a binary management tool, and if you look at binary management tools, there are plenty of solutions on the market. But look at what you want. First of all you want functionality: if you work with Conan, for example, you want a binary manager that works with Conan; that's pretty obvious. You want security: you want to be able to control who can access which binary, which package, within your binary management tool, but you also want to be able to track who accessed, who produced and who downloaded which binary, and so on. And moreover, you want availability. From the moment you start to build automation and a toolchain using a binary repository manager, this binary repository manager becomes the backend of your toolchain, which means it's also a point of failure for your toolchain. If your delivery process and your engineers rely on your toolchain, losing this binary manager, or having downtime on it, means downtime on your entire toolchain and your delivery process, which is not acceptable for your business.
    In terms of functionality, if I look at a more enterprise-grade level, I want to have control over my storage. I want to be able to scale over time, because I know that I'm producing binaries every day, so this storage will grow every day. I want performance, and I don't want it to be impacted because my storage is growing. I want to be able to address different technologies: even if I'm a C/C++ engineer working with Conan, because I went to the Conan Days I know that I'll need at least some generic repositories, and if I want to address the delivery part as well, I might need some Debian repositories, or RPM repositories, or Docker repositories, something of another flavor that is more suitable for deployment and runtime. I want to be able to manage versions and updates, and to have easy maintenance and administration. In terms of security, I want to be able to authenticate my people and give them authorizations, and to keep the integrity and traceability of my artifacts: basically, who is doing what, making sure crucial information is kept forever, or at least for the time I need it. And as I said, I want availability and performance: I want my local toolchain to run with no downtime, but I also want distributed teams to work with good performance. What about when I have engineers spread all across the globe in my organization? How can I make sure they will all have the same performance, the same availability, the same SLA? I want HA and DR: I don't want to lose data, I want to have no downtime, once again, and caching, for performance.
    One of the solutions you can find, and I truly advise you to have a look at it, as this is the main purpose for today, is JFrog Artifactory. Why is Artifactory a good fit for all of these challenges? First of all, because Artifactory provides an advanced storage solution. In Artifactory you can really choose which storage provider you want to use: you can use a local filesystem or an NFS; you can store your binaries as blobs in a database; you can use object storage on Amazon, Azure or GCP, or even any implementation of the S3 protocol for on-premises object storage. You can chain different providers, use local caching when you have a low-performance storage system, use sharding, and implement redundancy across different shards, across different data centers. It's really flexible and advanced in terms of what you can do with storage, for performance, stability and reliability alike.
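    As a hedged sketch of what this looks like in practice (the bucket name and credentials are placeholders, and the exact file location and template names depend on your Artifactory version), the filestore is selected in Artifactory's binarystore.xml by choosing a chain template, for example the documented S3 template:

        # Write a binarystore.xml selecting an S3-backed filestore
        # (copy it into Artifactory's etc directory; the exact path varies by version)
        cat > binarystore.xml <<'EOF'
        <config version="2">
            <chain template="s3"/>
            <provider id="s3" type="s3">
                <endpoint>s3.amazonaws.com</endpoint>
                <bucketName>artifactory-filestore</bucketName>
                <identity>AWS_ACCESS_KEY_ID</identity>
                <credential>AWS_SECRET_ACCESS_KEY</credential>
            </provider>
        </config>
        EOF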
    Artifactory itself is, as I said, a universal binary repository manager, which means you can store all kinds of artifacts and resolve all kinds of dependencies from it. You can use it as a proxy, with remote repositories, and it integrates with all the native package managers. We also have integrations with most CI servers that exist, like Jenkins, Bamboo, Bitbucket, the Azure DevOps platform and, of course, our own JFrog Pipelines. And the main idea, as I said, is that by using Artifactory as your binary manager, with its metadata system, you can really use it as a backend for your toolchain and consider it a single source of truth for your binaries.

    Very quickly, as I said, you can store more than 25 package types in Artifactory, plus generic packages, in fact any kind of binary: even MP4s, pictures, documentation, and Conan packages of course. So in Artifactory you can store any kind of binary, but what is even more important, along with all those binaries, is to store all the metadata: build results, the context of your build, what happened and when it happened, whether I checked and tested this library. This is where you can start to really use Artifactory both to store your binaries and to track the lifecycle of those binaries along your toolchain.
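    As a hedged sketch of how that metadata gets attached (the build and repository names are illustrative), the JFrog CLI can tag everything it uploads with a build name and number and then publish the resulting build-info object:

        # Upload artifacts while tagging them with build coordinates
        jfrog rt u "build/*.deb" app-debian-local/ --build-name=debian_app --build-number=1
        # Collect environment variables as part of the build context
        jfrog rt bce debian_app 1
        # Publish the build-info (artifacts, dependencies, environment) to Artifactory
        jfrog rt bp debian_app 1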
    I said it integrates with most package managers; now let's have a very quick look at Artifactory's main components. I will go pretty fast on this one. Just keep in mind that Artifactory is available through HTTP/HTTPS, and there are three ways to communicate with it: through the UI, through WebDAV (there are still some companies using it), or through the REST API. Keep in mind that even when you go through the UI, behind the scenes Artifactory uses the REST API to communicate with its backend. What does that mean? It means you can do everything in Artifactory through the REST API, and we know from statistics that more than 90 percent of Artifactory usage happens through the REST API.
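    As a hedged illustration (host, repository and credentials are made up), the same deploy and inspect operations the UI performs map to plain REST calls:

        # Deploy a file; Artifactory computes and records its checksums on the way in
        curl -u admin:password -T libB-1.1.tgz \
          "https://artifactory.example.com/artifactory/conan-local/libB-1.1.tgz"
        # Retrieve the item's storage info, including checksums
        curl -u admin:password \
          "https://artifactory.example.com/artifactory/api/storage/conan-local/libB-1.1.tgz"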
    Then inside Artifactory you'll find a caching layer, for performance, and the core concept, which is a virtual filesystem. A virtual filesystem, because it's divided into two parts: the binary store I mentioned before, which is where you specify whether you want to use object storage, sharding, the local filesystem or whatever; and the second part, the database, the database that contains all the metadata. Here I will take just one minute to give a bit more information about Artifactory and how it stores artifacts. When you deploy a file to Artifactory, Artifactory calculates its checksum and stores the file in your file store, in your S3 bucket or on your local filesystem, based only on that checksum. So here I deployed a file with a checksum that starts with 2efc...; this file will be stored in a folder named 2e, under a name equal to its SHA-1, and this is the actual binary on my filesystem. In the database I have a table called nodes that contains references to all the occurrences of this file in my Artifactory. So here I have a record that says that my file with the checksum 2efc... is stored in the libs-release-local repository, under this path, and that it's a JAR file. What does this mean? It means that if I have the same file, with the same checksum, even under a different name, stored in a different repository or under a different path in the same repository, I won't have duplicate data, no duplicate binary, on my filesystem, which means the filesystem is optimized.
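    Here is a toy shell sketch of the principle (this is not Artifactory's actual implementation, just the idea of content-addressable storage):

        # Store a blob once, under its own SHA-1, in a folder named after its first two hex digits
        sha1=$(sha1sum libB-1.1.tgz | awk '{print $1}')
        mkdir -p "filestore/${sha1:0:2}"
        cp libB-1.1.tgz "filestore/${sha1:0:2}/${sha1}"
        # Deploying the same content again, under any name or repository,
        # resolves to the same blob: only a new database record is added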
    So this is the first advantage of checksum-based storage: deduplication of storage. For Conan packages, which are small, it becomes interesting when you start to play with promotion and have them in several places; when we start to talk about Docker images, which are a good example, it gets even more interesting in terms of storage optimization. Another very interesting thing is that copy and move operations are very cheap in Artifactory, because when you copy a file from one repo to another, in the end it's just a new record inserted in the database. Deletions, too, are just a database operation; there is a garbage collector in Artifactory that will later delete unreferenced files from the filesystem, but the delete operation that you perform as a user is something very fast that happens purely at the database level. Also, with checksum-based storage we do not need any lock at the file level, which means upload and download operations are very fast, and all replications (I will explain in a bit what replications are) are checksum-based, so we avoid sending files over the network that already exist on the target. This gives better performance on the filesystem side, because of the absence of locking, and stability and performance for searches, because all searches are database searches, not filesystem searches.
    Okay, so as I said, in the database you have all the metadata, all the information stored about your artifacts. It can be build information, metadata calculated from the package (revision numbers, version numbers and so on), but also custom metadata: properties, anything you want to attach to your binaries.
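    As a hedged example (the path is illustrative), such properties can be set from the JFrog CLI, and they are the same properties that AQL queries can later filter on:

        # Attach custom key/value properties to an artifact
        jfrog rt sp "conan-metadata-local/debian_app/1/conan.lock" "profile=release-gcc6;stage=tested"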
    By default Artifactory uses the embedded Derby database. Just keep in mind that this is only for small and medium installations; when you go to company scale you should consider using another database, and we support the common relational databases such as PostgreSQL, MySQL and so on. As I said, Artifactory supports the REST API for everything. We also have a CLI tool, which will be shown very quickly during the demo. Keep in mind that the CLI is just a wrapper for the REST API, an optimized wrapper, which means it can run parallel, multithreaded requests, and there is some logic implemented in the CLI, but in the end the CLI is just using the REST API.
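    That optimization shows up in bulk operations; a hedged example (the repository and pattern are made up):

        # One CLI command fans out into parallel REST downloads
        jfrog rt dl "generic-local/installers/*" out/ --threads=8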
    We have plugins for CI, and we also have the capacity to add user plugins within Artifactory, to implement your own logic and your own features, if you want to implement some workflows and processes within Artifactory. And metadata is very important; you will see why in the demo.

    Last but not least, some points for when you think about an enterprise-grade repository. As I said, Artifactory is built to be universal and agnostic for deployments, which means you can deploy it on Kubernetes (we have an official Helm chart for that), on VMs, on Windows, on Linux, on your Mac; you can install it with brew, with Debian packages, with RPM packages. We support multi-cloud and multiple operating systems; the goal really is to be universal and agnostic. We have a hybrid model, and we support replication and multi-site topologies; this is for when you start to have people in different locations, as I said before, and want to give them the same experience and the same performance. All our products support HA deployments, which are pure active-active deployments, which means all nodes within a cluster, within an Artifactory cluster, can serve download and upload requests in the same way. And Distribution is the last point, as I said, out to the edge. I almost forgot; I added this point because it will be released very soon. We announced it a few weeks ago during our user conference, and it's something very exciting for people with massive deployments: we are releasing a peer-to-peer download capability in Artifactory, to further increase performance when you have massive numbers of edges or endpoints to upgrade in parallel.
    Okay, I think I have talked over the slides a lot, and now it's time to show you a quick demo, to go a bit further after the Conan Days, as I said. So I'm going to share my screen first of all. Okay, here is the application, and I'm going to start with a terminal. You should be able to see my screen; I hope it's big enough. I will start from almost where we stopped during the Conan Days training. For the people who were there: what we did after building the application was to use AQL, which means Artifactory Query Language, and the CLI to find the right Conan package to ship as part of our release.
    So here I have a file spec. It's a JSON file that can be consumed by the JFrog CLI, and this file spec simply uses an Artifactory Query Language statement to find all items from a particular repository, conan-metadata. The files I want to find are lock files, Conan lock files produced by a particular build, a build with this name and this number, and with a property profile equal to release-gcc6. Running this command allows me to find the right lock file, and this is what we did during the training: we used the JFrog CLI with this file spec to download the lock file.
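    A hedged reconstruction of that file spec (the repository, build and property names follow the demo, but the exact file was not shown on screen):

        cat > lockfile-spec.json <<'EOF'
        {
          "files": [{
            "aql": {
              "items.find": {
                "repo": "conan-metadata-local",
                "name": {"$match": "*.lock"},
                "@build.name": "product_master",
                "@build.number": "7",
                "@profile": "release-gcc6"
              }
            }
          }]
        }
        EOF
        jfrog rt dl --spec=lockfile-spec.json   # download the matching lock file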
    After that, what we did was simply run a conan install command to install the application; we tested it, and then we generated a Debian package, called app-debian if I remember correctly. We deployed it to Artifactory, tested it, and then did a promotion of these artifacts to a qualified repository. So at this stage, what we have in Artifactory is a deployed Debian package that contains our application, and we also have a particular object for this Debian package: a build-info object, a set of metadata that references the Debian package as a generated artifact and the Conan lock file as a dependency of the build.

    Why am I talking about this? Because today I will show you how to navigate the other way around. First of all I will go to another folder on my machine, and I will show you another file spec, which is a bit different from the previous one. With the previous one we started from the Conan lock file: we knew that we had a Conan lock file produced by a particular project with a particular number. Now I want to go the other way around: I know that I have, somewhere, a build debian_app with a particular number, and I know that this build is using a particular dependency, a particular Conan lock file; I want to find it, to be able to retrieve my Conan package and test it. So here I have this file spec, which again uses AQL, and you can see it's a bit different, because with AQL, which is a very powerful tool, you can navigate across all the properties and entities stored in Artifactory. Here I'm saying that I want to find, again in the conan-metadata repository, a lock file, but a lock file that is a dependency of a module that is part of a build named debian_app with the number 1. What does that mean? Here I'm not looking for the build that generated this file, but for the dependency that is included in this build: I know my build, and I want to find the dependency.
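    A hedged sketch of that reverse query; AQL can traverse from an item through the dependency and module domains up to the build (the names follow the demo):

        cat > reverse-spec.json <<'EOF'
        {
          "files": [{
            "aql": {
              "items.find": {
                "repo": "conan-metadata-local",
                "name": {"$match": "*.lock"},
                "dependency.module.build.name": "debian_app",
                "dependency.module.build.number": "1"
              }
            }
          }]
        }
        EOF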
    As before, what I can do is first use the CLI to run a search command, and with no surprise I can find my lock file, stored in my Artifactory. And it is consistent with the previous example: I can see that this Conan lock file has been generated by the build product_master with build number 7, which, if you remember, is what I was looking at on the lock file itself. So everything is consistent, and now what I can do is download it, run the conan install command and try it. So here, with the CLI rt download command using the file spec, I got the file; now I run a conan install command, and I'll just give the application a try to see if it works. And it works.
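    Pieced together, the command-line steps of this part of the demo look roughly like this (the package reference is illustrative, and conan flags vary by version):

        jfrog rt s --spec=reverse-spec.json     # search: confirm which build produced the lock file
        jfrog rt dl --spec=reverse-spec.json    # download conan.lock
        conan install app/1.0@demo/stable --lockfile=conan.lock
        ./app                                   # smoke-test the installed application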
    That was just to show you how you can navigate through the metadata, and the importance of metadata. I will stop sharing this, and I will share my screen again, but now I will go into my Artifactory.

    Okay, let's go to Artifactory; here it is. I'm going to start with the Artifactory instance that was used for the entire lab. So here is that Artifactory, and I'm on what we call the package viewer: I can see my Debian application and the different Conan packages sent to my Artifactory. If I go to my builds, I can see the build debian_app with build number 1; this is what I was working on with my AQL and my file spec. What I did with my AQL was, given this build name and this build number, to find the module that contained a dependency, and then to find this dependency. So here is the lock file I was looking for with my spec, and this is what I downloaded. Here you can find the dependency ID, the lock file that I used as a dependency to generate my Debian package. You can see that this Debian package (and this was part of the Conan Days training) has been promoted to a debian-uat-local repository, so I can navigate to it here in my Debian repository and see the Debian information. Here I have the build part, where I can see which build produced this Debian package, and I can navigate back to the build itself, and then go to the lock file that is its dependency. Here I can see it, and I can see also that it has been used only by this build. So this is what I wanted to show you: in the command line, with pure REST API, I was able to navigate back and forth between dependencies, generated packages and so on. I have this traceability, and thanks to the lock file I also have the immutability of my Conan build.
    Now, I mentioned advanced repository features, and what we didn't show you during Conan Days were some interesting things about the repositories. If I look at the repository structure in my Artifactory, I can find the different local repositories I used for my lab: here I have the app-debian-uat-local repository, where my promoted Debian package is; I have the Conan repository used by my developers, the conan-local repository where my product package is, the conan-metadata repository for my lock files, and so on. What is also interesting is that you can see, in the last column of this view, a replication column. If I have a look, let's say, at the conan-metadata repository, in the Replication tab I can see that this repository is configured to replicate all its binaries every day at 1 pm to two different targets, in fact two different conan-metadata local repositories located in two different Artifactory instances. And on top of that, any delete, change of metadata, deploy or write operation done on this repository, my conan-metadata repository, will trigger the replication, and this sync will be done to the two different targets.
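    As a hedged sketch (hosts and credentials are placeholders), a scheduled, event-driven push replication like this one can be configured through the REST API as well as through the UI:

        # Push replication for conan-metadata-local: daily at 1 pm, plus on every write event
        curl -u admin:password -X PUT \
          -H "Content-Type: application/json" \
          "https://artifactory-eu.example.com/artifactory/api/replications/conan-metadata-local" \
          -d '{
                "url": "https://artifactory-us.example.com/artifactory/conan-metadata-local",
                "username": "replicator",
                "password": "***",
                "cronExp": "0 0 13 * * ?",
                "enableEventReplication": true,
                "enabled": true
              }'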
    What does that mean? It means that if I go now to one of those targets, let's say this one, which is a different Artifactory... okay, I didn't log in; there we go. So here I'm connected to an Artifactory that is running in Sunnyvale, as you can see on my map (this is a map provided by JFrog Mission Control): I'm connected to Sunnyvale, and here I have the other Artifactory, where I replicated my packages, in Europe, in Saint Petersburg. What is interesting here is that if I look at the package view, I will find different packages, because this instance is used by different teams; but if I filter on Conan, I will find all the Conan packages produced in the other data center, pushed to that Artifactory and synced to my US site. What does that mean? It means that if an engineer at the US site now runs the same AQL statement, the same file spec and the same conan install command that I ran in my own data center, they will get the same result: they will get the same Conan package, and they can consume the same Debian package, because it's also replicated. So here, if I have a look at the Debian package... no, okay, it's not indexed yet, but I can see that I have the same build, the same build information, the same metadata, and I have my Debian package deployed on this Artifactory as well.

    So what does this mean? If I go back to my map: I have some developers working on a feature branch in Toulouse, which is where I live. After they push to their Git repositories, and after the change is merged, this generates the product Conan package, and thanks to Artifactory replication, because I'm using it, all the metadata and all the binaries are synced to the distant sites. This means that local developers on each site can now consume their packages from their local instance, their local Conan repository, and this can be applied to Debian packages, Docker images, Helm charts and so on.
    Okay, so I will stop sharing now and go back to the slides. So, what's next? First of all, I really encourage you to have a look at the next Conan Days edition and its training; it was pretty hard to show you everything I wanted in a single hour, so really have a look at it. Then you can go to our website to ask for a trial. We also have an Artifactory Community Edition, if you want to start with Conan only, but if you want to start with more package types and advanced features, you can have a look at the trial: for 30 days, it's free. Go to our documentation page; we have a JFrog Academy with a lot of video content and training materials. And stay tuned: we have other webinars coming, white papers and so on.