Before you learn how to use Pipelines, familiarize yourself with the following fundamental concepts.
Connections
Connections are the facilities that link Pipelines to information and services that are not part of the JFrog Platform Deployment but are accessible elsewhere on the network.
Integrations
An Integration connects Pipelines to an external service or tool. Each integration type defines the endpoint, credentials, and any other configuration details required for Pipelines to exchange information with the service. All credential information is encrypted and held in secure storage, in accordance with security best practices.
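Integrations are created and named in the JFrog Platform UI; a pipeline step then references an integration by name in its configuration. The following is a minimal sketch, assuming an Artifactory integration named myArtifactory has already been created (the step name and command are placeholders):

steps:
  - name: use_artifactory
    type: Bash
    configuration:
      integrations:
        - name: myArtifactory            # reference the integration by its name
    execution:
      onExecute:
        # the integration's values (such as its endpoint URL) are exposed to the
        # step as environment variables
        - echo "Artifactory endpoint is ${int_myArtifactory_url}"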
For more information, and a list of all available integration types, see the Pipelines Integrations reference.
Pipeline Sources
A Pipeline Source is a location in an external repository (such as GitHub or Bitbucket) where pipeline configuration files can be found. A pipeline source connects to the repository through an integration.
Pipelines
A pipeline is an event-driven workflow that you construct using Pipelines DSL, which is based on YAML. The YAML file containing the DSL is called a pipeline configuration (config).
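A minimal pipeline config is a YAML file (conventionally named pipelines.yml) that declares one or more pipelines, each with a list of steps. The sketch below uses placeholder names:

pipelines:
  - name: my_first_pipeline            # pipeline names must be unique
    steps:
      - name: say_hello
        type: Bash                     # a general-purpose step type that runs shell commands
        execution:
          onExecute:
            - echo "Hello from Pipelines"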
Steps
A Step is a unit of execution in a pipeline. It is triggered by some event and uses resources to perform an action as part of the pipeline.
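For example, a Bash step can run shell commands in its execution phases and declare a resource as its input, so that a change to that resource (such as a new commit) triggers a run. This is a sketch; the resource name appRepo is a placeholder that is assumed to be defined in the same config (see Resources below):

steps:
  - name: build_app
    type: Bash
    configuration:
      inputResources:
        - name: appRepo                # a change to this resource triggers the step
    execution:
      onStart:
        - echo "Starting build"
      onExecute:
        - ./build.sh                   # placeholder build command
      onSuccess:
        - echo "Build succeeded"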
For more information, and a list of all available step types, see the Pipelines Steps reference.
Resources
Resources are among the key building blocks of all pipelines. They are information entities used to store and exchange information across steps and pipelines.
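For example, a GitRepo resource points at a source repository through a source-control integration; steps can then list it under inputResources to be triggered by commits, or under outputResources to publish information for downstream steps. The integration and repository names below are placeholders:

resources:
  - name: appRepo
    type: GitRepo
    configuration:
      gitProvider: myGithubIntegration   # a GitHub integration created beforehand
      path: myorg/myapp                  # repository full name
      branches:
        include: main                    # only watch the main branch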
For more information, and a list of all available resource types, see the Pipelines Resources reference.
Steplets
The Matrix native step enables your pipeline to execute the same set of actions repeatedly across a variety of configurations and runtime environments, with each variant executing as an independent step, also called a steplet. When configured to do so, these steplets can execute in parallel on multiple build nodes. On completion of all steplets, Pipelines aggregates their result statuses, giving the appearance of a single step.
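The sketch below assumes a Bash step mode and an environment-variable matrix; each listed variable set becomes one steplet (see the Pipelines Steps reference for the full syntax):

steps:
  - name: test_matrix
    type: Matrix
    stepMode: Bash                     # each steplet behaves like a Bash step
    configuration:
      multiNode: true                  # allow steplets to run in parallel on multiple nodes
      matrix:
        environmentVariables:
          - nodeVersion: "14"          # steplet 1
          - nodeVersion: "16"          # steplet 2
    execution:
      onExecute:
        - echo "Testing with Node.js ${nodeVersion}"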
Runs
A run is an instance of execution of a pipeline. Pipelines maintains an ordered history of all runs of each pipeline, with an execution log that can be examined through the JFrog Platform.
Runtimes
Every step in your pipeline executes on a build node that has been provisioned with a runtime environment. Through Pipelines DSL, you can control which runtimes your steps execute in.
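For example, a step can request one of the JFrog-provided runtime images by language and version through its runtime configuration (the language and version shown here are illustrative):

steps:
  - name: build_java
    type: Bash
    configuration:
      runtime:
        type: image
        image:
          auto:                        # select a JFrog-provided runtime image
            language: java
            versions:
              - "17"
    execution:
      onExecute:
        - java -version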
For more information, see Managing Runtimes.
Runtime Images
A runtime image is a preconfigured Docker container that includes the necessary OS, software tools, packages, and configurations that a step needs to execute.
The JFrog Platform Deployment provides a standard set of runtime images that can be used for most applications. This set includes baseline runtimes with variants to support many commonly used languages. You can also create your own runtime images for specialized needs.
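A step can also point at your own runtime image through a custom image specification; the registry integration, image name, and tag below are assumptions for illustration:

configuration:
  runtime:
    type: image
    image:
      custom:
        name: myregistry.example.com/builders/go-builder    # your own image
        tag: "1.0"
        registry: myDockerRegistryIntegration               # integration with pull access to the registry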
Nodes
To run any step in a pipeline, you need a build node (virtual machine) that will receive the runtime container where the step will execute.
You must provide nodes and attach them to your JFrog Pipelines project. A node can be on any infrastructure that you choose to use, whether it is from a cloud provider (such as AWS, GCP, or Azure), or on your own infrastructure if your security policies require your operations to remain behind your own firewall.
Nodes can be either static, which are available all the time, or dynamic, which are spun up on-demand through a cloud service.
Node Pools
A node pool is a convenient way to logically group nodes. This enables you to run steps of a pipeline simultaneously, maintain nodes of different architectures and operating systems, pin steps to run on specific node types, and more.
A node pool is assigned a default runtime image. This default is automatically provisioned to the pool's nodes unless a step overrides this behavior by specifying a different runtime.
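For example, a step can be pinned to a particular node pool (and, if needed, a different runtime image) in its configuration; the pool name below is a placeholder:

steps:
  - name: build_on_arm
    type: Bash
    configuration:
      nodePool: arm64_pool             # run this step on nodes from this pool
    execution:
      onExecute:
        - uname -m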