Bash is a generic step type that enables executing any shell command. This general-purpose step can be used to execute any action that can be scripted, even with tools and services that haven't been integrated with JFrog Pipelines. It is the most versatile of the step types, while still taking full advantage of the step lifecycle.
All native steps derive from the Bash step. This means that all steps share the same base set of tags from Bash, while each native step adds its own tags to support its particular function. It is therefore important to be familiar with the Bash step definition, since it is the core of the definition of every other step.
pipelines:
  - name: <string>
    steps:
      - name: <string>
        type: Bash
        configuration:
          affinityGroup: bldGroup
          priority: <[0-10000]>
          timeoutSeconds: <job timeout limit>
          nodePool: <name of the nodePool>
          chronological: <true/false>
          allowFailure: <true/false>
          environmentVariables:
            env1: <string>
            env2: <string>
            env3:
              default: <string>
              description: <string>
              values: <array>
              allowCustom: <true/false>
          integrations:
            - name: <integration name>
          inputSteps:
            - name: <step name>
              status:
                - <terminal_status>
                - <terminal_status>
                - <terminal_status>
          inputResources:
            - name: <resource name>
              trigger: <true/false>        # default true
              newVersionOnly: <true/false> # default false
              branch: <string>             # see description of defaults below
          outputResources:
            - name: <resource name>
              branch: <string>             # see description of defaults below
          runtime:
            type: <image/host>
            image:
              auto:
                language: <string>
                version: <string>  # specifies a single version. Cannot be used if "versions" is defined.
                versions:          # specifies multiple versions. Cannot be used if "version" is defined.
                  - <string>
              custom:
                name: <string>
                tag: <string>
                options: <string>
                registry: <integration>   # optional integration for private registry
                sourceRepository: <path>  # required if registry is Artifactory. e.g. docker-local
                region:                   # required if registry is AWS. e.g. us-east-1
                autoPull: <true/false>    # default true; pulls image before run
        execution:
          onStart:
            - echo "Preparing for work..."
          onExecute:
            - echo "executing task command 1"
            - echo "executing task command 2"
          onSuccess:
            - echo "Job well done!"
          onFailure:
            - echo "uh oh, something went wrong"
          onComplete: # always
            - echo "Cleaning up some stuff"
name: An alphanumeric string (underscores are permitted) that identifies the step. Choose a name that accurately describes what the step does, e.g. prov_test_env for a job that provisions a test environment. Step names must be unique within a pipeline.
type: Must be Bash for this step type.
Specifies all optional configuration selections for the step's execution environment.
affinityGroup: Label that controls affinity to a node. All steps with the same affinityGroup will be executed on the same node, which allows them to share state. For example, give DockerBuild and DockerPush steps in a pipeline the same affinityGroup so that the image built in the DockerBuild step can be published by the DockerPush step.
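As a minimal sketch (the pipeline, step, and file names here are illustrative, not from a real project), two Bash steps that need to share files on disk can be pinned to the same node:

```yaml
pipelines:
  - name: shared_workspace
    steps:
      - name: produce
        type: Bash
        configuration:
          affinityGroup: buildGroup      # same group => same node
        execution:
          onExecute:
            - echo "artifact" > /tmp/shared.txt
      - name: consume
        type: Bash
        configuration:
          affinityGroup: buildGroup      # runs on the node where "produce" ran
          inputSteps:
            - name: produce
        execution:
          onExecute:
            - cat /tmp/shared.txt        # file written by the previous step
```

Without the shared affinityGroup, the two steps could land on different nodes and the file would not be visible to the second step.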
priority: Controls the priority of a step when there are parallel steps in a pipeline, or multiple pipelines executing. It determines which step runs first among all steps that are ready to run. Steps with a lower number run before steps with a higher number; for example, priority 10 runs before priority 100. The default priority is 9999.
Priority does not apply to steps that are still waiting for an input to complete, or that are configured to run in a node pool with no available nodes. If two steps are ready to run and only one node is available, the step with the lower priority number runs first, regardless of which pipeline each step belongs to.
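A sketch of two otherwise-parallel steps (names illustrative) where one should win when nodes are scarce:

```yaml
pipelines:
  - name: priority_demo
    steps:
      - name: urgent
        type: Bash
        configuration:
          priority: 10       # runs first when only one node is free
        execution:
          onExecute:
            - echo "high priority work"
      - name: routine
        type: Bash
        configuration:
          priority: 100      # waits until a node becomes available
        execution:
          onExecute:
            - echo "low priority work"
```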
timeoutSeconds: Time limit, in seconds, for the step to complete. If the step does not complete within the given time limit, it is forced to a completion status of failed.
nodePool: Assigns the node pool on which the step executes. If a node pool isn't specified, the step executes on the default node pool. See here to learn more about node pools.
chronological: Specifies that the step must execute in chronological order, to ensure it receives all state updates from preceding steps. A step with chronological: true does not run in parallel with other steps in the run; it waits for previously triggered steps to complete before it starts.
allowFailure: If you do not want a step to contribute to the final status of the run, add allowFailure: true to the configuration section of that step. When this option is used, the final status of the run is not affected even when the step fails or is skipped.
For more information, see Conditional Workflows.
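For example (step and script names are illustrative), a non-critical check can be marked so its failure does not fail the run:

```yaml
- name: optional_check
  type: Bash
  configuration:
    allowFailure: true       # a failure here does not fail the run
  execution:
    onExecute:
      - ./nightly-lint.sh    # illustrative command that may fail
```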
You can also create a condition based on the values of variables set with add_run_variables, so that a step can be skipped, based on dynamically set variables, before it is assigned to a node.
For more information, see Run Variable Conditional Workflow.
environmentVariables: Assigns environment variables and their values in key: value format. All environment variables assigned within a step definition are active only for the scope of the execution of that step.
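A minimal sketch (variable names are illustrative): one fixed variable and one variable with a default and a restricted set of values, both referenced in the step's commands:

```yaml
- name: greet
  type: Bash
  configuration:
    environmentVariables:
      GREETING: hello          # fixed value
      TARGET:
        default: world         # used unless overridden in a custom run
        description: Who to greet
        values:
          - world
          - team
        allowCustom: false     # only the listed values are accepted
  execution:
    onExecute:
      - echo "$GREETING, $TARGET"
```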
integrations: A collection of integrations that will be used by this step. Integrations can be used directly in a step, without a resource.
inputSteps: A collection of named steps whose completion will trigger execution of this step.
In addition, you can set a status conditional workflow for input steps. When configured, the step executes only if an input step's status during the current run satisfies the condition. You can configure any number of statuses for an input step.
Note that only an input step's status in the current run is considered for conditional workflows. If an input step is not part of the current run, the condition for that input step is assumed to be met.
For more information, see Step Status Conditional Workflow.
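For example (step names are illustrative), a cleanup step that runs only when its input step fails or times out might be declared as:

```yaml
- name: cleanup_on_failure
  type: Bash
  configuration:
    inputSteps:
      - name: deploy
        status:
          - failure
          - timeout
  execution:
    onExecute:
      - echo "deploy did not succeed; rolling back"
```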
inputResources: A collection of named Pipelines Resources that will be used by this step as inputs.
outputResources: A collection of named Pipelines Resources that will be generated or changed by this step.
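As a sketch, assuming a GitRepo resource named appSource and a PropertyBag resource named buildInfo are defined elsewhere in the pipeline sources (the names and the commitSha key are illustrative), a step can consume one resource and update the other with the write_output utility:

```yaml
- name: record_build
  type: Bash
  configuration:
    inputResources:
      - name: appSource
        trigger: false     # do not start a run on new commits
    outputResources:
      - name: buildInfo
  execution:
    onExecute:
      - cd $res_appSource_resourcePath
      - write_output buildInfo commitSha=$res_appSource_commitSha
```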
runtime: Specifies the runtime for the execution node.
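For instance, to run the step in a custom image pulled from an Artifactory Docker registry (the image, integration, and repository names are placeholders):

```yaml
runtime:
  type: image
  image:
    custom:
      name: myteam/build-tools
      tag: latest
      registry: myArtifactoryIntegration  # placeholder integration name
      sourceRepository: docker-local      # required when registry is Artifactory
```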
Declare sets of shell command sequences to perform for different execution phases:
onStart: Commands to execute in advance of onExecute.
onExecute: Main commands to execute for the step.
onSuccess: Commands to execute on successful completion of onExecute.
onFailure: Commands to execute on failed completion of onExecute.
onComplete: Commands to execute on any completion of onExecute, whether success or failure.
onStart, onExecute, onSuccess, onFailure, and onComplete are reserved keywords. Using these keywords in any other context in your execution scripts can cause unexpected behavior.
Perform a build activity
This is an example of how to use the Bash step to perform a build activity.
Bash step to build
- name: build
  type: Bash
  configuration:
    nodePool: my_node_pool
    environmentVariables:
      env1: value1
      env2:
        default: value2
        description: Example Variable
        values:
          - value2
          - value3
        allowCustom: false
    runtime:
      type: image
      image:
        auto:
          language: node
          versions:
            - "16"
    inputResources:
      - name: src
  execution:
    onExecute:
      - cd $res_src_resourcePath
      - npm install
      - mkdir -p testresults && mkdir -p codecoverage
      - $res_src_resourcePath/node_modules/.bin/mocha --recursive "tests/**/*.spec.js" -R mocha-junit-reporter --reporter-options mochaFile=testresults/testresults.xml
      - $res_src_resourcePath/node_modules/.bin/istanbul --include-all-sources cover -root "routes" node_modules/mocha/bin/_mocha -- -R spec-xunit-file --recursive "tests/**/*.spec.js"
      - $res_src_resourcePath/node_modules/.bin/istanbul report cobertura --dir codecoverage
      - save_tests $res_src_resourcePath/testresults/testresults.xml
    onSuccess:
      - send_notification mySlack "build completed"
Python in a Bash step
This is an example of how to use Python in a Bash step.
resources:
  - name: script
    type: GitRepo
    configuration:
      path: jfrog/sample-script
      gitProvider: myGithub

pipelines:
  - name: test_stepTestReports
    steps:
      - name: testReport
        type: Bash
        configuration:
          inputResources:
            - name: script
        execution:
          onExecute:
            - cd $res_script_resourcePath
            - ls
            - python -m py_compile calc.py
            - pip install --upgrade pip
            - hash -d pip
            - pip install pytest
            - py.test --verbose --junit-xml test-reports/results.xml test_calc.py
          onComplete:
            - save_tests $res_script_resourcePath/test-reports/results.xml
This example uses the chronological tag so that Step1, Step2, and Step3 execute one at a time rather than in parallel. Each step adds a run variable with add_run_variables, and the Finish step reads all three.
pipelines:
  - name: bash_chronological
    steps:
      - name: Start
        type: Bash
        execution:
          onExecute:
            - echo "It's a start."
      - name: Step1
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step1=foo
      - name: Step2
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step2=bar
      - name: Step3
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step3=baz
      - name: Finish
        type: Bash
        configuration:
          inputSteps:
            - name: Step1
            - name: Step2
            - name: Step3
        execution:
          onExecute:
            - |
              echo "Step1: $step1"
              echo "Step2: $step2"
              echo "Step3: $step3"
This example uses the timeoutSeconds tag. The step's command sleeps for 3 minutes but the step has a 10-second time limit, so it is forced to a failed status.
pipelines:
  - name: pipelines_S_Bash_0023
    steps:
      - name: S_Bash_0023
        type: Bash
        configuration:
          timeoutSeconds: 10
        execution:
          onExecute:
            - sleep 3m