Bash is a generic step type that executes any shell command. This general-purpose step can script any action, including against tools and services that haven't been integrated with JFrog Pipelines. It is the most versatile of the step types, while still taking full advantage of everything the step lifecycle offers.
All native steps derive from the Bash step. This means that all steps share the same base set of tags from Bash, while each native step adds its own tags to support its particular function. It is therefore important to be familiar with the Bash step definition, since it is the core of the definition of every other step.
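At its simplest, a Bash step needs only a name, the Bash type, and an onExecute block. The following is a minimal sketch; the pipeline and step names are placeholders:

```yaml
pipelines:
  - name: my_first_pipeline      # placeholder pipeline name
    steps:
      - name: say_hello          # placeholder step name
        type: Bash
        execution:
          onExecute:
            - echo "Hello from a Bash step"
```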
Usage
Bash
```yaml
pipelines:
  - name: <string>
    steps:
      - name: <string>
        type: Bash
        configuration:
          affinityGroup: bldGroup
          priority: <[0-10000]>
          timeoutSeconds: <job timeout limit>
          nodePool: <name of the nodePool>
          chronological: <true/false>
          allowFailure: <true/false>
          environmentVariables:
            env1: <string>
            env2: <string>
            env3:
              default: <string>
              description: <string>
              values: <array>
              allowCustom: <true/false>
          integrations:
            - name: <integration name>
          inputSteps:
            - name: <step name>
              status:
                - <terminal_status>
                - <terminal_status>
                - <terminal_status>
          inputResources:
            - name: <resource name>
              trigger: <true/false>          # default true
              newVersionOnly: <true/false>   # default false
              branch: <string>               # see description of defaults below
          outputResources:
            - name: <resource name>
              branch: <string>               # see description of defaults below
          runtime:
            type: <image/host>
            image:
              auto:
                language: <string>
                version: <string>   # specifies a single version. Cannot be used if "versions" is defined.
                versions:           # specifies multiple versions. Cannot be used if "version" is defined.
                  - <string>
              custom:
                name: <string>
                tag: <string>
                options: <string>
                registry: <integration>    # optional integration for private registry
                sourceRepository: <path>   # required if registry is Artifactory. e.g. docker-local
                region:                    # required if registry is AWS. e.g. us-east-1
                autoPull: <true/false>     # default true; pulls image before run
        execution:
          onStart:
            - echo "Preparing for work..."
          onExecute:
            - echo "executing task command 1"
            - echo "executing task command 2"
          onSuccess:
            - echo "Job well done!"
          onFailure:
            - echo "uh oh, something went wrong"
          onComplete: # always
            - echo "Cleaning up some stuff"
```
Tags
name
An alphanumeric string (underscores are permitted) that identifies the step. The name should be chosen to accurately describe what the step does, e.g. prov_test_env
to represent a job that provisions a test environment. Names of steps must be unique within a pipeline.
type
Must be Bash
for this step type.
configuration
Specifies all optional configuration selections for the step's execution environment. A combined sketch showing several of these tags in use follows the table below.
Tag | Description of usage | Required/Optional |
---|---|---|
| affinityGroup | Label that controls affinity to a node. All steps with the same affinityGroup are executed on the same node, which allows them to share state. For example, giving the DockerBuild and DockerPush steps in a pipeline the same affinityGroup lets the image built by the DockerBuild step be published by the DockerPush step. | Optional |
| priority | Controls the priority of a step when there are parallel steps in a pipeline or multiple pipelines executing. It determines which step runs first across all steps that could run if there were no constraints on the number of steps running. Steps with a lower number run before steps with higher numbers; for example, priority 10 runs before priority 100. The default priority is 9999. Priority does not apply to steps that are still waiting for an input to complete or that are configured to run in a node pool with no available nodes. If two steps are ready to run and only one node is available, the step with the lower priority number runs first, regardless of which pipeline each step belongs to. | Optional |
| timeoutSeconds | Time limit, in seconds, for the step to complete. If the step does not complete within this limit, it is forced to a completion status of failed. | Optional |
| nodePool | Assigns the node pool the step executes on. If no node pool is specified, the step executes on the default node pool. See here to learn more about node pools. | Optional |
| chronological | Specifies that the step must execute in chronological order, to ensure receipt of all state updates from preceding steps. | Optional |
| allowFailure | If you do not want a step to contribute to the final status of the run, add allowFailure: true to the configuration section of that step. When this option is used, the final status of the run is not affected even when the step fails or is skipped. For more information, see Conditional Workflows. | Optional |
| | Creates a condition based on the values of run variables set with add_run_variables, so that a step can be skipped based on dynamically set variables before it is assigned to a node. For more information, see Run Variable Conditional Workflow. | Optional |
| environmentVariables | Assigns environment variables and their values in key: value format. All environment variables assigned within a step definition are active only for the scope of the execution of that step. A variable can also be declared as an object with default, description, values, and allowCustom keys, as shown in the usage example above. | Optional |
| integrations | A collection of integrations that will be used by this step. Integrations can be used directly in a step, without a resource. | Optional |
| inputSteps | A collection of named steps whose completion will trigger execution of this step. You can also set a status conditional workflow for input steps: when configured, the step executes only if an input step's status in the current run satisfies the condition. Any number of statuses can be configured for an input step. Note that only the status of an input step in the current run is considered for conditional workflows; if a step is not part of the current run, the condition for that input step is always assumed to be met. For more information, see Step Status Conditional Workflow. | Optional |
| inputResources | A collection of named Pipelines Resources that will be used by this step as inputs. Each input resource can optionally set trigger (default true), newVersionOnly (default false), and branch, as shown in the usage example above. | Optional |
| outputResources | A collection of named Pipelines Resources that will be generated or changed by this step. Each output resource can optionally specify a branch, as shown in the usage example above. | Optional |
| runtime | Specifies the runtime for the execution node. | Optional |
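To illustrate how several of these configuration tags combine, here is a minimal sketch. The resource, integration, and step names (app_repo, mySlack, risky_step, report_step) and the shell commands are hypothetical, and the statuses listed under the input step assume that success and failure are among the accepted terminal statuses:

```yaml
pipelines:
  - name: config_tags_demo              # hypothetical pipeline name
    steps:
      - name: risky_step
        type: Bash
        configuration:
          allowFailure: true            # a failure here does not fail the run
          timeoutSeconds: 300           # fail the step if it runs longer than 5 minutes
          inputResources:
            - name: app_repo            # hypothetical GitRepo resource
              newVersionOnly: true      # trigger only when the resource has a new version
        execution:
          onExecute:
            - cd $res_app_repo_resourcePath
            - ./run_flaky_checks.sh     # hypothetical script in the repository
      - name: report_step
        type: Bash
        configuration:
          integrations:
            - name: mySlack             # notification integration used directly, without a resource
          inputSteps:
            - name: risky_step
              status:
                - success
                - failure               # run whether risky_step passed or failed
        execution:
          onExecute:
            - send_notification mySlack "risky_step has completed"
```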
execution
Declare sets of shell command sequences to perform for different execution phases. A brief sketch of these phases follows the note below.
Tag | Description of usage | Required/Optional |
---|---|---|
| onStart | Commands to execute in advance of onExecute | Optional |
| onExecute | Main commands to execute for the step | Optional |
| onSuccess | Commands to execute on successful completion of onExecute | Optional |
| onFailure | Commands to execute on failed completion of onExecute | Optional |
| onComplete | Commands to execute on any completion of onExecute (always runs) | Optional |
Note: onExecute, onStart, onSuccess, onFailure, and onComplete are reserved keywords. Using these keywords in any other context in your execution scripts can cause unexpected behavior.
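As a quick illustration of the phases, the sketch below uses onSuccess and onFailure for notifications and onComplete for cleanup. The step name, build script, and temporary directory are assumptions; mySlack is a notification integration like the one used in the examples below:

```yaml
- name: build_and_notify          # hypothetical step name
  type: Bash
  execution:
    onStart:
      - echo "Preparing workspace..."
    onExecute:
      - ./build.sh                # hypothetical build script
    onSuccess:
      - send_notification mySlack "build passed"
    onFailure:
      - send_notification mySlack "build failed"
    onComplete:
      - rm -rf tmp_workdir        # runs regardless of success or failure
```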
Examples
The Pipelines DSL for these examples is available in this repository in the JFrog GitHub account.
Perform a build activity
This is an example of how to use the Bash step to perform a build activity.
Bash step to build
```yaml
- name: build
  type: Bash
  configuration:
    nodePool: my_node_pool
    environmentVariables:
      env1: value1
      env2:
        default: value2
        description: Example Variable
        values:
          - value2
          - value3
        allowCustom: false
    runtime:
      type: image
      image:
        auto:
          language: node
          versions:
            - "16"
    inputResources:
      - name: src
  execution:
    onExecute:
      - cd $res_src_resourcePath
      - npm install
      - mkdir -p testresults && mkdir -p codecoverage
      - $res_src_resourcePath/node_modules/.bin/mocha --recursive "tests/**/*.spec.js" -R mocha-junit-reporter --reporter-options mochaFile=testresults/testresults.xml
      - $res_src_resourcePath/node_modules/.bin/istanbul --include-all-sources cover -root "routes" node_modules/mocha/bin/_mocha -- -R spec-xunit-file --recursive "tests/**/*.spec.js"
      - $res_src_resourcePath/node_modules/.bin/istanbul report cobertura --dir codecoverage
      - save_tests $res_src_resourcePath/testresults/testresults.xml
    onSuccess:
      - send_notification mySlack "build completed"
```
Python in a Bash step
This is an example of how to use Python in a Bash step.
Python
```yaml
resources:
  - name: script
    type: GitRepo
    configuration:
      path: jfrog/sample-script
      gitProvider: myGithub

pipelines:
  - name: test_stepTestReports
    steps:
      - name: testReport
        type: Bash
        configuration:
          inputResources:
            - name: script
        execution:
          onExecute:
            - cd $res_script_resourcePath
            - ls
            - python -m py_compile calc.py
            - pip install --upgrade pip
            - hash -d pip
            - pip install pytest
            - py.test --verbose --junit-xml test-reports/results.xml test_calc.py
          onComplete:
            - save_tests $res_script_resourcePath/test-reports/results.xml
```
runtime, environmentVariables, and inputSteps tags
This example uses the runtime, environmentVariables, and inputSteps tags:
```yaml
pipelines:
  - name: api_steps
    steps:
      - name: api_steps
        type: Bash
        configuration:
          runtime:
            type: host
          environmentVariables:
            env1: value1
            env2: value2
        execution:
          onExecute:
            - touch cachefile.txt
            - add_cache_files cachefile.txt my_file
      - name: api_steps_2
        type: Bash
        configuration:
          runtime:
            type: host
          inputSteps:
            - name: api_steps
        execution:
          onExecute:
            - echo "step 2.."
  - name: api_steps_ProjectAdmin
    steps:
      - name: api_steps_ProjectAdmin
        type: Bash
        configuration:
          runtime:
            type: host
          environmentVariables:
            env1: value1
            env2: value2
        execution:
          onExecute:
            - touch cachefile.txt
            - add_cache_files cachefile.txt my_file
      - name: api_steps_ProjectAdmin_2
        type: Bash
        configuration:
          runtime:
            type: host
          inputSteps:
            - name: api_steps_ProjectAdmin
        execution:
          onExecute:
            - echo "step 2.."
```
affinityGroup and priority tags
This example uses the affinityGroup and priority tags. All steps share the ag_foo affinity group, so they execute on the same node, and the priority values determine which of the ready steps runs first (lower numbers run earlier):
```yaml
pipelines:
  - name: S_WF_019
    steps:
      - name: S_WF_019_001
        type: Bash
        execution:
          onStart:
            - add_run_variables step_1_var="step_1"
          onExecute:
            - echo "step 1 is running"
      - name: S_WF_019_002
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_001
          affinityGroup: ag_foo
          priority: 4
        execution:
          onStart:
            - echo "step_4_var - ${step_4_var}"
            - if [ "$step_4_var" != "step_4" ]; then exit 1; fi
            - add_run_variables step_2_var="step_2"
          onExecute:
            - echo "step 2 is running"
      - name: S_WF_019_003
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_001
          affinityGroup: ag_foo
          priority: 1
        execution:
          onStart:
            - echo "step_1_var - ${step_1_var}"
            - if [ "$step_1_var" != "step_1" ]; then exit 1; fi
            - add_run_variables step_3_var="step_3"
          onExecute:
            - echo "step 3 is running"
      - name: S_WF_019_004
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_001
          affinityGroup: ag_foo
          priority: 3
        execution:
          onStart:
            - echo "step_3_var - ${step_3_var}"
            - if [ "$step_3_var" != "step_3" ]; then exit 1; fi
            - add_run_variables step_4_var="step_4"
          onExecute:
            - echo "step 4 is running"
      - name: S_WF_019_005
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_002
            - name: S_WF_019_003
            - name: S_WF_019_004
          affinityGroup: ag_foo
          priority: 4
        execution:
          onStart:
            - echo "step_6_var - ${step_6_var}"
            - if [ "$step_6_var" != "step_6" ]; then exit 1; fi
            - add_run_variables step_5_var="step_5"
          onExecute:
            - echo "step 5 is running"
      - name: S_WF_019_006
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_002
            - name: S_WF_019_003
            - name: S_WF_019_004
          affinityGroup: ag_foo
          priority: 2
        execution:
          onStart:
            - echo "step_2_var - ${step_2_var}"
            - echo "step_3_var - ${step_3_var}"
            - echo "step_4_var - ${step_4_var}"
            - if [ "$step_2_var" != "step_2" ]; then exit 1; fi
            - if [ "$step_3_var" != "step_3" ]; then exit 1; fi
            - if [ "$step_4_var" != "step_4" ]; then exit 1; fi
            - add_run_variables step_6_var="step_6"
          onExecute:
            - echo "step 6 is running"
      - name: S_WF_019_007
        type: Bash
        configuration:
          inputSteps:
            - name: S_WF_019_005
            - name: S_WF_019_006
          affinityGroup: ag_foo
          priority: 2
        execution:
          onStart:
            - echo "step_1_var - ${step_1_var}"
            - echo "step_2_var - ${step_2_var}"
            - echo "step_3_var - ${step_3_var}"
            - echo "step_4_var - ${step_4_var}"
            - echo "step_5_var - ${step_5_var}"
            - echo "step_6_var - ${step_6_var}"
            - if [ "$step_1_var" != "step_1" ]; then exit 1; fi
            - if [ "$step_2_var" != "step_2" ]; then exit 1; fi
            - if [ "$step_3_var" != "step_3" ]; then exit 1; fi
            - if [ "$step_4_var" != "step_4" ]; then exit 1; fi
            - if [ "$step_5_var" != "step_5" ]; then exit 1; fi
            - if [ "$step_6_var" != "step_6" ]; then exit 1; fi
          onExecute:
            - echo "step 7 is running"
```
chronological tag
This example uses the chronological tag on Step1, Step2, and Step3 so that the run variables each of them adds are received by the Finish step:
```yaml
pipelines:
  - name: bash_chronological
    steps:
      - name: Start
        type: Bash
        execution:
          onExecute:
            - echo "It's a start."
      - name: Step1
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step1=foo
      - name: Step2
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step2=bar
      - name: Step3
        type: Bash
        configuration:
          chronological: true
          inputSteps:
            - name: Start
        execution:
          onExecute:
            - add_run_variables step3=baz
      - name: Finish
        type: Bash
        configuration:
          inputSteps:
            - name: Step1
            - name: Step2
            - name: Step3
        execution:
          onExecute:
            - |
              echo "Step1: $step1"
              echo "Step2: $step2"
              echo "Step3: $step3"
```
timeoutSeconds tag
This example uses the timeoutSeconds tag. The sleep 3m command exceeds the 10-second limit, so the step is forced to a failed completion status:
```yaml
pipelines:
  - name: pipelines_S_Bash_0023
    steps:
      - name: S_Bash_0023
        type: Bash
        configuration:
          timeoutSeconds: 10
        execution:
          onExecute:
            - sleep 3m
```