Frequently Asked Questions

JFrog Pipelines Documentation


This section lists some of the most frequently asked questions (FAQs) about Pipelines.

  • How to skip a run on a Git Repository Commit?

    If a specific commit should not trigger any runs for a GitRepo resource, include the text [skipRun] anywhere in the commit message and no runs will be triggered by that commit. For more information, see Skipping a Git Repository Commit.
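    As a minimal sketch of the token matching (the commit message is hypothetical; the check mirrors what Pipelines does when it scans the message for [skipRun]):

```shell
# Hypothetical commit message; if it contains the [skipRun] token
# anywhere, Pipelines triggers no runs for that commit.
msg="Fix typo in docs [skipRun]"
case "$msg" in
  *"[skipRun]"*) result="run skipped" ;;
  *)             result="run triggered" ;;
esac
echo "$result"
```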

  • How to view run execution time?

    In the Pipelines view, hover over the Triggered column for a run to see the time it took for the run to execute.

  • Is step_id (environment variable) the same for multiple nodes in one matrix step?

    Yes. The step_id environment variable has the same value for all nodes of a single matrix step.

  • Is there any difference between the execution stages?

    No, there are no strict technical limitations that dictate the use of execution stages, such as onStart or onExecute. The choice between the stages is more about logically structuring your script to ensure that initialization and execution are separated appropriately. This separation can make your code more organized and easier to maintain.
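    As an illustration, a Bash step might use onStart for initialization and onExecute for the main work (the pipeline, step, and script names here are hypothetical; the stage keys are part of the Pipelines DSL):

```yaml
pipelines:
  - name: demo_pipeline            # hypothetical pipeline name
    steps:
      - name: build
        type: Bash
        execution:
          onStart:                 # initialization: runs before onExecute
            - echo "Preparing environment"
            - export BUILD_MODE=release
          onExecute:               # main logic of the step
            - ./build.sh "$BUILD_MODE"
```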

  • Why is my pipeline not triggering when a change is pushed to the repository?

    Confirm that the webhook is correctly configured in your version control system. Check for any firewall or network restrictions that might be blocking incoming webhooks.

  • I'm trying to use the waitOnParent flow in Pipelines, but it doesn't seem to be working as expected. For example, Pipeline A doesn't wait for Pipeline B or C. How can I make only the second pipeline wait, without the first step waiting?

    In Pipelines, once a pipeline run starts, it cannot dynamically wait for another pipeline. This behavior is by design. The waitOnParent flow is designed to ensure that a child pipeline waits for its parent pipeline's success, but it doesn't support dynamic waiting for specific sibling pipelines. If you need specific ordering or dependencies between pipelines, you may need to design your workflow differently or use alternative methods to achieve your desired orchestration.

  • Can Pipelines display the results of Go tests executed on a Java project at the current step?

    Pipelines does not natively support this. If you want to display the results of Go tests within a Java project's pipeline step, you'll need to use a tool that can translate Go test results into a format that Pipelines can understand, such as JUnit format. A common tool used for this purpose is "gotestsum". By using "gotestsum", you can ensure that Go test results can be shown on the relevant step's tab in your Pipelines setup.
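    A sketch of that approach, assuming gotestsum is installed on the node and that save_tests is the Pipelines utility used to publish JUnit-format results to the step's Tests tab:

```yaml
execution:
  onExecute:
    # Run Go tests and emit a JUnit-format report Pipelines can parse
    - gotestsum --junitfile results.xml ./...
    # Publish the report so it appears on the relevant step's tab
    - save_tests results.xml
```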

  • I need to migrate a Jenkins job to Pipelines. One problem is that the original Jenkins job uses Groovy and Python scripts. As I understand it, Pipelines nodes have Python installed but not Groovy. Is it possible to install Groovy on the nodes?

    Currently, we do not have any build runtime images in which we install Groovy. However, you can install Groovy on the fly within your pipeline. To do this, follow these steps in the execution section of your pipeline:

    1. Install SDKMAN!:

      curl -s "https://get.sdkman.io" | bash
    2. Source the SDKMAN! initialization script:

      source "$HOME/.sdkman/bin/sdkman-init.sh"
    3. Install Groovy:

      sdk install groovy
    4. Verify the installation:

      groovy --version
  • Will files added using add_affinity_group_files be available for steps only in the current pipeline run, or for the current run and consecutive runs (steps with the same affinity group, of course)?

    Files added using add_affinity_group_files will be available only for the current pipeline run and its associated steps.
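    For example (step and file names, and the affinity group value, are hypothetical):

```yaml
steps:
  - name: producer
    type: Bash
    configuration:
      affinityGroup: shared_group    # hypothetical group name
    execution:
      onExecute:
        - echo "build output" > output.txt
        # Shares the file with same-affinity-group steps,
        # for the current run only
        - add_affinity_group_files output.txt
```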

  • What is a shared_workspace in the context of my system?

    The shared_workspace refers to a folder created on a node within your system. This folder is designed to be accessible and available for various system processes and steps.

  • Will the files stored in the shared_workspace be available only for the current run, or will they persist for consecutive runs as well?

    The files in the shared_workspace will remain available as long as the node on which it resides is up and running. This means that the folder will be accessible not only for the current run but also for consecutive runs, provided the node remains operational. It essentially serves as a persistent storage location that can be used by steps within an affinity group across multiple runs.
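    A sketch of that usage (the affinity group value and file names are hypothetical; $shared_workspace is the variable described above):

```yaml
steps:
  - name: producer
    type: Bash
    configuration:
      affinityGroup: build_group     # pins the step to one node
    execution:
      onExecute:
        # Persisted on the node; survives into consecutive runs
        - cp build/app ${shared_workspace}/app
  - name: consumer
    type: Bash
    configuration:
      affinityGroup: build_group     # same node, so same folder
    execution:
      onExecute:
        - ls ${shared_workspace}/app
```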

  • If I have a pipeline that builds an artifact (e.g., an executable), is there a way to upload it to Artifactory without using direct JFrog CLI commands inside the pipeline? Is it possible to combine the usage of $shared_workspace with native steps like uploadArtifact or DockerBuild to upload the binary?

    Yes, you can streamline the process of uploading pipeline-built artifacts to Artifactory without using direct JFrog CLI commands. Here's how to achieve this using $shared_workspace and native steps like uploadArtifact:

    • Prepare Your Binary: Place the binary file you want to upload in the $shared_workspace directory. This directory is available to all steps within an affinity group and can be used to share files between steps.

    • Utilize the uploadArtifact Step: In your pipeline, use the uploadArtifact step to upload the binary to Artifactory. In the configuration of the uploadArtifact step, specify the location of the binary within the $shared_workspace directory.

    • Affinity Group: Ensure that both steps, the one that builds the binary and the uploadArtifact step, belong to the same affinity group. Set the affinity group attribute to a common value, such as "aff_group", for both steps so that they run on the same node. This matters because it allows the uploadArtifact step to access the binary in the $shared_workspace and upload it to Artifactory.
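    Putting the three points together, a sketch might look like this (the pipeline and step names, the repository path, and the UploadArtifact fields sourcePath and targetPath are assumptions to verify against the native step reference):

```yaml
pipelines:
  - name: build_and_upload          # hypothetical pipeline name
    steps:
      - name: build_binary
        type: Bash
        configuration:
          affinityGroup: aff_group  # same group as the upload step
        execution:
          onExecute:
            # Place the built binary in the shared workspace
            - go build -o ${shared_workspace}/myapp ./...
      - name: upload_binary
        type: UploadArtifact
        configuration:
          affinityGroup: aff_group  # runs on the same node
          inputSteps:
            - name: build_binary
          sourcePath: ${shared_workspace}/myapp   # assumed field name
          targetPath: my-generic-repo/myapp       # assumed field name
```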

  • I am trying to build a stateful pipeline where, after running a "Matrix" step, I can access files that were uploaded during the Matrix step. How do I achieve this?

    You can access files in a stateful pipeline without the need for additional functions like add_run_files or add_pipeline_files. In a matrix step, files created in the current workspace are automatically uploaded. Here's how you can achieve this:

    • PreMatrix Step: In your PreMatrix step, add any files to the current directory. Let's name this step pre.

    • Matrix Step: During the Matrix step, the files you added in the PreMatrix step will be available in the pre folder under the current workspace. Let's name this step matrix. In the Matrix step, you can also add any additional files as needed.

    • PostMatrix Step: After the Matrix step, the files from the Matrix step will be available in the matrix folder under the current workspace. The files added in the PreMatrix step will be available in the matrix/pre folder.

    This approach allows you to work with files in a structured manner within your stateful pipeline, making it easy to access and manage them at each stage.
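    Under that layout, a PostMatrix step could read the files like this (the file names are hypothetical; the pre and matrix folder names come from the step names in the example above):

```yaml
execution:
  onExecute:
    # File created during the Matrix step ("matrix")
    - cat matrix/results.txt
    # File created in the PreMatrix step ("pre"), nested under matrix/
    - cat matrix/pre/seed.txt
```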