Distribution Application Config YAML File

JFrog Installation & Setup Documentation


The YAML configuration file offers an alternative way to specify your initial settings for JFrog Distribution.

To get you up and running as quickly and easily as possible for a new installation, you can configure your basic initial setup through the filesystem, before starting Distribution for the first time.

Any edits will apply to the whole Distribution cluster.

Take care when modifying Distribution configurations

Modifying the Distribution configurations is an advanced feature, and if done incorrectly may render the Distribution service in an undefined and unusable state. Since it is easy to overwrite configurations, we strongly recommend backing up the configuration before making any direct changes, and taking great care when doing so.

Default Home Directory / $JFROG_HOME

The default Distribution home directory is defined according to the installation type. For additional details see the System Directories page.

Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product.

Apply configuration changes

To update the application configuration using the YAML file, follow these steps:

  1. Copy the template YAML file.

    cd $JFROG_HOME/distribution/var/etc/distribution/
    cp template.distribution.config.import.yml distribution.config.import.yml
  2. Edit the properties in the new distribution.config.import.yml file.

  3. Restart the Distribution service with the updated YAML file. The update propagates to the other nodes in the cluster.
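As a quick sanity check, the copy in step 1 can be rehearsed in a scratch directory before touching a real installation (the temporary JFROG_HOME below is purely illustrative; on a real host it is your actual installation root):

```shell
# Illustrative dry run: build a scratch tree standing in for a real
# installation, then perform the same copy as step 1.
JFROG_HOME="$(mktemp -d)"
mkdir -p "$JFROG_HOME/distribution/var/etc/distribution"
touch "$JFROG_HOME/distribution/var/etc/distribution/template.distribution.config.import.yml"

cd "$JFROG_HOME/distribution/var/etc/distribution/"
cp template.distribution.config.import.yml distribution.config.import.yml
ls    # both the template and the new import file are now present
```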

Configuration file changes

A snapshot of the last imported configuration state is saved as distribution.config.latest.yml.

Previous YAML configuration files are saved as distribution.config.TIMESTAMP.yml, up to a maximum of 10 previous configuration states.

Supported configurations

Application YAML Configuration File

##################################################################
# To edit the Distribution cluster configurations,               #
# edit this file and rename it to distribution.config.import.yml #
# then restart Distribution                                      #
##################################################################
schema-version: 1
task:
  deleted-at-target-scraping:
    enabled: true             # enable or disable deleted at target scraping task
    batch-size: 200           # number of returned records per request
    interval-seconds: 900     # interval between successive runs of the delete at target scraping job
  heartbeat:
    interval-seconds: 5        # interval between successive runs of the heartbeat job
    consider-stale-seconds: 30 # the time period (seconds) a server can remain unresponsive before being considered stale in the cluster
  distribute:
    interval-seconds: 5        # interval between successive runs of the distribute job
  release-bundle-handler:
    interval-seconds: 5        # interval between successive runs of the release bundle handler job
  health-check:
    interval-secs: 15          # interval between successive runs of the health check job
probes:
  liveness:
    fail-on-long-failing-readiness:
      enabled: true
      failure-duration-secs: 60       # maximum failure time before starting liveness auto healing
cluster:
  sync:
    timeout-millis: 100               # initial time (ms) to wait before retrying a request
    socket-timeout-millis: 5000       # time to wait (ms) before giving up on executing a REST call on another server
    exponential-backoff-multiplier: 2 # number by which the retry timeout is multiplied before each subsequent retry. For example, by default, the second retry happens after 200 ms
    number-of-retries: 3              # maximum number of retries
    backoff-max-delay-millis: 1000    # maximum time between successive retries regardless of other settings
call-home:
  enabled: true
release-bundle:
  max-artifacts: 3000                           # maximum number of artifacts to fetch from artifactory on release bundle creation
  enable-docker-manifest-resolver: true         # enable or disable the Docker manifest resolver
  enable-docker-fat-manifest-resolver: true     # enable or disable the Docker fat manifest resolver
  use-signing-key-stored-in-distribution: false # whether to use the GPG key set in Distribution service for the distribution process of release bundle v2
distribute:
  distribution-to-source-artifactory:
    use-user-permissions: true                  # whether to use user token in distribution to source artifactory
  edge-node-token-expiration-minutes: 180       # the time period (minutes) a token lives for communicating with edge node
  load-balancer: "weight-based"                 # algorithm to use for distributing the work between the Distribution nodes
  auto-create-target-repo-advance: true         # allow auto target repositories creation if complex mappings are set
  close-in-progress-consider-stuck-minutes: 180 # the time in minutes for close_inprogress distribution status to be considered as stuck
server:
  max-http-header-size: 16384     # 16 KB in bytes
frontend:
  serve-statics-from-distribution: true # whether the frontend static files should be served from the distribution server. NOTE: for this change to take place all distribution nodes in the cluster need to be restarted
access-client:
  client-connection-timeout: 60000    # connection timeout in millis for Access client requests. e.g.: token verification, authentication
  client-socket-timeout: 60000        # socket timeout in millis for Access client responses.
  client-max-connections: 3           # max connections for Access client
artifactory-client:
  client-short-socket-timeout: 10000    # socket timeout in millis for Artifactory bound short tasks, e.g.: auth and pairing with Artifactory
  client-long-socket-timeout: 120000    # socket timeout in millis for Artifactory bound long tasks, e.g.: release-bundles' store, deletion and artifacts' gathering (AQL) queries
xray:
  enable-xray-integration: false                      # enable integration with xray
  release-bundle-scan-consider-stuck: 1800000         # xray vulnerability scan wait timeout (in milliseconds)
  grace-period-for-xray-triggering-in-millis: 600000  # retry xray vulnerability scan triggering (in milliseconds)
  allow-distribution-when-xray-is-unavailable: false  # allow distribution when xray is configured but not responsive
token:
  distribution-token-cache-expiration-millis: 3540000   # the amount of millis for the distribution tokens cache to be expired
  master-token-cache-expiration-millis: 30240000000     # the amount of millis for the master tokens cache to be expired
system:
  service-info:
    service-info-cache-expiration-millis: 30000  # the amount of millis for the service info cache to be expired
internal-events:
  max-retries: 1000                 # the max amount of retries that the system will perform in order to subscribe to the internal events (default: 1000)
  retry-delay-seconds: 300          # the initial delay between retries (default: 300 seconds)
  retry-max-delay-seconds: 1800     # the max delay between retries (default: 1800 seconds)
  token-expiration-minutes: 5       # the events subscription token expiration (default: 5 minutes)
  connection-timeout-seconds: 120   # the connection timeout (default: 120 seconds)
  socket-timeout-seconds: 120       # the socket timeout (default: 120 seconds)
metrics:
  log-flush-interval-seconds: 180   # interval between successive runs of the metric log flush job (default: 180 seconds)
internal: # Do not change unless instructed by the JFrog team
  task:
    monitoring:
      interval-seconds: 600
  monitoring:
    max-number-of-stuck-distribution: 50
    distribution-consider-stuck-seconds: 3600
  performance:
    ui-dedicated-workers: 8
  global-security:
    hide-unauthorized-resources: false
  base-url:
    base-url: ""
  call-home:
    publish-url: "https://api.bintray.com/products/jfrog/distribution/stats/usage"
    cron-expression: "0 30 0 * * ?"
    lock-expiration-millis: 43200000 # 12 hours
    random-delay-max-millis: 3600000 # 1 hour
  rbv2:
    support-projects: true
    subscribe-to-rbv2-distribution-federated-event: true
  xray:
    allow-distribution-if-xray-not-triggered: false
    block-unfinished-scans-timeout-seconds: 600
    block-unscanned-timeout-seconds: 60
  lifecycle-events:
    publish-audit-events-enabled: true
    publish-evidence-enabled: true
    max-retries: 5
    retry-delay-seconds: 1
    retry-max-delay-seconds: 10
  database-native-lock: # Change adoption requires restart
    initial-acquire-sleep-time-millis: 100 # Used in case of locking with retry
    max-acquire-sleep-time-millis: 10000 # Used in case of locking with retry
    enabled: true
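As an illustration of the cluster.sync retry settings above, this shell sketch derives the delay sequence implied by the default values (100 ms initial timeout, a x2 multiplier, 3 retries, capped at 1000 ms); the variable names are ours, and the actual implementation may count attempts differently:

```shell
# Illustrative only: compute the retry delays implied by the default
# cluster.sync values (timeout-millis: 100, exponential-backoff-multiplier: 2,
# number-of-retries: 3, backoff-max-delay-millis: 1000).
timeout=100; multiplier=2; retries=3; cap=1000
delay="$timeout"; schedule=""
for i in $(seq 1 "$retries"); do
  if [ "$delay" -gt "$cap" ]; then delay="$cap"; fi  # never wait longer than the cap
  schedule="$schedule $delay"
  delay=$((delay * multiplier))                      # back off before the next retry
done
echo "retry delays (ms):$schedule"                   # retry delays (ms): 100 200 400
```

With the defaults, the cap is never reached; lowering backoff-max-delay-millis below 400 would flatten the tail of this sequence.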

Increasing the Header Size (server.max-http-header-size)

The maximum request header size can be increased manually to prevent an HTTP 400 response when signing in to Distribution with SSO.

For this change to take effect, each Distribution node in the cluster must be restarted. Start by restarting the node on which the config file is installed, then proceed to the remaining nodes.
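For example, to raise the limit from the default 16 KB to 32 KB, set the value (in bytes) in the import file; 32768 here is an illustrative choice, not a JFrog recommendation:

```yaml
server:
  max-http-header-size: 32768     # 32 KB in bytes
```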

Setting the User Scoped Token for Distribution to Source Artifactory

From Distribution 2.13.2, user permissions are enforced when distributing to the source JPD. The permissions are as follows:

  • To distribute release bundles: Only users with read and deploy permissions on the target repositories can successfully complete a distribution process to the source Artifactory (in this case, the source JPD acts as the target JPD).

    Note

    If this type of user (non-admin) tries to distribute to a target repository that does not exist, they will receive an error message.

  • To delete release bundles: Only users with delete permission for the target repository can delete these bundles.

Permissions

distribute:
  distribution-to-source-artifactory:
    use-user-permissions: true  # whether to use user token in distribution to source artifactory

Once you upgrade your Distribution version to this release, the user-scoped token becomes the default configuration for Distribution.

Requirements and Limitations

This feature requires Artifactory 7.40.x or later on the source Artifactory (not the target). Otherwise, even if you have set this configuration to true, Distribution reverts to the previous behavior.

Important

This setting is relevant only for distributing to the source Artifactory.

Deploy Your GPG Key on the Source Artifactory

Distribution triggers the source Artifactory to clone the contents of signed Release Bundles into an isolated release-bundles repository. To allow this, you need to deploy the GPG key that is used in each of your Artifactory Edge nodes to the source Artifactory service. For more information, see Release Bundle Repositories.

For more details, refer to Setting a GPG Key in GPG Signing.

For more details on additional required configurations, refer to Configure Distribution.