The sharding-cluster binary provider can be used together with other binary providers for both local and cloud-native storage.
It adds a crossNetworkStrategy parameter, used as the read and write behavior, which validates the redundancy values and drives the balance mechanism. The provider must include a Remote Binary Provider in its dynamic-provider setting to allow synchronizing providers across the cluster.
The Sharding-Cluster provider listens to cluster topology events and creates or removes dynamic providers based on the current state of nodes in the cluster.
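As a sketch of that topology-driven behavior, the chain can declare a sharding-cluster provider whose dynamic-provider slot is filled by a Remote Binary Provider (the ids and the zone name here are illustrative):

```xml
<!-- Sketch: the dynamic-provider slot holds a Remote Binary Provider.
     As cluster nodes join or leave, matching remote providers are
     created or removed based on the current cluster topology. -->
<config version="v1">
    <chain>
        <provider id="sharding-cluster" type="sharding-cluster">
            <sub-provider id="state-aware" type="state-aware"/>
            <dynamic-provider id="remote" type="remote"/>
            <property name="zones" value="remote"/>
        </provider>
    </chain>
</config>
```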
The zones defined in the sharding mechanism. The read/write strategies select providers based on these zones.
The minimum number of successful writes required for an upload to succeed. The next balance cycle (triggered with the GC mechanism) eventually transfers the binary to enough nodes to restore the redundancy commitment.
In other words, lenientLimit governs the minimum allowed redundancy when the redundancy commitment is temporarily not met.
For example, if lenientLimit is set to 3 and the setup includes 4 filestores, writing continues even if 1 of them goes down. If a 2nd filestore goes down, writing stops.
The number of currently active nodes must always be greater than or equal to the configured lenientLimit. If lenientLimit is set to 0, the full redundancy value must be maintained.
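For instance, the redundancy and lenientLimit values might be paired as follows (the values are illustrative):

```xml
<!-- Illustrative values: commit to 2 copies of each binary, but keep
     accepting writes as long as at least 1 shard remains reachable.
     The next balance cycle restores the 2-copy redundancy. -->
<provider id="sharding-cluster" type="sharding-cluster">
    <redundancy>2</redundancy>
    <lenientLimit>1</lenientLimit>
</provider>
```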
The type of provider that can be added and removed dynamically based on cluster topology changes. Currently only the Remote Binary Provider is supported as a dynamic provider.
Default: 1
The number of copies that should be stored for each binary in the filestore. Note that redundancy must be less than or equal to the number of mounts in your system for Artifactory to work with this configuration.
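As a sketch of that constraint, a setup with two mounts supports a redundancy of at most 2 (the sub-provider ids here are hypothetical):

```xml
<!-- Two mounts (state-aware sub-providers), so redundancy may be at most 2. -->
<provider id="sharding-cluster" type="sharding-cluster">
    <sub-provider id="shard-fs-1" type="state-aware"/>
    <sub-provider id="shard-fs-2" type="state-aware"/>
    <redundancy>2</redundancy>
</provider>
```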
This parameter dictates the strategy for reading binaries from the mounts that make up the sharded filestore.
Possible values: crossNetworkStrategy.
This parameter dictates the strategy for writing binaries to the mounts that make up the sharded filestore.
Possible values: crossNetworkStrategy.
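For example, both behaviors are typically set to the crossNetworkStrategy value described above:

```xml
<!-- Read and write behaviors both use the cross-network strategy. -->
<provider id="sharding-cluster" type="sharding-cluster">
    <readBehavior>crossNetworkStrategy</readBehavior>
    <writeBehavior>crossNetworkStrategy</writeBehavior>
</provider>
```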
The number of threads to use for the rebalancing operations.
The number of threads to use for checking that shards are accessible.
Default: 15,000 milliseconds (15 seconds)
The maximum time to wait while checking if shards are accessible.
Sharding-Cluster Binary Provider Example
<config version="v1">
    <chain>
        <provider id="sharding-cluster" type="sharding-cluster">
            <sub-provider id="state-aware" type="state-aware"/>
            <dynamic-provider id="remote" type="remote"/>
            <property name="zones" value="remote"/>
        </provider>
    </chain>
    <provider id="sharding-cluster" type="sharding-cluster">
        <readBehavior>crossNetworkStrategy</readBehavior>
        <writeBehavior>crossNetworkStrategy</writeBehavior>
        <redundancy>2</redundancy>
        <lenientLimit>1</lenientLimit>
    </provider>
    <provider id="state-aware" type="state-aware">
        <fileStoreDir>filestore1</fileStoreDir>
    </provider>
    <provider id="remote" type="remote">
        <checkPeriod>15000</checkPeriod>
        <connectionTimeout>5000</connectionTimeout>
        <socketTimeout>15000</socketTimeout>
        <maxConnections>200</maxConnections>
        <connectionRetry>2</connectionRetry>
        <zone>remote</zone>
    </provider>
</config>