When using S3 as the filestore, a very large file upload may fail to reach S3, and Artifactory will log an error similar to the following:
2020-11-17T10:01:58.247Z [jfrt ] [ERROR] [bdc07b2dbfb0ee42] [.c.EventualDownstreamWorker:74] [-cluster-s3-worker-2] - EventualDownstreamWorker caught error while operating on actions for sha1 801dae099bb63dbf4be20a3d9daab7248f747eda: com.amazonaws.services.s3.model.AmazonS3Exception: Your proposed upload exceeds the maximum allowed size (Service: Amazon S3; Status Code: 400; Error Code: EntityTooLarge; Request ID: CCA93A72B1AEA873; S3 Extended Request ID: OPiD1kYBrpe4lFmg5Dv4MWGX4B8qaXFXB9t0JMCbC8mJYAuB8zsEwqxavEfoo/2xvKyLwoLYfeE=)

When this error occurs, the binary is left stuck in the eventual directory and is never uploaded to S3.
Most of the time, the cause of this issue is the <usePresigning> parameter in binarystore.xml. When set to "true", this parameter restricts the maximum size of objects that can be uploaded to S3 to under 5GB; refer to the Amazon documentation for more insight, and see the Amazon S3 Official SDK Template Wiki page for additional information.
However, when <usePresigning> is set to false, the <multiPartLimit> parameter defines the file size threshold (in bytes) above which file uploads are chunked and multi-threaded. This allows objects to be uploaded in parts using S3 multipart upload, so large files can be uploaded to S3 successfully.
To upload files larger than 5GB, set <usePresigning> to false in the binarystore.xml file (located at $JFROG_HOME/artifactory/var/etc/artifactory/binarystore.xml for Artifactory 7.x and $ARTIFACTORY_HOME/etc/binarystore.xml for Artifactory 6.x), perform a rolling restart of the cluster, and retry the upload.
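For reference, a minimal sketch of the relevant section of binarystore.xml is shown below. It assumes the cluster-s3-storage-v3 chain template (suggested by the cluster-s3-worker thread name in the log); the endpoint, bucket name, region, and credentials are placeholders for your existing values, and the <multiPartLimit> value of 100000000 bytes is only an illustrative threshold. Keep the rest of your existing configuration as-is.

    <config version="2">
        <chain template="cluster-s3-storage-v3"/>
        <provider id="s3-storage-v3" type="s3-storage-v3">
            <endpoint>s3.amazonaws.com</endpoint>
            <bucketName>your-bucket-name</bucketName>
            <region>us-east-1</region>
            <identity>YOUR_ACCESS_KEY</identity>
            <credential>YOUR_SECRET_KEY</credential>
            <usePresigning>false</usePresigning>
            <multiPartLimit>100000000</multiPartLimit>
        </provider>
    </config>

With <usePresigning> set to false, files larger than the <multiPartLimit> threshold are uploaded to S3 in chunks via multipart upload instead of a single presigned PUT request, which is what avoids the EntityTooLarge error.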