Issue
When uploading artifacts to Artifactory fails, the Artifactory log shows the error "Status Code: 404; Error Code: NoSuchKey". This Artifactory instance uses JuiceFS S3 as the filestore, and the S3 Binary Storage Configuration uses the S3 Direct Upload Template.
A possible cause for this error is described in the KB article "How to resolve “AmazonS3Exception (Status Code: 404; Error Code: NoSuchKey)” error during Artifactory initialization?", but the KMS key ID used in the binary store configuration already exists in the AWS account, so that is not the cause in this case.
Debug
Enable filestore debug logging for the S3 configuration:
Edit logback.xml ($JFROG_HOME/artifactory/var/etc/artifactory/logback.xml) to enable S3 debug logging in Artifactory.
Add the following debug configuration; an artifactory-filestore.log file will then be created in the log directory.
<?xml version="1.0" encoding="UTF-8"?>
<!-- // @formatter:off -->
<configuration debug="false">
    .... ....
    <appender name="filestore" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${log.dir}/artifactory-filestore.log</File>
        <rollingPolicy class="org.jfrog.common.logging.logback.rolling.FixedWindowWithDateRollingPolicy">
            <FileNamePattern>${log.dir.archived}/artifactory-filestore.%i.log.gz</FileNamePattern>
            <maxIndex>10</maxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>25MB</MaxFileSize>
        </triggeringPolicy>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.jfrog.common.logging.logback.layout.BackTracePatternLayout">
                <pattern>%date{yyyy-MM-dd'T'HH:mm:ss.SSS, UTC}Z [jfrt ] [%-5p] [%-16X{uber-trace-id}] [%-30.30(%c{3}:%L)] [%-20.20thread] - %m%n</pattern>
            </layout>
        </encoder>
    </appender>
    <logger name="org.jfrog.type.s3" additivity="false">
        <level value="DEBUG"/>
        <appender-ref ref="filestore"/>
    </logger>
</configuration>
When uploading artifacts fails, the error entries appear in artifactory-filestore.log as follows:
2023-11-10T03:41:02.774Z [jfrt ] [DEBUG] [fca79f95d534093e] [.j.t.s.S3AwsBinaryProvider:217] [http-nio-8081-exec-3] - Uploading to S3 using a stream
2023-11-10T03:41:02.774Z [jfrt ] [DEBUG] [fca79f95d534093e] [t.s.S3ClientStorageService:135] [http-nio-8081-exec-3] - Uploading file to temporary location: <bucket_path>/filestore/_p/_pre_168427ac-b41c-4310-973c-996687a60e44 in S3
2023-11-10T03:41:02.942Z [jfrt ] [DEBUG] [fca79f95d534093e] [t.s.S3ClientStorageService:141] [http-nio-8081-exec-3] - Binary upload finished successfully with MD5: <MD5>
2023-11-10T03:41:02.942Z [jfrt ] [DEBUG] [fca79f95d534093e] [t.s.S3ClientStorageService:109] [http-nio-8081-exec-3] - Moving file from temporary location to permanent location <bucket_path>/filestore/44/4424e05a007857567c63e11a3afcfb1b22650563 in S3
2023-11-10T03:41:02.995Z [jfrt ] [ERROR] [fca79f95d534093e] [t.s.S3ClientStorageService:119] [http-nio-8081-exec-3] - Unable to copy binary <bucket_path>/filestore/_p/_pre_168427ac-b41c-4310-973c-996687a60e44 to correct location <bucket_path>/filestore/44/4424e05a007857567c63e11a3afcfb1b22650563: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist.
(Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 179625F9838D8794; S3 Extended Request ID: dfeea2c0-4c13-4106-91f4-c85aef0176bf; Proxy: null)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5456)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5403)
    at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:2078)
    at org.jfrog.type.s3.S3ClientStorageService.addStream(S3ClientStorageService.java:116)
    at org.jfro .... ....
2023-11-10T03:41:02.996Z [jfrt ] [DEBUG] [fca79f95d534093e] [t.s.S3ClientStorageService:122] [http-nio-8081-exec-3] - Deleting temporary file at <bucket_path>/filestore/_p/_pre_168427ac-b41c-4310-973c-996687a60e44
Root cause
Uploading an artifact from Artifactory to S3 follows these steps:
- The file is uploaded to a temporary directory in the S3 bucket, <bucket_path>/filestore/_p, and named _pre_<filename>.
- After the file has been uploaded to the _p temporary directory successfully, it is copied to the target bucket directory.
- The target bucket directory is named after the first two hex characters of the file's SHA-1 checksum. If the target directory exists, the file is copied into it directly; if it does not exist, the directory must be created first and the file is then copied into it.
- The file is deleted from the _p directory in the bucket regardless of whether the copy succeeded.
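The permanent-key naming in step 3 can be sketched in Python. This is an illustration inferred from the log excerpt above (e.g. filestore/44/4424e05a...); the helper name is ours, not Artifactory's:

```python
import hashlib

def filestore_key(data: bytes) -> str:
    """Illustrative permanent filestore key: the first two hex characters
    of the blob's SHA-1 name the directory, the full SHA-1 names the file."""
    sha1 = hashlib.sha1(data).hexdigest()
    return f"filestore/{sha1[:2]}/{sha1}"

print(filestore_key(b"example artifact bytes"))
```

Any two blobs with the same leading SHA-1 byte land in the same two-character directory, which is why that directory may or may not already exist at copy time.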
The issue occurs in step 3: when the file is copied from the _p temporary directory to the target bucket directory, the target directory does not exist and JuiceFS does not create it, so the copy fails with NoSuchKey.
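The failure mode can be reproduced with a local-filesystem simulation, since JuiceFS exposes POSIX-style directory semantics rather than a flat object-key namespace. This is a sketch on a temporary local directory, not JuiceFS itself; all paths and names are illustrative:

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()

# Step 1: "upload" the artifact into the temporary _p directory.
src = os.path.join(root, "_p", "_pre_example")
os.makedirs(os.path.dirname(src))
with open(src, "w") as f:
    f.write("artifact bytes")

# Step 3: copy to the permanent location <first-two-sha1-chars>/<sha1>.
# The "44" directory was never created, mimicking the failing JuiceFS case.
dst = os.path.join(root, "44", "4424e05a007857567c63e11a3afcfb1b22650563")

failed_without_dir = False
try:
    shutil.copy(src, dst)  # raises: the parent directory does not exist
except FileNotFoundError:
    failed_without_dir = True

# The missing behavior: create the target directory first, then copy.
os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.copy(src, dst)

print("copy without dir failed:", failed_without_dir)
print("copy after makedirs succeeded:", os.path.exists(dst))
```

On a genuine S3 object store the copy would succeed because "directories" are only key prefixes; on a filesystem-backed gateway the parent directory must exist first, which is exactly the gap described above.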
Solutions
The S3 backend must support creating the destination directory when a file is copied into a nonexistent directory, and the IAM role must have sufficient permissions:
- Modify the upload logic in the JuiceFS code to automatically create a nonexistent directory when copying files into it in S3.
- The IAM role for the S3 configuration needs the following permissions:
s3:ListBucket, s3:ListBucketVersions, s3:ListBucketMultipartUploads, s3:GetBucketLocation, s3:GetObject, s3:GetObjectVersion, s3:PutObject, s3:DeleteObject, s3:ListMultipartUploadParts, s3:AbortMultipartUpload, s3:ListAllMyBuckets, s3:HeadBucket, s3:CreateBucket.
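For reference, the permission list above could be expressed as an IAM policy document along the following lines (a sketch only: <bucket> is a placeholder, and the actions are taken verbatim from the list above; split bucket-level and object-level actions across your own resources as appropriate):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation",
        "s3:HeadBucket",
        "s3:CreateBucket"
      ],
      "Resource": "arn:aws:s3:::<bucket>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}
```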