ARTIFACTORY: JFrog Artifactory Data Collector

AuthorFullName__c
Jayanth Suresh
articleNumber
000005997
ft:sourceType
Salesforce
FirstPublishedDate
2024-01-23T09:48:46Z
lastModifiedDate
2024-01-23
VersionNumber
1

While the support team investigates logged cases, engineers often require baseline information such as CPU, memory, and disk utilization, thread dumps, and so on, to better understand the state of the instance in question. Gathering all of this information can involve multiple back-and-forth emails between the support team and the client.

The ‘JFrogArtifactoryDataCollector’ tool eases this information-gathering process by capturing all the required baseline information at once and uploading it to the support logs portal. The script is safe to execute against the system multiple times, even during an Artifactory crash scenario.

Points to note before running the script:

  1. Collecting a heap dump (HEAP_DUMP_FLAG) and disk details (DISK_DETAILS_FLAG) is resource-intensive. Opt for these only when they are required and the Artifactory server has enough free memory; enabling both options on a server that is low on memory may crash the Artifactory instance.
  2. When the script starts running on the Artifactory server, it creates a PID file (JFrogArtifactoryDataCollector.pid) in the server’s /tmp folder. Once the script completes, the PID file is deleted. If the script is already running, wait for the current run to complete before starting it again.
  3. If the script is cancelled or interrupted mid-run, the PID file is not deleted. In this situation, kill the script’s process, delete the PID file, and run the script again.
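The PID-file behavior described in points 2 and 3 can be sketched as follows. This is an illustrative guard, not the script's actual internals; only the PID file path comes from the article, and the collection step is a placeholder.

```shell
#!/bin/sh
# Sketch of the PID-file guard described above (assumed logic; the real
# script's internals may differ). The path matches the article.
PID_FILE=/tmp/JFrogArtifactoryDataCollector.pid

run_collector() {
    if [ -f "$PID_FILE" ]; then
        echo "Collector already running (PID $(cat "$PID_FILE")); wait or clean up"
        return 1
    fi
    echo $$ > "$PID_FILE"
    # ... baseline data collection would happen here ...
    rm -f "$PID_FILE"        # deleted on normal completion (point 2)
}

# Recovery after an interrupted run (point 3): kill the stale process and
# remove the leftover PID file before running again, e.g.:
#   kill "$(cat "$PID_FILE")" 2>/dev/null
#   rm -f "$PID_FILE"

run_collector
```

A clean run removes the PID file on its way out; a leftover PID file signals an interrupted run that needs the manual cleanup shown in the comments.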
Note: This script does not require any extra libraries (third-party tools) to capture the data listed below; when executed on the Artifactory server, it uses the OS’s default utilities to collect the data.
  1. Thread dump.
  2. Heap dump (on request, as it is resource- and space-consuming).
  3. Java heap histograms.
  4. CPU % utilized by each process.
  5. Memory % utilized by each process.
  6. Artifactory server details such as memory, CPU, disk, file limits, and running processes.
  7. Disk throughput and latency.
  8. Details of connections to the server.
  9. List of open ports.
  10. Artifactory readiness and health check of each microservice.
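The categories above map to standard OS and JDK utilities, which is consistent with the note that no third-party tools are needed. The following is a rough manual equivalent, not the script itself; the process discovery via pgrep, the output directory name, and the router health endpoint on port 8082 are all assumptions for illustration.

```shell
#!/bin/sh
# Rough manual equivalents of the data points above, using only default
# OS/JDK utilities. ART_PID discovery, the output directory, and the
# health endpoint are assumptions; the real script may differ.
OUT=baseline.$(date +%s)
mkdir -p "$OUT"

ART_PID=$(pgrep -f artifactory | head -n 1)
if [ -n "$ART_PID" ]; then
    jstack "$ART_PID"      > "$OUT/thread_dump.txt"      # 1. thread dump
    jmap -histo "$ART_PID" > "$OUT/heap_histogram.txt"   # 3. heap histogram
    # 2. heap dump, only on request (resource/space consuming):
    #    jmap -dump:live,format=b,file="$OUT/heap.hprof" "$ART_PID"
fi

top -b -n 1 > "$OUT/top.txt" 2>/dev/null || true         # 4-5. CPU/memory %
{ free -m; df -h; ulimit -a; } > "$OUT/system.txt"       # 6. server details
iostat -x 1 3 > "$OUT/iostat.txt" 2>/dev/null || true    # 7. disk throughput/latency
ss -tan  > "$OUT/connections.txt" 2>/dev/null || true    # 8. connections to the server
ss -tlnp > "$OUT/open_ports.txt"  2>/dev/null || true    # 9. open (listening) ports
curl -s http://localhost:8082/router/api/v1/system/health \
     > "$OUT/health.json" 2>/dev/null || true            # 10. microservice health
```

Collecting everything into one timestamped directory mirrors the tool's goal of capturing the full baseline in a single pass instead of piecemeal email exchanges.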
The script supports both interactive and non-interactive execution, and it can be downloaded from the GitHub page.