Subject
Long-term monitoring of DB query times can reveal that some DB queries have slowed down. This can be due to a number of factors; this article goes over some simple checks that can help alleviate these slowdowns.
Affected Versions
N/A - this issue affects PostgreSQL-backed applications in general. All JFrog products primarily use PostgreSQL, with the exception of Artifactory, where PostgreSQL is one supported option.
Description
DB queries are usually sped up when paired with an index. However, an index can become corrupt or be marked invalid (for example, after a failed CREATE INDEX CONCURRENTLY), in which case Postgres will not use it for queries until it is reindexed.
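For background, one common way an index ends up invalid is a CREATE INDEX CONCURRENTLY that is interrupted or fails partway through. The table and index names below are hypothetical and purely for illustration:

-- Hypothetical example: if this statement is cancelled or errors out
-- (e.g. a deadlock, or a duplicate value while building a UNIQUE index),
-- the partially built index is left behind with indisvalid = false and
-- is ignored by the query planner until it is reindexed or dropped and recreated.
CREATE INDEX CONCURRENTLY demo_name_idx ON demo_table (name);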
Resolution
Log into the Postgres backend (as an admin user) and run the following query:
SELECT * FROM pg_class, pg_index WHERE pg_index.indisvalid = false AND pg_index.indexrelid = pg_class.oid;
If the query returns 0 rows, your indexes are good from this perspective. If it returns rows, the listed indexes require reindexing. The 'relname' column shows the affected index names, and the first part of each name indicates the affected table - for example, if nodes_node_name_idx is listed, that index belongs to the nodes table.
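If you would rather see the owning table reported explicitly instead of inferring it from the index name, an equivalent check is the sketch below, which joins the standard Postgres system catalogs (the column aliases are just for readability):

-- List each invalid index together with the table it belongs to.
SELECT idx.relname AS index_name,
       tbl.relname AS table_name
FROM pg_index i
JOIN pg_class idx ON idx.oid = i.indexrelid   -- the index itself
JOIN pg_class tbl ON tbl.oid = i.indrelid     -- the table being indexed
WHERE NOT i.indisvalid;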
To resolve the invalid indexes:
1. Stop <JFrog product> completely
2. For each index listed by the above query, run the following in your database (see the example after this list):
REINDEX INDEX <index name>;
3. Restart <JFrog product>
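For example, if nodes_node_name_idx from the earlier example were the index reported as invalid, step 2 would be:

-- Rebuild the invalid index found by the pg_index check above.
REINDEX INDEX nodes_node_name_idx;

Postgres 12 and later also offer REINDEX INDEX CONCURRENTLY if you need to avoid holding long exclusive locks, though that is less of a concern here since the product is stopped.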
Optional Verification
Spin up a fresh instance of the product, backed by Postgres. Our Helm chart is a quick way to get this up and running. It does not need to be licensed, since we are testing index validity and EXPLAIN output rather than actual existing data. In my example below, I'm testing Artifactory, so I use "npm-local", which does not have any references in my nodes table:
artifactory=> select * from nodes;
 node_id | node_type |          repo          | node_path | node_name | depth |    created    | created_by |   modified    | modified_by |    updated    | bin_length | sha1_actual | sha1_original | md5_actual | md5_original | sha256 |            repo_path_checksum
---------+-----------+------------------------+-----------+-----------+-------+---------------+------------+---------------+-------------+---------------+------------+-------------+---------------+------------+--------------+--------+------------------------------------------
       1 |         0 | auto-trashcan          | .         | .         |     0 | 1657736169666 |            | 1657736169666 |             | 1657736169666 |          0 |             |               |            |              |        | 3321383d7f6d80140563c23d3a22d647e20cafb9
       2 |         0 | jfrog-support-bundle   | .         | .         |     0 | 1657736169735 |            | 1657736169735 |             | 1657736169735 |          0 |             |               |            |              |        | 59c86c2bf5350c9c8961a57057819bfeeba79617
       4 |         0 | artifactory-build-info | .         | .         |     0 | 1657736169769 |            | 1657736169769 |             | 1657736169769 |          0 |             |               |            |              |        | 1d06e7fa42f6b62e8e8acf2ca0f0aba4d8b9e7ac
       6 |         0 | example-repo-local     | .         | .         |     0 | 1657736169777 |            | 1657736169777 |             | 1657736169777 |          0 |             |               |            |              |        | ebc8002cd055a81553b42555dcad19268adfff43
(4 rows)

Pick a slow query and test whether it uses an index with EXPLAIN ANALYZE <query>. For example, this query uses an index normally on my fresh DB instance:

artifactory=> explain analyze select node_id from nodes where repo = 'npm-local' AND node_path = 'somepath' and node_name = 'somename';
                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using nodes_node_name_idx on nodes  (cost=0.14..8.16 rows=1 width=8) (actual time=0.015..0.016 rows=0 loops=1)
   Index Cond: ((node_name)::text = 'somename'::text)
   Filter: (((repo)::text = 'npm-local'::text) AND ((node_path)::text = 'somepath'::text))
 Planning Time: 0.252 ms
 Execution Time: 0.048 ms
(5 rows)

If the query plan shows an Index Scan, an index was used (we can see here that it used nodes_node_name_idx). When the index is invalid, running the same EXPLAIN ANALYZE will show a Seq Scan instead:

artifactory=> explain analyze select node_id from nodes where repo = 'npm-local' AND node_path = 'somepath' and node_name = 'somename';
                                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on nodes  (cost=0.00..10.35 rows=1 width=8) (actual time=0.013..0.013 rows=0 loops=1)
   Filter: (((repo)::text = 'npm-local'::text) AND ((node_path)::text = 'somepath'::text) AND ((node_name)::text = 'somename'::text))
   Rows Removed by Filter: 4
 Planning Time: 0.176 ms
 Execution Time: 0.036 ms
(5 rows)
Compare the two query plans between the fresh instance and the real instance - if the real instance also shows an index scan, then your indexes are working again.
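As an optional extra check on the real instance, you can confirm over time that the rebuilt index is actually being scanned again. This is a sketch using the standard pg_stat_user_indexes statistics view; nodes_node_name_idx is simply the example index from above:

-- idx_scan should keep increasing once queries use the index again.
SELECT relname      AS table_name,
       indexrelname AS index_name,
       idx_scan
FROM pg_stat_user_indexes
WHERE indexrelname = 'nodes_node_name_idx';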