# Troubleshooting Code Insights
This is a collection of issues and other information that might be helpful when troubleshooting a problem with Code Insights.
## Recurring OOM (out of memory) alerts from the `frontend` service

This may be the result of an excessively large query being executed by Code Insights.
Code Insights processes some queries in the background of the `worker` service. These queries use the GraphQL API, which means they are aggregated entirely on the `frontend` service. Large result sets can cause the `frontend` service to run out of memory and crash with an OOM error. These queries can get stuck in an error loop until they hit the maximum retry value, causing repeated `frontend` crashes.

Queries such as matching on every line in every repository, or other queries at a similar scale, may be responsible.
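To see why a frontend-aggregated result set becomes a problem, a rough back-of-the-envelope estimate helps. The function and the per-match byte figure below are purely illustrative assumptions, not measurements from the `frontend` service:

```python
def estimated_result_memory_mb(repos, avg_matches_per_repo, bytes_per_match=200):
    """Illustrative estimate of the memory needed to hold a full result set.

    bytes_per_match is a hypothetical figure; real per-match overhead varies
    with match metadata and GraphQL response framing.
    """
    return repos * avg_matches_per_repo * bytes_per_match / 1_000_000


# A query matching ~5,000 lines in each of 10,000 repositories would need
# on the order of 10 GB just for the raw matches.
print(estimated_result_memory_mb(10_000, 5_000))  # → 10000.0 (MB)
```

At that scale a single query can exceed the memory allocated to a `frontend` pod on its own, which is why one runaway insight series can crash the service repeatedly.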
### Diagnose
- Check the `frontend` dashboards (General / Frontend) in Grafana
  - Check for individual instances with spiked (to 100%) memory usage on `Container monitoring` → `Container memory by instance`
- Check the background `worker` dashboards for Code Insights (General / Worker) in Grafana
  - Check for elevated error rates on `Codeinsights: dbstore stats` → `Aggregate store operation error rate over 5m`
  - Check for a queue size greater than zero on `Codeinsights: Query Runner Queue` → `Code insights search queue queue size`
- (admin-only) Check the queries currently in background processing using the GraphQL query:

  ```graphql
  query seriesStatus {
    insightSeriesQueryStatus {
      seriesId
      query
      enabled
      completed
      errored
      processing
      failed
      queued
    }
  }
  ```

  - Inspecting queries with `errored` or `failed` counts may provide a hint as to which query is responsible.
- Check Postgres (`pgsql`) for any queries stuck in a retry loop:

  ```sql
  SELECT *
  FROM insights_query_runner_jobs
  WHERE state = 'errored'
    AND started_at > CURRENT_TIMESTAMP - INTERVAL '1 day'
  ORDER BY insights_query_runner_jobs.started_at DESC;
  ```
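The admin-only `seriesStatus` query above can also be run from a script against the Sourcegraph GraphQL endpoint (a `POST` to `/.api/graphql` with an `Authorization: token ...` header). A minimal sketch, assuming a standard access token; the helper names are our own, and error handling and pagination are omitted:

```python
import json
import urllib.request

SERIES_STATUS_QUERY = """
query seriesStatus {
  insightSeriesQueryStatus {
    seriesId query enabled completed errored processing failed queued
  }
}
"""


def build_series_status_request(base_url, token):
    """Build an authenticated GraphQL request for the series status query."""
    payload = json.dumps({"query": SERIES_STATUS_QUERY}).encode()
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/.api/graphql",
        data=payload,
        headers={
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        },
    )


def suspect_series(status_rows):
    """Filter status rows to series with errored or failed jobs --
    likely candidates for the query causing the OOM loop."""
    return [r for r in status_rows if r["errored"] > 0 or r["failed"] > 0]
```

To use it, send the request with `urllib.request.urlopen(...)`, decode the JSON response, and pass `data["data"]["insightSeriesQueryStatus"]` to `suspect_series`.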
### Resolution Options
- Increase the memory available to the `frontend` pods until it is large enough to execute the responsible query.
  - The error rate on the Code Insights dashboards should return to zero.
- (admin-only) Disable any specific queries identified as problematic using the GraphQL operation below, providing the specific `SeriesId`:

  ```graphql
  mutation updateInsightSeries($input: UpdateInsightSeriesInput!) {
    updateInsightSeries(input: $input) {
      series {
        seriesId
        query
        enabled
      }
    }
  }
  ```

  Query variables:

  ```json
  {
    "input": {
      "seriesId": "s:5FE04D15D1150A134407E7EF078028F6DA5224BBADB1718A92E46046AC9F2E0B",
      "enabled": false
    }
  }
  ```
- Disable any problematic queries stuck in an error loop in Postgres (`pgsql`):

  ```sql
  UPDATE insights_query_runner_jobs SET state = 'failed' WHERE id = ?;
  ```
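The disable mutation can also be scripted. This sketch only builds the request body for the `updateInsightSeries` mutation shown above; posting it to `/.api/graphql` works the same way as any other Sourcegraph GraphQL call, and the function name is our own:

```python
import json

UPDATE_SERIES_MUTATION = """
mutation updateInsightSeries($input: UpdateInsightSeriesInput!) {
  updateInsightSeries(input: $input) { series { seriesId query enabled } }
}
"""


def disable_series_payload(series_id):
    """JSON request body that disables one insight series by its SeriesId."""
    return json.dumps({
        "query": UPDATE_SERIES_MUTATION,
        "variables": {"input": {"seriesId": series_id, "enabled": False}},
    })
```

Re-enabling a series later is the same call with `"enabled": True`.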
## OOB Migration has made progress, but is stuck before reaching 100%
This out-of-band migration is titled: *Migrating insight definitions from settings files to database tables as a last stage to use the GraphQL API*.

The out-of-band migration shouldn't take more than an hour to complete. (It really shouldn't take more than a few minutes.) If the progress hasn't reached 100% in this duration, some records may be stuck due to errors.
Known issues:
- Deleted users/orgs will cause processing errors, and those jobs will need to be manually marked as complete.
### Diagnose and Resolve
- First check the Recent Errors under the migration in the UI.
- If the error messages are all `UserStoreGetById: user not found`, this is caused by deleted users, and it is safe to mark these rows as completed by running the following against `pgsql`:

  ```sql
  UPDATE insights_settings_migration_jobs SET completed_at = NOW() WHERE completed_at IS NULL;
  ```
- If the error messages are all `OrgStoreGetByID: org not found`, this is caused by deleted orgs. In this case, mark just the org rows as completed by running the following against `pgsql`:

  ```sql
  UPDATE insights_settings_migration_jobs SET completed_at = NOW() WHERE completed_at IS NULL AND org_id IS NOT NULL;
  ```
  - Note: this only completes the failing org jobs. You may then see the `user not found` error above, and will still need to mark the remaining jobs as complete.
- If the error messages are neither of these, this is not currently a known issue. Contact support and we can help!