Dashboards reference
This document contains a complete reference of Sourcegraph's available dashboards, as well as details on how to interpret the panels and metrics.
To learn more about Sourcegraph's metrics and how to view these dashboards, see our metrics guide.
Frontend
Serves all end-user browser and API requests.
To see this dashboard, visit /-/debug/grafana/d/frontend/frontend
on your Sourcegraph instance.
Frontend: Search at a glance
frontend: 99th_percentile_search_request_duration
99th percentile successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
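The queries in this reference are plain PromQL, so they can also be evaluated outside Grafana. Below is a minimal sketch, assuming the bundled Prometheus HTTP API is reachable at a placeholder URL (for example via a port-forward), that issues this panel's query as an instant query; the URL is an assumption, not a documented Sourcegraph endpoint.

```python
# Minimal sketch: evaluate the p99 search latency query via the Prometheus HTTP API.
import requests

PROM_URL = "http://localhost:9090"  # assumption: bundled Prometheus reachable here

QUERY = (
    'histogram_quantile(0.99, sum by (le)'
    '(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# An instant query returns a vector; this query yields a single unlabeled series.
for series in resp.json()["data"]["result"]:
    _ts, value = series["value"]
    print(f"p99 search latency: {float(value):.3f}s")
```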
frontend: 90th_percentile_search_request_duration
90th percentile successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
frontend: hard_timeout_search_responses
Hard timeout search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name!="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: hard_error_search_responses
Hard error search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: partial_timeout_search_responses
Partial timeout search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: search_alert_user_suggestions
Search alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: page_load_latency
90th percentile page load latency over all routes over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))
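To trend a panel's value over a window rather than reading a single point, the same query can be issued against the Prometheus range-query endpoint. A minimal sketch, under the same reachability assumption as above:

```python
# Minimal sketch: trend the p90 page-load latency over the last hour with query_range.
import time
import requests

PROM_URL = "http://localhost:9090"  # assumption: bundled Prometheus reachable here

QUERY = (
    'histogram_quantile(0.9, sum by(le) (rate('
    'src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))'
)

end = time.time()
start = end - 3600  # last hour
resp = requests.get(
    f"{PROM_URL}/api/v1/query_range",
    params={"query": QUERY, "start": start, "end": end, "step": 60},
    timeout=10,
)
resp.raise_for_status()

# A range query returns a matrix: one list of (timestamp, value) pairs per series.
for series in resp.json()["data"]["result"]:
    for ts, value in series["values"]:
        stamp = time.strftime("%H:%M", time.localtime(float(ts)))
        print(f"{stamp}  p90={float(value):.3f}s")
```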
frontend: blob_load_latency
90th percentile blob load latency over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route="blob"}[10m])))
Frontend: Search-based code intelligence at a glance
frontend: 99th_percentile_search_codeintel_request_duration
99th percentile code-intel successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
frontend: 90th_percentile_search_codeintel_request_duration
90th percentile code-intel successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
frontend: hard_timeout_search_codeintel_responses
Hard timeout search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: hard_error_search_codeintel_responses
Hard error search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: partial_timeout_search_codeintel_responses
Partial timeout search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: search_codeintel_alert_user_suggestions
Search code-intel alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
Frontend: Search GraphQL API usage at a glance
frontend: 99th_percentile_search_api_request_duration
99th percentile successful search API request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
frontend: 90th_percentile_search_api_request_duration
90th percentile successful search API request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
frontend: hard_error_search_api_responses
Hard error search API responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))
frontend: partial_timeout_search_api_responses
Partial timeout search API responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))
frontend: search_api_alert_user_suggestions
Search API alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))
Frontend: Codeintel: Precise code intelligence usage at a glance
frontend: codeintel_resolvers_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
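Each error-rate panel combines the two counters shown in the neighbouring panels: errors as a percentage of errors plus operations. A minimal sketch that reproduces the same arithmetic client-side from two instant queries, under the same reachability assumption as above:

```python
# Minimal sketch: rebuild this panel's error-rate formula from its two counters.
import requests

PROM_URL = "http://localhost:9090"  # assumption: bundled Prometheus reachable here
SELECTOR = '{job=~"^(frontend|sourcegraph-frontend).*"}'


def instant(query: str) -> float:
    """Run an instant query and return the first sample's value (0.0 if empty)."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


errors = instant(f"sum(increase(src_codeintel_resolvers_errors_total{SELECTOR}[5m]))")
total = instant(f"sum(increase(src_codeintel_resolvers_total{SELECTOR}[5m]))")

# Same formula as the panel: errors as a percentage of (total + errors).
rate = errors / (total + errors) * 100 if (total + errors) else 0.0
print(f"codeintel resolvers error rate over 5m: {rate:.2f}%")
```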
frontend: codeintel_resolvers_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_resolvers_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: Auto-index enqueuer
frontend: codeintel_autoindex_enqueuer_total
Aggregate enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
Aggregate successful enqueuer operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_errors_total
Aggregate enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_error_rate
Aggregate enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_autoindex_enqueuer_total
Enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
99th percentile successful enqueuer operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_autoindex_enqueuer_errors_total
Enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_error_rate
Enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dbstore stats
frontend: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Workerutil: lsif_indexes dbworker/store stats
frontend: workerutil_dbworker_store_codeintel_index_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_index_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: lsifstore stats
frontend: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100712
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100713
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: gitserver client
frontend: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
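Because this query groups by the op label, it returns one series per gitserver client operation. A minimal sketch that prints each operation's current p99, under the same reachability assumption as above:

```python
# Minimal sketch: iterate the per-operation series returned by the grouped query.
import requests

PROM_URL = "http://localhost:9090"  # assumption: bundled Prometheus reachable here

QUERY = (
    'histogram_quantile(0.99, sum by (le,op)(rate('
    'src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# One vector element per gitserver client operation.
for series in resp.json()["data"]["result"]:
    op = series["metric"].get("op", "<unknown>")
    _ts, value = series["value"]
    print(f"{op}: p99={float(value):.3f}s")
```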
frontend: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100813
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: uploadstore stats
frontend: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100912
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100913
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service stats
frontend: codeintel_dependencies_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service store stats
frontend: codeintel_dependencies_background_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_background_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_background_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service background stats
frontend: codeintel_dependencies_background_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_background_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_background_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: lockfiles service stats
frontend: codeintel_lockfiles_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_lockfiles_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_lockfiles_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Gitserver: Gitserver Client
frontend: gitserver_client_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101400
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101401
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101402
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101403
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: gitserver_client_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101410
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101411
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: gitserver_client_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101412
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101413
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: dbstore stats
frontend: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101500
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101501
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101502
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101503
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101510
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101511
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101512
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101513
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: service stats
frontend: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101600
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101601
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101602
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101603
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101610
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101611
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101612
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101613
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: Workspace execution dbstore
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101700
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101701
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101702
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101703
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: HTTP API File Handler
frontend: batches_httpapi_total
Aggregate http handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101800
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_99th_percentile_duration
Aggregate successful http handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101801
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_errors_total
Aggregate http handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101802
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_error_rate
Aggregate http handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101803
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_httpapi_total
Http handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101810
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_99th_percentile_duration
99th percentile successful http handler operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101811
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_httpapi_errors_total
Http handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101812
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_error_rate
Http handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101813
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: up migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101903
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: down migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Internal service requests
frontend: internal_indexed_search_error_responses
Internal indexed search error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102100
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
frontend: internal_unindexed_search_error_responses
Internal unindexed search error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102101
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
frontend: internalapi_error_responses
Internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102102
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by(category) (increase(src_frontend_internal_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_frontend_internal_request_duration_seconds_count[5m])) * 100
frontend: 99th_percentile_gitserver_duration
99th percentile successful gitserver query duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102110
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))
frontend: gitserver_error_responses
Gitserver error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102111
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100
frontend: observability_test_alert_warning
Warning test alert metric
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102120
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(owner) (observability_test_metric_warning)
frontend: observability_test_alert_critical
Critical test alert metric
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102121
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(owner) (observability_test_metric_critical)
Frontend: Authentication API requests
frontend: sign_in_rate
Rate of API requests to sign-in
Rate (QPS) of requests to sign-in
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102200
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))
frontend: sign_in_latency_p99
99th percentile of sign-in latency
99th percentile of sign-in latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102201
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))
frontend: sign_in_error_rate
Percentage of sign-in requests by http code
Percentage of sign-in requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102202
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
frontend: sign_up_rate
Rate of API requests to sign-up
Rate (QPS) of requests to sign-up
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102210
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))
frontend: sign_up_latency_p99
99th percentile of sign-up latency
99th percentile of sign-up latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102211
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))
frontend: sign_up_code_percentage
Percentage of sign-up requests by http code
Percentage of sign-up requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102212
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100
frontend: sign_out_rate
Rate of API requests to sign-out
Rate (QPS) of requests to sign-out
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102220
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))
frontend: sign_out_latency_p99
99th percentile of sign-out latency
99th percentile of sign-out latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102221
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))
frontend: sign_out_error_rate
Percentage of sign-out requests that return non-303 http code
Percentage of sign-out requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102222
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100
frontend: account_failed_sign_in_attempts
Rate of failed sign-in attempts
Failed sign-in attempts per minute
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102230
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(rate(src_frontend_account_failed_sign_in_attempts_total[1m]))
frontend: account_lockouts
Rate of account lockouts
Account lockouts per minute
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102231
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(rate(src_frontend_account_lockouts_total[1m]))
Frontend: Organisation GraphQL API requests
frontend: org_members_rate
Rate of API requests to list organisation members
Rate (QPS) of API requests to list organisation members
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))
frontend: org_members_latency_p99
99th percentile latency of API requests to list organisation members
99th percentile latency of API requests to list organisation members
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrganizationMembers"}[5m])) by (le))
frontend: org_members_error_rate
Percentage of API requests to list organisation members that return an error
Percentage of API requests to list organisation members that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102302
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))*100
frontend: create_org_rate
Rate of API requests to create an organisation
Rate (QPS) of API requests to create an organisation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102310
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))
frontend: create_org_latency_p99
99th percentile latency of API requests to create an organisation
99th percentile latency of API requests to create an organisation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102311
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="CreateOrganization"}[5m])) by (le))
frontend: create_org_error_rate
Percentage of API requests to create an organisation that return an error
Percentage of API requests to create an organisation that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102312
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="CreateOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))*100
frontend: remove_org_member_rate
Rate of API requests to remove organisation member
Rate (QPS) of API requests to remove organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102320
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))
frontend: remove_org_member_latency_p99
99th percentile latency of API requests to remove organisation member
99th percentile latency of API requests to remove organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102321
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RemoveUserFromOrganization"}[5m])) by (le))
frontend: remove_org_member_error_rate
Percentage of API requests to remove organisation member that return an error
Percentage of API requests to remove organisation member that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102322
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))*100
frontend: invite_org_member_rate
Rate of API requests to invite a new organisation member
Rate (QPS) of API requests to invite a new organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102330
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))
frontend: invite_org_member_latency_p99
99th percentile latency of API requests to invite a new organisation member
99th percentile latency of API requests to invite a new organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102331
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="InviteUserToOrganization"}[5m])) by (le))
frontend: invite_org_member_error_rate
Percentage of API requests to invite a new organisation member that return an error
Percentage of API requests to invite a new organisation member that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102332
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))*100
frontend: org_invite_respond_rate
Rate of API requests to respond to an org invitation
Rate (QPS) of API requests to respond to an org invitation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102340
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))
frontend: org_invite_respond_latency_p99
99th percentile latency of API requests to respond to an org invitation
99th percentile latency of API requests to respond to an org invitation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102341
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RespondToOrganizationInvitation"}[5m])) by (le))
frontend: org_invite_respond_error_rate
Percentage of API requests to respond to an org invitation that return an error
Percentage of API requests to respond to an org invitation that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102342
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))*100
frontend: org_repositories_rate
Rate of API requests to list repositories owned by an org
Rate (QPS) of API requests to list repositories owned by an org
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102350
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))
frontend: org_repositories_latency_p99
99th percentile latency of API requests to list repositories owned by an org
99th percentile latency of API requests to list repositories owned by an org
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102351
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrgRepositories"}[5m])) by (le))
frontend: org_repositories_error_rate
Percentage of API requests to list repositories owned by an org that return an error
Percentage of API requests to list repositories owned by an org that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102352
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrgRepositories",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))*100
Frontend: Cloud KMS and cache
frontend: cloudkms_cryptographic_requests
Cryptographic requests to Cloud KMS every 1m
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102400
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_cloudkms_cryptographic_total[1m]))
frontend: encryption_cache_hit_ratio
Average encryption cache hit ratio per workload
- Encryption cache hit ratio (hits/(hits+misses)) - minimum across all instances of a workload.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102401
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
frontend: encryption_cache_evictions
Rate of encryption cache evictions - sum across all instances of a given workload
- Rate of encryption cache evictions (caused by cache exceeding its maximum size) - sum across all instances of a workload
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102402
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))
Frontend: Database connections
frontend: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102500
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})
frontend: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102501
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})
frontend: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102510
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})
frontend: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102511
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})
frontend: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102520
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))
frontend: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102530
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))
frontend: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102531
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))
frontend: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102532
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))
Frontend: Container monitoring (not available on server)
frontend: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod (frontend|sourcegraph-frontend) (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p (frontend|sourcegraph-frontend).
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend) (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs (frontend|sourcegraph-frontend) (note this will include logs from the previous and currently running container).
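A minimal command sketch of the checks above, with sourcegraph-frontend-<pod-id> and sourcegraph-frontend standing in for your actual pod or container names (the exact names depend on your deployment):
# Kubernetes: look for OOM kills, then check the previous container's logs for panics.
kubectl describe pod sourcegraph-frontend-<pod-id> | grep -i oomkilled
kubectl logs -p sourcegraph-frontend-<pod-id> | grep -i 'panic:'
# Docker Compose: check the container state, then check its logs for panics.
docker inspect -f '{{json .State}}' sourcegraph-frontend | grep -i oomkilled
docker logs sourcegraph-frontend 2>&1 | grep -i 'panic:'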
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102600
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)
frontend: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102601
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102602
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with (frontend|sourcegraph-frontend) issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102603
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))
Frontend: Provisioning indicators (not available on server)
frontend: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102700
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102701
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102710
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
frontend: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102711
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
frontend: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102712
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend).*"})
Frontend: Golang runtime monitoring
frontend: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102800
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})
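If this panel suggests a goroutine leak, a goroutine dump is the usual next step. A minimal sketch, assuming the instance exposes the standard Go pprof endpoints under /-/debug/pprof to a signed-in site admin (the path, host, and authentication requirements are assumptions and may differ in your deployment):
# Fetch a full goroutine dump from the frontend and count the goroutines in it.
# (sourcegraph.example.com is a placeholder for your instance's external URL.)
curl -s 'https://sourcegraph.example.com/-/debug/pprof/goroutine?debug=2' -o goroutines.txt
grep -c '^goroutine ' goroutines.txt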
frontend: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102801
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})
Frontend: Kubernetes monitoring (only available on Kubernetes)
frontend: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102900
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100
Frontend: Ranking
frontend: mean_position_of_clicked_search_result_6h
Mean position of clicked search result over 6h
The top-most result on the search results page has position 0. Low values are considered better. This metric only tracks top-level items, not individual line matches.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103000
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (type) (rate(src_search_ranking_result_clicked_sum[6h]))/sum by (type) (rate(src_search_ranking_result_clicked_count[6h]))
frontend: distribution_of_clicked_search_result_type_over_6h_in_percent
Distribution of clicked search result type over 6h in %
The distribution of clicked search results by result type. At every point in time, the values should sum to 100.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103001
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: round(sum(increase(src_search_ranking_result_clicked_sum{type="commit"}[6h])) / sum (increase(src_search_ranking_result_clicked_sum[6h]))*100)
Frontend: Email delivery
frontend: email_delivery_failures
Email delivery failures every 30 minutes
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(increase(src_email_send{success="false"}[30m]))
frontend: email_deliveries_total
Total emails successfully delivered every 30 minutes
Total emails successfully delivered.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103110
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum (increase(src_email_send{success="true"}[30m]))
frontend: email_deliveries_by_source
Emails successfully delivered every 30 minutes by source
Emails successfully delivered by source, i.e. product feature.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103111
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (email_source) (increase(src_email_send{success="true"}[30m]))
Frontend: Sentinel queries (only on sourcegraph.com)
frontend: mean_successful_sentinel_duration_over_2h
Mean successful sentinel search duration over 2h
Mean search duration for all successful sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103200
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[2h])) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[2h]))
frontend: mean_sentinel_stream_latency_over_2h
Mean successful sentinel stream latency over 2h
Mean time to first result for all successful streaming sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103201
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[2h])) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[2h]))
frontend: 90th_percentile_successful_sentinel_duration_over_2h
90th percentile successful sentinel search duration over 2h
90th percentile search duration for all successful sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103210
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
frontend: 90th_percentile_sentinel_stream_latency_over_2h
90th percentile successful sentinel stream latency over 2h
90th percentile time to first result for all successful streaming sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103211
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
frontend: mean_successful_sentinel_duration_by_query
Mean successful sentinel search duration by query
Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103220
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)
frontend: mean_sentinel_stream_latency_by_query
Mean successful sentinel stream latency by query
Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103221
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)
frontend: 90th_percentile_successful_sentinel_duration_by_query
90th percentile successful sentinel search duration by query
90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103230
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_successful_stream_latency_by_query
90th percentile successful sentinel stream latency by query
90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103231
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_unsuccessful_duration_by_query
90th percentile unsuccessful sentinel search duration by query
90th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103240
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_sentinel_duration_by_query
75th percentile successful sentinel search duration by query
75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103250
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_stream_latency_by_query
75th percentile successful sentinel stream latency by query
75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103251
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_unsuccessful_duration_by_query
75th percentile unsuccessful sentinel search duration by query
75th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103260
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: unsuccessful_status_rate
Unsuccessful status rate
The rate of unsuccessful sentinel queries, broken down by failure type.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103270
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)
Frontend: Incoming webhooks
frontend: p95_time_to_handle_incoming_webhooks
P95 time to handle incoming webhooks
p95 response time to incoming webhook requests from code hosts.
Increases in response time can indicate that the database is under too much load to keep up with the incoming requests.
See this documentation page for more details on webhook requests: https://docs.sourcegraph.com/admin/config/webhooks
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103300
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum (rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])) by (le, route))
Frontend: Search aggregations: proactive and expanded search aggregations
frontend: insights_aggregations_total
Aggregate search aggregations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103400
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_99th_percentile_duration
Aggregate successful search aggregations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103401
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (le)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_errors_total
Aggregate search aggregations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103402
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_error_rate
Aggregate search aggregations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103403
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: insights_aggregations_total
Search aggregations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103410
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_99th_percentile_duration
99th percentile successful search aggregations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103411
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op,extended_mode)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: insights_aggregations_errors_total
Search aggregations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103412
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_error_rate
Search aggregations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103413
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Git Server
Stores, manages, and operates Git repositories.
To see this dashboard, visit /-/debug/grafana/d/gitserver/gitserver
on your Sourcegraph instance.
gitserver: memory_working_set
Memory working set
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (container_memory_working_set_bytes{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"})
gitserver: go_routines
Go routines
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: go_goroutines{app="gitserver", instance=~"${shard:regex}"}
gitserver: cpu_throttling_time
Container CPU throttling time %
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m])) * 100)
gitserver: cpu_usage_seconds
Cpu usage seconds
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: disk_space_remaining
Disk space remaining by instance
Indicates disk space remaining for each gitserver instance, which is used to determine when to start evicting least-used repository clones from disk (default 10%, configured by SRC_REPOS_DESIRED_PERCENT_FREE).
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100
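As a hedged illustration (not a panel shipped with this dashboard), the same two metrics can be combined in Grafana Explore to list only the shards that have fallen below the default 10% free-space threshold:
(src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100 < 10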
gitserver: io_reads_total
I/o reads total
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100030
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))
gitserver: io_writes_total
I/o writes total
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100031
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))
gitserver: io_reads
I/o reads
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100040
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_writes
I/o writes
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100041
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_read_througput
I/o read throughput
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100050
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_write_throughput
I/o write throughput
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100051
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: running_git_commands
Git commands running on each gitserver instance
A high value signals load.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100060
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (instance, cmd) (src_gitserver_exec_running{instance=~"${shard:regex}"})
gitserver: git_commands_received
Rate of git commands received across all instances
Per-second rate per command across all instances
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100061
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count[5m]))
gitserver: repository_clone_queue_size
Repository clone queue size
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100070
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(src_gitserver_clone_queue)
gitserver: repository_existence_check_queue_size
Repository existence check queue size
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100071
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(src_gitserver_lsremote_queue)
gitserver: echo_command_duration_test
Echo test command duration
A high value here likely indicates a problem, especially if consistently high.
You can query for individual commands using sum by (cmd)(src_gitserver_exec_running) in Grafana (/-/debug/grafana) to see if a specific Git Server command might be spiking in frequency.
If this value is consistently high, consider the following:
- Single container deployments: Upgrade to a Docker Compose deployment which offers better scalability and resource isolation.
- Kubernetes and Docker Compose: Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100080
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_gitserver_echo_duration_seconds)
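As an optional follow-up (a hedged example that only reuses the src_gitserver_exec_duration_seconds_count metric shown in the "Rate of git commands received" panel above, not a query defined by this dashboard), the commands generating the most load can be ranked with:
topk(5, sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count[5m])))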
gitserver: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100081
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver"}[5m]))
Git Server: Gitserver: Gitserver API (powered by internal/observation)
gitserver: gitserver_api_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (le)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100
gitserver: gitserver_api_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))
gitserver: gitserver_api_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100
Git Server: Global operation semaphores
gitserver: batch_log_semaphore_wait_99th_percentile_duration
Aggregate successful batch log semaphore operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (le)(rate(src_batch_log_semaphore_wait_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
Git Server: Gitservice for internal cloning
gitserver: aggregate_gitservice_request_duration
95th percentile gitservice request duration aggregate
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false"}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice request duration per shard
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false", instance=~"${shard:regex}"}[5m])) by (le, instance))
gitserver: aggregate_gitservice_error_request_duration
95th percentile gitservice error request duration aggregate
95th percentile gitservice error request duration aggregate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true"}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice error request duration per shard
95th percentile gitservice error request duration per shard
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true", instance=~"${shard:regex}"}[5m])) by (le, instance))
gitserver: aggregate_gitservice_request_rate
Aggregate gitservice request rate
Aggregate gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false"}[5m]))
gitserver: gitservice_request_rate
Gitservice request rate per shard
Per shard gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100321
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false", instance=~"${shard:regex}"}[5m]))
gitserver: aggregate_gitservice_request_error_rate
Aggregate gitservice request error rate
Aggregate gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100330
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true"}[5m]))
gitserver: gitservice_request_error_rate
Gitservice request error rate per shard
Per shard gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100331
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true", instance=~"${shard:regex}"}[5m]))
gitserver: aggregate_gitservice_requests_running
Aggregate gitservice requests running
Aggregate gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100340
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(src_gitserver_gitservice_running{type="gitserver"})
gitserver: gitservice_requests_running
Gitservice requests running per shard
Per shard gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100341
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(src_gitserver_gitservice_running{type="gitserver", instance=~"${shard:regex}"}) by (instance)
Git Server: Gitserver cleanup jobs
gitserver: janitor_running
If the janitor process is running
1 if the janitor process is currently running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (instance) (src_gitserver_janitor_running)
gitserver: janitor_job_duration
95th percentile job run duration
95th percentile job run duration
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_janitor_job_duration_seconds_bucket[5m])) by (le, job_name))
gitserver: janitor_job_failures
Failures over 5m (by job)
The rate of failures over 5m (by job)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100420
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (job_name) (rate(src_gitserver_janitor_job_duration_seconds_count{success="false"}[5m]))
gitserver: repos_removed
Repositories removed due to disk pressure
Repositories removed due to disk pressure
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100430
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (instance) (rate(src_gitserver_repos_removed_disk_pressure[5m]))
gitserver: non_existent_repos_removed
Repositories removed because they are not defined in the DB
Repositories removed because they are not defined in the DB
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100440
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (instance) (increase(src_gitserver_non_existing_repos_removed[5m]))
gitserver: sg_maintenance_reason
Successful sg maintenance jobs over 1h (by reason)
The rate of successful sg maintenance jobs, broken down by the reason they were triggered
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100450
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (reason) (rate(src_gitserver_maintenance_status{success="true"}[1h]))
gitserver: git_prune_skipped
Successful git prune jobs over 1h
The rate of successful git prune jobs over 1h and whether they were skipped
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100460
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (skipped) (rate(src_gitserver_prune_status{success="true"}[1h]))
Git Server: Search
gitserver: search_latency
Mean time until first result is sent
Mean latency (time to first result) of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_latency_seconds_sum[5m]) / rate(src_gitserver_search_latency_seconds_count[5m])
gitserver: search_duration
Mean search duration
Mean duration of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_duration_seconds_sum[5m]) / rate(src_gitserver_search_duration_seconds_count[5m])
gitserver: search_rate
Rate of searches run by pod
The rate of searches executed on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_latency_seconds_count{instance=~"${shard:regex}"}[5m])
gitserver: running_searches
Number of searches currently running by pod
The number of searches currently executing on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (instance) (src_gitserver_search_running{instance=~"${shard:regex}"})
Git Server: Repos disk I/O metrics
gitserver: repos_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))
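When interpreting the per-device caveat above, it can help to see which device and node back each shard's repos directory. A minimal, hedged example that reuses only the info metric from the query above is:
max by (instance, device, nodename) (gitserver_mount_point_info{mount_name="reposDir"})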
gitserver: repos_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))
gitserver: repos_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))
gitserver: repos_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))
gitserver: repos_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100620
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))
gitserver: repos_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100621
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))
gitserver: repos_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100630
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))
gitserver: repos_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100631
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))
gitserver: repos_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100640
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~"node-exporter.*"}[1m])))))
gitserver: repos_disk_writes_merged_sec
Merged writes request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100641
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~"node-exporter.*"}[1m])))))
gitserver: repos_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100650
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~"${shard:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~"node-exporter.*"}[1m])))))
Git Server: Codeintel: Coursier invocation stats
gitserver: codeintel_coursier_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
gitserver: codeintel_coursier_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
gitserver: codeintel_coursier_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100712
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100713
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: Codeintel: npm invocation stats
gitserver: codeintel_npm_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
gitserver: codeintel_npm_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
gitserver: codeintel_npm_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100813
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: HTTP handlers
gitserver: healthy_request_rate
Requests per second, by route, when status code is 200
The number of healthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))
gitserver: unhealthy_request_rate
Requests per second, by route, when status code is not 200
The number of unhealthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m]))
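Combining the healthy and unhealthy panels above, a hedged example (not part of the generated dashboard) of an overall non-2xx percentage for gitserver's internal HTTP API is:
sum(rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m])) / sum(rate(src_http_request_duration_seconds_count{app="gitserver"}[5m])) * 100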
gitserver: request_rate_by_code
Requests per second, by status code
The number of HTTP requests per second by code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (code) (rate(src_http_request_duration_seconds_count{app="gitserver"}[5m]))
gitserver: 95th_percentile_healthy_requests
95th percentile duration by route, when status code is 200
The 95th percentile duration by route when the status code is 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))
gitserver: 95th_percentile_unhealthy_requests
95th percentile duration by route, when status code is not 200
The 95th percentile duration by route when the status code is not 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code!~"2.."}[5m])) by (le, route))
Git Server: Database connections
gitserver: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})
gitserver: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})
gitserver: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})
gitserver: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})
gitserver: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101020
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
gitserver: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101030
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))
gitserver: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101031
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))
gitserver: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101032
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))
Git Server: Container monitoring (not available on server)
gitserver: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod gitserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p gitserver.
- Docker Compose:
  - Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' gitserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the gitserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs gitserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)
gitserver: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}
gitserver: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101102
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}
gitserver: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101103
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))
Git Server: Provisioning indicators (not available on server)
gitserver: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])
gitserver: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])
gitserver: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^gitserver.*"})
Git Server: Golang runtime monitoring
gitserver: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*gitserver"})
gitserver: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101301
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*gitserver"})
Git Server: Kubernetes monitoring (only available on Kubernetes)
gitserver: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101400
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100
GitHub Proxy
Proxies all requests to github.com, keeping track of and managing rate limits.
To see this dashboard, visit /-/debug/grafana/d/github-proxy/github-proxy
on your Sourcegraph instance.
GitHub Proxy: GitHub API monitoring
github-proxy: github_proxy_waiting_requests
Number of requests waiting on the global mutex
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(github_proxy_waiting_requests)
GitHub Proxy: Container monitoring (not available on server)
github-proxy: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod github-proxy (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p github-proxy.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' github-proxy (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the github-proxy container in docker-compose.yml; a scripted version of this check is sketched after this list.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs github-proxy (note this will include logs from the previous and currently running container).
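For Docker Compose deployments, the OOM check above can be scripted. The snippet below is a minimal sketch (a local convenience, not a Sourcegraph utility) that wraps the docker inspect command from the list and reports the OOMKilled flag for a container.

# Minimal sketch: mirror the `docker inspect -f '{{json .State}}' github-proxy`
# step from the troubleshooting list and report whether the container was
# OOM killed.
import json
import subprocess
import sys

def was_oom_killed(container):
    """Return True if Docker reports OOMKilled=true in the container state."""
    out = subprocess.run(
        ["docker", "inspect", "-f", "{{json .State}}", container],
        capture_output=True, text=True, check=True,
    ).stdout
    return bool(json.loads(out).get("OOMKilled"))

if __name__ == "__main__":
    name = sys.argv[1] if len(sys.argv) > 1 else "github-proxy"
    if was_oom_killed(name):
        print(name + ": OOMKilled=true - consider raising its memory limit in docker-compose.yml")
    else:
        print(name + ": no OOM kill recorded in the current container state")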
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^github-proxy.*"}) > 60)
github-proxy: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}
github-proxy: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}
github-proxy: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with github-proxy issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^github-proxy.*"}[1h]) + rate(container_fs_writes_total{name=~"^github-proxy.*"}[1h]))
GitHub Proxy: Provisioning indicators (not available on server)
github-proxy: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[1d])
github-proxy: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[1d])
github-proxy: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[5m])
github-proxy: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[5m])
github-proxy: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^github-proxy.*"})
GitHub Proxy: Golang runtime monitoring
github-proxy: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*github-proxy"})
github-proxy: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*github-proxy"})
GitHub Proxy: Kubernetes monitoring (only available on Kubernetes)
github-proxy: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*github-proxy"}) / count by (app) (up{app=~".*github-proxy"}) * 100
Postgres
Postgres metrics, exported from postgres_exporter (not available on server).
To see this dashboard, visit /-/debug/grafana/d/postgres/postgres
on your Sourcegraph instance.
postgres: connections
Active connections
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"}) OR sum by (job) (pg_stat_activity_count{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
postgres: usage_connections_percentage
Connections in use
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum(pg_stat_activity_count) by (job) / (sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)) * 100
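As an illustration (hypothetical numbers): with max_connections = 100, superuser_reserved_connections = 3, and 58 active connections, the panel would show 58 / (100 - 3) * 100 ≈ 59.8%.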
postgres: transaction_durations
Maximum transaction durations
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (job) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin",job!="codeintel-db"}) OR sum by (job) (pg_stat_activity_max_tx_duration{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
Postgres: Database and collector status
postgres: postgres_up
Database availability
A non-zero value indicates the database is online.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: pg_up
postgres: invalid_indexes
Invalid indexes (unusable by the query planner)
A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (relname)(pg_invalid_index_count)
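As a rough guide (verify against the PostgreSQL documentation for your version): an invalid index can usually be rebuilt with REINDEX INDEX CONCURRENTLY on PostgreSQL 12 and later, or by dropping and recreating the index on older versions. The relname label in the query above identifies the affected index.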
postgres: pg_exporter_err
Errors scraping postgres exporter
This value indicates issues retrieving metrics from postgres_exporter.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: pg_exporter_last_scrape_error
postgres: migration_in_progress
Active schema migration
A value of 0 indicates that no migration is in progress.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: pg_sg_migration_status
Postgres: Object size and bloat
postgres: pg_table_size
Table size
Total size of this table
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (relname)(pg_table_bloat_size)
postgres: pg_table_bloat_ratio
Table bloat ratio
Estimated bloat ratio of this table (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (relname)(pg_table_bloat_ratio) * 100
postgres: pg_index_size
Index size
Total size of this index
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (relname)(pg_index_bloat_size)
postgres: pg_index_bloat_ratio
Index bloat ratio
Estimated bloat ratio of this index (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (relname)(pg_index_bloat_ratio) * 100
Postgres: Provisioning indicators (not available on server)
postgres: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
postgres: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
postgres: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
postgres: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
postgres: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^(pgsql|codeintel-db|codeinsights).*"})
Postgres: Kubernetes monitoring (only available on Kubernetes)
postgres: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) * 100
Precise Code Intel Worker
Handles conversion of uploaded precise code intelligence bundles.
To see this dashboard, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker
on your Sourcegraph instance.
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_queue_size
Unprocessed upload record queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_queue_growth_rate
Unprocessed upload record queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate > the enqueue rate
- A value = 1 indicates that the processing rate = the enqueue rate
- A value > 1 indicates that the processing rate < the enqueue rate
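For example (illustrative numbers): if roughly 300 uploads were enqueued over the 30m window but only 150 finished processing, the value would be 300 / 150 = 2, meaning the queue is growing; a value of 0.5 would mean the backlog is draining.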
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m])) / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))
precise-code-intel-worker: codeintel_upload_queued_max_age
Unprocessed upload record queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_upload_queued_duration_seconds_total{job=~"^precise-code-intel-worker.*"})
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_processor_upload_size
Sum of upload sizes in bytes being processed by each precise code-intel worker instance
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(instance) (src_codeintel_upload_processor_upload_size{job="precise-code-intel-worker"})
precise-code-intel-worker: codeintel_upload_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
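As an illustration (hypothetical numbers): if src_codeintel_upload_processor_errors_total increased by 5 and src_codeintel_upload_processor_total increased by 95 over the 5m window, the panel reads 5 / (95 + 5) * 100 = 5%. The other error-rate panels in this dashboard use the same errors / (operations + errors) shape.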
Precise Code Intel Worker: Codeintel: dbstore stats
precise-code-intel-worker: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: lsifstore stats
precise-code-intel-worker: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_upload_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: gitserver client
precise-code-intel-worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: uploadstore stats
precise-code-intel-worker: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Internal service requests
precise-code-intel-worker: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker"}[5m]))
Precise Code Intel Worker: Database connections
precise-code-intel-worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})
precise-code-intel-worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})
precise-code-intel-worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100820
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
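As an illustration (hypothetical numbers): if src_pgsql_conns_blocked_seconds increased by 2 and src_pgsql_conns_waited_for increased by 400 over the 5m window, the panel reads 2 / 400 = 0.005 s (5 ms) of blocking per connection request.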
precise-code-intel-worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100830
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100831
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100832
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))
Precise Code Intel Worker: Container monitoring (not available on server)
precise-code-intel-worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
precise-code-intel-worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))
Precise Code Intel Worker: Provisioning indicators (not available on server)
precise-code-intel-worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
precise-code-intel-worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
precise-code-intel-worker: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^precise-code-intel-worker.*"})
Precise Code Intel Worker: Golang runtime monitoring
precise-code-intel-worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})
precise-code-intel-worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})
Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)
precise-code-intel-worker: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100
Redis
Metrics from both redis databases.
To see this dashboard, visit /-/debug/grafana/d/redis/redis
on your Sourcegraph instance.
Redis: Redis Store
redis: redis-store_up
Redis-store availability
A value of 1 indicates the service is currently running
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: redis_up{app="redis-store"}
Redis: Redis Cache
redis: redis-cache_up
Redis-cache availability
A value of 1 indicates the service is currently running
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: redis_up{app="redis-cache"}
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])
redis: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^redis-cache.*"})
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])
redis: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of under-provisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^redis-store.*"})
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100
Worker
Manages background processes.
To see this dashboard, visit /-/debug/grafana/d/worker/worker
on your Sourcegraph instance.
Worker: Active jobs
worker: worker_job_count
Number of worker instances running each job
The number of worker instances running each job type. It is necessary for each job type to be managed by at least one worker instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query: sum by (job_name) (src_worker_jobs{job="worker"})
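Because every job type should be handled by at least one worker instance, an ad-hoc variant of the query above can be used to surface job types that currently report no running instances (a sketch; it assumes src_worker_jobs is exported with a value of 0 for job types a worker is not running, so that an unhandled job type still produces a series):
Example query: sum by (job_name) (src_worker_jobs{job="worker"}) < 1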
worker: worker_job_codeintel-upload-janitor_count
Number of worker instances running the codeintel-upload-janitor job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-upload-janitor"})
worker: worker_job_codeintel-commitgraph-updater_count
Number of worker instances running the codeintel-commitgraph-updater job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-commitgraph-updater"})
worker: worker_job_codeintel-autoindexing-scheduler_count
Number of worker instances running the codeintel-autoindexing-scheduler job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-autoindexing-scheduler"})
Worker: Database record encrypter
worker: records_encrypted_at_rest_percentage
Percentage of database records encrypted at rest
Percentage of encrypted database records
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: (max(src_records_encrypted_at_rest_total) by (tableName)) / ((max(src_records_encrypted_at_rest_total) by (tableName)) + (max(src_records_unencrypted_at_rest_total) by (tableName))) * 100
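As a hypothetical worked example of the formula above: if a table reports 900 records encrypted at rest and 100 unencrypted, the panel shows 900 / (900 + 100) * 100 = 90% for that table.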
worker: records_encrypted_total
Database records encrypted every 5m
Number of encrypted database records every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (tableName)(increase(src_records_encrypted_total{job=~"^worker.*"}[5m]))
worker: records_decrypted_total
Database records decrypted every 5m
Number of decrypted database records every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (tableName)(increase(src_records_decrypted_total{job=~"^worker.*"}[5m]))
worker: record_encryption_errors_total
Encryption operation errors every 5m
Number of database record encryption/decryption errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum(increase(src_record_encryption_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: Repository with stale commit graph
worker: codeintel_commit_graph_queue_size
Repository queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_commit_graph_total{job=~"^worker.*"})
worker: codeintel_commit_graph_queue_growth_rate
Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs; a worked example follows the technical details below.
- A value < 1 indicates that the processing rate exceeds the enqueue rate (the queue is shrinking)
- A value = 1 indicates that the processing rate matches the enqueue rate (the queue is stable)
- A value > 1 indicates that the processing rate is below the enqueue rate (the queue is growing)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m]))
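As a hypothetical worked example of the ratio above: if 600 repositories were enqueued for a commit graph update over the 30m window while only 300 updates finished, the panel shows 600 / 300 = 2 and the queue is growing; with the numbers reversed it would show 0.5 and the queue would be draining.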
worker: codeintel_commit_graph_queued_max_age
Repository queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Repository commit graph updates
worker: codeintel_commit_graph_processor_total
Update operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_99th_percentile_duration
Aggregate successful update operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
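The bucket rates above are plotted as a latency distribution. If a single aggregate number is preferred, the same buckets can be collapsed into a 99th percentile with histogram_quantile, mirroring the per-operation duration panels later in this dashboard (an ad-hoc sketch, not one of the dashboard panels):
Example query: histogram_quantile(0.99, sum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m])))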
worker: codeintel_commit_graph_processor_errors_total
Update operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_error_rate
Update operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
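As a hypothetical worked example of this error rate: if a 5m window saw 38 successful update operations and 2 errors, the panel shows 2 / (38 + 2) * 100 = 5%.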
Worker: Codeintel: Dependency index job
worker: codeintel_dependency_index_queue_size
Dependency index job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_dependency_index_total{job=~"^worker.*"})
worker: codeintel_dependency_index_queue_growth_rate
Dependency index job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate exceeds the enqueue rate (the queue is shrinking)
- A value = 1 indicates that the processing rate matches the enqueue rate (the queue is stable)
- A value > 1 indicates that the processing rate is below the enqueue rate (the queue is growing)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[30m]))
worker: codeintel_dependency_index_queued_max_age
Dependency index job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_dependency_index_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Dependency index jobs
worker: codeintel_dependency_index_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(src_codeintel_dependency_index_processor_handlers{job=~"^worker.*"})
worker: codeintel_dependency_index_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependency_index_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Janitor stats
worker: codeintel_background_repositories_scanned_total
Repository records scanned every 5m
Number of repositories considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_repositories_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_scanned_total
LSIF upload records scanned every 5m
Number of upload records considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_commits_scanned_total
LSIF upload commits scanned every 5m
Number of commits considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_commits_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_expired_total
LSIF upload records expired every 5m
Number of upload records found to be expired every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_expired_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_removed_total
LSIF upload records deleted every 5m
Number of LSIF upload records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_records_removed_total
LSIF index records deleted every 5m
Number of LSIF index records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_index_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_uploads_purged_total
LSIF upload data bundles deleted every 5m
Number of LSIF upload data bundles purged from the codeintel-db database every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_uploads_purged_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_documentation_search_records_removed_total
Documentation search records deleted every 5m
Number of documentation search records removed from the codeintel-db database every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_documentation_search_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_audit_log_records_expired_total
LSIF upload audit log records deleted every 5m
Number of LSIF upload audit log records deleted due to expiration every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100620
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_audit_log_records_expired_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_errors_total
Janitor operation errors every 5m
Number of code intelligence janitor errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100621
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: Auto-index scheduler
worker: codeintel_autoindexing_total
Auto-indexing job scheduler operations every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_99th_percentile_duration
Aggregate successful auto-indexing job scheduler operation duration distribution over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_errors_total
Auto-indexing job scheduler operation errors every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_error_rate
Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))) * 100
Worker: Codeintel: dbstore stats
worker: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100813
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: lsifstore stats
worker: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100912
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100913
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Workerutil: lsif_dependency_indexes dbworker/store stats
worker: workerutil_dbworker_store_codeintel_dependency_index_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_dependency_index_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: gitserver client
worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Dependency repository insert
worker: codeintel_dependency_repos_total
Aggregate insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
Aggregate successful insert operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_errors_total
Aggregate insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Aggregate insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_dependency_repos_total
Insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
99th percentile successful insert operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,scheme,new)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_dependency_repos_errors_total
Insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: dbstore stats
worker: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101301
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101302
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101303
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101310
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101311
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101312
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101313
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: service stats
worker: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101400
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101401
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101402
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101403
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101410
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101411
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101412
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101413
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Workspace resolver dbstore
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101500
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101501
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101502
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101503
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Bulk operation processor dbstore
worker: workerutil_dbworker_store_batches_bulk_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101600
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_bulk_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101601
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_bulk_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batches_bulk_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101602
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_bulk_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101603
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Changeset reconciler dbstore
worker: workerutil_dbworker_store_batches_reconciler_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101700
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101701
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_reconciler_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101702
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101703
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Workspace execution dbstore
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101800
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101801
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101802
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101803
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: lsif_upload record resetter
worker: codeintel_background_upload_record_resets_total
LSIF upload records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_failures_total
LSIF upload records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_errors_total
LSIF upload operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_index record resetter
worker: codeintel_background_index_record_resets_total
LSIF index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_failures_total
LSIF index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_errors_total
LSIF index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_dependency_index record resetter
worker: codeintel_background_dependency_index_record_resets_total
LSIF dependency index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_failures_total
LSIF dependency index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_errors_total
LSIF dependency index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: Query Runner Queue
worker: query_runner_worker_queue_size
Code insights query runner queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102200
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: max(src_query_runner_worker_total{job=~"^worker.*"})
worker: query_runner_worker_queue_growth_rate
Code insights query runner queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate exceeds the enqueue rate (the queue is shrinking)
- A value = 1 indicates that the processing rate matches the enqueue rate (the queue is stable)
- A value > 1 indicates that the processing rate is below the enqueue rate (the queue is growing)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102201
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_total{job=~"^worker.*"}[30m])) / sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[30m]))
Worker: Codeinsights: insights queue processor
worker: query_runner_worker_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102300
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(src_query_runner_worker_processor_handlers{job=~"^worker.*"})
worker: query_runner_worker_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102310
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102311
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (le)(rate(src_query_runner_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102312
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102313
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeinsights: code insights query runner queue record resetter
worker: query_runner_worker_record_resets_total
Insights query runner queue records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102400
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_record_resets_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_record_reset_failures_total
Insights query runner queue records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102401
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_record_reset_errors_total
Insights query runner queue operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102402
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_query_runner_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: dbstore stats
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102500
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102501
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102502
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102503
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102510
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102511
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102512
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102513
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Code Insights queue utilization
worker: insights_queue_unutilized_size
Insights queue size that is not utilized (not processing)
Any value on this panel indicates code insights is not processing queries from its queue. This observable and alert only fire if there are records in the queue and there have been no dequeue attempts for 30 minutes.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102600
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: max(src_query_runner_worker_total{job=~"^worker.*"}) > 0 and on(job) sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
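The query above only produces a value when two conditions hold at once: the queue is non-empty and essentially no Dequeue operations happened in the lookback window. A minimal pure-Python sketch of that boolean, using hypothetical sample numbers rather than live metrics:

```python
# Sketch of the unutilized-queue condition expressed by the query above:
# a value only appears when the queue is non-empty AND dequeue activity has
# effectively stopped. The sample numbers below are hypothetical.

def insights_queue_unutilized(queue_size: float, dequeue_increase: float) -> bool:
    """Mirror of `max(queue size) > 0 and increase(Dequeue ops) < 1`."""
    return queue_size > 0 and dequeue_increase < 1

# 42 queued records but no Dequeue operations in the window -> stuck.
print(insights_queue_unutilized(queue_size=42, dequeue_increase=0))   # True
# Records are queued and being dequeued -> healthy, panel stays empty.
print(insights_queue_unutilized(queue_size=42, dequeue_increase=37))  # False
```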
Worker: Internal service requests
worker: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="worker"}[5m]))
Worker: Database connections
worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102800
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})
worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102801
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})
worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102810
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})
worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102811
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})
worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102820
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
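As a worked example of how this ratio reads (total seconds spent blocked waiting for a connection, divided by the number of connection requests that had to wait), using hypothetical numbers:

```python
# Hypothetical numbers over a 5m window: 12 seconds of total blocked time
# accumulated across 48 connection requests that had to wait.
blocked_seconds_increase = 12.0
waited_for_increase = 48
mean_blocked_seconds = blocked_seconds_increase / waited_for_increase
print(f"{mean_blocked_seconds:.3f}s mean blocked time per conn request")  # 0.250s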
worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102830
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))
worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102831
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))
worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102832
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))
Worker: Container monitoring (not available on server)
worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
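The Kubernetes steps listed above can also be scripted. A hedged sketch, assuming a Kubernetes deployment and a pod literally named worker (real pod names usually carry a deployment suffix, so adjust the name from kubectl get pods), that wraps the kubectl commands from the list above:

```python
# Hedged sketch wrapping the diagnostic commands above; the pod name "worker"
# is an assumption and should be replaced with the actual pod name.
import subprocess

def was_oom_killed(pod: str = "worker") -> bool:
    """Run `kubectl describe pod <pod>` and look for the OOMKilled marker."""
    out = subprocess.run(
        ["kubectl", "describe", "pod", pod],
        capture_output=True, text=True, check=True,
    ).stdout
    return "OOMKilled" in out

def previous_container_logs(pod: str = "worker") -> str:
    """Fetch logs from the previous container instance (`kubectl logs -p`)."""
    return subprocess.run(
        ["kubectl", "logs", "-p", pod],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    if was_oom_killed():
        print("worker was OOM killed; consider raising its memory limit")
    else:
        print("no OOMKilled marker; checking previous logs for panic: messages")
        print(previous_container_logs()[-2000:])
```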
worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}
worker: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102902
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}
worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102903
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))
Worker: Provisioning indicators (not available on server)
worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])
worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])
worker: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^worker.*"})
Worker: Golang runtime monitoring
worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*worker"})
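One way to check for leak-like growth is to pull the recent history of this gauge from the Prometheus HTTP API (query_range) and compare the start and end of the window. A hedged sketch, with PROM_URL as a placeholder and the 1.5x growth threshold chosen purely for illustration:

```python
# Hedged sketch: fetch the last hour of the goroutine gauge above and report
# whether it is trending upward, one possible signal of a goroutine leak.
import time
import requests

PROM_URL = "http://prometheus:9090"  # assumption: adjust for your environment
QUERY = 'max by(instance) (go_goroutines{job=~".*worker"})'

end = time.time()
resp = requests.get(
    f"{PROM_URL}/api/v1/query_range",
    params={"query": QUERY, "start": end - 3600, "end": end, "step": "60"},
    timeout=10,
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    values = [float(v) for _, v in series["values"]]
    if not values:
        continue
    first, last = values[0], values[-1]
    trend = "rising" if last > first * 1.5 else "steady"  # illustrative threshold
    instance = series["metric"].get("instance", "?")
    print(f"{instance}: {first:.0f} -> {last:.0f} goroutines ({trend})")
```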
worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*worker"})
Worker: Kubernetes monitoring (only available on Kubernetes)
worker: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
Repo Updater
Manages interaction with code hosts and instructs Gitserver to update repositories.
To see this dashboard, visit /-/debug/grafana/d/repo-updater/repo-updater
on your Sourcegraph instance.
Repo Updater: Repositories
repo-updater: syncer_sync_last_time
Time since last sync
A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
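A hedged sketch that evaluates the query above and prints the result as a human-readable age, assuming a reachable Prometheus HTTP API (PROM_URL is a placeholder; the 8-hour threshold below is illustrative, not the shipped alert condition):

```python
# Hedged sketch: read the "time since last sync" value behind the panel above.
import requests

PROM_URL = "http://prometheus:9090"  # assumption: adjust for your environment
QUERY = "max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

if result:
    seconds = float(result[0]["value"][1])
    hours, rem = divmod(int(seconds), 3600)
    print(f"last repo metadata sync finished {hours}h{rem // 60:02d}m ago")
    if seconds > 8 * 3600:  # illustrative threshold only
        print("persistently high: check that external service tokens are valid")
else:
    print("metric not present; repo-updater may not have completed a sync yet")
```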
repo-updater: src_repoupdater_max_sync_backoff
Time since oldest sync
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_max_sync_backoff)
repo-updater: src_repoupdater_syncer_sync_errors_total
Site level external service sync error rate
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user",reason!="invalid_npm_path",reason!="internal_rate_limit"}[5m]))
repo-updater: syncer_sync_start
Repo metadata sync was started
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))
repo-updater: syncer_sync_duration
95th percentile repositories sync duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, max by (le, family, success) (rate(src_repoupdater_syncer_sync_duration_seconds_bucket[1m])))
repo-updater: source_duration
95th percentile repositories source duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, max by (le) (rate(src_repoupdater_source_duration_seconds_bucket[1m])))
repo-updater: syncer_synced_repos
Repositories synced
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_syncer_synced_repos_total[1m]))
repo-updater: sourced_repos
Repositories sourced
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_source_repos_total[1m]))
repo-updater: purge_failed
Repositories purge failed
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100030
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_purge_failed[1m]))
repo-updater: sched_auto_fetch
Repositories scheduled due to hitting a deadline
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100040
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_sched_auto_fetch[1m]))
repo-updater: sched_manual_fetch
Repositories scheduled due to user traffic
Check repo-updater logs if this value is persistently high. This does not indicate anything if there are no user-added code hosts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100041
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_sched_manual_fetch[1m]))
repo-updater: sched_known_repos
Repositories managed by the scheduler
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100050
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_sched_known_repos)
repo-updater: sched_update_queue_length
Rate of growth of update queue length over 5 minutes
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100051
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(deriv(src_repoupdater_sched_update_queue_length[5m]))
repo-updater: sched_loops
Scheduler loops
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100052
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_sched_loops[1m]))
repo-updater: src_repoupdater_stale_repos
Repos that haven't been fetched in more than 8 hours
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100060
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_stale_repos)
repo-updater: sched_error
Repositories schedule error rate
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100061
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(rate(src_repoupdater_sched_error[1m]))
Repo Updater: Permissions
repo-updater: permissions_syncs_scheduled_reason
Number of users/repos scheduled for permissions sync grouped by reason
Indicates the number of users/repos scheduled for permissions sync grouped by reason.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (type) (src_repoupdater_perms_syncer_items_sync_scheduled)
repo-updater: permissions_syncs_scheduled_priority
Number of users/repos scheduled for permissions sync grouped by priority
Indicates the number of users/repos scheduled for permissions sync grouped by priority.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (priority) (src_repoupdater_perms_syncer_items_sync_scheduled)
repo-updater: user_success_syncs_total
Total number of user permissions syncs
Indicates the total number of user permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(src_repoupdater_perms_syncer_success_syncs{type="user"})
repo-updater: user_success_syncs
Number of user permissions syncs [5m]
Indicates the number of user permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_success_syncs{type="user"}[5m]))
repo-updater: user_initial_syncs
Number of first user permissions syncs [5m]
Indicates the number of permissions syncs done for the first time for the user.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_initial_syncs{type="user"}[5m]))
repo-updater: user_failed_syncs
Number of user permissions failed syncs [5m]
Indicates the number of user permissions syncs that failed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_failed_syncs{type="user"}[5m]))
repo-updater: repo_success_syncs_total
Total number of repo permissions syncs
Indicates the total number of repo permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100130
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(src_repoupdater_perms_syncer_success_syncs{type="repo"})
repo-updater: repo_success_syncs
Number of repo permissions syncs over 5m
Indicates the number of repo permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100131
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_success_syncs{type="repo"}[5m]))
repo-updater: repo_initial_syncs
Number of first repo permissions syncs over 5m
Indicates the number of permissions syncs done for the first time for the repo.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100132
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_initial_syncs{type="repo"}[5m]))
repo-updater: repo_failed_syncs
Number of repo permissions failed syncs over 5m
Indicates the number of repo permissions syncs that failed in the last 5 minutes.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100140
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum(increase(src_repoupdater_perms_syncer_failed_syncs{type="repo"}[5m]))
repo-updater: users_consecutive_sync_delay
Max duration between two consecutive permissions sync for user
Indicates the max delay between two consecutive permissions syncs for a user during the period.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100150
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(max_over_time (src_repoupdater_perms_syncer_perms_consecutive_sync_delay{type="user"} [1m]))
repo-updater: repos_consecutive_sync_delay
Max duration between two consecutive permissions sync for repo
Indicates the max delay between two consecutive permissions syncs for a repo during the period.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100151
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(max_over_time (src_repoupdater_perms_syncer_perms_consecutive_sync_delay{type="repo"} [1m]))
repo-updater: users_first_sync_delay
Max duration between user creation and first permissions sync
Indicates the max delay between user creation and the first permissions sync for that user.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100160
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(max_over_time(src_repoupdater_perms_syncer_perms_first_sync_delay{type="user"}[1m]))
repo-updater: repos_first_sync_delay
Max duration between repo creation and first permissions sync over 1m
Indicates the max delay between repo creation and the first permissions sync for that repo.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100161
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(max_over_time(src_repoupdater_perms_syncer_perms_first_sync_delay{type="repo"}[1m]))
repo-updater: permissions_found_count
Number of permissions found during user/repo permissions sync
Indicates the number of permissions found during user/repo permissions sync.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100170
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: sum by (type) (src_repoupdater_perms_syncer_perms_found)
repo-updater: permissions_found_avg
Average number of permissions found during permissions sync per user/repo
Indicates the average number of permissions found during permissions sync per user/repo.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100171
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: avg by (type) (src_repoupdater_perms_syncer_perms_found)
repo-updater: perms_syncer_perms
Time gap between least and most up to date permissions
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100180
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_perms_gap_seconds)
repo-updater: perms_syncer_stale_perms
Number of entities with stale permissions
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100181
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_stale_perms)
repo-updater: perms_syncer_no_perms
Number of entities with no permissions
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100190
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_no_perms)
repo-updater: perms_syncer_outdated_perms
Number of entities with outdated permissions
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100191
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_outdated_perms)
repo-updater: perms_syncer_sync_duration
95th percentile permissions sync duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: histogram_quantile(0.95, max by (le, type) (rate(src_repoupdater_perms_syncer_sync_duration_seconds_bucket[1m])))
repo-updater: perms_syncer_queue_size
Permissions sync queued items
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(src_repoupdater_perms_syncer_queue_size)
repo-updater: perms_syncer_sync_errors
Permissions sync error rate
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max by (type) (ceil(rate(src_repoupdater_perms_syncer_sync_errors_total[1m])))
repo-updater: perms_syncer_scheduled_repos_total
Total number of repos scheduled for permissions sync
Indicates how many repositories have been scheduled for a permissions sync. For more details, see the repository permissions synchronization documentation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Identity and Access Management team.
Technical details
Query: max(rate(src_repoupdater_perms_syncer_schedule_repos_total[1m]))
Repo Updater: External services
repo-updater: src_repoupdater_external_services_total
The total number of external services
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_external_services_total)
repo-updater: repoupdater_queued_sync_jobs_total
The total number of queued sync jobs
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_queued_sync_jobs_total)
repo-updater: repoupdater_completed_sync_jobs_total
The total number of completed sync jobs
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_completed_sync_jobs_total)
repo-updater: repoupdater_errored_sync_jobs_percentage
The percentage of external services that have failed their most recent sync
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max(src_repoupdater_errored_sync_jobs_percentage)
repo-updater: github_graphql_rate_limit_remaining
Remaining calls to GitHub graphql API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100220
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="graphql"})
repo-updater: github_rest_rate_limit_remaining
Remaining calls to GitHub rest API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100221
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="rest"})
repo-updater: github_search_rate_limit_remaining
Remaining calls to GitHub search API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100222
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="search"})
repo-updater: github_graphql_rate_limit_wait_duration
Time spent waiting for the GitHub graphql API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100230
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="graphql"}[5m]))
repo-updater: github_rest_rate_limit_wait_duration
Time spent waiting for the GitHub rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100231
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
repo-updater: github_search_rate_limit_wait_duration
Time spent waiting for the GitHub search API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100232
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="search"}[5m]))
repo-updater: gitlab_rest_rate_limit_remaining
Remaining calls to GitLab rest API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100240
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (src_gitlab_rate_limit_remaining{resource="rest"})
repo-updater: gitlab_rest_rate_limit_wait_duration
Time spent waiting for the GitLab rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100241
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (rate(src_gitlab_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
repo-updater: src_internal_rate_limit_wait_duration_bucket
95th percentile time spent successfully waiting on our internal rate limiter
Indicates how long we're waiting on our internal rate limiter when communicating with a code host.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100250
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_internal_rate_limit_wait_duration_bucket{failed="false"}[5m])) by (le, urn))
repo-updater: src_internal_rate_limit_wait_error_count
Rate of failures waiting on our internal rate limiter
The rate at which we fail our internal rate limiter.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100251
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (urn) (rate(src_internal_rate_limit_wait_duration_count{failed="true"}[5m]))
Repo Updater: Batches: dbstore stats
repo-updater: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Batches: service stats
repo-updater: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Batch Changes team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: Coursier invocation stats
repo-updater: codeintel_coursier_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: codeintel_coursier_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
repo-updater: codeintel_coursier_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: npm invocation stats
repo-updater: codeintel_npm_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: codeintel_npm_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
repo-updater: codeintel_npm_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: HTTP handlers
repo-updater: healthy_request_rate
Requests per second, by route, when status code is 200
The number of healthy HTTP requests per second to the internal HTTP API.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code=~"2.."}[5m]))
repo-updater: unhealthy_request_rate
Requests per second, by route, when status code is not 200
The number of unhealthy HTTP requests per second to the internal HTTP API.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code!~"2.."}[5m]))
repo-updater: request_rate_by_code
Requests per second, by status code
The number of HTTP requests per second by code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (code) (rate(src_http_request_duration_seconds_count{app="repo-updater"}[5m]))
repo-updater: 95th_percentile_healthy_requests
95th percentile duration by route, when status code is 200
The 95th percentile duration by route when the status code is 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code=~"2.."}[5m])) by (le, route))
repo-updater: 95th_percentile_unhealthy_requests
95th percentile duration by route, when status code is not 200
The 95th percentile duration by route when the status code is not 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code!~"2.."}[5m])) by (le, route))
Repo Updater: Internal service requests
repo-updater: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater"}[5m]))
Repo Updater: Database connections
repo-updater: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="repo-updater"})
repo-updater: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="repo-updater"})
repo-updater: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="repo-updater"})
repo-updater: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="repo-updater"})
repo-updater: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100920
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="repo-updater"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="repo-updater"}[5m]))
repo-updater: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100930
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="repo-updater"}[5m]))
repo-updater: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100931
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="repo-updater"}[5m]))
repo-updater: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100932
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="repo-updater"}[5m]))
Repo Updater: Container monitoring (not available on server)
repo-updater: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod repo-updater` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml` (a combined check is sketched after this list).
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p repo-updater`.
- Docker Compose:
  - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' repo-updater` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the repo-updater container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs repo-updater` (note this will include logs from the previous and currently running container).
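A minimal sketch of the Kubernetes checks above, assuming the pod is reachable as repo-updater from your current kubectl context (adjust the pod name and namespace for your deployment):

```bash
# Check whether the repo-updater pod was OOM killed; the "Last State" section
# shows "Reason: OOMKilled" if the previous container instance hit its memory limit.
kubectl describe pod repo-updater | grep -A 3 "Last State"

# Inspect logs from the previous container instance for panic: messages.
kubectl logs -p repo-updater | grep -n "panic:"
```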
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^repo-updater.*"}) > 60)
repo-updater: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with repo-updater issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^repo-updater.*"}[1h]) + rate(container_fs_writes_total{name=~"^repo-updater.*"}[1h]))
Repo Updater: Provisioning indicators (not available on server)
repo-updater: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[1d])
repo-updater: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[1d])
repo-updater: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[5m])
repo-updater: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[5m])
repo-updater: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences indicate underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^repo-updater.*"})
Repo Updater: Golang runtime monitoring
repo-updater: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*repo-updater"})
repo-updater: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*repo-updater"})
Repo Updater: Kubernetes monitoring (only available on Kubernetes)
repo-updater: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Repo Management team.
Technical details
Query: sum by(app) (up{app=~".*repo-updater"}) / count by (app) (up{app=~".*repo-updater"}) * 100
Searcher
Performs unindexed searches (diff and commit search, text search for unindexed branches).
To see this dashboard, visit /-/debug/grafana/d/searcher/searcher
on your Sourcegraph instance.
searcher: unindexed_search_request_errors
Unindexed search request errors every 5m by code
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (code)(increase(searcher_service_request_total{code!="200",code!="canceled"}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
searcher: replica_traffic
Requests per second over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(instance) (rate(searcher_service_request_total[10m]))
Searcher: Index use
searcher: searcher_hybrid_final_state_total
Hybrid search final state over 10m
This graph shows searcher's interactions with the search index (Zoekt) when completing unindexed search requests. Searcher uses indexed search for the files that have not changed between the unindexed commit and the index.
This graph should mostly be "success". The next most common state should be "search-canceled", which happens when result limits are hit or the user starts a new search. The next most common after that should be "diff-too-large", which happens if the commit is too far from the indexed commit. Any other states should be rare and are likely a sign that further investigation is needed.
Note: On sourcegraph.com, "zoekt-list-missing" is also common because it indexes only a subset of repositories. Otherwise, every other state should occur rarely.
For a full list of possible states, see recordHybridFinalState.
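To inspect the raw state breakdown outside Grafana, a minimal sketch against the Prometheus HTTP API is shown below; http://prometheus:9090 is an assumed address and should be replaced with however Prometheus is exposed in your deployment:

```bash
# Ask Prometheus for the hybrid-search final-state breakdown over the last 10m.
# The prometheus:9090 address is an assumption; substitute your own endpoint.
curl -sG 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=sum by (state)(increase(searcher_hybrid_final_state_total[10m]))'
```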
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (state)(increase(searcher_hybrid_final_state_total[10m]))
searcher: searcher_hybrid_retry_total
Hybrid search retrying over 10m
This graph should mostly be 0. Retries are triggered when the underlying index changes while a search is in flight or when Zoekt goes down, so occasional bursts are expected, but if this graph is regularly above 0 it is a sign that further investigation is needed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (reason)(increase(searcher_hybrid_retry_total[10m]))
Searcher: Cache disk I/O metrics
searcher: cache_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100220
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100221
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100230
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100231
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100240
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_writes_merged_sec
Merged writes request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100241
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100250
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
Searcher: Database connections
searcher: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="searcher"})
searcher: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="searcher"})
searcher: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="searcher"})
searcher: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="searcher"})
searcher: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="searcher"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="searcher"}[5m]))
searcher: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100330
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="searcher"}[5m]))
searcher: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100331
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="searcher"}[5m]))
searcher: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100332
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="searcher"}[5m]))
Searcher: Internal service requests
searcher: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="searcher",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="searcher"}[5m]))
Searcher: Container monitoring (not available on server)
searcher: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod searcher` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p searcher`.
- Docker Compose:
  - Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' searcher` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the searcher container in `docker-compose.yml` (see the sketch after this list).
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs searcher` (note this will include logs from the previous and currently running container).
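For Docker Compose deployments, a minimal sketch of the checks above, assuming the container is named searcher (adjust the name to match your docker-compose.yml):

```bash
# Check whether the searcher container was OOM killed.
docker inspect -f '{{json .State}}' searcher | grep -o '"OOMKilled":[a-z]*'

# Review logs for panic: messages; this includes output from the previous
# and currently running container.
docker logs searcher 2>&1 | grep -n "panic:"
```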
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^searcher.*"}) > 60)
searcher: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}
searcher: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}
searcher: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with searcher issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^searcher.*"}[1h]) + rate(container_fs_writes_total{name=~"^searcher.*"}[1h]))
Searcher: Provisioning indicators (not available on server)
searcher: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[5m])
searcher: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[5m])
searcher: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. Frequent occurrences indicate underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^searcher.*"})
Searcher: Golang runtime monitoring
searcher: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*searcher"})
searcher: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*searcher"})
Searcher: Kubernetes monitoring (only available on Kubernetes)
searcher: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(app) (up{app=~".*searcher"}) / count by (app) (up{app=~".*searcher"}) * 100
Symbols
Handles symbol searches for unindexed branches.
To see this dashboard, visit /-/debug/grafana/d/symbols/symbols
on your Sourcegraph instance.
Symbols: Codeintel: Symbols API
symbols: codeintel_symbols_api_total
Aggregate API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
Aggregate successful API operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_errors_total
Aggregate API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
Aggregate API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_api_total
API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
99th percentile successful API operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op,parseAmount)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_api_errors_total
API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols parser
symbols: symbols
In-flight parse jobs
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_symbols_parsing{job=~"^symbols.*"})
symbols: symbols
Parser queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_symbols_parse_queue_size{job=~"^symbols.*"})
symbols: symbols
Parse queue timeouts
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_symbols_parse_queue_timeouts_total{job=~"^symbols.*"})
symbols: symbols
Parse failures every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: rate(src_codeintel_symbols_parse_failed_total{job=~"^symbols.*"}[5m])
symbols: codeintel_symbols_parser_total
Aggregate parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
Aggregate successful parser operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_errors_total
Aggregate parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Aggregate parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_parser_total
Parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
99th percentile successful parser operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100121
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_parser_errors_total
Parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100122
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100123
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols cache janitor
symbols: symbols
Size in bytes of the on-disk cache
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: src_codeintel_symbols_store_cache_size_bytes
symbols: symbols
Cache eviction operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: rate(src_codeintel_symbols_store_evictions_total[5m])
symbols: symbols
Cache eviction operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: rate(src_codeintel_symbols_store_errors_total[5m])
Symbols: Codeintel: Symbols repository fetcher
symbols: symbols
In-flight repository fetch operations
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: src_codeintel_symbols_fetching
symbols: symbols
Repository fetch queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_symbols_fetch_queue_size{job=~"^symbols.*"})
symbols: codeintel_symbols_repository_fetcher_total
Aggregate fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
Aggregate successful fetcher operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_errors_total
Aggregate fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Aggregate fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_repository_fetcher_total
Fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
99th percentile successful fetcher operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100321
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_repository_fetcher_errors_total
Fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100322
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100323
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols gitserver client
symbols: codeintel_symbols_gitserver_total
Aggregate gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
Aggregate successful gitserver client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_errors_total
Aggregate gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Aggregate gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_gitserver_total
Gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
99th percentile successful gitserver client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_gitserver_errors_total
Gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Database connections
symbols: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="symbols"})
symbols: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="symbols"})
symbols: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="symbols"})
symbols: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="symbols"})
symbols: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100520
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="symbols"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="symbols"}[5m]))
symbols: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100530
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="symbols"}[5m]))
symbols: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100531
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="symbols"}[5m]))
symbols: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100532
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="symbols"}[5m]))
Symbols: Internal service requests
symbols: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="symbols",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="symbols"}[5m]))
Symbols: Container monitoring (not available on server)
symbols: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod symbols (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p symbols.
- Docker Compose:
  - Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' symbols (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the symbols container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs symbols (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^symbols.*"}) > 60)
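The expression counts containers whose last-seen timestamp is more than 60 seconds old. For example (hypothetical values), a symbols container last reported 90 seconds ago satisfies (time() - container_last_seen) > 60 and contributes 1 to the count.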
symbols: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}
symbols: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}
symbols: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with symbols issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^symbols.*"}[1h]) + rate(container_fs_writes_total{name=~"^symbols.*"}[1h]))
Symbols: Provisioning indicators (not available on server)
symbols: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[5m])
symbols: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[5m])
symbols: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^symbols.*"})
Symbols: Golang runtime monitoring
symbols: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*symbols"})
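To check whether growth is sustained rather than a short-lived spike, a hypothetical ad-hoc query (not part of this dashboard) such as deriv(go_goroutines{job=~".*symbols"}[30m]) approximates the per-second growth rate; a value that stays positive for long periods is a stronger sign of a leak than a single high reading.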
symbols: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*symbols"})
Symbols: Kubernetes monitoring (only available on Kubernetes)
symbols: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by(app) (up{app=~".*symbols"}) / count by (app) (up{app=~".*symbols"}) * 100
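The query divides the number of symbols targets reporting up by the total number of symbols targets. For example (hypothetical values), 2 of 3 pods up gives 2 / 3 * 100 ≈ 66.7%.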
Syntect Server
Handles syntax highlighting for code files.
To see this dashboard, visit /-/debug/grafana/d/syntect-server/syntect-server
on your Sourcegraph instance.
syntect-server: syntax_highlighting_errors
Syntax highlighting errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="error"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
syntect-server: syntax_highlighting_timeouts
Syntax highlighting timeouts every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="timeout"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
syntect-server: syntax_highlighting_panics
Syntax highlighting panics every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="panic"}[5m]))
syntect-server: syntax_highlighting_worker_deaths
Syntax highlighter worker deaths every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="hss_worker_timeout"}[5m]))
Syntect Server: Container monitoring (not available on server)
syntect-server: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod syntect-server (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p syntect-server.
- Docker Compose:
  - Determine if the pod was OOM killed using docker inspect -f '{{json .State}}' syntect-server (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the syntect-server container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs syntect-server (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^syntect-server.*"}) > 60)
syntect-server: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with syntect-server issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^syntect-server.*"}[1h]) + rate(container_fs_writes_total{name=~"^syntect-server.*"}[1h]))
Syntect Server: Provisioning indicators (not available on server)
syntect-server: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[5m])
syntect-server: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[5m])
syntect-server: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^syntect-server.*"})
Syntect Server: Kubernetes monitoring (only available on Kubernetes)
syntect-server: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*syntect-server"}) / count by (app) (up{app=~".*syntect-server"}) * 100
Zoekt
Indexes repositories, populates the search index, and responds to indexed search queries.
To see this dashboard, visit /-/debug/grafana/d/zoekt/zoekt
on your Sourcegraph instance.
zoekt: total_repos_aggregate
Total number of repos (aggregate)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.
Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished indexing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (__name__) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap"})
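If you want to see the discrepancy mentioned above directly, a hypothetical ad-hoc query (not part of this dashboard) such as sum(index_queue_cap) - sum(index_num_assigned) can be run against Prometheus; a value that stays non-zero for long periods would be worth investigating.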
zoekt: total_repos_per_instance
Total number of repos (per instance)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.
Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished processing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (__name__, instance) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap",instance=~"${instance:regex}"})
zoekt: repos_stopped_tracking_total_aggregate
The number of repositories we stopped tracking over 5m (aggregate)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(increase(index_num_stopped_tracking_total[5m]))
zoekt: repos_stopped_tracking_total_per_instance
The number of repositories we stopped tracking over 5m (per instance)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (instance) (increase(index_num_stopped_tracking_total{instance=~${instance:regex}}[5m]))
zoekt: average_resolve_revision_duration
Average resolve revision duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(rate(resolve_revision_seconds_sum[5m])) / sum(rate(resolve_revision_seconds_count[5m]))
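This is the usual sum-over-count average: the numerator accumulates total seconds spent resolving revisions and the denominator the number of resolutions, so the ratio is the mean duration. For example (hypothetical values), 120 seconds spent across 400 resolutions over the window gives an average of 0.3 seconds.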
zoekt: get_index_options_error_increase
The number of repositories we failed to get indexing options over 5m
When considering indexing a repository we ask for the index configuration from frontend per repository. The most likely reason this would fail is failing to resolve branch names to git SHAs.
This value can spike up during deployments/etc. Only if you encounter sustained periods of errors is there an underlying issue. When sustained this indicates repositories will not get updated indexes.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(increase(get_index_options_error_total[5m]))
Zoekt: Search requests
zoekt: indexed_search_request_duration_p99_aggregate
99th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 99th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
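histogram_quantile estimates the percentile from the cumulative le buckets by linear interpolation within the bucket that contains the target rank. As a hypothetical illustration: if 980 of 1000 requests in the window fell into the le="0.5" bucket and all 1000 into le="1", the p99 rank (990) lands in the (0.5, 1] bucket and is interpolated to roughly 0.5 + (990 - 980) / (1000 - 980) * 0.5 = 0.75 seconds.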
zoekt: indexed_search_request_duration_p90_aggregate
90th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 90th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
zoekt: indexed_search_request_duration_p75_aggregate
75th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 75th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
zoekt: indexed_search_request_duration_p99_by_instance
99th percentile indexed search duration over 1m (per instance)
This dashboard shows the 99th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~${instance:regex}}[1m])))
zoekt: indexed_search_request_duration_p90_by_instance
90th percentile indexed search duration over 1m (per instance)
This dashboard shows the 90th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~${instance:regex}}[1m])))
zoekt: indexed_search_request_duration_p75_by_instance
75th percentile indexed search duration over 1m (per instance)
This dashboard shows the 75th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~${instance:regex}}[1m])))
zoekt: indexed_search_num_concurrent_requests_aggregate
Amount of in-flight indexed search requests (aggregate)
This dashboard shows the current number of indexed search requests that are in-flight, aggregated across all instances.
In-flight search requests include both running and queued requests.
The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (name) (zoekt_search_running)
zoekt: indexed_search_num_concurrent_requests_by_instance
Amount of in-flight indexed search requests (per instance)
This dashboard shows the current number of indexed search requests that are in-flight, broken out per instance.
In-flight search requests include both running and queued requests.
The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100121
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (instance, name) (zoekt_search_running{instance=~${instance:regex}})
zoekt: indexed_search_concurrent_request_growth_rate_1m_aggregate
Rate of growth of in-flight indexed search requests over 1m (aggregate)
This dashboard shows the rate of growth of in-flight requests, aggregated across all instances.
In-flight search requests include both running and queued requests.
This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100130
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (name) (deriv(zoekt_search_running[1m]))
zoekt: indexed_search_concurrent_request_growth_rate_1m_per_instance
Rate of growth of in-flight indexed search requests over 1m (per instance)
This dashboard shows the rate of growth of in-flight requests, broken out per instance.
In-flight search requests include both running and queued requests.
This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100131
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (instance) (deriv(zoekt_search_running[1m]))
zoekt: indexed_search_request_errors
Indexed search request errors every 5m by code
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100140
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (code)(increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
zoekt: zoekt_shards_sched
Current number of zoekt scheduler processes in a state
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued.
If you have a high number of batch queries it is a sign there is a large load of slow queries. Alternatively your systems are underprovisioned and normal search queries are taking too long.
For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100150
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (type, state) (zoekt_shards_sched)
zoekt: zoekt_shards_sched_total
Rate of zoekt scheduler process state transitions in the last 5m
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued.
If you have a high number of batch queries it is a sign there is a large load of slow queries. Alternatively your systems are underprovisioned and normal search queries are taking too long.
For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100151
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (type, state) (rate(zoekt_shards_sched[5m]))
Zoekt: Git fetch durations
zoekt: 90th_percentile_successful_git_fetch_durations_5m
90th percentile successful git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="true"}[5m])))
zoekt: 90th_percentile_failed_git_fetch_durations_5m
90th percentile failed git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="false"}[5m])))
Zoekt: Indexing results
zoekt: repo_index_state_aggregate
Index results state count over 5m (aggregate)
This dashboard shows the outcomes of recently completed indexing jobs across all index-server instances.
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (state) (increase(index_repo_seconds_count[5m]))
zoekt: repo_index_state_per_instance
Index results state count over 5m (per instance)
This dashboard shows the outcomes of recently completed indexing jobs, split out across each index-server instance.
(You can use the "instance" filter at the top of the page to select a particular instance.)
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (instance, state) (increase(index_repo_seconds_count{instance=~${instance:regex}}[5m]))
zoekt: repo_index_success_speed_heatmap
Successful indexing durations
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (le, state) (increase(index_repo_seconds_bucket{state="success"}[$__rate_interval]))
zoekt: repo_index_fail_speed_heatmap
Failed indexing durations
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (le, state) (increase(index_repo_seconds_bucket{state="fail"}[$__rate_interval]))
zoekt: repo_index_success_speed_p99
99th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p90
90th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100321
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p75
75th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100322
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p99_per_instance
99th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p99 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100330
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~${instance:regex}}[5m])))
zoekt: repo_index_success_speed_p90_per_instance
90th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p90 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100331
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~${instance:regex}}[5m])))
zoekt: repo_index_success_speed_p75_per_instance
75th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p75 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100332
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~${instance:regex}}[5m])))
zoekt: repo_index_failed_speed_p99
99th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100340
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p90
90th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100341
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p75
75th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100342
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p99_per_instance
99th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p99 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100350
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~${instance:regex}}[5m])))
zoekt: repo_index_failed_speed_p90_per_instance
90th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p90 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100351
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~${instance:regex}}[5m])))
zoekt: repo_index_failed_speed_p75_per_instance
75th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p75 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100352
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~${instance:regex}}[5m])))
Zoekt: Indexing queue statistics
zoekt: indexed_num_scheduled_jobs_aggregate
# scheduled index jobs (aggregate)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(index_queue_len)
zoekt: indexed_num_scheduled_jobs_per_instance
# scheduled index jobs (per instance)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: index_queue_len{instance=~${instance:regex}}
zoekt: indexed_queueing_delay_heatmap
Job queuing delay heatmap
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (le) (increase(index_queue_age_seconds_bucket[$__rate_interval]))
zoekt: indexed_queueing_delay_p99_9_aggregate
99.9th percentile job queuing delay over 5m (aggregate)
This dashboard shows the p99.9 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
The 99.9 percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours, etc.).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100420
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.999, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p90_aggregate
90th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p90 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100421
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p75_aggregate
75th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p75 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100422
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p99_9_per_instance
99.9th percentile job queuing delay over 5m (per instance)
This dashboard shows the p99.9 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
The 99.9 percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours, etc.).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100430
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.999, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~${instance:regex}}[5m])))
zoekt: indexed_queueing_delay_p90_per_instance
90th percentile job queueing delay over 5m (per instance)
This dashboard shows the p90 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100431
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~${instance:regex}}[5m])))
zoekt: indexed_queueing_delay_p75_per_instance
75th percentile job queueing delay over 5m (per instance)
This dashboard shows the p75 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100432
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~${instance:regex}}[5m])))
Zoekt: Virtual Memory Statistics
zoekt: memory_map_areas_percentage_used
Process memory map areas percentage used (per instance)
Processes have a limited number of memory map areas that they can use. In Zoekt, memory map areas are mainly used for loading shards into memory for queries (via mmap). However, memory map areas are also used for loading shared libraries, etc.
See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps.
Once the memory map limit is reached, the Linux kernel will prevent the process from creating any additional memory map areas. This could cause the process to crash.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (proc_metrics_memory_map_current_count{instance=~${instance:regex}} / proc_metrics_memory_map_max_limit{instance=~${instance:regex}}) * 100
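For example (hypothetical values): 40,000 memory map areas in use against the common Linux default limit of 65,530 (vm.max_map_count) would put this panel at roughly 61%.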
Zoekt: Compound shards (experimental)
zoekt: compound_shards_aggregate
# of compound shards (aggregate)
The total number of compound shards aggregated over all instances.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(index_number_compound_shards) by (app)
zoekt: compound_shards_per_instance
# of compound shards (per instance)
The total number of compound shards per instance.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(index_number_compound_shards{instance=~${instance:regex}}) by (instance)
zoekt: average_shard_merging_duration_success
Average successful shard merging duration over 1 hour
Average duration of a successful merge over the last hour.
The duration depends on the target compound shard size. The larger the compound shard the longer a merge will take. Since the target compound shard size is set on start of zoekt-indexserver, the average duration should be consistent.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(rate(index_shard_merging_duration_seconds_sum{error="false"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="false"}[1h]))
zoekt: average_shard_merging_duration_error
Average failed shard merging duration over 1 hour
Average duration of a failed merge over the last hour.
This curve should be flat. Any deviation should be investigated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(rate(index_shard_merging_duration_seconds_sum{error="true"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="true"}[1h]))
zoekt: shard_merging_errors_aggregate
Number of errors during shard merging (aggregate)
Number of errors during shard merging aggregated over all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100620
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(index_shard_merging_duration_seconds_count{error="true"}) by (app)
zoekt: shard_merging_errors_per_instance
Number of errors during shard merging (per instance)
Number of errors during shard merging per instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100621
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(index_shard_merging_duration_seconds_count{instance=~${instance:regex}, error="true"}) by (instance)
zoekt: shard_merging_merge_running_per_instance
If shard merging is running (per instance)
Set to 1 if shard merging is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100630
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by (instance) (index_shard_merging_running{instance=~${instance:regex}})
zoekt: shard_merging_vacuum_running_per_instance
If vacuum is running (per instance)
Set to 1 if vacuum is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100631
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by (instance) (index_vacuum_running{instance=~${instance:regex}})
Zoekt: Network I/O pod metrics (only available on Kubernetes)
zoekt: network_sent_bytes_aggregate
Transmission rate over 5m (aggregate)
The rate of bytes sent over the network across all Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~.*indexed-search.*}[5m]))
zoekt: network_received_packets_per_instance
Transmission rate over 5m (per instance)
The rate of bytes sent over the network by individual Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
zoekt: network_received_bytes_aggregate
Receive rate over 5m (aggregate)
The rate of bytes received from the network across Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum(rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~.*indexed-search.*}[5m]))
zoekt: network_received_bytes_per_instance
Receive rate over 5m (per instance)
The rate of bytes received from the network by individual Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
zoekt: network_transmitted_packets_dropped_by_instance
Transmit packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100720
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
zoekt: network_transmitted_packets_errors_per_instance
Errors encountered while transmitting over 5m (per instance)
An increase in transmission errors could indicate a networking issue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100721
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_errors_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
zoekt: network_received_packets_dropped_by_instance
Receive packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100722
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_packets_dropped_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
zoekt: network_transmitted_packets_errors_by_instance
Errors encountered while receiving over 5m (per instance)
An increase in errors while receiving could indicate a networking issue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100723
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_errors_total{container_label_io_kubernetes_pod_name=~${instance:regex}}[5m]))
Zoekt: Data disk I/O metrics
zoekt: data_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100820
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))
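To make the division in this query concrete, here is a small sketch with made-up numbers showing how the two rates combine into an average duration (the 1m window cancels out, so the result is simply read time accrued per read completed):

```python
# Hypothetical numbers illustrating the division in the query above:
# average read duration = rate(read time) / rate(reads completed).
read_time_delta_seconds = 0.9  # assumed increase of node_disk_read_time_seconds_total over 1m
reads_completed_delta = 300    # assumed increase of node_disk_reads_completed_total over 1m
window_seconds = 60

read_time_rate = read_time_delta_seconds / window_seconds
reads_rate = reads_completed_delta / window_seconds
avg_read_duration_seconds = read_time_rate / reads_rate

print(f"average read duration: {avg_read_duration_seconds * 1000:.1f} ms")  # 3.0 ms
```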
zoekt: data_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100821
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))
zoekt: data_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100830
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~"node-exporter.*"}[1m])))))))
zoekt: data_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100831
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~"node-exporter.*"}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~"node-exporter.*"}[1m])))))))
zoekt: data_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100840
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_writes_merged_sec
Merged writes request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100841
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~"node-exporter.*"}[1m])))))
zoekt: data_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100850
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: (max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~"${instance:regex}"} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~"node-exporter.*"}[1m])))))
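As a rough illustration of why this rate approximates avgqu-sz, here is a small sketch with made-up numbers:

```python
# Hypothetical numbers illustrating why rate(node_disk_io_time_weighted_seconds_total)
# approximates the average queue size (avgqu-sz): every request that is queued or in
# service accrues one weighted second per second, so the per-second rate of the counter
# equals the average number of such requests over the window.
weighted_io_seconds_delta = 120.0  # assumed increase over the 1m window
window_seconds = 60

avg_queue_size = weighted_io_seconds_delta / window_seconds
print(f"average queue size: {avg_queue_size:.1f}")  # 2.0
```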
Zoekt: [zoekt-indexserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-indexserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-indexserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-indexserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-indexserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-indexserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^zoekt-indexserver.*"}) > 60)
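If you want to check this value outside of Grafana, one option is to evaluate the same expression against the Prometheus HTTP API. A minimal sketch, assuming Prometheus is reachable at a URL you supply (the address below is a placeholder for illustration, not a documented Sourcegraph endpoint):

```python
# Minimal sketch: evaluate the container_missing expression against the Prometheus
# HTTP API and print which matching containers are currently missing.
import requests

PROM_URL = "http://localhost:9090"  # assumed address; adjust for your deployment
QUERY = 'count by(name) ((time() - container_last_seen{name=~"^zoekt-indexserver.*"}) > 60)'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    name = sample["metric"].get("name", "<unknown>")
    value = sample["value"][1]
    print(f"{name}: {value} container(s) not seen for more than a minute")
```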
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-indexserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^zoekt-indexserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-indexserver.*"}[1h]))
Zoekt: [zoekt-webserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-webserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-webserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-webserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-webserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-webserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^zoekt-webserver.*"}) > 60)
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-webserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^zoekt-webserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-webserver.*"}[1h]))
Zoekt: [zoekt-indexserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
zoekt: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^zoekt-indexserver.*"})
Zoekt: [zoekt-webserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
zoekt: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^zoekt-webserver.*"})
Zoekt: Kubernetes monitoring (only available on Kubernetes)
zoekt: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Search Core team.
Technical details
Query: sum by(app) (up{app=~".*indexed-search"}) / count by (app) (up{app=~".*indexed-search"}) * 100
Prometheus
Sourcegraph's all-in-one Prometheus and Alertmanager service.
To see this dashboard, visit /-/debug/grafana/d/prometheus/prometheus
on your Sourcegraph instance.
Prometheus: Metrics
prometheus: prometheus_rule_eval_duration
Average prometheus rule group evaluation duration over 10m by rule group
A high value here indicates Prometheus rule evaluation is taking longer than expected. It might indicate that certain rule groups are taking too long to evaluate, or Prometheus is underprovisioned.
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(rule_group) (avg_over_time(prometheus_rule_group_last_duration_seconds[10m]))
prometheus: prometheus_rule_eval_failures
Failed prometheus rule evaluations over 5m by rule group
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))
Prometheus: Alerts
prometheus: alertmanager_notification_latency
Alertmanager notification latency over 1m by integration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(integration) (rate(alertmanager_notification_latency_seconds_sum[1m]))
prometheus: alertmanager_notification_failures
Failed alertmanager notifications over 1m by integration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(integration) (rate(alertmanager_notifications_failed_total[1m]))
Prometheus: Internals
prometheus: prometheus_config_status
Prometheus configuration reload status
A value of 1 indicates Prometheus reloaded its configuration successfully.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: prometheus_config_last_reload_successful
prometheus: alertmanager_config_status
Alertmanager configuration reload status
A value of 1 indicates Alertmanager reloaded its configuration successfully.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: alertmanager_config_last_reload_successful
prometheus: prometheus_tsdb_op_failure
Prometheus tsdb failures by operation over 1m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: increase(label_replace({__name__=~"prometheus_tsdb_(.*)_failed_total"}, "operation", "$1", "__name__", "(.+)s_failed_total")[5m:1m])
prometheus: prometheus_target_sample_exceeded
Prometheus scrapes that exceed the sample limit over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: increase(prometheus_target_scrapes_exceeded_sample_limit_total[10m])
prometheus: prometheus_target_sample_duplicate
Prometheus scrapes rejected due to duplicate timestamps over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: increase(prometheus_target_scrapes_sample_duplicate_timestamp_total[10m])
Prometheus: Container monitoring (not available on server)
prometheus: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod prometheus (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p prometheus.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' prometheus (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the prometheus container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs prometheus (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^prometheus.*"}) > 60)
prometheus: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}
prometheus: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}
prometheus: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with prometheus issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^prometheus.*"}[1h]) + rate(container_fs_writes_total{name=~"^prometheus.*"}[1h]))
Prometheus: Provisioning indicators (not available on server)
prometheus: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[5m])
prometheus: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[5m])
prometheus: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^prometheus.*"})
Prometheus: Kubernetes monitoring (only available on Kubernetes)
prometheus: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*prometheus"}) / count by (app) (up{app=~".*prometheus"}) * 100
Executor
Executes jobs in an isolated environment.
To see this dashboard, visit /-/debug/grafana/d/executor/executor
on your Sourcegraph instance.
Executor: Executor: Executor jobs
executor: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (queue)(src_executor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
executor: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.
- A value < 1 indicates that the process rate > enqueue rate (the queue is shrinking)
- A value = 1 indicates that the process rate = enqueue rate (the queue size is steady)
- A value > 1 indicates that the process rate < enqueue rate (the queue is growing); see the sketch after the query below for one way to check this
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (queue)(increase(src_executor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
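As referenced in the list above, here is a minimal sketch of evaluating this ratio for a single queue and applying the same interpretation. The Prometheus URL and the queue name are assumptions for illustration, and the job matcher from the panel query is omitted for brevity:

```python
# Minimal sketch: evaluate the queue growth-rate ratio for one queue via the
# Prometheus HTTP API and classify it per the interpretation above.
import requests

PROM_URL = "http://localhost:9090"  # assumed address; adjust for your deployment
QUEUE = "batches"                   # assumed value for the $queue variable
EXPR = (
    f'sum(increase(src_executor_total{{queue="{QUEUE}"}}[30m]))'
    f' / sum(increase(src_executor_processor_total{{queue="{QUEUE}"}}[30m]))'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": EXPR}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]
if not result:
    print("no samples for this queue in the last 30m")
else:
    ratio = float(result[0]["value"][1])
    if ratio > 1:
        print(f"ratio {ratio:.2f}: enqueue rate exceeds process rate (queue growing)")
    elif ratio < 1:
        print(f"ratio {ratio:.2f}: process rate exceeds enqueue rate (queue shrinking)")
    else:
        print("ratio 1.00: enqueue and process rates are balanced")
```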
executor: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (queue)(src_executor_queued_duration_seconds_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Executor: Executor: Executor jobs
executor: executor_handlers
Executor active handlers
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(src_executor_processor_handlers{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"})
executor: executor_processor_total
Executor operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_99th_percentile_duration
Aggregate successful executor operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_executor_processor_duration_seconds_bucket{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_errors_total
Executor operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_error_rate
Executor operation error rate over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Queue API client
executor: apiworker_apiclient_queue_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_apiclient_queue_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_apiclient_queue_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Files API client
executor: apiworker_apiclient_files_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_apiclient_files_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_apiclient_files_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job setup
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job execution
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job teardown
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Compute instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode!~"(idle|iowait)",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode="system",sg_instance=~"$instance"}) by (sg_instance) * 100
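As an illustrative reading of this query: the numerator sums the per-second rate of non-idle, non-iowait CPU time across an instance's cores, and the denominator counts the cores (via the mode="system" series). On an 8-core executor with roughly 4 cores' worth of busy CPU time, the panel would read 4 / 8 * 100 = 50%.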
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less-than-all-processes" time, because for processes to be waiting for CPU time there must be other process(es) consuming CPU time.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates how much memory is in use, derived from the amount of available memory (including cache and buffers), as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query: (1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance)) * 100
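As an illustrative reading of this query: with 4 GiB of MemAvailable out of 16 GiB of MemTotal, the panel would read (1 - 4/16) * 100 = 75% utilization.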
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained 0% < x < ~100% figures are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query: (rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) * 100
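As an illustrative reading of this query: with pages being reclaimed (pgsteal) at 900 pages/s while pages are scanned (pgscan) at 1,000 pages/s, the panel would read 900 / 1000 * 100 = 90%; a brief spike at that level suggests efficient reclaim, whereas a sustained reading there warrants investigation per the interpretation above.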
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing the faulty drive(s), if any.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100721
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100722
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100730
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
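Note that the trailing * 8 converts bytes per second into bits per second, so this panel reads in bps. As an illustrative example, an instance receiving 125 MB/s would read 125,000,000 * 8 = 1,000,000,000, i.e. roughly 1 Gbps.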
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100731
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100732
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100740
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100741
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100742
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
Executor: Executor: Docker Registry Mirror instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode!~"(idle|iowait)",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode="system",sg_instance=~"docker-registry"}) by (sg_instance) * 100
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less-than-all-processes" time, because for processes to be waiting for CPU time there must be other process(es) consuming CPU time.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates how much memory is in use, derived from the amount of available memory (including cache and buffers), as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query: (1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance)) * 100
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained 0% < x < ~100% figures are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query: (rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) * 100
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100820
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing the faulty drive(s), if any.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100821
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100822
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100830
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100831
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100832
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100840
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100841
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100842
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
Executor: Golang runtime monitoring
executor: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(sg_instance) (go_goroutines{sg_job=~".*sourcegraph-executors"})
executor: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by(sg_instance) (go_gc_duration_seconds{sg_job=~".*sourcegraph-executors"})
Global Containers Resource Usage
Container usage and provisioning indicators of all services.
To see this dashboard, visit /-/debug/grafana/d/containers/containers
on your Sourcegraph instance.
Global Containers Resource Usage: Containers (not available on server)
containers: container_memory_usage
Container memory usage of all services
This value indicates the memory usage of all containers.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
containers: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
This value indicates the CPU usage of all containers.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
Global Containers Resource Usage: Containers: Provisioning Indicators (not available on server)
containers: container_memory_usage_provisioning
Container memory usage (5m maximum) of services that exceed 80% memory limit
Containers that exceed 80% of their memory limit. The value indicates potentially underprovisioned resources.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
containers: container_cpu_usage_provisioning
Container cpu usage total (5m maximum) across all cores of services that exceed 80% cpu limit
Containers that exceed 80% of their CPU limit. The value indicates potentially underprovisioned resources.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
containers: container_oomkill_events_total
Container OOMKILL events total
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When this occurs frequently, it is an indicator of underprovisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) >= 1
containers: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100130
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend|gitserver|github-proxy|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) > 60)
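As an illustrative reading of this query: every series whose container_last_seen timestamp is more than 60 seconds in the past at evaluation time satisfies time() - container_last_seen > 60 and contributes 1 to the count for its name label, so a gitserver container last seen 90 seconds ago would add 1 to the gitserver count.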
Code Intelligence > Autoindexing
The service at `enterprise/internal/codeintel/autoindexing`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing
on your Sourcegraph instance.
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Summary
codeintel-autoindexing:
Auto-index jobs inserted over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_dbstore_indexes_inserted[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m]))) * 100
codeintel-autoindexing: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (queue)(src_executor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
codeintel-autoindexing: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue (see the worked example after this panel's query).
- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (queue)(increase(src_executor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
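Worked example with illustrative numbers: if 120 jobs were enqueued and 150 jobs were processed for the codeintel queue over the 30m window, the panel would read 120 / 150 = 0.8; a value below 1 means the queue is draining (the process rate exceeds the enqueue rate), matching the interpretation above.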
codeintel-autoindexing: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max by (queue)(src_executor_queued_duration_seconds_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Service
codeintel-autoindexing: codeintel_autoindexing_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > GQL transport
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Store (internal)
codeintel-autoindexing: codeintel_autoindexing_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Background jobs (internal)
codeintel-autoindexing: codeintel_autoindexing_background_total
Aggregate background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration
Aggregate successful background operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_errors_total
Aggregate background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_error_rate
Aggregate background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_background_total
Background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration
99th percentile successful background operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_background_errors_total
Background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_error_rate
Background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Inference service (internal)
codeintel-autoindexing: codeintel_autoindexing_inference_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_inference_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_inference_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Luasandbox service
codeintel-autoindexing: luasandbox_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: luasandbox_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: luasandbox_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav
The service at `enterprise/internal/codeintel/codenav`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav
on your Sourcegraph instance.
Code Intelligence > Code Nav: Codeintel: CodeNav > Service
codeintel-codenav: codeintel_codenav_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
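To focus on whichever operations dominate request volume, the same per-operation counter can be wrapped in topk; the limit of 5 below is arbitrary.
Example query (illustrative): topk(5, sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])))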
codeintel-codenav: codeintel_codenav_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav: Codeintel: CodeNav > LSIF store
codeintel-codenav: codeintel_codenav_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav: Codeintel: CodeNav > GQL Transport
codeintel-codenav: codeintel_codenav_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
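There is no shipped alert on this panel, but the same expression can be filtered ad hoc to surface only resolver operations whose error rate crosses a threshold; the 5% cutoff below is an arbitrary assumption, not a recommended alerting value.
Example query (illustrative): sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100 > 5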
Code Intelligence > Code Nav: Codeintel: CodeNav > Store
codeintel-codenav: codeintel_codenav_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
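For a single figure covering both of this service's stores (the store panels in this group and the LSIF store panels earlier on this dashboard), the two error counters can be added in an ad-hoc query; this is a sketch, not a dashboard panel.
Example query (illustrative): sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))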
Code Intelligence > Policies
The service at `enterprise/internal/codeintel/policies`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies
on your Sourcegraph instance.
Code Intelligence > Policies: Codeintel: Policies > Service
codeintel-policies: codeintel_policies_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > Store
codeintel-policies: codeintel_policies_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > GQL Transport
codeintel-policies: codeintel_policies_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > Repository Pattern Matcher task
codeintel-policies: codeintel_background_policies_updated_total_total
Configuration policies with updated repository membership every 5m
Number of configuration policies whose repository membership list was updated
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_policies_updated_total_total{job=~"^${source:regex}.*"}[5m]))
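Because repository membership updates can be bursty, this counter is sometimes easier to read over a wider window than the panel's 5m; a sketch with a 1h range (the window is an arbitrary choice):
Example query (illustrative): sum(increase(src_codeintel_background_policies_updated_total_total{job=~"^${source:regex}.*"}[1h]))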
Code Intelligence > Ranking
The service at `enterprise/internal/codeintel/ranking`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking
on your Sourcegraph instance.
Code Intelligence > Ranking: Codeintel: Ranking > Service
codeintel-ranking: codeintel_ranking_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_ranking_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-ranking: codeintel_ranking_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Ranking > Store
codeintel-ranking: codeintel_ranking_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_ranking_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-ranking: codeintel_ranking_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Ranking > PageRank
codeintel-ranking: codeintel_ranking_repositories_updated_total
Repository path ranks updated every 5m
The number of updates to document scores of any repository.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_repositories_updated_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_csv_files_processed_total
CSV files read from GCS and processed every 5m
The number of input CSV records read from GCS.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_csv_files_processed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_input_rows_processed_total
CSV result rows processed every 5m
The number of input row records merged into document scores.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_input_rows_processed_total{job=~"^${source:regex}.*"}[5m]))
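Dividing this row counter by the CSV-file counter from the previous panel gives a rough average of rows merged per file over the window; note the expression returns no data when no files were processed in the window. This is a sketch, not a dashboard panel.
Example query (illustrative): sum(increase(src_codeintel_ranking_input_rows_processed_total{job=~"^${source:regex}.*"}[5m])) / sum(increase(src_codeintel_ranking_csv_files_processed_total{job=~"^${source:regex}.*"}[5m]))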
codeintel-ranking: codeintel_uploads_ranking_uploads_read_total
Uploads read for export every 5m
The number of upload records read.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_ranking_uploads_read_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_uploads_ranking_stale_uploads_removed_total
Stale upload records removed every 5m
The number of stale upload records removed from GCS.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_ranking_stale_uploads_removed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_csv_files_bytes_read_total
Bytes read from GCS every 5m
The number of bytes read from GCS.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100220
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_ranking_csv_files_bytes_read_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_uploads_ranking_bytes_uploaded_total
Bytes uploaded to GCS every 5m
The number of bytes uploaded to GCS.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100221
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_ranking_bytes_uploaded_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_uploads_ranking_bytes_deleted_total
Bytes deleted from GCS every 5m
The number of bytes deleted from GCS.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100222
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_ranking_bytes_deleted_total{job=~"^${source:regex}.*"}[5m]))
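To estimate the net growth of ranking data in GCS, the uploaded and deleted byte counters from the two panels above can be subtracted over a matching window; the 1h range below is an arbitrary choice, and this is a sketch rather than a dashboard panel.
Example query (illustrative): sum(increase(src_codeintel_uploads_ranking_bytes_uploaded_total{job=~"^${source:regex}.*"}[1h])) - sum(increase(src_codeintel_uploads_ranking_bytes_deleted_total{job=~"^${source:regex}.*"}[1h]))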
Code Intelligence > Uploads
The service at `enterprise/internal/codeintel/uploads`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads
on your Sourcegraph instance.
Code Intelligence > Uploads: Codeintel: Uploads > Service
codeintel-uploads: codeintel_uploads_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Store (internal)
codeintel-uploads: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Background (internal)
codeintel-uploads: codeintel_uploads_background_total
Aggregate background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_99th_percentile_duration
Aggregate successful background operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_errors_total
Aggregate background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_error_rate
Aggregate background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_background_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_background_total
Background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_99th_percentile_duration
99th percentile successful background operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_background_errors_total
Background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_error_rate
Background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_background_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > GQL Transport
codeintel-uploads: codeintel_uploads_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > HTTP Transport
codeintel-uploads: codeintel_uploads_transport_http_total
Aggregate HTTP handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration
Aggregate successful HTTP handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_errors_total
Aggregate HTTP handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_error_rate
Aggregate http handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_transport_http_total
Http handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration
99th percentile successful http handler operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_transport_http_errors_total
Http handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_error_rate
Http handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Cleanup task
codeintel-uploads: codeintel_background_upload_records_removed_total
Lsif upload records deleted every 5m
Number of LSIF upload records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_removed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_index_records_removed_total
Lsif index records deleted every 5m
Number of LSIF index records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_index_records_removed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_uploads_purged_total
Lsif upload data bundles deleted every 5m
Number of LSIF upload data bundles purged from the codeintel-db database every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_uploads_purged_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_audit_log_records_expired_total
Lsif upload audit log records deleted every 5m
Number of LSIF upload audit log records deleted due to expiration every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_audit_log_records_expired_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_background_cleanup_errors_total
Cleanup task operation errors every 5m
Number of code intelligence uploads cleanup task errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_uploads_background_cleanup_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_autoindexing_background_cleanup_errors_total
Cleanup task operation errors every 5m
Number of code intelligence autoindexing cleanup task errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_autoindexing_background_cleanup_errors_total{job=~"^${source:regex}.*"}[5m]))
Code Intelligence > Uploads: Codeintel: Repository with stale commit graph
codeintel-uploads: codeintel_commit_graph_queue_size
Repository queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_commit_graph_total{job=~"^${source:regex}.*"})
codeintel-uploads: codeintel_commit_graph_queue_growth_rate
Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the process rate exceeds the enqueue rate (the queue is shrinking)
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is lower than the enqueue rate (the queue is growing); see the worked example after the query below
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_total{job=~"^${source:regex}.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^${source:regex}.*"}[30m]))
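As a purely illustrative worked example (the numbers are invented, not taken from a real instance): if 600 commit graph updates were enqueued over the last 30 minutes while 300 were processed, the panel shows 600 / 300 = 2 and the queue is growing; with 300 enqueued and 600 processed it shows 0.5 and the queue is draining.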
codeintel-uploads: codeintel_commit_graph_queued_max_age
Repository queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: max(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^${source:regex}.*"})
Code Intelligence > Uploads: Codeintel: Uploads > Expiration task
codeintel-uploads: codeintel_background_repositories_scanned_total
Lsif upload repository scan repositories scanned every 5m
Number of repositories scanned for data retention
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_repositories_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_upload_records_scanned_total
Lsif upload records scan records scanned every 5m
Number of codeintel upload records scanned for data retention
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_commits_scanned_total
Lsif upload commits scanned commits scanned every 5m
Number of commits reachable from a codeintel upload record scanned for data retention
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_commits_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_upload_records_expired_total
Lsif upload records expired uploads scanned every 5m
Number of codeintel upload records marked as expired
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code intelligence team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_expired_total{job=~"^${source:regex}.*"}[5m]))
Telemetry
Monitoring telemetry services in Sourcegraph.
To see this dashboard, visit /-/debug/grafana/d/telemetry/telemetry
on your Sourcegraph instance.
Telemetry: Usage data exporter: Job operations
telemetry: telemetry_job_total
Aggregate usage data exporter operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum(increase(src_telemetry_job_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_99th_percentile_duration
Aggregate successful usage data exporter operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum by (le)(rate(src_telemetry_job_duration_seconds_bucket{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_errors_total
Aggregate usage data exporter operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_error_rate
Aggregate usage data exporter operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_telemetry_job_total{job=~"^worker.*"}[5m])) + sum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))) * 100
telemetry: telemetry_job_total
Usage data exporter operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum by (op)(increase(src_telemetry_job_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_99th_percentile_duration
99th percentile successful usage data exporter operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_telemetry_job_duration_seconds_bucket{job=~"^worker.*"}[5m])))
telemetry: telemetry_job_errors_total
Usage data exporter operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_error_rate
Usage data exporter operation error rate over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: sum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_telemetry_job_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))) * 100
Telemetry: Usage data exporter: Queue size
telemetry: telemetry_job_queue_size_queue_size
Event level usage data queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: max(src_telemetry_job_queue_size_total{job=~"^worker.*"})
telemetry: telemetry_job_queue_size_queue_growth_rate
Event level usage data queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the process rate exceeds the enqueue rate (the queue is shrinking)
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate is lower than the enqueue rate (the queue is growing); see the query sketch after the panel query below
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code Insights team.
Technical details
Query: sum(increase(src_telemetry_job_queue_size_total{job=~"^worker.*"}[30m])) / sum(increase(src_telemetry_job_queue_size_processor_total{job=~"^worker.*"}[30m]))
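If you only want to surface the windows in which the queue is actually growing, a rough sketch (built from the same metrics as the panel above; it is not a query that ships with this dashboard) is to keep the ratio only when it exceeds 1:
sum(increase(src_telemetry_job_queue_size_total{job=~"^worker.*"}[30m]))
  /
sum(increase(src_telemetry_job_queue_size_processor_total{job=~"^worker.*"}[30m]))
> 1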
Telemetry: Usage data exporter: Utilization
telemetry: telemetry_job_utilized_throughput
Utilized percentage of maximum throughput
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Data & Analytics team.
Technical details
Query: rate(src_telemetry_job_total{op="SendEvents"}[1h]) / on() group_right() src_telemetry_job_max_throughput * 100
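As a hedged illustration with invented numbers: if SendEvents operations have averaged 4 per second over the last hour and src_telemetry_job_max_throughput reports a maximum of 5 per second, the panel shows 4 / 5 * 100 = 80% utilization.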
OpenTelemetry Collector
The OpenTelemetry collector ingests OpenTelemetry data from Sourcegraph and exports it to the configured backends.
To see this dashboard, visit /-/debug/grafana/d/otel-collector/otel-collector
on your Sourcegraph instance.
OpenTelemetry Collector: Receivers
otel-collector: otel_span_receive_rate
Spans received per receiver per minute
Shows the rate of spans accepted by the configured receiver
A trace is a collection of spans, and a span represents a unit of work or operation; spans are the building blocks of traces. The spans counted here have only been accepted by the receiver, which means they still have to move through the configured pipeline before they are exported. For more information on tracing and on configuring an OpenTelemetry receiver, see https://opentelemetry.io/docs/collector/configuration/#receivers.
See the Exporters section for spans that have made it through the pipeline and been exported.
Depending on the configured processors, received spans might be dropped and never exported; a rough query for spotting this is sketched after the panel query below. For more information on configuring processors, see https://opentelemetry.io/docs/collector/configuration/#processors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (receiver) (rate(otelcol_receiver_accepted_spans[1m]))
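Because accepted spans can still be dropped by processors before export, it can help to compare the receive rate with the export rate. The following is a rough sketch using the same collector metrics shown on this dashboard (it is not a panel that ships with Sourcegraph); a persistently positive gap suggests spans are being dropped or buffered somewhere in the pipeline:
sum(rate(otelcol_receiver_accepted_spans[1m])) - sum(rate(otelcol_exporter_sent_spans[1m]))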
otel-collector: otel_span_refused
Spans refused per receiver
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (receiver) (rate(otelcol_receiver_refused_spans[1m]))
OpenTelemetry Collector: Exporters
otel-collector: otel_span_export_rate
Spans exported per exporter per minute
Shows the rate of spans being sent by the exporter
A Trace is a collection of spans. A Span represents a unit of work or operation. Spans are the building blocks of Traces. The rate of spans here indicates spans that have made it through the configured pipeline and have been sent to the configured export destination.
For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (exporter) (rate(otelcol_exporter_sent_spans[1m]))
otel-collector: otel_span_export_failures
Span export failures by exporter
Shows the rate of spans that the configured exporter failed to send. A value above 0 for a sustained period can indicate a problem with the exporter configuration or with the service being exported to; a sketch for expressing this as a failure percentage follows the panel query below.
For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m]))
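To put the failure count in context, it can be expressed as a percentage of all export attempts. This is a sketch assembled from the metrics above (not a query that ships with the dashboard):
sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m]))
  / (sum by (exporter) (rate(otelcol_exporter_sent_spans[1m])) + sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m])))
  * 100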
OpenTelemetry Collector: Collector resource usage
otel-collector: otel_cpu_usage
Cpu usage of the collector
Shows CPU usage as reported by the OpenTelemetry collector.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (job) (rate(otelcol_process_cpu_seconds{job=~"^.*"}[1m]))
otel-collector: otel_memory_resident_set_size
Memory allocated to the otel collector
Shows the allocated memory Resident Set Size (RSS) as reported by the OpenTelemetry collector.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (job) (rate(otelcol_process_memory_rss{job=~"^.*"}[1m]))
otel-collector: otel_memory_usage
Memory used by the collector
Shows how much memory is being used by the otel collector.
High memory usage might indicate that:
- the configured pipeline is keeping a lot of spans in memory for processing
- spans are failing to be sent and the exporter is configured to retry
- a batch processor is configured with a high batch count
For more information on configuring processors for the OpenTelemetry collector see https://opentelemetry.io/docs/collector/configuration/#processors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by (job) (rate(otelcol_process_runtime_total_alloc_bytes{job=~"^.*"}[1m]))
OpenTelemetry Collector: Container monitoring (not available on server)
otel-collector: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod otel-collector (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p otel-collector.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' otel-collector (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the otel-collector container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs otel-collector (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^otel-collector.*"}) > 60)
otel-collector: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^otel-collector.*"}
otel-collector: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^otel-collector.*"}
otel-collector: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with otel-collector issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^otel-collector.*"}[1h]) + rate(container_fs_writes_total{name=~"^otel-collector.*"}[1h]))
OpenTelemetry Collector: Kubernetes monitoring (only available on Kubernetes)
otel-collector: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud DevOps team.
Technical details
Query: sum by(app) (up{app=~".*otel-collector"}) / count by (app) (up{app=~".*otel-collector"}) * 100