Dashboards reference
This document is a complete reference for Sourcegraph's available dashboards, with details on how to interpret their panels and metrics.
To learn more about Sourcegraph's metrics and how to view these dashboards, see our metrics guide.
Frontend
Serves all end-user browser and API requests.
To see this dashboard, visit /-/debug/grafana/d/frontend/frontend
on your Sourcegraph instance.
Frontend: Search at a glance
frontend: 99th_percentile_search_request_duration
99th percentile successful search request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
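The first argument to histogram_quantile selects the quantile, so the same bucket series behind this panel can answer other latency questions. A sketch (not a shipped panel) estimating the median of the same successful-search latency data, changing only the quantile argument:

```promql
# Sketch: p50 of streaming search latency from the same histogram.
# Labels and range window are unchanged from the panel query above.
histogram_quantile(
  0.50,
  sum by (le) (rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m]))
)
```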
frontend: 90th_percentile_search_request_duration
90th percentile successful search request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
frontend: hard_timeout_search_responses
Hard timeout search responses every 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name!="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: hard_error_search_responses
Hard error search responses every 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: partial_timeout_search_responses
Partial timeout search responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: search_alert_user_suggestions
Search alert user suggestions shown every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
frontend: page_load_latency
90th percentile page load latency over all routes over 10m
Investigate potential sources of latency by selecting Explore and modifying the sum by(le) section to include additional labels: for example, sum by(le, job) or sum by(le, instance).
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud Software-as-a-Service team.
Technical details
Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))
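As the Explore tip above describes, adding labels to the sum by clause splits the percentile out into one series per label value. A sketch of the per-job variant, with the same metric, filters, and window, changing only the grouping:

```promql
# Sketch: p90 page-load latency broken down by job, per the Explore tip above.
histogram_quantile(
  0.9,
  sum by (le, job) (
    rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])
  )
)
```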
frontend: blob_load_latency
90th percentile blob load latency over 10m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route="blob"}[10m])))
Frontend: Search-based code intelligence at a glance
frontend: 99th_percentile_search_codeintel_request_duration
99th percentile code-intel successful search request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
frontend: 90th_percentile_search_codeintel_request_duration
90th percentile code-intel successful search request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
frontend: hard_timeout_search_codeintel_responses
Hard timeout search code-intel responses every 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: (sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: hard_error_search_codeintel_responses
Hard error search code-intel responses every 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: partial_timeout_search_codeintel_responses
Partial timeout search code-intel responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
frontend: search_codeintel_alert_user_suggestions
Search code-intel alert user suggestions shown every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
Frontend: Search GraphQL API usage at a glance
frontend: 99th_percentile_search_api_request_duration
99th percentile successful search API request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
frontend: 90th_percentile_search_api_request_duration
90th percentile successful search API request duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
frontend: hard_error_search_api_responses
Hard error search API responses every 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))
frontend: partial_timeout_search_api_responses
Partial timeout search API responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))
frontend: search_api_alert_user_suggestions
Search API alert user suggestions shown every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))
Frontend: Codeintel: Precise code intelligence usage at a glance
frontend: codeintel_resolvers_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
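The error-rate queries throughout this reference all follow the same pattern: errors divided by total attempts, scaled to a percentage. The shape of the denominator (total plus errors) implies that the _total series here counts successful operations. A generic sketch, where src_example_op is a placeholder rather than a real metric:

```promql
# General shape of the error-rate panels in this reference.
# "src_example_op" is a placeholder metric name, not a real series.
sum(increase(src_example_op_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
/
(
  sum(increase(src_example_op_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
  + sum(increase(src_example_op_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
) * 100
```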
frontend: codeintel_resolvers_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_resolvers_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_resolvers_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: Auto-index enqueuer
frontend: codeintel_autoindex_enqueuer_total
Aggregate enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
Aggregate successful enqueuer operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_errors_total
Aggregate enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_error_rate
Aggregate enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_autoindex_enqueuer_total
Enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
99th percentile successful enqueuer operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_autoindex_enqueuer_errors_total
Enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_autoindex_enqueuer_error_rate
Enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dbstore stats
frontend: codeintel_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Workerutil: lsif_indexes dbworker/store stats
frontend: workerutil_dbworker_store_codeintel_index_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_index_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_codeintel_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: lsifstore stats
frontend: codeintel_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100712
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100713
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: gitserver client
frontend: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100813
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: repo-updater client
frontend: codeintel_repoupdater_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_repoupdater_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_repoupdater_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_repoupdater_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_repoupdater_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_repoupdater_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_repoupdater_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_repoupdater_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_repoupdater_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_repoupdater_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100912
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_repoupdater_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100913
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_repoupdater_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: uploadstore stats
frontend: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101012
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101013
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service stats
frontend: codeintel_dependencies_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101102
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101103
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101113
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: lockfiles service stats
frontend: codeintel_lockfiles_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101203
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_lockfiles_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_lockfiles_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101213
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Gitserver: Gitserver Client
frontend: gitserver_client_total
Aggregate client operations every 5m

This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101301
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101302
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101303
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: gitserver_client_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101310
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101311
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: gitserver_client_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101312
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101313
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: dbstore stats
frontend: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101400
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101401
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101402
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101403
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101410
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101411
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101412
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101413
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: service stats
frontend: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101500
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101501
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101502
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101503
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101510
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101511
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101512
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101513
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: up migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: down migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101702
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101703
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Internal service requests
frontend: internal_indexed_search_error_responses
Internal indexed search error responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101800
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
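The `ignoring(code) group_left` vector matching in this query divides each status code's error count by the overall request total, so the panel shows each non-2xx code's share of all requests. A hedged Python equivalent operating on pre-computed window counts (names and input shape are illustrative):

```python
import re

def error_response_percent(counts_by_code):
    """Mimic: sum by(code)(increase(...{code!~"2.."})) / ignoring(code)
    group_left sum(increase(...)) * 100.

    counts_by_code: mapping of HTTP status code (string) -> request
    count over the window. Returns each non-2xx code's percentage of
    the total."""
    total = sum(counts_by_code.values())
    if total == 0:
        return {}
    return {
        code: n / total * 100
        for code, n in counts_by_code.items()
        if not re.fullmatch(r"2..", code)  # same regex as code!~"2.."
    }
```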
frontend: internal_unindexed_search_error_responses
Internal unindexed search error responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101801
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
frontend: internalapi_error_responses
Internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101802
on your Sourcegraph instance.
Managed by the Sourcegraph Cloud Software-as-a-Service team.
Technical details
Query: sum by(category) (increase(src_frontend_internal_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_frontend_internal_request_duration_seconds_count[5m])) * 100
frontend: 99th_percentile_gitserver_duration
99th percentile successful gitserver query duration over 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101810
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))
frontend: gitserver_error_responses
Gitserver error responses every 5m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101811
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100
frontend: observability_test_alert_warning
Warning test alert metric
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101820
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by(owner) (observability_test_metric_warning)
frontend: observability_test_alert_critical
Critical test alert metric
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101821
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by(owner) (observability_test_metric_critical)
Frontend: Authentication API requests
frontend: sign_in_rate
Rate of API requests to sign-in
Rate (QPS) of requests to sign-in
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101900
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))
frontend: sign_in_latency_p99
99th percentile of sign-in latency
99th percentile of sign-in latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101901
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))
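The p99 panels in this document all use histogram_quantile, which picks the cumulative bucket containing the target rank and linearly interpolates within it. A minimal sketch of that interpolation (the bucket bounds and counts below are hypothetical, not real instance data):

```python
def histogram_quantile(q, buckets):
    """Approximate a quantile from cumulative histogram buckets,
    mirroring how Prometheus's histogram_quantile interpolates.
    buckets: sorted list of (upper_bound_seconds, cumulative_count)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation inside the bucket that contains the rank.
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * fraction
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Hypothetical `le` buckets for sign-in latency, in seconds.
buckets = [(0.1, 50), (0.5, 90), (1.0, 99), (5.0, 100)]
print(histogram_quantile(0.99, buckets))  # 1.0
```

Because interpolation assumes observations are spread evenly within a bucket, the reported p99 is an estimate whose precision depends on bucket granularity.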
frontend: sign_in_error_rate
Percentage of sign-in requests by http code
Percentage of sign-in requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101902
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
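The ignoring (code) group_left division above matches each per-code series against the single overall total, yielding a percentage per HTTP code. The same arithmetic in plain Python (the request counts are hypothetical):

```python
def percentage_by_code(counts):
    """Split per-HTTP-code request counts into percentages of the total,
    mirroring `sum by (code)(...) / ignoring (code) group_left sum(...) * 100`."""
    total = sum(counts.values())
    return {code: count / total * 100 for code, count in counts.items()}

# Hypothetical sign-in request counts over a 5m window.
print(percentage_by_code({"200": 180, "401": 15, "500": 5}))
# {'200': 90.0, '401': 7.5, '500': 2.5}
```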
frontend: sign_up_rate
Rate of API requests to sign-up
Rate (QPS) of requests to sign-up
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101910
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))
frontend: sign_up_latency_p99
99th percentile of sign-up latency
99th percentile of sign-up latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101911
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))
frontend: sign_up_code_percentage
Percentage of sign-up requests by http code
Percentage of sign-up requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101912
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))*100
frontend: sign_out_rate
Rate of API requests to sign-out
Rate (QPS) of requests to sign-out
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101920
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))
frontend: sign_out_latency_p99
99th percentile of sign-out latency
99th percentile of sign-out latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101921
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))
frontend: sign_out_error_rate
Percentage of sign-out requests that return non-303 http code
Percentage of sign-out requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101922
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100
Frontend: Organisation GraphQL API requests
frontend: org_members_rate
Rate of API requests to list organisation members
Rate (QPS) of API requests to list organisation members
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))
frontend: org_members_latency_p99
99th percentile latency of API requests to list organisation members
99th percentile latency of API requests to list organisation members
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102001
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrganizationMembers"}[5m])) by (le))
frontend: org_members_error_rate
Percentage of API requests to list organisation members that return an error
Percentage of API requests to list organisation members that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102002
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrganizationMembers"}[5m]))*100
frontend: create_org_rate
Rate of API requests to create an organisation
Rate (QPS) of API requests to create an organisation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102010
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))
frontend: create_org_latency_p99
99th percentile latency of API requests to create an organisation
99th percentile latency of API requests to create an organisation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102011
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="CreateOrganization"}[5m])) by (le))
frontend: create_org_error_rate
Percentage of API requests to create an organisation that return an error
Percentage of API requests to create an organisation that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102012
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="CreateOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="CreateOrganization"}[5m]))*100
frontend: remove_org_member_rate
Rate of API requests to remove organisation member
Rate (QPS) of API requests to remove organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102020
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))
frontend: remove_org_member_latency_p99
99th percentile latency of API requests to remove organisation member
99th percentile latency of API requests to remove organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102021
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RemoveUserFromOrganization"}[5m])) by (le))
frontend: remove_org_member_error_rate
Percentage of API requests to remove organisation member that return an error
Percentage of API requests to remove organisation member that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102022
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RemoveUserFromOrganization"}[5m]))*100
frontend: invite_org_member_rate
Rate of API requests to invite a new organisation member
Rate (QPS) of API requests to invite a new organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102030
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))
frontend: invite_org_member_latency_p99
99th percentile latency of API requests to invite a new organisation member
99th percentile latency of API requests to invite a new organisation member
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102031
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="InviteUserToOrganization"}[5m])) by (le))
frontend: invite_org_member_error_rate
Percentage of API requests to invite a new organisation member that return an error
Percentage of API requests to invite a new organisation member that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102032
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="InviteUserToOrganization"}[5m]))*100
frontend: org_invite_respond_rate
Rate of API requests to respond to an org invitation
Rate (QPS) of API requests to respond to an org invitation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102040
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))
frontend: org_invite_respond_latency_p99
99th percentile latency of API requests to respond to an org invitation
99th percentile latency of API requests to respond to an org invitation
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102041
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="RespondToOrganizationInvitation"}[5m])) by (le))
frontend: org_invite_respond_error_rate
Percentage of API requests to respond to an org invitation that return an error
Percentage of API requests to respond to an org invitation that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102042
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="RespondToOrganizationInvitation"}[5m]))*100
frontend: org_repositories_rate
Rate of API requests to list repositories owned by an org
Rate (QPS) of API requests to list repositories owned by an org
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102050
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))
frontend: org_repositories_latency_p99
99th percentile latency of API requests to list repositories owned by an org
99th percentile latency of API requests to list repositories owned by an org
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102051
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum(rate(src_graphql_request_duration_seconds_bucket{route="OrgRepositories"}[5m])) by (le))
frontend: org_repositories_error_rate
Percentage of API requests to list repositories owned by an org that return an error
Percentage of API requests to list repositories owned by an org that return an error
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102052
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum (irate(src_graphql_request_duration_seconds_count{route="OrgRepositories",success="false"}[5m]))/sum(irate(src_graphql_request_duration_seconds_count{route="OrgRepositories"}[5m]))*100
Frontend: Cloud KMS and cache
frontend: cloudkms_cryptographic_requests
Cryptographic requests to Cloud KMS every 1m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_cloudkms_cryptographic_total[1m]))
frontend: encryption_cache_hit_ratio
Average encryption cache hit ratio per workload
- Encryption cache hit ratio (hits/(hits+misses)) - minimum across all instances of a workload.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
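The hit ratio is hits/(hits+misses), and the query takes the minimum across all instances of a workload so one struggling replica is not averaged away. A sketch with hypothetical per-instance counters:

```python
def min_hit_ratio(instances):
    """instances: {instance_name: (hits, misses)}. Returns the worst
    (minimum) cache hit ratio across all instances of a workload."""
    return min(hits / (hits + misses) for hits, misses in instances.values())

# Hypothetical counters for two frontend replicas.
print(min_hit_ratio({"frontend-0": (90, 10), "frontend-1": (60, 40)}))  # 0.6
```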
frontend: encryption_cache_evictions
Rate of encryption cache evictions - sum across all instances of a given workload
- Rate of encryption cache evictions (caused by cache exceeding its maximum size) - sum across all instances of a workload
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102102
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))
Frontend: Database connections
frontend: max_open_conns
Maximum open connections
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102200
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})
frontend: open_conns
Established connections
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102201
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})
frontend: in_use
In-use connections
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102210
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})
frontend: idle
Idle connections
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102211
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})
frontend: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102220
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))
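The query divides the total seconds spent blocked waiting for a connection by the number of requests that had to wait, giving the mean wait per blocked request over the window. In plain terms (the counter deltas below are hypothetical):

```python
def mean_blocked_seconds(blocked_seconds_delta, waited_for_delta):
    """Mean time a connection request spent blocked over the window:
    increase(blocked_seconds) / increase(waited_for)."""
    if waited_for_delta == 0:
        return 0.0  # no request waited in this window
    return blocked_seconds_delta / waited_for_delta

# Hypothetical: 12.5s of total blocking across 50 waiting requests.
print(mean_blocked_seconds(12.5, 50))  # 0.25
```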
frontend: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102230
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))
frontend: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102231
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))
frontend: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102232
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))
Frontend: Container monitoring (not available on server)
frontend: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod (frontend|sourcegraph-frontend) (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p (frontend|sourcegraph-frontend).
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend) (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs (frontend|sourcegraph-frontend) (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102300
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)
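The query flags any container whose last-seen timestamp is more than 60 seconds old. The same check, sketched with hypothetical timestamps:

```python
import time

def missing_containers(last_seen, now=None, threshold=60):
    """Return container names not seen for more than `threshold` seconds,
    mirroring `(time() - container_last_seen) > 60`."""
    now = time.time() if now is None else now
    return [name for name, ts in last_seen.items() if now - ts > threshold]

# Hypothetical last-seen timestamps for two frontend containers.
now = 1_700_000_000
print(missing_containers({"frontend-0": now - 30, "frontend-1": now - 120}, now=now))
# ['frontend-1']
```

The panel counts how often this condition fires per container name, which is why a single restart shows up as a step change rather than a spike.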
frontend: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102301
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102302
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with frontend issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102303
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))
Frontend: Provisioning indicators (not available on server)
frontend: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102400
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102401
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102410
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
frontend: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102411
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
Frontend: Golang runtime monitoring
frontend: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102500
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})
frontend: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102501
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})
Frontend: Kubernetes monitoring (only available on Kubernetes)
frontend: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102600
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100
Frontend: Ranking
frontend: mean_position_of_clicked_search_result_6h
Mean position of clicked search result over 6h
The top-most result on the search results page has position 0. Lower values are better. This metric only tracks top-level items and not individual line matches.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102700
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (type) (rate(src_search_ranking_result_clicked_sum[6h]))/sum by (type) (rate(src_search_ranking_result_clicked_count[6h]))
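Dividing the rate of the histogram's _sum series by the rate of its _count series gives the mean observed value, here the mean clicked position. With hypothetical counter deltas (the rate window cancels in the ratio, so raw deltas work too):

```python
def mean_from_histogram(sum_delta, count_delta):
    """Mean observation over a window: rate(_sum) / rate(_count)."""
    return sum_delta / count_delta

# Hypothetical: positions of 40 clicks summing to 52 -> mean position 1.3.
print(mean_from_histogram(52, 40))  # 1.3
```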
frontend: distribution_of_clicked_search_result_type_over_6h_in_percent
Distribution of clicked search result type over 6h in %
The distribution of clicked search results by result type. At every point in time, the values should sum to 100.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102701
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: round(sum(increase(src_search_ranking_result_clicked_sum{type="commit"}[6h])) / sum (increase(src_search_ranking_result_clicked_sum[6h]))*100)
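The panel computes, for each result type, that type's share of all clicks as a rounded percentage (the query above shows the "commit" series). A sketch with hypothetical click counts:

```python
def click_distribution_percent(clicks_by_type):
    """Share of clicks per result type, rounded to a whole percent,
    mirroring round(sum(increase(type)) / sum(increase(all)) * 100)."""
    total = sum(clicks_by_type.values())
    return {t: round(c / total * 100) for t, c in clicks_by_type.items()}

# Hypothetical clicks over 6h, by result type.
print(click_distribution_percent({"commit": 30, "file": 150, "repo": 20}))
# {'commit': 15, 'file': 75, 'repo': 10}
```

Because each series is rounded independently, the plotted values can sum to slightly more or less than 100.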
Frontend: Sentinel queries (only on sourcegraph.com)
frontend: mean_successful_sentinel_duration_over_2h
Mean successful sentinel search duration over 2h
Mean search duration for all successful sentinel queries
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102800
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[2h])) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[2h]))
frontend: mean_sentinel_stream_latency_over_2h
Mean successful sentinel stream latency over 2h
Mean time to first result for all successful streaming sentinel queries
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102801
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[2h])) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[2h]))
frontend: 90th_percentile_successful_sentinel_duration_over_2h
90th percentile successful sentinel search duration over 2h
90th percentile search duration for all successful sentinel queries
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102810
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
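label_replace here rewrites a source label like "searchblitz_<query>" to just the query name via the capture group in searchblitz_(.*). The equivalent transformation in Python (the sentinel query name is hypothetical):

```python
import re

def label_replace(value, dst_pattern, regex):
    """Mirror PromQL label_replace: if `regex` fully matches `value`,
    substitute capture groups into `dst_pattern`; otherwise keep the
    original value unchanged."""
    m = re.fullmatch(regex, value)
    return m.expand(dst_pattern) if m else value

print(label_replace("searchblitz_literal_repo", r"\1", r"searchblitz_(.*)"))
# literal_repo
```

PromQL writes the capture reference as $1 rather than \1, but the matching semantics (full anchored match, no-op on non-match) are the same.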
frontend: 90th_percentile_sentinel_stream_latency_over_2h
90th percentile successful sentinel stream latency over 2h
90th percentile time to first result for all successful streaming sentinel queries
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102811
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
frontend: mean_successful_sentinel_duration_by_query
Mean successful sentinel search duration by query
Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102820
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)
frontend: mean_sentinel_stream_latency_by_query
Mean successful sentinel stream latency by query
Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102821
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)
frontend: 90th_percentile_successful_sentinel_duration_by_query
90th percentile successful sentinel search duration by query
90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102830
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_successful_stream_latency_by_query
90th percentile successful sentinel stream latency by query
90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102831
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_unsuccessful_duration_by_query
90th percentile unsuccessful sentinel search duration by query
90th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102840
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_sentinel_duration_by_query
75th percentile successful sentinel search duration by query
75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102850
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_stream_latency_by_query
75th percentile successful sentinel stream latency by query
75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102851
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_unsuccessful_duration_by_query
75th percentile unsuccessful sentinel search duration by query
75th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102860
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: unsuccessful_status_rate
Unsuccessful status rate
The rate of unsuccessful sentinel queries, broken down by failure type.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102870
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)
Git Server
Stores, manages, and operates Git repositories.
To see this dashboard, visit /-/debug/grafana/d/gitserver/gitserver
on your Sourcegraph instance.
gitserver: memory_working_set
Memory working set
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (container_memory_working_set_bytes{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"})
gitserver: go_routines
Go routines
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: go_goroutines{app="gitserver", instance=~"${shard:regex}"}
gitserver: cpu_throttling_time
Container CPU throttling time %
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m])) * 100)
gitserver: cpu_usage_seconds
CPU usage seconds
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: disk_space_remaining
Disk space remaining by instance
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: (src_gitserver_disk_space_available / src_gitserver_disk_space_total) * 100
gitserver: io_reads_total
I/O reads total
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100030
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))
gitserver: io_writes_total
I/O writes total
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100031
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_container_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))
gitserver: io_reads
I/O reads
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100040
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_writes
I/O writes
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100041
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_read_througput
I/O read throughput
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100050
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_reads_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: io_write_throughput
I/O write throughput
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100051
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_fs_writes_bytes_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~"${shard:regex}"}[5m]))
gitserver: running_git_commands
Git commands running on each gitserver instance
A high value signals load.
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100060
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (instance, cmd) (src_gitserver_exec_running{instance=~"${shard:regex}"})
gitserver: git_commands_received
Rate of git commands received across all instances
Per-second rate per command across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100061
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count[5m]))
gitserver: repository_clone_queue_size
Repository clone queue size
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100070
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(src_gitserver_clone_queue)
gitserver: repository_existence_check_queue_size
Repository existence check queue size
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100071
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(src_gitserver_lsremote_queue)
gitserver: echo_command_duration_test
Echo test command duration
A high value here likely indicates a problem, especially if consistently high.
You can query for individual commands using sum by (cmd)(src_gitserver_exec_running) in Grafana (/-/debug/grafana) to see if a specific Git Server command might be spiking in frequency.
If this value is consistently high, consider the following:
- Single container deployments: Upgrade to a Docker Compose deployment which offers better scalability and resource isolation.
- Kubernetes and Docker Compose: Check that you are running a similar number of git server replicas and that their CPU/memory limits are allocated according to what is shown in the Sourcegraph resource estimator.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100080
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_gitserver_echo_duration_seconds)
gitserver: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100081
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="gitserver"}[5m]))
Git Server: Gitserver API (powered by internal/observation)
gitserver: gitserver_api_total
Aggregate gitserver API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_99th_percentile_duration
Aggregate successful gitserver API operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (le)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_errors_total
Aggregate gitserver API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_error_rate
Aggregate gitserver API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100
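The error-rate panels express errors as a share of all operations; the denominator adds the error count back in, which suggests the `_total` metric counts successful operations only. A one-function Python sketch with hypothetical counts:

```python
# Sketch: error rate as a percentage of all operations over the window,
# mirroring errors / (total + errors) * 100. Counts are hypothetical 5m increases.

def error_rate_percent(errors, successes):
    """Errors divided by all operations (successes + errors), as a percent."""
    total = successes + errors
    return (errors / total) * 100 if total else 0.0

rate_pct = error_rate_percent(errors=5, successes=95)  # 5.0
```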
gitserver: gitserver_api_total
Gitserver API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_99th_percentile_duration
99th percentile successful gitserver API operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_api_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))
gitserver: gitserver_api_errors_total
Gitserver API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_api_error_rate
Gitserver API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_api_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_api_errors_total{job=~"^gitserver.*"}[5m]))) * 100
Git Server: Global operation semaphores
gitserver: batch_log_semaphore_wait_99th_percentile_duration
Aggregate successful batch log semaphore operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (le)(rate(src_batch_log_semaphore_wait_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
Git Server: Gitservice for internal cloning
gitserver: aggregate_gitservice_request_duration
95th percentile gitservice request duration aggregate
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false"}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice request duration per shard
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="false", instance=~"${shard:regex}"}[5m])) by (le, instance))
gitserver: aggregate_gitservice_error_request_duration
95th percentile gitservice error request duration aggregate
95th percentile gitservice error request duration aggregate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true"}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice error request duration per shard
95th percentile gitservice error request duration per shard
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type="gitserver", error="true", instance=~"${shard:regex}"}[5m])) by (le, instance))
gitserver: aggregate_gitservice_request_rate
Aggregate gitservice request rate
Aggregate gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false"}[5m]))
gitserver: gitservice_request_rate
Gitservice request rate per shard
Per shard gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100321
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="false", instance=~"${shard:regex}"}[5m]))
gitserver: aggregate_gitservice_request_error_rate
Aggregate gitservice request error rate
Aggregate gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100330
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true"}[5m]))
gitserver: gitservice_request_error_rate
Gitservice request error rate per shard
Per shard gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100331
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(rate(src_gitserver_gitservice_duration_seconds_count{type="gitserver", error="true", instance=~"${shard:regex}"}[5m]))
gitserver: aggregate_gitservice_requests_running
Aggregate gitservice requests running
Aggregate gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100340
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(src_gitserver_gitservice_running{type="gitserver"})
gitserver: gitservice_requests_running
Gitservice requests running per shard
Per shard gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100341
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum(src_gitserver_gitservice_running{type="gitserver", instance=~"${shard:regex}"}) by (instance)
Git Server: Gitserver cleanup jobs
gitserver: janitor_running
If the janitor process is running
1 if the janitor process is currently running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (instance) (src_gitserver_janitor_running)
gitserver: janitor_job_duration
95th percentile job run duration
95th percentile job run duration
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, sum(rate(src_gitserver_janitor_job_duration_seconds_bucket[5m])) by (le, job_name))
gitserver: janitor_job_failures
Failures over 5m (by job)
The rate of failures over 5m (by job).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100420
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (job_name) (rate(src_gitserver_janitor_job_duration_seconds_count{success="false"}[5m]))
gitserver: repos_removed
Repositories removed due to disk pressure
Repositories removed due to disk pressure
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100430
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (instance) (rate(src_gitserver_repos_removed_disk_pressure[5m]))
gitserver: sg_maintenance_reason
Successful sg maintenance jobs over 1h (by reason)
The rate of successful sg maintenance jobs, and the reason why they were triggered.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100440
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (reason) (rate(src_gitserver_maintenance_status{success="true"}[1h]))
gitserver: git_prune_skipped
Successful git prune jobs over 1h
The rate of successful git prune jobs over 1h, and whether they were skipped.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100450
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (skipped) (rate(src_gitserver_prune_status{success="true"}[1h]))
Git Server: Search
gitserver: search_latency
Mean time until first result is sent
Mean latency (time to first result) of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_latency_seconds_sum[5m]) / rate(src_gitserver_search_latency_seconds_count[5m])
gitserver: search_duration
Mean search duration
Mean duration of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_duration_seconds_sum[5m]) / rate(src_gitserver_search_duration_seconds_count[5m])
gitserver: search_rate
Rate of searches run by pod
The rate of searches executed on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: rate(src_gitserver_search_latency_seconds_count{instance=~"${shard:regex}"}[5m])
gitserver: running_searches
Number of searches currently running by pod
The number of searches currently executing on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Search team.
Technical details
Query: sum by (instance) (src_gitserver_search_running{instance=~"${shard:regex}"})
Git Server: Codeintel: Coursier invocation stats
gitserver: codeintel_coursier_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
gitserver: codeintel_coursier_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
gitserver: codeintel_coursier_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: Codeintel: npm invocation stats
gitserver: codeintel_npm_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
gitserver: codeintel_npm_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
gitserver: codeintel_npm_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100712
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100713
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: Database connections
gitserver: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})
gitserver: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})
gitserver: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})
gitserver: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})
gitserver: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100820
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
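The query above divides the total seconds spent blocked waiting for a database connection by the number of connection waits over the window. A small Python sketch of that calculation (the helper name is ours, not part of Sourcegraph):

```python
def mean_blocked_seconds(blocked_seconds_increase: float,
                         waited_for_increase: float) -> float:
    """Mean time spent blocked per connection request, mirroring
    increase(conns_blocked_seconds) / increase(conns_waited_for)."""
    if waited_for_increase == 0:
        return 0.0  # no waits in the window -> nothing was blocked
    return blocked_seconds_increase / waited_for_increase

# 2.5s of total blocking across 50 waits -> 0.05s per request
```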
gitserver: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100830
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))
gitserver: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100831
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))
gitserver: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100832
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))
Git Server: Container monitoring (not available on server)
gitserver: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod gitserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p gitserver`.
- Docker Compose:
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' gitserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the gitserver container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs gitserver` (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)
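The PromQL above counts, per container name, the series whose `container_last_seen` timestamp is more than 60 seconds old. The same logic, sketched in Python over hypothetical `(name, last_seen)` samples:

```python
def missing_containers(samples, now, threshold=60.0):
    """Count containers not seen within `threshold` seconds, mirroring
    count by(name) ((time() - container_last_seen) > 60).

    samples: iterable of (container_name, last_seen_unix_timestamp).
    Returns a dict mapping each stale name to its count of stale series.
    """
    counts = {}
    for name, seen in samples:
        if now - seen > threshold:
            counts[name] = counts.get(name, 0) + 1
    return counts
```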
gitserver: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}
gitserver: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}
gitserver: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))
Git Server: Provisioning indicators (not available on server)
gitserver: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])
gitserver: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])
Git Server: Golang runtime monitoring
gitserver: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*gitserver"})
gitserver: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*gitserver"})
Git Server: Kubernetes monitoring (only available on Kubernetes)
gitserver: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100
GitHub Proxy
Proxies all requests to github.com, keeping track of and managing rate limits.
To see this dashboard, visit /-/debug/grafana/d/github-proxy/github-proxy
on your Sourcegraph instance.
GitHub Proxy: GitHub API monitoring
github-proxy: github_proxy_waiting_requests
Number of requests waiting on the global mutex
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(github_proxy_waiting_requests)
GitHub Proxy: Container monitoring (not available on server)
github-proxy: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod github-proxy` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p github-proxy`.
- Docker Compose:
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' github-proxy` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the github-proxy container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs github-proxy` (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^github-proxy.*"}) > 60)
github-proxy: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}
github-proxy: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}
github-proxy: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with github-proxy issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^github-proxy.*"}[1h]) + rate(container_fs_writes_total{name=~"^github-proxy.*"}[1h]))
GitHub Proxy: Provisioning indicators (not available on server)
github-proxy: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[1d])
github-proxy: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[1d])
github-proxy: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^github-proxy.*"}[5m])
github-proxy: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^github-proxy.*"}[5m])
GitHub Proxy: Golang runtime monitoring
github-proxy: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*github-proxy"})
github-proxy: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*github-proxy"})
GitHub Proxy: Kubernetes monitoring (only available on Kubernetes)
github-proxy: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/github-proxy/github-proxy?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(app) (up{app=~".*github-proxy"}) / count by (app) (up{app=~".*github-proxy"}) * 100
Postgres
Postgres metrics, exported from postgres_exporter (not available on server).
To see this dashboard, visit /-/debug/grafana/d/postgres/postgres
on your Sourcegraph instance.
postgres: connections
Active connections
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"})
postgres: transaction_durations
Maximum transaction durations
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (datname) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin"})
Postgres: Database and collector status
postgres: postgres_up
Database availability
A non-zero value indicates the database is online.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: pg_up
postgres: invalid_indexes
Invalid indexes (unusable by the query planner)
A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by (relname)(pg_invalid_index_count)
postgres: pg_exporter_err
Errors scraping postgres exporter
This value indicates issues retrieving metrics from postgres_exporter.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: pg_exporter_last_scrape_error
postgres: migration_in_progress
Active schema migration
A value of 0 indicates that no migration is in progress.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: pg_sg_migration_status
Postgres: Object size and bloat
postgres: pg_table_size
Table size
Total size of this table
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by (relname)(pg_table_bloat_size)
postgres: pg_table_bloat_ratio
Table bloat ratio
Estimated bloat ratio of this table (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by (relname)(pg_table_bloat_ratio) * 100
postgres: pg_index_size
Index size
Total size of this index
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by (relname)(pg_index_bloat_size)
postgres: pg_index_bloat_ratio
Index bloat ratio
Estimated bloat ratio of this index (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max by (relname)(pg_index_bloat_ratio) * 100
Postgres: Provisioning indicators (not available on server)
postgres: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights-db).*"}[1d])
postgres: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights-db).*"}[1d])
postgres: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights-db).*"}[5m])
postgres: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights-db).*"}[5m])
Postgres: Kubernetes monitoring (only available on Kubernetes)
postgres: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights-db)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights-db)"}) * 100
Precise Code Intel Worker
Handles conversion of uploaded precise code intelligence bundles.
To see this dashboard, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker
on your Sourcegraph instance.
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_queue_size
Unprocessed upload record queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_queue_growth_rate
Unprocessed upload record queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate exceeds the enqueue rate (the queue is shrinking)
- A value = 1 indicates that the processing rate matches the enqueue rate (steady state)
- A value > 1 indicates that the enqueue rate exceeds the processing rate (the queue is growing)
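The ratio described above can be sketched as follows (the function name is illustrative):

```python
def queue_growth_ratio(enqueued: float, processed: float) -> float:
    """Enqueued vs. finished jobs over the same window (e.g. 30m).
    > 1: queue growing; = 1: steady state; < 1: queue draining."""
    if processed == 0:
        # Nothing finished: any enqueue means unbounded growth.
        return float("inf") if enqueued > 0 else 1.0
    return enqueued / processed
```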
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m])) / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))
precise-code-intel-worker: codeintel_upload_queued_max_age
Unprocessed upload record queue longest time in queue
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_upload_queued_duration_seconds_total{job=~"^precise-code-intel-worker.*"})
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: dbstore stats
precise-code-intel-worker: codeintel_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_dbstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_dbstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: lsifstore stats
precise-code-intel-worker: codeintel_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_upload_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: gitserver client
precise-code-intel-worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: uploadstore stats
precise-code-intel-worker: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Internal service requests
precise-code-intel-worker: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="precise-code-intel-worker"}[5m]))
Precise Code Intel Worker: Database connections
precise-code-intel-worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})
precise-code-intel-worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})
precise-code-intel-worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100820
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
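This panel divides the total time connection requests spent blocked by the number of requests that had to wait, giving a mean wait per connection request over the window. A hedged sketch of that division, with a hypothetical helper name:

```python
def mean_blocked_seconds_per_request(blocked_seconds: float,
                                     requests_waited: float) -> float:
    """Mean time a connection request spent blocked, mirroring:
        increase(blocked_seconds[5m]) / increase(waited_for[5m])
    """
    if requests_waited == 0:
        # No request waited in the window; PromQL would yield NaN here,
        # we report 0.0 for simplicity.
        return 0.0
    return blocked_seconds / requests_waited

# 2.0s of total blocking spread over 4 waits -> 0.5s mean wait
```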
precise-code-intel-worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100830
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100831
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100832
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))
Precise Code Intel Worker: Container monitoring (not available on server)
precise-code-intel-worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
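The query above flags a container as missing when the time since cAdvisor last saw it exceeds 60 seconds. The same staleness check, sketched in Python with a hypothetical helper (the `now_unix` parameter is only there to make the check testable):

```python
import time

STALENESS_THRESHOLD_SECONDS = 60  # matches the "> 60" in the PromQL expression

def is_container_missing(last_seen_unix: float, now_unix: float = None) -> bool:
    """True when a container has not been seen for more than one minute,
    mirroring: (time() - container_last_seen) > 60."""
    now = time.time() if now_unix is None else now_unix
    return (now - last_seen_unix) > STALENESS_THRESHOLD_SECONDS
```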
precise-code-intel-worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))
Precise Code Intel Worker: Provisioning indicators (not available on server)
precise-code-intel-worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
precise-code-intel-worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
Precise Code Intel Worker: Golang runtime monitoring
precise-code-intel-worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})
precise-code-intel-worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})
Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)
precise-code-intel-worker: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100
Redis
Metrics from both redis databases.
To see this dashboard, visit /-/debug/grafana/d/redis/redis
on your Sourcegraph instance.
Redis: Redis Store
redis: redis-store_up
Redis-store availability
A value of 1 indicates the service is currently running.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: redis_up{app="redis-store"}
Redis: Redis Cache
redis: redis-cache_up
Redis-cache availability
A value of 1 indicates the service is currently running.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: redis_up{app="redis-cache"}
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100
Worker
Manages background processes.
To see this dashboard, visit /-/debug/grafana/d/worker/worker
on your Sourcegraph instance.
Worker: Active jobs
worker: worker_job_count
Number of worker instances running each job
The number of worker instances running each job type. Each job type must be managed by at least one worker instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query: sum by (job_name) (src_worker_jobs{job="worker"})
worker: worker_job_codeintel-janitor_count
Number of worker instances running the codeintel-janitor job
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-janitor"})
worker: worker_job_codeintel-commitgraph_count
Number of worker instances running the codeintel-commitgraph job
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-commitgraph"})
worker: worker_job_codeintel-auto-indexing_count
Number of worker instances running the codeintel-auto-indexing job
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum (src_worker_jobs{job="worker", job_name="codeintel-auto-indexing"})
Worker: Codeintel: Repository with stale commit graph
worker: codeintel_commit_graph_queue_size
Repository queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_commit_graph_total{job=~"^worker.*"})
worker: codeintel_commit_graph_queue_growth_rate
Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the process rate exceeds the enqueue rate
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate falls behind the enqueue rate
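The interpretation above is plain arithmetic: the query divides the 30m increase of enqueued records by the 30m increase of processed records. A minimal sketch of the same ratio (the `queue_growth_rate` helper is hypothetical, not a Sourcegraph API):

```python
def queue_growth_rate(enqueued_increase: float, processed_increase: float) -> float:
    """Ratio of enqueue rate to process rate over the same window.

    Above 1 the queue is growing; below 1 it is draining.
    """
    return enqueued_increase / processed_increase

# Example: 120 repositories enqueued vs 150 processed in the last 30m.
ratio = queue_growth_rate(120, 150)
assert ratio < 1  # process rate exceeds enqueue rate: the queue is draining
```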
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m]))
worker: codeintel_commit_graph_queued_max_age
Repository queue longest time in queue
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Repository commit graph updates
worker: codeintel_commit_graph_processor_total
Update operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_99th_percentile_duration
Aggregate successful update operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_errors_total
Update operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_error_rate
Update operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100203
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
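The error-rate panels all follow the same PromQL pattern: errors divided by (successes plus errors), times 100, where the shape of the query suggests the `_total` series counts successful operations and `_errors_total` counts failures. A minimal sketch of that arithmetic (the `error_rate_percent` helper is hypothetical):

```python
def error_rate_percent(errors: float, successes: float) -> float:
    """Percentage of operations that failed: errors / (successes + errors) * 100."""
    total = successes + errors
    if total == 0:
        return 0.0  # no operations in the window; report 0% rather than NaN
    return errors / total * 100

# Example: 5 errors alongside 95 successful update operations in the window.
assert error_rate_percent(5, 95) == 5.0
```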
Worker: Codeintel: Dependency index job
worker: codeintel_dependency_index_queue_size
Dependency index job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_dependency_index_total{job=~"^worker.*"})
worker: codeintel_dependency_index_queue_growth_rate
Dependency index job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the process rate exceeds the enqueue rate
- A value equal to 1 indicates that the process rate matches the enqueue rate
- A value greater than 1 indicates that the process rate falls behind the enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[30m]))
worker: codeintel_dependency_index_queued_max_age
Dependency index job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_dependency_index_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Dependency index jobs
worker: codeintel_dependency_index_handlers
Active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(src_codeintel_dependency_index_processor_handlers{job=~"^worker.*"})
worker: codeintel_dependency_index_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependency_index_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Janitor stats
worker: codeintel_background_repositories_scanned_total
Repository records scanned every 5m
Number of repositories considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_repositories_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_scanned_total
LSIF upload records scanned every 5m
Number of upload records considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_commits_scanned_total
LSIF upload commits scanned every 5m
Number of commits considered for data retention scanning every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_commits_scanned_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_expired_total
LSIF upload records expired every 5m
Number of upload records found to be expired every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_expired_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_records_removed_total
LSIF upload records deleted every 5m
Number of LSIF upload records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_records_removed_total
LSIF index records deleted every 5m
Number of LSIF index records deleted due to expiration or unreachability every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_index_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_uploads_purged_total
LSIF upload data bundles deleted every 5m
Number of LSIF upload data bundles purged from the codeintel-db database every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_uploads_purged_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_documentation_search_records_removed_total
Documentation search records deleted every 5m
Number of documentation search records removed from the codeintel-db database every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_documentation_search_records_removed_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_errors_total
Janitor operation errors every 5m
Number of code intelligence janitor errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100520
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: Auto-index scheduler
worker: codeintel_index_scheduler_total
Aggregate scheduler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_index_scheduler_total{job=~"^worker.*"}[5m]))
worker: codeintel_index_scheduler_99th_percentile_duration
Aggregate successful scheduler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_index_scheduler_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_index_scheduler_errors_total
Aggregate scheduler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_index_scheduler_error_rate
Aggregate scheduler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_index_scheduler_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_index_scheduler_total
Scheduler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_index_scheduler_total{job=~"^worker.*"}[5m]))
worker: codeintel_index_scheduler_99th_percentile_duration
99th percentile successful scheduler operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_index_scheduler_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_index_scheduler_errors_total
Scheduler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_index_scheduler_error_rate
Scheduler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_index_scheduler_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_index_scheduler_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Auto-index enqueuer
worker: codeintel_autoindex_enqueuer_total
Aggregate enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^worker.*"}[5m]))
worker: codeintel_autoindex_enqueuer_99th_percentile_duration
Aggregate successful enqueuer operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_autoindex_enqueuer_errors_total
Aggregate enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_autoindex_enqueuer_error_rate
Aggregate enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_autoindex_enqueuer_total
Enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^worker.*"}[5m]))
worker: codeintel_autoindex_enqueuer_99th_percentile_duration
99th percentile successful enqueuer operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_autoindex_enqueuer_errors_total
Enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100712
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_autoindex_enqueuer_error_rate
Enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100713
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: dbstore stats
worker: codeintel_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100812
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100813
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: lsifstore stats
worker: codeintel_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100912
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100913
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Workerutil: lsif_dependency_indexes dbworker/store stats
worker: workerutil_dbworker_store_codeintel_dependency_index_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_codeintel_dependency_index_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101002
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101003
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: gitserver client
worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101102
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101103
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101110
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101111
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101112
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101113
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: repo-updater client
worker: codeintel_repoupdater_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_total{job=~"^worker.*"}[5m]))
worker: codeintel_repoupdater_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_repoupdater_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_repoupdater_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_repoupdater_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101203
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_repoupdater_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_repoupdater_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101210
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_total{job=~"^worker.*"}[5m]))
worker: codeintel_repoupdater_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101211
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_repoupdater_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_repoupdater_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101212
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_repoupdater_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101213
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_repoupdater_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_repoupdater_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Dependency repository insert
worker: codeintel_dependency_repos_total
Aggregate insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
Aggregate successful insert operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_errors_total
Aggregate insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101302
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Aggregate insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101303
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_dependency_repos_total
Insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101310
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
99th percentile successful insert operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101311
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,scheme,new)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_dependency_repos_errors_total
Insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101312
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101313
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: dbstore stats
worker: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101400
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101401
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101402
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101403
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101410
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101411
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101412
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101413
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: service stats
worker: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101500
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101501
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101502
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101503
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101510
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101511
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101512
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101513
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: lsif_upload record resetter
worker: codeintel_background_upload_record_resets_total
LSIF upload records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_failures_total
LSIF upload records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_errors_total
LSIF upload operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_upload_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_index record resetter
worker: codeintel_background_index_record_resets_total
LSIF index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_failures_total
LSIF index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_errors_total
LSIF index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101702
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_dependency_index record resetter
worker: codeintel_background_dependency_index_record_resets_total
LSIF dependency index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101800
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_failures_total
LSIF dependency index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101801
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_errors_total
LSIF dependency index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101802
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_background_dependency_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: Query Runner Queue
worker: insights_search_queue_queue_size
Code insights search queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: max(src_insights_search_queue_total{job=~"^worker.*"})
worker: insights_search_queue_queue_growth_rate
Code insights search queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the process rate > enqueue rate (the queue is shrinking)
- A value = 1 indicates that the process rate = enqueue rate (the queue is holding steady)
- A value > 1 indicates that the process rate < enqueue rate (the queue is growing)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101901
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_total{job=~"^worker.*"}[30m])) / sum(increase(src_insights_search_queue_processor_total{job=~"^worker.*"}[30m]))
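The growth-rate query is a plain ratio of enqueues to completed jobs over the window, which is what the interpretation bullets above describe. A minimal sketch with illustrative counts:

```python
# Sketch of the queue growth rate: enqueued jobs divided by processed jobs
# over the same window. Above 1 the queue grows; below 1 it drains.
def queue_growth_rate(enqueued, processed):
    return enqueued / processed

assert queue_growth_rate(120, 150) < 1  # processing outpaces enqueues
```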
Worker: Codeinsights: insights queue processor
worker: insights_search_queue_handlers
Active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(src_insights_search_queue_processor_handlers{job=~"^worker.*"})
worker: insights_search_queue_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_processor_total{job=~"^worker.*"}[5m]))
worker: insights_search_queue_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum by (le)(rate(src_insights_search_queue_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: insights_search_queue_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102012
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_processor_errors_total{job=~"^worker.*"}[5m]))
worker: insights_search_queue_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102013
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_insights_search_queue_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_insights_search_queue_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeinsights: code insights search queue record resetter
worker: insights_search_queue_record_resets_total
Insights search queue records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_record_resets_total{job=~"^worker.*"}[5m]))
worker: insights_search_queue_record_reset_failures_total
Insights search queue records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: insights_search_queue_record_reset_errors_total
Insights search queue operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102102
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_insights_search_queue_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: dbstore stats
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum by (le)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102203
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102210
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102211
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102212
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102213
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Code Insights queue utilization
worker: insights_queue_unutilized_size
Insights queue size that is not utilized (not processing)
Any value on this panel indicates code insights is not processing queries from its queue. This observable and alert only fire if there are records in the queue and there have been no dequeue attempts for 30 minutes.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-insights team.
Technical details
Query: max(src_insights_search_queue_total{job=~"^worker.*"}) > 0 and on(job) sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
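The query combines two conditions with PromQL's `and` operator: the queue must be non-empty, and there must have been essentially no `Dequeue` operations in the window. As a minimal boolean sketch (values illustrative):

```python
# Sketch of the unutilized-queue condition: records are queued, but no
# Dequeue operations were recorded in the window.
def queue_unutilized(queue_size, dequeue_increase):
    return queue_size > 0 and dequeue_increase < 1

assert queue_unutilized(queue_size=42, dequeue_increase=0)
assert not queue_unutilized(queue_size=0, dequeue_increase=0)
```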
Worker: Internal service requests
worker: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="worker",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="worker"}[5m]))
Worker: Database connections
worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102500
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})
worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102501
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})
worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102510
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})
worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102511
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})
worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102520
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
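This panel divides total time spent blocked waiting for a database connection by the number of waits in the window, yielding a mean wait per connection request. A minimal sketch of that division, with illustrative numbers:

```python
# Sketch of mean blocked seconds per connection request: total seconds
# spent waiting for a pool connection divided by the number of waits.
def mean_blocked_seconds(blocked_seconds_increase, waits_increase):
    if waits_increase == 0:
        # PromQL yields no sample for 0/0; this sketch returns 0 instead.
        return 0.0
    return blocked_seconds_increase / waits_increase

avg = mean_blocked_seconds(blocked_seconds_increase=2.5, waits_increase=10)  # 0.25
```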
worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102530
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))
worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102531
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))
worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102532
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))
Worker: Container monitoring (not available on server)
worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
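The query counts containers whose last-seen timestamp is more than 60 seconds in the past. A minimal sketch of that check, with hypothetical container names and timestamps:

```python
# Sketch of the container-missing check: a container counts as missing
# when its last-seen timestamp is older than the threshold.
import time

def missing_containers(last_seen, now=None, threshold=60):
    """last_seen: dict mapping container name -> unix timestamp."""
    now = now if now is not None else time.time()
    return [name for name, ts in last_seen.items() if now - ts > threshold]

# worker-0 was last seen 100s ago (missing); worker-1 only 10s ago.
assert missing_containers({"worker-0": 0, "worker-1": 90}, now=100) == ["worker-0"]
```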
worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}
worker: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}
worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))
Worker: Provisioning indicators (not available on server)
worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102710
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])
worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102711
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])
Worker: Golang runtime monitoring
worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102800
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*worker"})
worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102801
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*worker"})
Worker: Kubernetes monitoring (only available on Kubernetes)
worker: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
Repo Updater
Manages interaction with code hosts, instructs Gitserver to update repositories.
To see this dashboard, visit /-/debug/grafana/d/repo-updater/repo-updater
on your Sourcegraph instance.
Repo Updater: Repositories
repo-updater: syncer_sync_last_time
Time since last sync
A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
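The query above subtracts the gauge's value, a Unix timestamp of the last completed sync, from the current time. A minimal sketch of that arithmetic (the helper name is ours, not part of Sourcegraph):

```python
import time
from typing import Optional

def seconds_since_last_sync(last_sync_unix: float,
                            now: Optional[float] = None) -> float:
    """Mirror the panel query: current time minus the most recent value of
    src_repoupdater_syncer_sync_last_time (a gauge holding a Unix timestamp).
    A large, steadily growing result means syncs are not completing."""
    if now is None:
        now = time.time()
    return now - last_sync_unix
```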
repo-updater: src_repoupdater_max_sync_backoff
Time since oldest sync
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_max_sync_backoff)
repo-updater: src_repoupdater_syncer_sync_errors_total
Site level external service sync error rate
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user"}[5m]))
repo-updater: syncer_sync_start
Repo metadata sync was started
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))
repo-updater: syncer_sync_duration
95th repositories sync duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, max by (le, family, success) (rate(src_repoupdater_syncer_sync_duration_seconds_bucket[1m])))
repo-updater: source_duration
95th repositories source duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, max by (le) (rate(src_repoupdater_source_duration_seconds_bucket[1m])))
repo-updater: syncer_synced_repos
Repositories synced
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(max by (state) (rate(src_repoupdater_syncer_synced_repos_total[1m])))
repo-updater: sourced_repos
Repositories sourced
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_source_repos_total[1m]))
repo-updater: user_added_repos
Total number of user added repos
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100022
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_user_repos_total)
repo-updater: purge_failed
Repositories purge failed
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100030
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_purge_failed[1m]))
repo-updater: sched_auto_fetch
Repositories scheduled due to hitting a deadline
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100040
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_sched_auto_fetch[1m]))
repo-updater: sched_manual_fetch
Repositories scheduled due to user traffic
Check repo-updater logs if this value is persistently high. This metric is not meaningful if there are no user-added code hosts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100041
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_sched_manual_fetch[1m]))
repo-updater: sched_known_repos
Repositories managed by the scheduler
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100050
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_sched_known_repos)
repo-updater: sched_update_queue_length
Rate of growth of update queue length over 5 minutes
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100051
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(deriv(src_repoupdater_sched_update_queue_length[5m]))
repo-updater: sched_loops
Scheduler loops
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100052
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_sched_loops[1m]))
repo-updater: src_repoupdater_stale_repos
Repos that haven't been fetched in more than 8 hours
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100060
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_stale_repos)
repo-updater: sched_error
Repositories schedule error rate
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100061
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_sched_error[1m]))
Repo Updater: Permissions
repo-updater: perms_syncer_perms
Time gap between least and most up to date permissions
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_perms_gap_seconds)
repo-updater: perms_syncer_stale_perms
Number of entities with stale permissions
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_stale_perms)
repo-updater: perms_syncer_no_perms
Number of entities with no permissions
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_no_perms)
repo-updater: perms_syncer_outdated_perms
Number of entities with outdated permissions
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (type) (src_repoupdater_perms_syncer_outdated_perms)
repo-updater: perms_syncer_sync_duration
95th permissions sync duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: histogram_quantile(0.95, max by (le, type) (rate(src_repoupdater_perms_syncer_sync_duration_seconds_bucket[1m])))
repo-updater: perms_syncer_queue_size
Permissions sync queued items
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100121
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_perms_syncer_queue_size)
repo-updater: perms_syncer_sync_errors
Permissions sync error rate
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100130
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (type) (ceil(rate(src_repoupdater_perms_syncer_sync_errors_total[1m])))
repo-updater: perms_syncer_scheduled_repos_total
Total number of repos scheduled for permissions sync
Indicates how many repositories have been scheduled for a permissions sync. See the repository permissions synchronization documentation for more details.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100131
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(rate(src_repoupdater_perms_syncer_schedule_repos_total[1m]))
Repo Updater: External services
repo-updater: src_repoupdater_external_services_total
The total number of external services
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_external_services_total)
repo-updater: src_repoupdater_user_external_services_total
The total number of user added external services
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_user_external_services_total)
repo-updater: repoupdater_queued_sync_jobs_total
The total number of queued sync jobs
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_queued_sync_jobs_total)
repo-updater: repoupdater_completed_sync_jobs_total
The total number of completed sync jobs
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_completed_sync_jobs_total)
repo-updater: repoupdater_errored_sync_jobs_percentage
The percentage of external services that have failed their most recent sync
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max(src_repoupdater_errored_sync_jobs_percentage)
repo-updater: github_graphql_rate_limit_remaining
Remaining calls to GitHub graphql API before hitting the rate limit
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100220
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="graphql"})
repo-updater: github_rest_rate_limit_remaining
Remaining calls to GitHub rest API before hitting the rate limit
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100221
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="rest"})
repo-updater: github_search_rate_limit_remaining
Remaining calls to GitHub search API before hitting the rate limit
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100222
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (name) (src_github_rate_limit_remaining_v2{resource="search"})
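Outside Grafana, these rate-limit series can be read through Prometheus's standard HTTP API (GET /api/v1/query). This sketch parses an abridged sample response for the query above; the sample values and the helper name are illustrative, while the metric and label names come from the panel query:

```python
import json

# Abridged sample of a Prometheus /api/v1/query response for
# max by (name) (src_github_rate_limit_remaining_v2{resource="search"}).
sample_response = json.dumps({
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"name": "github.com"}, "value": [1700000000, "27"]},
        ],
    },
})

def remaining_by_host(response_body: str) -> dict:
    """Map each code host's `name` label to its remaining rate-limit calls.
    Prometheus encodes sample values as [timestamp, "value-as-string"]."""
    result = json.loads(response_body)["data"]["result"]
    return {s["metric"]["name"]: float(s["value"][1]) for s in result}
```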
repo-updater: github_graphql_rate_limit_wait_duration
Time spent waiting for the GitHub graphql API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100230
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="graphql"}[5m]))
repo-updater: github_rest_rate_limit_wait_duration
Time spent waiting for the GitHub rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100231
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
repo-updater: github_search_rate_limit_wait_duration
Time spent waiting for the GitHub search API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100232
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="search"}[5m]))
repo-updater: gitlab_rest_rate_limit_remaining
Remaining calls to GitLab rest API before hitting the rate limit
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100240
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by (name) (src_gitlab_rate_limit_remaining{resource="rest"})
repo-updater: gitlab_rest_rate_limit_wait_duration
Time spent waiting for the GitLab rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100241
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(name) (rate(src_gitlab_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
Repo Updater: Batches: dbstore stats
repo-updater: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
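The error-rate panels in this dashboard all share one formula: the errors series divided by the sum of the two series, scaled to a percentage. A minimal sketch of that arithmetic with hypothetical counts (the function name is ours):

```python
def error_rate_percent(errors: float, total: float) -> float:
    """Mirror the panel query's arithmetic:
    errors / (total + errors) * 100, guarding against an empty window."""
    denominator = total + errors
    if denominator == 0:
        return 0.0
    return errors / denominator * 100
```

For example, 5 errors against 95 operations over the window yields a 5% error rate.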
repo-updater: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Batches: service stats
repo-updater: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Batches team.
Technical details
Query: sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: Coursier invocation stats
repo-updater: codeintel_coursier_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: codeintel_coursier_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
repo-updater: codeintel_coursier_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: npm invocation stats
repo-updater: codeintel_npm_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
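All of the error-rate panels in this reference share the same arithmetic: errors divided by the sum of successes and errors, scaled to a percentage. A minimal sketch of that calculation (the invocation counts are hypothetical, not real metric values):

```python
def error_rate_percent(errors: float, successes: float) -> float:
    """Error rate as a percentage, mirroring the PromQL pattern
    errors / (total + errors) * 100, where `total` counts successes only."""
    denominator = successes + errors
    if denominator == 0:
        return 0.0  # no operations in the window; PromQL would yield no sample
    return errors / denominator * 100

# e.g. 3 failed and 57 successful npm invocations in the last 5m -> 5% error rate
print(error_rate_percent(3, 57))
```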
repo-updater: codeintel_npm_total
Invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
99th percentile successful invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
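The 99th-percentile panels apply `histogram_quantile` to cumulative `le` buckets. The sketch below approximates the linear interpolation Prometheus performs, assuming well-formed buckets ending in `+Inf` (the real implementation handles more edge cases; the bucket data here is hypothetical):

```python
def histogram_quantile(q, buckets):
    """buckets: sorted list of (upper_bound, cumulative_count), ending with +Inf.
    Interpolates linearly within the bucket containing the target rank."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                # Quantile falls in the open-ended bucket: Prometheus returns
                # the upper bound of the last finite bucket.
                return prev_bound
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * fraction
        prev_bound, prev_count = bound, count

# 100 observations: 60 under 0.1s, 90 under 0.5s, all finite under 0.5s
print(histogram_quantile(0.99, [(0.1, 60), (0.5, 90), (float("inf"), 100)]))  # → 0.5
```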
repo-updater: codeintel_npm_errors_total
Invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Internal service requests
repo-updater: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="repo-updater"}[5m]))
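The `ignoring(category) group_left` clause in this query lets every per-category error series divide by the single aggregate request count while keeping its own `category` label. A sketch of the same matching over hypothetical counts:

```python
# Hypothetical per-category non-2xx counts and an aggregate request count,
# mirroring `sum by (category)(...) / ignoring(category) group_left sum(...)`.
errors_by_category = {"repos": 4, "search": 1}
total_requests = 500  # single right-hand series with no category label

# ignoring(category) group_left: each left-hand series matches the one
# aggregate series on the right, so every category gets the same denominator.
error_ratio_by_category = {
    category: errors / total_requests
    for category, errors in errors_by_category.items()
}
print(error_ratio_by_category)
```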
Repo Updater: Database connections
repo-updater: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="repo-updater"})
repo-updater: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="repo-updater"})
repo-updater: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100810
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="repo-updater"})
repo-updater: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100811
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="repo-updater"})
repo-updater: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100820
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="repo-updater"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="repo-updater"}[5m]))
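This panel divides the increase in total blocked seconds by the increase in the number of connection requests that had to wait, giving a mean wait per blocked request. A sketch with hypothetical 5m window deltas:

```python
def mean_blocked_seconds(blocked_seconds_delta: float, waited_for_delta: float) -> float:
    """Mean time a connection request spent blocked, mirroring
    increase(src_pgsql_conns_blocked_seconds[5m]) / increase(src_pgsql_conns_waited_for[5m])."""
    if waited_for_delta == 0:
        return 0.0  # nothing waited in the window; PromQL would yield no sample
    return blocked_seconds_delta / waited_for_delta

# e.g. 1.2s of total blocking spread across 240 waited-for requests in 5m
print(mean_blocked_seconds(1.2, 240))  # ≈ 0.005 seconds per request
```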
repo-updater: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100830
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="repo-updater"}[5m]))
repo-updater: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100831
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="repo-updater"}[5m]))
repo-updater: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100832
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="repo-updater"}[5m]))
Repo Updater: Container monitoring (not available on server)
repo-updater: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod repo-updater (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p repo-updater.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' repo-updater (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the repo-updater container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs repo-updater (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^repo-updater.*"}) > 60)
repo-updater: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100902
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with repo-updater issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100903
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^repo-updater.*"}[1h]) + rate(container_fs_writes_total{name=~"^repo-updater.*"}[1h]))
Repo Updater: Provisioning indicators (not available on server)
repo-updater: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[1d])
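`quantile_over_time` computes a quantile across all samples of a series in the window, which is why this long-term provisioning panel is robust to brief spikes. A sketch using linear interpolation between sorted samples, in the spirit of the Prometheus function (the CPU samples are hypothetical):

```python
def quantile_over_time(q: float, values: list) -> float:
    """Approximate q-quantile of a window of samples via linear interpolation,
    a simplified take on PromQL's quantile_over_time(0.9, ...[1d])."""
    ordered = sorted(values)
    if len(ordered) == 1:
        return float(ordered[0])
    rank = q * (len(ordered) - 1)
    lower = int(rank)
    fraction = rank - lower
    if lower + 1 == len(ordered):
        return float(ordered[lower])
    return ordered[lower] + (ordered[lower + 1] - ordered[lower]) * fraction

# CPU usage samples (%) over a day; the p90 smooths out the one 80% spike
print(quantile_over_time(0.9, [20, 25, 22, 80, 24, 21, 23, 26, 27, 28, 30]))  # → 30.0
```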
repo-updater: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[1d])
repo-updater: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[5m])
repo-updater: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[5m])
Repo Updater: Golang runtime monitoring
repo-updater: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*repo-updater"})
repo-updater: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*repo-updater"})
Repo Updater: Kubernetes monitoring (only available on Kubernetes)
repo-updater: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(app) (up{app=~".*repo-updater"}) / count by (app) (up{app=~".*repo-updater"}) * 100
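The availability percentage is simply the mean of the `up` metric (1 for a healthy target, 0 otherwise) scaled to 100. A sketch over hypothetical per-pod values:

```python
def pods_available_percentage(up_values: list) -> float:
    """sum(up) / count(up) * 100 over the pods of one app."""
    if not up_values:
        return 0.0  # no targets scraped; PromQL would yield no sample
    return sum(up_values) / len(up_values) * 100

# e.g. 3 of 4 repo-updater pods reporting up
print(pods_available_percentage([1, 1, 1, 0]))  # → 75.0
```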
Searcher
Performs unindexed searches (diff and commit search, text search for unindexed branches).
To see this dashboard, visit /-/debug/grafana/d/searcher/searcher
on your Sourcegraph instance.
searcher: unindexed_search_request_errors
Unindexed search request errors every 5m by code
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (code)(increase(searcher_service_request_total{code!="200",code!="canceled"}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
searcher: replica_traffic
Requests per second over 10m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(instance) (rate(searcher_service_request_total[10m]))
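`rate()` turns a monotonically increasing counter into a per-second rate over the window. The sketch below takes the simple first-to-last slope; the real function additionally handles counter resets and extrapolates to the window boundaries (the samples are hypothetical):

```python
def per_second_rate(samples: list) -> float:
    """Per-second rate of a monotonically increasing counter, roughly what
    rate(searcher_service_request_total[10m]) computes per instance.
    samples: list of (unix_timestamp, counter_value), oldest first."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    if t1 == t0:
        return 0.0
    return (v1 - v0) / (t1 - t0)

# Counter went from 1_000 to 1_600 requests over a 600s (10m) window
print(per_second_rate([(0, 1_000), (600, 1_600)]))  # → 1.0 request/sec
```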
Searcher: Database connections
searcher: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="searcher"})
searcher: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_open{app_name="searcher"})
searcher: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="searcher"})
searcher: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="searcher"})
searcher: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="searcher"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="searcher"}[5m]))
searcher: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100130
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="searcher"}[5m]))
searcher: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100131
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="searcher"}[5m]))
searcher: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100132
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="searcher"}[5m]))
Searcher: Internal service requests
searcher: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="searcher",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="searcher"}[5m]))
Searcher: Container monitoring (not available on server)
searcher: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod searcher (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p searcher.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' searcher (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the searcher container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs searcher (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^searcher.*"}) > 60)
searcher: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}
searcher: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}
searcher: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with searcher issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^searcher.*"}[1h]) + rate(container_fs_writes_total{name=~"^searcher.*"}[1h]))
Searcher: Provisioning indicators (not available on server)
searcher: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[5m])
searcher: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[5m])
Searcher: Golang runtime monitoring
searcher: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*searcher"})
searcher: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*searcher"})
Searcher: Kubernetes monitoring (only available on Kubernetes)
searcher: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(app) (up{app=~".*searcher"}) / count by (app) (up{app=~".*searcher"}) * 100
Symbols
Handles symbol searches for unindexed branches.
To see this dashboard, visit /-/debug/grafana/d/symbols/symbols
on your Sourcegraph instance.
Symbols: Codeintel: Symbols API
symbols: codeintel_symbols_api_total
Aggregate API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
Aggregate successful API operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_errors_total
Aggregate API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
Aggregate API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100003
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_api_total
API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
99th percentile successful API operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op,parseAmount)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_api_errors_total
API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100012
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100013
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols parser
symbols: symbols
In-flight parse jobs
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_symbols_parsing{job=~"^symbols.*"})
symbols: symbols
Parser queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_symbols_parse_queue_size{job=~"^symbols.*"})
symbols: symbols
Parse queue timeouts
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_symbols_parse_queue_timeouts_total{job=~"^symbols.*"})
symbols: symbols
Parse failures every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: rate(src_codeintel_symbols_parse_failed_total{job=~"^symbols.*"}[5m])
symbols: codeintel_symbols_parser_total
Aggregate parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
Aggregate successful parser operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_errors_total
Aggregate parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Aggregate parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_parser_total
Parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100120
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
99th percentile successful parser operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100121
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_parser_errors_total
Parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100122
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100123
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols cache janitor
symbols: symbols
Size in bytes of the on-disk cache
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: src_codeintel_symbols_store_cache_size_bytes
symbols: symbols
Cache eviction operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: rate(src_codeintel_symbols_store_evictions_total[5m])
symbols: symbols
Cache eviction operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100202
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: rate(src_codeintel_symbols_store_errors_total[5m])
Symbols: Codeintel: Symbols repository fetcher
symbols: symbols
In-flight repository fetch operations
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: src_codeintel_symbols_fetching
symbols: symbols
Repository fetch queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max(src_codeintel_symbols_fetch_queue_size{job=~"^symbols.*"})
symbols: codeintel_symbols_repository_fetcher_total
Aggregate fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
Aggregate successful fetcher operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_errors_total
Aggregate fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Aggregate fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_repository_fetcher_total
Fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100320
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
99th percentile successful fetcher operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100321
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_repository_fetcher_errors_total
Fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100322
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100323
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols gitserver client
symbols: codeintel_symbols_gitserver_total
Aggregate gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
Aggregate successful gitserver client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_errors_total
Aggregate gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Aggregate gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_gitserver_total
Gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
99th percentile successful gitserver client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_gitserver_errors_total
Gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Internal service requests
symbols: frontend_internal_api_error_responses
Frontend-internal API error responses every 5m by route
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (category)(increase(src_frontend_internal_request_duration_seconds_count{job="symbols",code!~"2.."}[5m])) / ignoring(category) group_left sum(increase(src_frontend_internal_request_duration_seconds_count{job="symbols"}[5m]))
Symbols: Container monitoring (not available on server)
symbols: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod symbols` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p symbols`.
- Docker Compose:
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' symbols` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the symbols container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs symbols` (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^symbols.*"}) > 60)
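The container-missing query flags any container whose `container_last_seen` timestamp is more than 60 seconds in the past. A minimal sketch of that check in Python (the function and sample container names are hypothetical):

```python
import time


def missing_containers(last_seen: dict[str, float], threshold_s: float = 60.0) -> list[str]:
    """Return the names of containers not seen within `threshold_s` seconds,
    mirroring the panel query (time() - container_last_seen) > 60."""
    now = time.time()
    return [name for name, seen in last_seen.items() if now - seen > threshold_s]


# Example: "symbols-0" last reported 2 minutes ago, "symbols-1" 10 seconds ago.
observed = {"symbols-0": time.time() - 120, "symbols-1": time.time() - 10}
print(missing_containers(observed))  # ['symbols-0']
```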
symbols: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}
symbols: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}
symbols: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with symbols issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^symbols.*"}[1h]) + rate(container_fs_writes_total{name=~"^symbols.*"}[1h]))
Symbols: Provisioning indicators (not available on server)
symbols: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100710
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[5m])
symbols: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100711
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[5m])
Symbols: Golang runtime monitoring
symbols: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*symbols"})
symbols: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*symbols"})
Symbols: Kubernetes monitoring (only available on Kubernetes)
symbols: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by(app) (up{app=~".*symbols"}) / count by (app) (up{app=~".*symbols"}) * 100
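The pods-available query is a simple ratio: the sum of `up` samples (1 for up, 0 for down) over the count of samples, as a percentage. A sketch of the same arithmetic (function name is illustrative):

```python
def pods_available_percentage(up_values: list[int]) -> float:
    """Mirror of sum(up) / count(up) * 100 for one app's pods.

    `up_values` holds one 0/1 sample per pod. Returns 0.0 for an empty
    input, where PromQL would return no sample.
    """
    if not up_values:
        return 0.0
    return sum(up_values) / len(up_values) * 100


# e.g. 3 of 4 symbols pods reporting up:
print(pods_available_percentage([1, 1, 1, 0]))  # 75.0
```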
Syntect Server
Handles syntax highlighting for code files.
To see this dashboard, visit /-/debug/grafana/d/syntect-server/syntect-server
on your Sourcegraph instance.
syntect-server: syntax_highlighting_errors
Syntax highlighting errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="error"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
syntect-server: syntax_highlighting_timeouts
Syntax highlighting timeouts every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="timeout"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
syntect-server: syntax_highlighting_panics
Syntax highlighting panics every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="panic"}[5m]))
syntect-server: syntax_highlighting_worker_deaths
Syntax highlighter worker deaths every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_syntax_highlighting_requests{status="hss_worker_timeout"}[5m]))
Syntect Server: Container monitoring (not available on server)
syntect-server: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using `kubectl describe pod syntect-server` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p syntect-server`.
- Docker Compose:
  - Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' syntect-server` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the syntect-server container in `docker-compose.yml`.
  - Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs syntect-server` (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^syntect-server.*"}) > 60)
syntect-server: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100102
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with syntect-server issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100103
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^syntect-server.*"}[1h]) + rate(container_fs_writes_total{name=~"^syntect-server.*"}[1h]))
Syntect Server: Provisioning indicators (not available on server)
syntect-server: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[5m])
syntect-server: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[5m])
Syntect Server: Kubernetes monitoring (only available on Kubernetes)
syntect-server: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Core application team.
Technical details
Query: sum by(app) (up{app=~".*syntect-server"}) / count by (app) (up{app=~".*syntect-server"}) * 100
Zoekt
Indexes repositories, populates the search index, and responds to indexed search queries.
To see this dashboard, visit /-/debug/grafana/d/zoekt/zoekt
on your Sourcegraph instance.
zoekt: total_repos_aggregate
Total number of repos (aggregate)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "assigned" and "tracked" could indicate a bug.
Legend:
- assigned: # of repos assigned to Zoekt
- indexed: # of repos Zoekt has indexed
- tracked: # of repos Zoekt is aware of, including those that it has finished indexing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_num_assigned)
zoekt: total_repos_per_instance
Total number of repos (per instance)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "assigned" and "tracked" could indicate a bug.
Legend:
- assigned: # of repos assigned to Zoekt
- indexed: # of repos Zoekt has indexed
- tracked: # of repos Zoekt is aware of, including those that it has finished indexing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (instance) (index_num_assigned{instance=~`${instance:regex}`})
zoekt: repo_index_success_speed
Successful indexing durations
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100010
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (le, state) (increase(index_repo_seconds_bucket{state="success"}[$__rate_interval]))
zoekt: repo_index_fail_speed
Failed indexing durations
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100011
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (le, state) (increase(index_repo_seconds_bucket{state="fail"}[$__rate_interval]))
zoekt: repos_stopped_tracking_total_aggregate
The number of repositories we stopped tracking over 5m (aggregate)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100020
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(increase(index_num_stopped_tracking_total[5m]))
zoekt: repos_stopped_tracking_total_per_instance
The number of repositories we stopped tracking over 5m (per instance)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100021
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (instance) (increase(index_num_stopped_tracking_total{instance=~`${instance:regex}`}[5m]))
zoekt: average_resolve_revision_duration
Average resolve revision duration over 5m
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100030
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(rate(resolve_revision_seconds_sum[5m])) / sum(rate(resolve_revision_seconds_count[5m]))
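This panel divides the rate of the histogram's `_sum` series by the rate of its `_count` series: total seconds spent resolving revisions divided by the number of resolutions, i.e. the mean duration over the window. A sketch of that arithmetic (function name is illustrative):

```python
def average_duration(sum_increase_s: float, count_increase: float) -> float:
    """Mirror of rate(..._sum) / rate(..._count): mean observed duration.

    `sum_increase_s` is the total seconds accumulated in the window and
    `count_increase` the number of observations. Returns 0.0 for an empty
    window, where PromQL would return no sample.
    """
    if count_increase == 0:
        return 0.0
    return sum_increase_s / count_increase


# e.g. 12 seconds spent across 48 resolve-revision calls:
print(average_duration(12.0, 48.0))  # 0.25
```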
zoekt: get_index_options_error_increase
The number of repositories we failed to get indexing options over 5m
When considering indexing a repository, we ask the frontend for the index configuration of each repository. The most likely reason this would fail is failing to resolve branch names to git SHAs.
This value can spike up during deployments/etc. Only if you encounter sustained periods of errors is there an underlying issue. When sustained this indicates repositories will not get updated indexes.
Refer to the alert solutions reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100031
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(increase(get_index_options_error_total[5m]))
Zoekt: Search requests
zoekt: indexed_search_request_errors
Indexed search request errors every 5m by code
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (code)(increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
Zoekt: Git fetch durations
zoekt: 90th_percentile_successful_git_fetch_durations_5m
90th percentile successful git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="true"}[5m])))
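Several panels in this document estimate percentiles with `histogram_quantile`, which linearly interpolates within the cumulative bucket that contains the target rank. A simplified Python sketch of that interpolation (edge-case handling is approximate; see the Prometheus documentation for the authoritative behavior):

```python
def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """Approximate Prometheus histogram_quantile.

    `buckets` is a sorted list of (le_upper_bound, cumulative_count) pairs
    ending with an le of +Inf. Interpolates linearly within the bucket
    containing rank q * total, assuming non-negative observations.
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # fall back to the highest finite bound
            in_bucket = count - prev_count
            if in_bucket == 0:
                return bound
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / in_bucket
        prev_bound, prev_count = bound, count
    return prev_bound


# e.g. buckets le=1s (50 obs), le=5s (90 obs), le=+Inf (100 obs):
print(histogram_quantile(0.90, [(1.0, 50), (5.0, 90), (float("inf"), 100)]))  # 5.0
```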
zoekt: 90th_percentile_failed_git_fetch_durations_5m
90th percentile failed git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="false"}[5m])))
Zoekt: Indexing results
zoekt: repo_index_state_aggregate
Index results state count over 5m (aggregate)
This dashboard shows the outcomes of recently completed indexing jobs across all index-server instances.
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing jobs failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (state) (increase(index_repo_seconds_count[5m]))
zoekt: repo_index_state_per_instance
Index results state count over 5m (per instance)
This dashboard shows the outcomes of recently completed indexing jobs, split out across each index-server instance.
(You can use the "instance" filter at the top of the page to select a particular instance.)
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing jobs failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (instance, state) (increase(index_repo_seconds_count{instance=~`${instance:regex}`}[5m]))
Zoekt: Indexing queue statistics
zoekt: indexed_num_scheduled_jobs_aggregate
# scheduled index jobs (aggregate)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_queue_len)
zoekt: indexed_num_scheduled_jobs_per_instance
# scheduled index jobs (per instance)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: index_queue_len{instance=~`${instance:regex}`}
zoekt: indexed_queueing_delay_heatmap
Job queuing delay heatmap
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (le) (increase(index_queue_age_seconds_bucket[$__rate_interval]))
zoekt: indexed_queueing_delay_p99_9
99.9th percentile job queuing delay over 5m
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
The 99.9th percentile is useful for capturing the long tail of queueing delays (on the order of 24+ hours, etc.).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100420
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: histogram_quantile(0.999, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
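histogram_quantile estimates the requested quantile by finding the cumulative "le" bucket that contains the target rank and interpolating linearly within it. A simplified Python sketch of that estimation (the bucket bounds and counts below are hypothetical, and handling of the +Inf bucket is omitted):

```python
# Simplified sketch of Prometheus's histogram_quantile: find the
# cumulative "le" bucket containing the target rank and interpolate
# linearly within it.
def histogram_quantile(q, buckets):
    """buckets: sorted list of (upper_bound, cumulative_count)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Fraction of this bucket's observations below the target rank.
            frac = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Hypothetical queue-age buckets (seconds): 90 of 100 jobs waited <= 60s.
buckets = [(1, 10), (10, 60), (60, 90), (600, 100)]
print(histogram_quantile(0.99, buckets))  # -> 546.0
```

The real function additionally treats the +Inf bucket and empty buckets specially; this sketch only shows the interpolation idea.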
zoekt: indexed_queueing_delay_p90
90th percentile job queueing delay over 5m
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100421
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: histogram_quantile(0.90, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p75
75th percentile job queueing delay over 5m
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation: each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100422
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: histogram_quantile(0.75, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
Zoekt: Compound shards (experimental)
zoekt: compound_shards_aggregate
# of compound shards (aggregate)
The total number of compound shards aggregated over all instances.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_number_compound_shards) by (app)
zoekt: compound_shards_per_instance
# of compound shards (per instance)
The total number of compound shards per instance.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_number_compound_shards{instance=~`${instance:regex}`}) by (instance)
zoekt: average_shard_merging_duration_success
Average successful shard merging duration over 1 hour
Average duration of a successful merge over the last hour.
The duration depends on the target compound shard size: the larger the compound shard, the longer a merge will take. Since the target compound shard size is set when zoekt-indexserver starts, the average duration should be consistent.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(rate(index_shard_merging_duration_seconds_sum{error="false"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="false"}[1h]))
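The average comes from dividing the rate of the histogram's _sum counter by the rate of its _count counter over the same window. A minimal Python sketch of that calculation, using hypothetical counter samples taken one hour apart:

```python
# Sketch: recover an average duration from a histogram's two counters,
# mirroring sum(rate(..._sum[1h])) / sum(rate(..._count[1h])).
def average_duration(sum_start, sum_end, count_start, count_end):
    merges = count_end - count_start      # merges finished in the window
    if merges == 0:
        return None                       # no merges; PromQL yields no data
    total_seconds = sum_end - sum_start   # seconds spent merging in the window
    return total_seconds / merges

# Hypothetical samples one hour apart: 4 merges taking 7200s in total.
print(average_duration(10_000.0, 17_200.0, 20, 24))  # -> 1800.0
```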
zoekt: average_shard_merging_duration_error
Average failed shard merging duration over 1 hour
Average duration of a failed merge over the last hour.
This curve should be flat. Any deviation should be investigated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(rate(index_shard_merging_duration_seconds_sum{error="true"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="true"}[1h]))
zoekt: shard_merging_errors_aggregate
Number of errors during shard merging (aggregate)
Number of errors during shard merging aggregated over all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100520
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_shard_merging_duration_seconds_count{error="true"}) by (app)
zoekt: shard_merging_errors_per_instance
Number of errors during shard merging (per instance)
Number of errors during shard merging per instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100521
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(index_shard_merging_duration_seconds_count{instance=~`${instance:regex}`, error="true"}) by (instance)
zoekt: shard_merging_merge_running_per_instance
If shard merging is running (per instance)
Set to 1 if shard merging is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100530
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max by (instance) (index_shard_merging_running{instance=~`${instance:regex}`})
zoekt: shard_merging_vacuum_running_per_instance
If vacuum is running (per instance)
Set to 1 if vacuum is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100531
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max by (instance) (index_vacuum_running{instance=~`${instance:regex}`})
Zoekt: Network I/O pod metrics (only available on Kubernetes)
zoekt: network_sent_bytes_aggregate
Transmission rate over 5m (aggregate)
The rate of bytes sent over the network across all Zoekt pods
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
zoekt: network_received_packets_per_instance
Transmission rate over 5m (per instance)
The amount of bytes sent over the network by individual Zoekt pods
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_received_bytes_aggregate
Receive rate over 5m (aggregate)
The amount of bytes received from the network across Zoekt pods
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum(rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
zoekt: network_received_bytes_per_instance
Receive rate over 5m (per instance)
The amount of bytes received from the network by individual Zoekt pods
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_dropped_by_instance
Transmit packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100620
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_errors_per_instance
Errors encountered while transmitting over 5m (per instance)
An increase in transmission errors could indicate a networking issue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100621
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_received_packets_dropped_by_instance
Receive packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100622
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_errors_by_instance
Errors encountered while receiving over 5m (per instance)
An increase in errors while receiving could indicate a networking issue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100623
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
Zoekt: [zoekt-indexserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-indexserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-indexserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-indexserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-indexserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-indexserver (note this will include logs from the previous and currently running container).
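The Docker Compose check can also be scripted; below is a small Python sketch that inspects the JSON printed by docker inspect -f '{{json .State}}' (the sample payload is illustrative, not real output):

```python
import json

# Sketch: decide whether a container was OOM killed from the output of
# `docker inspect -f '{{json .State}}' zoekt-indexserver`.
def was_oom_killed(state_json):
    state = json.loads(state_json)
    return bool(state.get("OOMKilled", False))

# Illustrative .State payload; real output carries more fields.
sample = '{"Status": "exited", "OOMKilled": true, "ExitCode": 137}'
print(was_oom_killed(sample))  # -> True
```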
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100700
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^zoekt-indexserver.*"}) > 60)
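The query flags a container once its last_seen timestamp is more than 60 seconds old. The same logic in a short Python sketch over hypothetical timestamps:

```python
# Sketch of count by(name) ((time() - container_last_seen{...}) > 60):
# report containers not seen for more than a minute.
def missing_containers(last_seen, now, threshold=60.0):
    return {name: now - ts for name, ts in last_seen.items()
            if now - ts > threshold}

now = 1_700_000_000.0
last_seen = {
    "zoekt-indexserver-0": now - 5,    # healthy, scraped 5s ago
    "zoekt-indexserver-1": now - 300,  # not seen for 5 minutes
}
print(missing_containers(last_seen, now))  # -> {'zoekt-indexserver-1': 300.0}
```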
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100701
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100702
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-indexserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100703
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^zoekt-indexserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-indexserver.*"}[1h]))
Zoekt: [zoekt-webserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-webserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-webserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-webserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-webserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-webserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100800
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^zoekt-webserver.*"}) > 60)
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100801
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100802
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-webserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100803
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^zoekt-webserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-webserver.*"}[1h]))
Zoekt: [zoekt-indexserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100910
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100911
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
Zoekt: [zoekt-webserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101000
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101001
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101010
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101011
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
Zoekt: Kubernetes monitoring (only available on Kubernetes)
zoekt: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101100
on your Sourcegraph instance.
Managed by the Sourcegraph Search-core team.
Technical details
Query: sum by(app) (up{app=~".*indexed-search"}) / count by (app) (up{app=~".*indexed-search"}) * 100
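The percentage divides the sum of up samples (1 if a pod's metrics endpoint answered the scrape, 0 otherwise) by the number of samples per app. A minimal Python sketch with hypothetical scrape results:

```python
# Sketch of sum by(app) (up) / count by(app) (up) * 100: each `up`
# sample is 1 if a pod's metrics endpoint answered the scrape, else 0.
def pods_available_percentage(up_samples):
    return {app: 100.0 * sum(vals) / len(vals)
            for app, vals in up_samples.items()}

# Hypothetical scrape results: 3 of 4 indexed-search pods are up.
print(pods_available_percentage({"indexed-search": [1, 1, 1, 0]}))  # -> {'indexed-search': 75.0}
```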
Prometheus
Sourcegraph's all-in-one Prometheus and Alertmanager service.
To see this dashboard, visit /-/debug/grafana/d/prometheus/prometheus
on your Sourcegraph instance.
Prometheus: Metrics
prometheus: prometheus_rule_eval_duration
Average prometheus rule group evaluation duration over 10m by rule group
A high value here indicates Prometheus rule evaluation is taking longer than expected. It might indicate that certain rule groups are taking too long to evaluate, or Prometheus is underprovisioned.
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(rule_group) (avg_over_time(prometheus_rule_group_last_duration_seconds[10m]))
prometheus: prometheus_rule_eval_failures
Failed prometheus rule evaluations over 5m by rule group
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))
Prometheus: Alerts
prometheus: alertmanager_notification_latency
Alertmanager notification latency over 1m by integration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(integration) (rate(alertmanager_notification_latency_seconds_sum[1m]))
prometheus: alertmanager_notification_failures
Failed alertmanager notifications over 1m by integration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100101
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(integration) (rate(alertmanager_notifications_failed_total[1m]))
Prometheus: Internals
prometheus: prometheus_config_status
Prometheus configuration reload status
A value of 1 indicates Prometheus reloaded its configuration successfully.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: prometheus_config_last_reload_successful
prometheus: alertmanager_config_status
Alertmanager configuration reload status
A value of 1 indicates Alertmanager reloaded its configuration successfully.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: alertmanager_config_last_reload_successful
prometheus: prometheus_tsdb_op_failure
Prometheus tsdb failures over 1m by operation
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100210
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: increase(label_replace({__name__=~"prometheus_tsdb_(.*)_failed_total"}, "operation", "$1", "__name__", "(.+)s_failed_total")[5m:1m])
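The label_replace step derives an operation label from metric names matching (.+)s_failed_total. A Python sketch of that extraction (label_replace fully anchors its regex, mirrored here with re.fullmatch):

```python
import re

# Sketch of the label_replace step: derive an "operation" label from a
# metric name matching (.+)s_failed_total. label_replace fully anchors
# its regex, which re.fullmatch mirrors.
def operation_label(metric_name):
    m = re.fullmatch(r"(.+)s_failed_total", metric_name)
    return m.group(1) if m else None

print(operation_label("prometheus_tsdb_compactions_failed_total"))
# -> prometheus_tsdb_compaction
```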
prometheus: prometheus_target_sample_exceeded
Prometheus scrapes that exceed the sample limit over 10m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100211
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: increase(prometheus_target_scrapes_exceeded_sample_limit_total[10m])
prometheus: prometheus_target_sample_duplicate
Prometheus scrapes rejected due to duplicate timestamps over 10m
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100212
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: increase(prometheus_target_scrapes_sample_duplicate_timestamp_total[10m])
Prometheus: Container monitoring (not available on server)
prometheus: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod prometheus (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p prometheus.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' prometheus (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the prometheus container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs prometheus (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: count by(name) ((time() - container_last_seen{name=~"^prometheus.*"}) > 60)
prometheus: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}
prometheus: container_memory_usage
Container memory usage by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}
prometheus: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with prometheus issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(name) (rate(container_fs_reads_total{name=~"^prometheus.*"}[1h]) + rate(container_fs_writes_total{name=~"^prometheus.*"}[1h]))
Prometheus: Provisioning indicators (not available on server)
prometheus: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[5m])
prometheus: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[5m])
Prometheus: Kubernetes monitoring (only available on Kubernetes)
prometheus: pods_available_percentage
Percentage pods available
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Devops team.
Technical details
Query: sum by(app) (up{app=~".*prometheus"}) / count by (app) (up{app=~".*prometheus"}) * 100
Executor
Executes jobs in an isolated environment.
To see this dashboard, visit /-/debug/grafana/d/executor/executor
on your Sourcegraph instance.
Executor: Executor jobs
executor: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100000
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by (queue)(src_executor_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
executor: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.
- A value < 1 indicates that the process rate exceeds the enqueue rate
- A value = 1 indicates that the process rate equals the enqueue rate
- A value > 1 indicates that the process rate is below the enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100001
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (queue)(increase(src_executor_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
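The ratio interpretation above can be expressed as a small helper (hypothetical, for illustration only):

```python
def classify_queue_growth(enqueued: float, processed: float) -> str:
    """Classify a queue from enqueued/processed counts over the same window.

    Mirrors the panel's ratio: > 1 means the queue is growing,
    < 1 means it is draining, and 1 means it is holding steady.
    """
    if processed == 0:
        return "growing" if enqueued > 0 else "idle"
    ratio = enqueued / processed
    if ratio > 1:
        return "growing"
    if ratio < 1:
        return "draining"
    return "steady"
```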
executor: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100002
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by (queue)(src_executor_queued_duration_seconds_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Executor: Executor jobs
executor: executor_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100100
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(src_executor_processor_handlers{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"})
executor: executor_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100110
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_executor_processor_total{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: executor_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100111
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_executor_processor_duration_seconds_bucket{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: executor_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100112
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: executor_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100113
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum(increase(src_executor_processor_total{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
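The error-rate panels in this dashboard all share the same shape: errors divided by (operations + errors), scaled to a percentage. The denominator adds the error count back in because the `src_*_total` series appears to count only non-erroring operations. A sketch of the arithmetic:

```python
def error_rate_percent(operations: float, errors: float) -> float:
    """errors / (operations + errors) * 100, as in the panel query."""
    denominator = operations + errors
    if denominator == 0:
        return 0.0  # the PromQL division produces no sample here
    return errors / denominator * 100
```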
Executor: Run lock contention
executor: executor_run_lock_wait_total
Milliseconds wait every 5m
Number of milliseconds spent waiting for the run lock every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100200
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_executor_run_lock_wait_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: executor_run_lock_held_total
Milliseconds held every 5m
Number of milliseconds spent holding the run lock every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100201
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_executor_run_lock_held_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
Executor: Queue API client
executor: apiworker_apiclient_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100300
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_apiclient_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_apiclient_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100301
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_apiworker_apiclient_duration_seconds_bucket{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_apiclient_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100302
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_apiclient_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100303
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum(increase(src_apiworker_apiclient_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
executor: apiworker_apiclient_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100310
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_apiclient_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100311
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_apiclient_duration_seconds_bucket{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])))
executor: apiworker_apiclient_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100312
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_apiclient_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100313
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_errors_total{job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
Executor: Job setup
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100400
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100401
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100402
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100403
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100410
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100411
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100412
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100413
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
Executor: Job execution
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100500
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100501
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100502
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100503
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100510
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100511
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100512
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100513
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
Executor: Job teardown
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100600
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100601
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100602
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100603
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100610
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100611
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: histogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100612
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100613
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors).*"}[5m]))) * 100
Executor: Compute instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_cpu_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",mode!~"(idle|iowait)",instance=~"$instance"}[$__rate_interval])) by(instance) / count(node_cpu_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",mode="system",instance=~"$instance"}) by (instance) * 100
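The query divides non-idle, non-iowait CPU seconds per second by the core count, so a fully busy 4-core machine reads 100% rather than 400%. A sketch of the normalization, with hypothetical inputs:

```python
def cpu_utilization_percent(busy_cpu_seconds_per_second: float, cores: int) -> float:
    """Normalize summed busy CPU time across all cores to a 0-100% scale."""
    return busy_cpu_seconds_per_second / cores * 100
```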
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This can only ever represent less than all processes' time, because for processes to be waiting for CPU time there must be other process(es) consuming CPU time.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_cpu_waiting_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates the percentage of memory in use, treating cache and buffers as available. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query: (1 - sum(node_memory_MemAvailable_bytes{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}) by (instance) / sum(node_memory_MemTotal_bytes{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}) by (instance)) * 100
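The query computes `(1 - MemAvailable/MemTotal) * 100`; a sketch of the same arithmetic:

```python
def memory_utilization_percent(mem_available_bytes: float, mem_total_bytes: float) -> float:
    """Percentage of memory in use, treating MemAvailable as free."""
    return (1 - mem_available_bytes / mem_total_bytes) * 100
```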
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes near or above 100%, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned because there is no memory pressure. Sustained numbers above ~100% may be a sign of imminent memory exhaustion, while sustained figures between 0% and ~100% are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query: (rate(node_vmstat_pgsteal_anon{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) * 100
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_memory_stalled_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but SSDs and RAID arrays are capable of serving multiple requests in parallel, so other metrics such as throughput and request queue size should be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(instance,disk) * 100
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing any faulty drive(s).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100721
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100722
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_io_stalled_seconds_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100730
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_bytes_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance) * 8
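`node_network_receive_bytes_total` counts bytes, so the query multiplies by 8 to display bits per second, the usual unit for link bandwidth. A sketch of the conversion:

```python
def throughput_bits_per_second(bytes_per_second: float) -> float:
    """Convert a byte-rate to a bit-rate, as the panel's `* 8` does."""
    return bytes_per_second * 8
```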
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100731
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_drop_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100732
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_errs_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100740
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_bytes_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, network link congestion, etc.

This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100741
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_drop_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100742
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_errs_total{job=~"(sourcegraph-code-intel-indexer-nodes|sourcegraph-executor-nodes)",instance=~"$instance"}[$__rate_interval])) by(instance)
Executor: Docker Registry Mirror instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_cpu_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",mode!~"(idle|iowait)",instance=~".*"}[$__rate_interval])) by(instance) / count(node_cpu_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",mode="system",instance=~".*"}) by (instance) * 100
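The query sums the per-second CPU time spent in all modes except idle and iowait, then divides by the core count to express busy time as a percentage. As an illustration only, a sketch of the same calculation over hypothetical per-mode rates:

```python
def cpu_utilization_pct(mode_rates: dict[str, float], num_cores: int) -> float:
    """Mirror the panel query's arithmetic: sum CPU seconds-per-second
    across all modes except idle/iowait, divide by core count, and
    express the result as a percentage."""
    busy = sum(rate for mode, rate in mode_rates.items()
               if mode not in ("idle", "iowait"))
    return busy / num_cores * 100

# 4 cores spending a combined 2.0 CPU-seconds/sec in user+system -> 50%
rates = {"user": 1.5, "system": 0.5, "idle": 1.8, "iowait": 0.2}
print(cpu_utilization_pct(rates, 4))  # 50.0
```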
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This only ever represents some-but-not-all processes, because for processes to be waiting for CPU time there must be other process(es) consuming that CPU time.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_cpu_waiting_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates memory utilization (total minus available memory, where available includes cache and buffers) as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query: (1 - sum(node_memory_MemAvailable_bytes{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}) by (instance) / sum(node_memory_MemTotal_bytes{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}) by (instance)) * 100
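The query computes `(1 - MemAvailable/MemTotal) * 100`. As an illustration only, the same formula over hypothetical byte values:

```python
def memory_utilization_pct(mem_available_bytes: int, mem_total_bytes: int) -> float:
    """(1 - MemAvailable/MemTotal) * 100, as in the panel query."""
    return (1 - mem_available_bytes / mem_total_bytes) * 100

# 6 GiB available out of 16 GiB total -> 62.5% utilized
print(memory_utilization_pct(6 * 2**30, 16 * 2**30))  # 62.5
```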
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained figures near or above 100% may be a sign of imminent memory exhaustion, while sustained figures between 0% and ~100% are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query: (rate(node_vmstat_pgsteal_anon{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) * 100
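The query sums the per-second rates of the four pgsteal counters (anon, direct, file, kswapd) and divides by the summed rates of the corresponding pgscan counters. As an illustration only, the ratio over hypothetical rates:

```python
def vmem_efficiency_pct(pgsteal_rates: list[float], pgscan_rates: list[float]) -> float:
    """pgsteal/pgscan * 100: the fraction of scanned pages that were
    actually reclaimed. Each list holds per-second rates for the
    anon/direct/file/kswapd counters."""
    return sum(pgsteal_rates) / sum(pgscan_rates) * 100

# 90 pages reclaimed out of 100 scanned per second -> ~90% efficiency
print(vmem_efficiency_pct([50, 10, 20, 10], [60, 10, 20, 10]))
```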
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_memory_stalled_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case: since they are capable of serving multiple requests in parallel, other metrics such as throughput and request queue size should also be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100820
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(instance,disk) * 100
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing the faulty drive(s), if any.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100821
on your Sourcegraph instance.
Technical details
Query: sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. periods during which no process could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100822
on your Sourcegraph instance.
Technical details
Query: rate(node_pressure_io_stalled_seconds_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100830
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_bytes_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance) * 8
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100831
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_drop_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100832
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_receive_errs_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100840
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_bytes_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, network link congestion, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100841
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_drop_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100842
on your Sourcegraph instance.
Technical details
Query: sum(rate(node_network_transmit_errs_total{job=~"(sourcegraph-code-intel-indexer-docker-registry-mirror-nodes|sourcegraph-executors-docker-registry-mirror-nodes)",instance=~".*"}[$__rate_interval])) by(instance)
Executor: Golang runtime monitoring
executor: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100900
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_goroutines{job=~".*(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors)"})
executor: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alert solutions reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100901
on your Sourcegraph instance.
Managed by the Sourcegraph Code-intel team.
Technical details
Query: max by(instance) (go_gc_duration_seconds{job=~".*(executor|sourcegraph-code-intel-indexers|executor-batches|sourcegraph-executors)"})