This document describes possible solutions for alerts that fire in Sourcegraph's monitoring. If your alert isn't listed here, or if the suggested solution doesn't help, contact us for assistance.
## warning_frontend_99th_percentile_search_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 20,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_90th_percentile_search_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 15,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_search_alert_user_suggestions

Possible solutions:

## warning_frontend_99th_percentile_search_codeintel_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 20,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_90th_percentile_search_codeintel_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 15,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_search_codeintel_alert_user_suggestions

Possible solutions:

## warning_frontend_99th_percentile_search_api_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 20,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- If your users are requesting many results with a large `count:` parameter, consider using our search pagination API.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_90th_percentile_search_api_request_duration

Possible solutions:

- Set `"observability.logSlowSearches": 15,` in the site configuration, then look for `frontend` warning logs prefixed with `slow search request` for additional details.
- If your users are requesting many results with a large `count:` parameter, consider using our search pagination API.
- Kubernetes: consider increasing CPU limits in `indexed-search.Deployment.yaml` if regularly hitting max CPU utilization.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml` if regularly hitting max CPU utilization.

## warning_frontend_search_api_alert_user_suggestions

Possible solutions:
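The `observability.logSlowSearches` setting referenced above lives in the site configuration; a minimal sketch (using the 20-second threshold from the 99th-percentile alerts) looks like:

```json
{
  // Log any search request slower than 20 seconds
  // (use 15 when investigating the 90th-percentile alerts).
  "observability.logSlowSearches": 20
}
```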
## warning_frontend_internal_indexed_search_error_responses

Possible solutions:

## warning_frontend_internal_unindexed_search_error_responses

Possible solutions:

## warning_frontend_internal_api_error_responses

Possible solutions:

- Check the `frontend` logs for potential causes.
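When checking logs for the error-response alerts above, a small filter can help surface the relevant lines. This is a sketch; the pod and container names in the usage comments are examples, so substitute the ones from your deployment:

```shell
# Keep only log lines that mention the frontend or frontend-internal
# services, which is where internal API request failures show up.
filter_frontend_failures() {
  grep -E 'frontend(-internal)?'
}

# Docker Compose:
#   docker logs gitserver 2>&1 | filter_frontend_failures
# Kubernetes:
#   kubectl logs gitserver-0 | filter_frontend_failures
```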
## warning_frontend_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod frontend` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p frontend`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' frontend` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the frontend container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs frontend` (note this will include logs from the previous and currently running container).

## warning_frontend_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the frontend container in `docker-compose.yml`.

## warning_frontend_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the frontend container in `docker-compose.yml`.

## warning_frontend_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the frontend container in `docker-compose.yml`.

## warning_frontend_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the frontend container in `docker-compose.yml`.

## warning_frontend_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the frontend container in `docker-compose.yml`.

## warning_frontend_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the frontend container in `docker-compose.yml`.
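As a sketch, the memory and CPU limits discussed above live in the container's `resources` stanza of the relevant `Deployment.yaml`; the values below are illustrative, not recommendations:

```yaml
containers:
  - name: frontend
    resources:
      limits:
        cpu: "2"    # raise if regularly hitting max CPU utilization
        memory: 4G  # raise if the container is hitting the limit / OOMKilled
      requests:
        cpu: "2"
        memory: 2G
```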
## gitserver: disk space remaining

Descriptions:

- gitserver: less than 25% disk space remaining by instance (`warning_gitserver_disk_space_remaining`)
- gitserver: less than 15% disk space remaining by instance (`critical_gitserver_disk_space_remaining`)

Possible solutions:

## gitserver: running git commands

Descriptions:

- gitserver: 50+ running git commands (signals load) (`warning_gitserver_running_git_commands`)
- gitserver: 100+ running git commands (signals load) (`critical_gitserver_running_git_commands`)

Possible solutions:

## warning_gitserver_repository_clone_queue_size

Possible solutions:

## warning_gitserver_repository_existence_check_queue_size

Possible solutions:

## gitserver: echo command duration test

Descriptions:

- gitserver: 1s+ echo command duration test (`warning_gitserver_echo_command_duration_test`)
- gitserver: 2s+ echo command duration test (`critical_gitserver_echo_command_duration_test`)

Possible solutions:

## warning_gitserver_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs gitserver` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs gitserver` for logs indicating request failures to `frontend` or `frontend-internal`.
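The disk-space alerts in this section can be checked by hand. This sketch assumes a repository volume path; point it at whatever path your deployment actually mounts for gitserver data:

```shell
# Print the percentage of disk used (without the % sign) for the
# filesystem that holds the given path.
check_disk_pct_used() {
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

# e.g. check_disk_pct_used /data/repos   # path is an assumption
```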
## warning_gitserver_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod gitserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p gitserver`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' gitserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the gitserver container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs gitserver` (note this will include logs from the previous and currently running container).

## warning_gitserver_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the gitserver container in `docker-compose.yml`.

## warning_gitserver_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.

## warning_gitserver_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.

## warning_gitserver_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the gitserver container in `docker-compose.yml`.

## warning_gitserver_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the gitserver container in `docker-compose.yml`.

## warning_gitserver_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the gitserver container in `docker-compose.yml`.
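For Docker Compose deployments, the `cpus:` and memory settings discussed above look roughly like this sketch (the service name and values are illustrative, and the exact key names may vary with your Compose file format):

```yaml
services:
  gitserver-0:
    cpus: 4          # raise if regularly hitting max CPU utilization
    mem_limit: '8g'  # raise if the container is OOM killed
```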
## warning_github-proxy_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod github-proxy` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p github-proxy`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' github-proxy` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the github-proxy container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs github-proxy` (note this will include logs from the previous and currently running container).

## warning_github-proxy_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the github-proxy container in `docker-compose.yml`.

## warning_github-proxy_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the github-proxy container in `docker-compose.yml`.

## warning_github-proxy_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the github-proxy container in `docker-compose.yml`.

## warning_github-proxy_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the github-proxy container in `docker-compose.yml`.

## warning_github-proxy_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the github-proxy container in `docker-compose.yml`.

## warning_github-proxy_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the github-proxy container in `docker-compose.yml`.
## precise-code-intel-bundle-manager: disk space remaining

Descriptions:

- precise-code-intel-bundle-manager: less than 25% disk space remaining by instance (`warning_precise-code-intel-bundle-manager_disk_space_remaining`)
- precise-code-intel-bundle-manager: less than 15% disk space remaining by instance (`critical_precise-code-intel-bundle-manager_disk_space_remaining`)

Possible solutions:

## warning_precise-code-intel-bundle-manager_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs precise-code-intel-bundle-manager` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs precise-code-intel-bundle-manager` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_precise-code-intel-bundle-manager_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod precise-code-intel-bundle-manager` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p precise-code-intel-bundle-manager`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' precise-code-intel-bundle-manager` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the precise-code-intel-bundle-manager container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs precise-code-intel-bundle-manager` (note this will include logs from the previous and currently running container).

## warning_precise-code-intel-bundle-manager_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.

## warning_precise-code-intel-bundle-manager_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.

## warning_precise-code-intel-bundle-manager_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.

## warning_precise-code-intel-bundle-manager_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.

## warning_precise-code-intel-bundle-manager_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.

## warning_precise-code-intel-bundle-manager_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-bundle-manager container in `docker-compose.yml`.
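The `docker inspect` OOM check described above can be wrapped in a tiny helper. This is a sketch; the sample JSON in the usage comment is illustrative, not taken from a real container:

```shell
# Return success if a container's state JSON reports an OOM kill.
# Expects the output of: docker inspect -f '{{json .State}}' <container>
was_oom_killed() {
  case "$1" in
    *'"OOMKilled":true'*) return 0 ;;
    *) return 1 ;;
  esac
}

# e.g. was_oom_killed "$(docker inspect -f '{{json .State}}' precise-code-intel-bundle-manager)"
```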
## warning_precise-code-intel-worker_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs precise-code-intel-worker` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs precise-code-intel-worker` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_precise-code-intel-worker_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod precise-code-intel-worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p precise-code-intel-worker`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' precise-code-intel-worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs precise-code-intel-worker` (note this will include logs from the previous and currently running container).

## warning_precise-code-intel-worker_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-worker container in `docker-compose.yml`.

## warning_precise-code-intel-worker_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.

## warning_precise-code-intel-worker_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.

## warning_precise-code-intel-worker_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-worker container in `docker-compose.yml`.

## warning_precise-code-intel-worker_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-worker container in `docker-compose.yml`.

## warning_precise-code-intel-worker_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-worker container in `docker-compose.yml`.
## warning_precise-code-intel-indexer_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs precise-code-intel-indexer` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs precise-code-intel-indexer` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_precise-code-intel-indexer_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod precise-code-intel-indexer` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p precise-code-intel-indexer`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' precise-code-intel-indexer` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the precise-code-intel-indexer container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs precise-code-intel-indexer` (note this will include logs from the previous and currently running container).

## warning_precise-code-intel-indexer_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-indexer container in `docker-compose.yml`.

## warning_precise-code-intel-indexer_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-indexer container in `docker-compose.yml`.

## warning_precise-code-intel-indexer_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-indexer container in `docker-compose.yml`.

## warning_precise-code-intel-indexer_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-indexer container in `docker-compose.yml`.

## warning_precise-code-intel-indexer_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the precise-code-intel-indexer container in `docker-compose.yml`.

## warning_precise-code-intel-indexer_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the precise-code-intel-indexer container in `docker-compose.yml`.
## warning_query-runner_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs query-runner` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs query-runner` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_query-runner_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod query-runner` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p query-runner`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' query-runner` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the query-runner container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs query-runner` (note this will include logs from the previous and currently running container).

## warning_query-runner_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the query-runner container in `docker-compose.yml`.

## warning_query-runner_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the query-runner container in `docker-compose.yml`.

## warning_query-runner_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the query-runner container in `docker-compose.yml`.

## warning_query-runner_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the query-runner container in `docker-compose.yml`.

## warning_query-runner_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the query-runner container in `docker-compose.yml`.

## warning_query-runner_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the query-runner container in `docker-compose.yml`.
## warning_replacer_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs replacer` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs replacer` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_replacer_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod replacer` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p replacer`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' replacer` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the replacer container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs replacer` (note this will include logs from the previous and currently running container).

## warning_replacer_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the replacer container in `docker-compose.yml`.

## warning_replacer_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the replacer container in `docker-compose.yml`.

## warning_replacer_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the replacer container in `docker-compose.yml`.

## warning_replacer_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the replacer container in `docker-compose.yml`.

## warning_replacer_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the replacer container in `docker-compose.yml`.

## warning_replacer_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the replacer container in `docker-compose.yml`.
## warning_repo-updater_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs repo-updater` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs repo-updater` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_repo-updater_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod repo-updater` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p repo-updater`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' repo-updater` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the repo-updater container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs repo-updater` (note this will include logs from the previous and currently running container).

## warning_repo-updater_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the repo-updater container in `docker-compose.yml`.

## warning_repo-updater_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.

## warning_repo-updater_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.

## warning_repo-updater_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the repo-updater container in `docker-compose.yml`.

## warning_repo-updater_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the repo-updater container in `docker-compose.yml`.

## warning_repo-updater_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the repo-updater container in `docker-compose.yml`.
## warning_searcher_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs searcher` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs searcher` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_searcher_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod searcher` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p searcher`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' searcher` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the searcher container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs searcher` (note this will include logs from the previous and currently running container).

## warning_searcher_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the searcher container in `docker-compose.yml`.

## warning_searcher_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the searcher container in `docker-compose.yml`.

## warning_searcher_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the searcher container in `docker-compose.yml`.

## warning_searcher_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the searcher container in `docker-compose.yml`.

## warning_searcher_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the searcher container in `docker-compose.yml`.

## warning_searcher_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the searcher container in `docker-compose.yml`.
## warning_symbols_frontend_internal_api_error_responses

Possible solutions:

- Single-container deployments: check `docker logs $CONTAINER_ID` for logs starting with `repo-updater` that indicate requests to the frontend service are failing.
- Kubernetes: confirm that `kubectl get pods` shows the `frontend` pods are healthy, and check `kubectl logs symbols` for logs indicating request failures to `frontend` or `frontend-internal`.
- Docker Compose: confirm that `docker ps` shows the `frontend-internal` container is healthy, and check `docker logs symbols` for logs indicating request failures to `frontend` or `frontend-internal`.

## warning_symbols_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod symbols` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p symbols`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' symbols` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the symbols container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs symbols` (note this will include logs from the previous and currently running container).

## warning_symbols_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the symbols container in `docker-compose.yml`.

## warning_symbols_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the symbols container in `docker-compose.yml`.

## warning_symbols_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the symbols container in `docker-compose.yml`.

## warning_symbols_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the symbols container in `docker-compose.yml`.

## warning_symbols_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the symbols container in `docker-compose.yml`.

## warning_symbols_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the symbols container in `docker-compose.yml`.
## warning_syntect-server_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod syntect-server` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p syntect-server`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' syntect-server` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the syntect-server container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs syntect-server` (note this will include logs from the previous and currently running container).

## warning_syntect-server_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.

## warning_syntect-server_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.

## warning_syntect-server_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.

## warning_syntect-server_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.

## warning_syntect-server_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the syntect-server container in `docker-compose.yml`.

## warning_syntect-server_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the syntect-server container in `docker-compose.yml`.
## warning_zoekt-indexserver_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod zoekt-indexserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p zoekt-indexserver`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' zoekt-indexserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the zoekt-indexserver container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs zoekt-indexserver` (note this will include logs from the previous and currently running container).

## warning_zoekt-indexserver_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.

## warning_zoekt-indexserver_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.

## warning_zoekt-indexserver_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.

## warning_zoekt-indexserver_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.

## warning_zoekt-indexserver_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-indexserver container in `docker-compose.yml`.

## warning_zoekt-indexserver_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-indexserver container in `docker-compose.yml`.
## warning_zoekt-webserver_container_restarts

Possible solutions:

- Kubernetes: check whether the pod was OOM killed using `kubectl describe pod zoekt-webserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`. Check for `panic:` messages or similar using `kubectl logs -p zoekt-webserver`.
- Docker Compose: check whether the container was OOM killed using `docker inspect -f '{{json .State}}' zoekt-webserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the zoekt-webserver container in `docker-compose.yml`. Check for `panic:` messages or similar using `docker logs zoekt-webserver` (note this will include logs from the previous and currently running container).

## warning_zoekt-webserver_container_memory_usage

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.

## warning_zoekt-webserver_container_cpu_usage

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.

## warning_zoekt-webserver_provisioning_container_cpu_usage_7d

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.

## warning_zoekt-webserver_provisioning_container_memory_usage_7d

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.

## warning_zoekt-webserver_provisioning_container_cpu_usage_5m

Possible solutions:

- Kubernetes: consider increasing the CPU limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `cpus:` of the zoekt-webserver container in `docker-compose.yml`.

## warning_zoekt-webserver_provisioning_container_memory_usage_5m

Possible solutions:

- Kubernetes: consider increasing the memory limit in the relevant `Deployment.yaml`.
- Docker Compose: consider increasing `memory:` of the zoekt-webserver container in `docker-compose.yml`.
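The `panic:` checks described throughout this document follow the same pattern; here is a sketch (the pod and container names in the usage comments are examples, so substitute your own):

```shell
# Print any panic lines from a log stream, prefixed with their line number;
# succeed even when no panics are found.
find_panics() {
  grep -n 'panic:' || true
}

# Kubernetes (logs from the previous, restarted container):
#   kubectl logs -p zoekt-webserver-0 | find_panics
# Docker Compose (includes previous and current container logs):
#   docker logs zoekt-webserver 2>&1 | find_panics
```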