IMPORTANT: Check the upgrading docs before upgrading to any particular version of Sourcegraph to see whether any manual migrations are necessary.
A new version of Sourcegraph is released every month (with patch releases in between, released as needed). Check the Sourcegraph blog for release announcements.
These steps assume that you followed the forking instructions in docs/configure.md.
1. Merge the new version of Sourcegraph into your release branch:

   ```
   cd $DEPLOY_SOURCEGRAPH_FORK
   git fetch
   git checkout release

   # Choose which version you want to deploy from
   # https://github.com/sourcegraph/deploy-sourcegraph/releases
   git merge $VERSION
   ```

2. Deploy the updated version of Sourcegraph to your Kubernetes cluster:

   ```
   ./kubectl-apply-all.sh
   ```

3. Monitor the status of the deployment:

   ```
   watch kubectl get pods -o wide
   ```
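As an alternative to watching pods, you can block on each deployment's rollout directly (a sketch; the deployment names in your cluster may vary with your configuration):

```shell
# Wait until the rollout of a deployment completes (or report why it is stuck)
kubectl rollout status deployment/sourcegraph-frontend
```

`kubectl rollout status` exits once the new replicas are available, which makes it convenient in scripts that should not proceed until the upgrade has converged.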
You can roll back by resetting your release branch to the old state and proceeding with step 2 above.

If an update includes a database migration, rolling back will require some manual DB modifications. We plan to eliminate these in the near future, but for now, email [email protected] if you have concerns before updating to a new release.
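A minimal rollback can be sketched as follows, assuming `$OLD_COMMIT` is a placeholder for the commit your release branch pointed at before the merge (e.g., found via `git reflog`):

```shell
# Reset the release branch to its pre-upgrade state...
cd $DEPLOY_SOURCEGRAPH_FORK
git checkout release
git reset --hard $OLD_COMMIT

# ...then re-apply the old manifests (step 2 above)
./kubectl-apply-all.sh
```

Remember that this only rolls back the Kubernetes objects; as noted above, database migrations may need manual attention.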
Some of the services that comprise Sourcegraph require more resources than others, especially if the default CPU or memory allocations have been overridden. During an update when many services restart, you may observe that the more resource-hungry pods (e.g., `gitserver`, `indexed-search`) fail to restart, because no single node has enough available CPU or memory to accommodate them. This may be especially true if the cluster is heterogeneous (i.e., not all nodes have the same amount of CPU/memory).
If this happens, do the following:

1. Run `kubectl drain $NODE` to drain a node of existing pods, so it has enough allocation for the larger service.
2. Run `watch kubectl get pods -o wide` and wait until the node has been drained. Run `kubectl get pods` to check that all pods except for the resource-hungry one(s) have been assigned to a node.
3. Run `kubectl uncordon $NODE` to enable the larger pod(s) to be scheduled on the drained node.

Note that the need to run the above steps can be prevented altogether with node selectors, which tell Kubernetes to assign certain pods to specific nodes. See the docs on enabling node selectors for Sourcegraph on Kubernetes.
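The drain/uncordon sequence can be sketched as one script (a sketch; `$NODE` is a placeholder for the node you want to free up):

```shell
NODE=<node-name>   # placeholder: the node to drain

# Evict existing pods from the node (DaemonSet-managed pods cannot be evicted)
kubectl drain "$NODE" --ignore-daemonsets

# Wait until the node has been drained, then verify that all pods
# except the resource-hungry one(s) have been assigned to a node
watch kubectl get pods -o wide
kubectl get pods

# Allow the larger pod(s) to be scheduled on the freed node
kubectl uncordon "$NODE"
```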
Sourcegraph is designed to be a high-availability (HA) service, but by default upgrades require a 10-minute downtime window. If you need zero-downtime upgrades, please contact us. By default, HA-enabling health checks test newly updated components before live traffic is switched over to them.
Some users may wish to opt for running two separate Sourcegraph clusters in a blue-green deployment. Such a setup makes the update step more complex, but it can still be done with the `sourcegraph-server-gen snapshot` command:
1. Ensure `sourcegraph-server-gen` is upgraded to version 3.0.1 (`sourcegraph-server-gen update`).
2. Configure `kubectl` to access cluster A and then run `sourcegraph-server-gen snapshot create`.
3. Configure `kubectl` to access cluster B.
4. Spin down the `sourcegraph-frontend` replicas to 0. (Note: this is very important, because otherwise `sourcegraph-frontend` may apply changes to the database that corrupt the snapshot restoration.)

   ```
   kubectl scale --replicas=0 deployment/sourcegraph-frontend
   ```

5. Run `sourcegraph-server-gen snapshot restore` from the same directory where you ran the snapshot creation earlier.
6. Spin the `sourcegraph-frontend` replicas back up to what they were before:

   ```
   kubectl scale --replicas=$N deployment/sourcegraph-frontend
   ```
After the update, cluster A will be live, cluster B will be in standby, and both will be running the same new version of Sourcegraph. You may lose a few minutes of database updates while A is not live, but that is generally acceptable.
To keep the database on B current, you may wish to periodically sync A's database over to B (`sourcegraph-server-gen snapshot create` on A, `sourcegraph-server-gen snapshot restore` on B). It is important that the versions of A and B are equivalent when this is done.
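The periodic sync can be sketched as the following sequence (a sketch; `ctx-a` and `ctx-b` are hypothetical names for the kubectl contexts pointing at clusters A and B):

```shell
# Snapshot the live cluster (A)...
kubectl config use-context ctx-a
sourcegraph-server-gen snapshot create

# ...and restore it into the standby cluster (B),
# running from the same directory as the snapshot creation
kubectl config use-context ctx-b
sourcegraph-server-gen snapshot restore
```

Remember to spin `sourcegraph-frontend` replicas on B down to 0 before the restore, as described in the steps above.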
See the troubleshooting page.