[ws-manager-mk2] Support workspace snapshots #16471
Conversation
/werft run with-preview with-large-vm with-wsman-mk2 👍 started the job as gitpod-build-fo-mk2-snapshots.26
/werft run with-preview with-large-vm with-wsman-mk2 👍 started the job as gitpod-build-fo-mk2-snapshots.27
```go
Verbs: []string{
	"get",
	"patch",
	"update",
```
Question: Do ws-manager-mk2 and ws-daemon both need patch and update for snapshots/status? I thought we were trying to consolidate update verbs to single sources of truth.
Updated scope of permissions
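For illustration, a narrower rule set could be expressed along these lines; the API group, resource names, and the exact split between ws-manager-mk2 and ws-daemon are assumptions for the sketch, not what this PR ships:

```go
package rbac

import rbacv1 "k8s.io/api/rbac/v1"

// snapshotRules sketches one way to narrow the permissions so that read access
// to snapshots and write access to snapshots/status are granted separately,
// keeping a single writer per (sub)resource. Group and resource names are
// illustrative assumptions.
func snapshotRules() []rbacv1.PolicyRule {
	return []rbacv1.PolicyRule{
		{
			APIGroups: []string{"workspace.gitpod.io"},
			Resources: []string{"snapshots"},
			Verbs:     []string{"get", "list", "watch"},
		},
		{
			APIGroups: []string{"workspace.gitpod.io"},
			Resources: []string{"snapshots/status"},
			Verbs:     []string{"get", "update", "patch"},
		},
	}
}
```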
@Furisto can you update with
Some comments, but LGTM otherwise. I like the use of the Snapshot resource.
```go
// SetupWithManager sets up the controller with the Manager.
func (r *SnapshotReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&workspacev1.Snapshot{}).
```
We changed this recently for the other controllers; let's explicitly name each controller, as the name shows up in the metrics and allows us to differentiate controllers for the same resource. (By default the controller name is the name of the resource it owns.)
Suggested change:

```go
		Named("snapshot").
		For(&workspacev1.Snapshot{}).
```
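Put together, the registration could read like this; the import path for the workspace CRD types is an assumption in this sketch:

```go
package controllers

import (
	ctrl "sigs.k8s.io/controller-runtime"

	workspacev1 "github.com/gitpod-io/gitpod/ws-manager/api/crd/v1"
)

// SetupWithManager registers the reconciler under an explicit name so the
// "snapshot" label appears in controller-runtime metrics instead of the
// default name derived from the owned resource.
func (r *SnapshotReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		Named("snapshot").
		For(&workspacev1.Snapshot{}).
		Complete(r)
}
```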
```go
	log.Error(err, "could not set completion status for snapshot", "workspace", snapshot.Spec.WorkspaceID)
}

return ctrl.Result{}, err
```
do we delete the Snapshot resource somewhere after some time?
Might make sense to set a completion timestamp, and then clean up any snapshots completed e.g. > 30 mins ago?
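If that route were taken, a minimal sketch of the TTL-based cleanup might look as follows. Status.CompletionTime, the helper name, and the embedded client wiring are assumptions; the PR ultimately relies on owner references instead (see the reply below):

```go
package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	workspacev1 "github.com/gitpod-io/gitpod/ws-manager/api/crd/v1"
)

const snapshotRetention = 30 * time.Minute

// cleanupIfExpired sketches the proposed TTL approach: once a snapshot carries a
// completion timestamp, requeue until the retention window has passed, then delete it.
// Status.CompletionTime is a hypothetical field; r is assumed to embed client.Client.
func (r *SnapshotReconciler) cleanupIfExpired(ctx context.Context, snapshot *workspacev1.Snapshot) (ctrl.Result, error) {
	if snapshot.Status.CompletionTime == nil {
		// Not completed yet, nothing to clean up.
		return ctrl.Result{}, nil
	}

	age := time.Since(snapshot.Status.CompletionTime.Time)
	if age < snapshotRetention {
		// Check again once the retention window has elapsed.
		return ctrl.Result{RequeueAfter: snapshotRetention - age}, nil
	}

	return ctrl.Result{}, client.IgnoreNotFound(r.Delete(ctx, snapshot))
}
```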
The owner of the snapshot is the workspace, so the snapshot gets deleted when the workspace gets deleted (cascading delete), which I think is pretty nice. We can inspect the snapshots taken by a workspace while it is running, and we ensure that the snapshot CR does not outlive the workspace, so that we do not try to take a snapshot of a stopped workspace.
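A minimal sketch of how that ownership can be established with controller-runtime, assuming the snapshot is created from the workspace object; field names beyond Spec.WorkspaceID, the helper name, and the scheme wiring are assumptions:

```go
package controllers

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	workspacev1 "github.com/gitpod-io/gitpod/ws-manager/api/crd/v1"
)

// newSnapshotFor sketches how the workspace becomes the owner of the snapshot,
// which is what makes the cascading delete described above work.
func newSnapshotFor(ws *workspacev1.Workspace, scheme *runtime.Scheme) (*workspacev1.Snapshot, error) {
	snapshot := &workspacev1.Snapshot{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: ws.Name + "-snapshot-",
			Namespace:    ws.Namespace,
		},
		Spec: workspacev1.SnapshotSpec{
			WorkspaceID: ws.Name,
		},
	}

	// The controller (owner) reference ties the snapshot's lifetime to the workspace.
	if err := controllerutil.SetControllerReference(ws, snapshot, scheme); err != nil {
		return nil, err
	}
	return snapshot, nil
}
```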
👍
Tested and the following worked nicely in a preview env:
As an aside, I found some small error logging issues in content-service while checking the daemon logs: #16527
Force-pushed from b6b40a8 to 84f25b0
- Name controller
- Return early in case of error
- Reduce scope of permissions
Force-pushed from 84f25b0 to 0d10ba0
✔️
Description
Adds support for snapshotting workspaces
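For orientation, a rough sketch of what the Snapshot custom resource could look like, based only on what is visible in this conversation (Spec.WorkspaceID and a completion status); all other fields are assumptions:

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Snapshot is the custom resource discussed in the review. Only Spec.WorkspaceID
// and the notion of a completion status appear in the conversation; the remaining
// fields are illustrative assumptions.
type Snapshot struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   SnapshotSpec   `json:"spec,omitempty"`
	Status SnapshotStatus `json:"status,omitempty"`
}

type SnapshotSpec struct {
	// WorkspaceID identifies the workspace to snapshot.
	WorkspaceID string `json:"workspaceID"`
}

type SnapshotStatus struct {
	// Completed is set once the snapshot has finished, successfully or not.
	Completed bool `json:"completed"`
	// Error carries a failure message, if any (assumed field).
	Error string `json:"error,omitempty"`
	// URL points at the uploaded snapshot in content storage (assumed field).
	URL string `json:"url,omitempty"`
}
```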
Related Issue(s)
n.a.
How to test
kubectl label node fo-mk2-snapshots gitpod.io/experimental=true
Release Notes
Build Options:
Experimental feature to run the build with GitHub Actions (and not in Werft).
leeway-target=components:all
Run Leeway with --dont-test
Publish Options
Installer Options
Add desired feature flags to the end of the line above, space separated
Preview Environment Options:
If enabled this will build install/preview
If enabled this will create the environment on GCE infra
Valid options are all, workspace, webapp, ide, jetbrains, vscode, ssh