Commit
Keep a placeholder node around for UToronto Hub
Users were reporting slow starts and timeouts in the morning,
as many users come online at the same time. This change keeps
a placeholder *node* around, with a placeholder pod that
will get displaced whenever a user needs that node. This should
increase the odds of a new node being up by the time more
than one node's worth of users pop in.

Ref https://2i2c.freshdesk.com/a/tickets/201
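
The displacement mechanism works because, in the Zero to JupyterHub chart, placeholder pods run with a lower pod priority than real user pods, so the scheduler preempts them when a user pod needs the space. A simplified, illustrative sketch of such a PriorityClass is below; the actual resource names and values are generated by the chart, so treat these as assumptions:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: user-placeholder-priority  # illustrative name; the chart generates its own
value: -10                         # lower than real user pods, so placeholders are preempted first
globalDefault: false
description: Placeholder pods that reserve capacity and yield to real user pods
```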
yuvipanda committed Sep 16, 2022
1 parent 750d631 commit 1f2f27b
Showing 1 changed file with 11 additions and 0 deletions.
11 changes: 11 additions & 0 deletions config/clusters/utoronto/prod.values.yaml
@@ -1,4 +1,15 @@
 jupyterhub:
+  scheduling:
+    userPlaceholder:
+      # Keep at least one spare node around
+      replicas: 1
+      resources:
+        requests:
+          # Each node on the UToronto cluster has 59350076Ki of RAM
+          # You can find this out by looking at the output of `kubectl get node <node-name> -o yaml`
+          # Look under `allocatable`, not `capacity`
+          # So even though this is under `userPlaceholder`, it really is operating as a `nodePlaceholder`
+          memory: 57350076Ki
   hub:
     db:
       pvc:
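
A quick sanity check on the sizing (my arithmetic, using the two values from the diff): the placeholder's memory request is set just below the node's allocatable memory, so the placeholder pod nearly fills a whole node while leaving headroom for daemonsets and system pods:

```shell
# Values taken from the diff above; the headroom calculation is illustrative.
node_allocatable_ki=59350076    # from `kubectl get node <node-name> -o yaml`, under `allocatable`
placeholder_request_ki=57350076 # the `memory:` request set in prod.values.yaml

headroom_ki=$(( node_allocatable_ki - placeholder_request_ki ))
echo "${headroom_ki}Ki"  # 2000000Ki, roughly 1.9GiB left on the node
```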
