
garden exec is slow #1824

Closed
mnpenner opened this issue May 12, 2020 · 4 comments

Comments

@mnpenner

Bug

Current Behavior

When I run garden exec php bash, garden spends an awful lot of time "Getting status". Even after it's done, there's a very long pause before dropping me into my shell. I'm pretty sure Docker on its own was way quicker, unless Kubernetes adds some kind of overhead?

Expected behavior

Much quicker to get an interactive shell.

Reproducible example

For my test I'm using a docker image built on php:7.1-fpm.

Workaround

Maybe there's a way to bypass garden to get into the container? I don't know the exact command, though; that's why I'm using garden 😋

Suggested solution(s)

Make it fassttrrrr. Maybe spend less time probing the current state of things and just try executing the command? If that fails, then run some diagnosis?

Additional context

Your environment

garden version 0.11.13

kubectl version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

docker version 19.03.8

@mnpenner mnpenner changed the title from "graden exec is slow" to "garden exec is slow" on May 12, 2020
@mnpenner
Author

When I click this little button in Docker for Windows, it opens a shell just about instantly.

So I'm pretty sure Garden is adding overhead.

@eysi09
Collaborator

eysi09 commented May 17, 2020

Hi @mnpenner and thanks for the issue!

You're right that Garden adds some overhead. For most commands, Garden has to scan for configuration files, resolve the configuration and template strings and get the status of the providers. In the context of the Kubernetes provider, this means e.g. checking that system services are available. For commands like build and deploy, this may not be as noticeable, but it certainly is for commands like exec.

We've been exploring ways to sidestep this initial overhead, for example by having Garden run as a daemon or long-running process that can execute actions. However, these are not on our immediate roadmap.

That being said, there isn't supposed to be much delay between getting the provider status and Garden running exec. I looked into it, and it turns out there's a network call there that usually takes just a few hundred milliseconds but can occasionally take several seconds. That's absolutely not supposed to happen, and I'm working on a fix for it. So thanks for pointing this out!

And as for exec-ing into a running container without using Garden, you can use kubectl directly. First find the name of the pod with:

kubectl get pods -n <my-namespace>

The output looks something like this:

NAME                                     READY   STATUS             RESTARTS   AGE
backend-v-cfcd6507e7-698b9c769c-vrp2z    1/1     Running            0          24d
frontend-v-69d7a7c7e6-74775fbdfc-c2x79   1/1     Running            0          24d

Then run:

kubectl exec -it <name-of-pod> -- /bin/sh

You can read more in the official docs.

@eysi09 eysi09 added bug priority:high High priority issue or feature labels May 18, 2020
@elliott-davis
Contributor

@mnpenner I have a similar issue where it takes up to 5 seconds for any task/exec command running through garden to start. I opted to write a bunch of shell wrapper scripts that I keep on PATH to do what I need. If you expose your service in Garden, you usually get a service port. This means you can shortcut the pod name lookup and do something like kubectl -n <garden_module_name> exec -it service/<name> -- bash and you'll have a shell a lot faster. The only overhead is how quickly you can type that command, ergo the shell wrappers 😄
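The wrapper-script idea above can be sketched roughly like this. The function name, default namespace, and GSH_* variable names are invented for illustration; substitute the namespace Garden actually deploys your services into:

```shell
#!/bin/sh
# gsh: a sketch of the wrapper-script approach. It execs straight into a
# Garden-deployed service via kubectl, skipping Garden's status checks.
# The default namespace below is a made-up example.
gsh() {
  namespace="${GSH_NAMESPACE:-my-garden-project}"
  service="${1:?usage: gsh <service-name> [shell]}"
  shell_cmd="${2:-bash}"
  # Targeting service/<name> lets kubectl resolve a backing pod itself,
  # so no separate `kubectl get pods` lookup is needed first.
  cmd="kubectl -n $namespace exec -it service/$service -- $shell_cmd"
  echo "+ $cmd"
  # GSH_DRY_RUN lets you preview the command without running it.
  [ -n "$GSH_DRY_RUN" ] && return 0
  $cmd
}
```

With this sourced in your shell rc (or saved as a script on PATH), `gsh frontend` drops you into the frontend container without waiting on Garden's provider resolution.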

@eysi09
Collaborator

eysi09 commented Jun 2, 2020

Re-labeling this since the network-call issue has been fixed with #1838. I'll keep the issue open, though, since the provider resolution part can still take time. We're actually working on improving that currently.

@eysi09 eysi09 added enhancement and removed bug priority:high High priority issue or feature labels Jun 2, 2020
edvald added a commit that referenced this issue Jun 3, 2020
We now cache provider statuses for a default of one hour. You can run
any command with `--force-refresh` to skip the caching, and can override
the cache duration with the `GARDEN_CACHE_TTL=<seconds>` environment
variable.

This substantially improves command execution times when commands are run in
succession, which will be very useful for day-to-day usage as well as for CI
performance.

Closes #1824
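Based on the commit description above, the new caching behavior can be exercised from the command line roughly like this (a sketch; the flag and environment variable names are taken from the commit message):

```shell
# First invocation resolves provider status and caches it (default TTL: one hour).
garden exec php bash

# Bypass the cached status and force a fresh check:
garden exec --force-refresh php bash

# Shorten the cache window to five minutes for this invocation:
GARDEN_CACHE_TTL=300 garden exec php bash
```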
edvald added a commit that referenced this issue Jun 3, 2020
edvald added a commit that referenced this issue Jun 5, 2020
edvald added a commit that referenced this issue Jun 9, 2020
edvald added a commit that referenced this issue Jun 21, 2020
eysi09 pushed a commit that referenced this issue Jun 24, 2020
edvald added a commit that referenced this issue Jun 24, 2020
@edvald edvald closed this as completed in db72f2a Jun 24, 2020