Add support for loading plugin and creating task from remote locations #1201
Comments
Being able to reference a common URL for the task seems like it would be convenient. The plugin loading would make me a bit nervous, and I would want a way to disable it. Pulling in executables from an arbitrary URL, in particular via the REST API, means we would need to be much more careful about what can interact with and access the REST endpoint. At the very least, I think #286 becomes critical if you support this. In general, for our setup we would typically want the consumption use cases like creating tasks to be fairly open so developers can get the data they need with minimal hassle. However, we would likely restrict some of the management use cases. Even if you ignore security concerns like an attacker, pulling in and running an arbitrary binary on the instance seems like an easy way for a developer to accidentally break the app or cluster and cause an outage.
I'm not sure I agree that requiring someone using snapctl/api to have a plugin/task file locally is cumbersome or infeasible. Can you give some reasons/examples of why/when that would be the case?
This seems like a larger and separate issue.
@IRCody In highly heterogeneous environments this can prove cumbersome, depending on the deployment tooling one uses. I'll give an example from some of the early work I was using SNAP for. Nested deployments for dev/test/prod environments can make things a bit challenging. See image below. I was automating the entire process using Ansible. This required me to have a copy of the task definition local to the playbook directory, which then had to be copied to the VM that was being provisioned, and subsequently copied to the container that the SNAP daemon would run in. It would be useful to have a repository of task files that I could point the SNAP daemon at. Thoughts?
@edhenry: Thank you for taking the time to respond with an example.
The task/plugin only needs to be local to the snapctl instance, not the daemon; snapctl can connect to remote daemons via the REST API.
This seems like a different ask than what is above. Can you explain what you mean by "a repository of task files"?
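As an illustration of the file only needing to be local to snapctl, here is a minimal sketch assuming snapctl's global `--url` option for addressing a remote snapd and the `-t` flag for a local task manifest (flag names should be double-checked against `snapctl --help`):

```
# Plugin binary and task manifest live on the machine running snapctl;
# the daemon they are sent to runs elsewhere (hostname is a placeholder).
snapctl --url http://snapd-host:8181 plugin load ./snap-plugin-publisher-file
snapctl --url http://snapd-host:8181 task create -t ./some-example-task.json
```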
Cool, that was my misunderstanding of the split in functionality between the snap daemon and snapctl. For the use case I presented above, that does address copying the file multiple levels deep.
Sorry for the lack of clarity. I was speaking more from an implementation perspective, I think, in the same vein as how a schema registry works in the world of Kafka/Avro. Just as a registry centrally stores schema definitions, it might be useful if snapctl were able to fetch task definitions / plugins from a remote repository, rather than having the task / plugin files littered throughout whatever orchestration system you may be dealing with (Puppet, Chef, Ansible, Salt, etc.). Does this make sense?
@IRCody my use case looks like this: I have snap running as a pod in Kubernetes, with both snapd and snapctl inside the container. Now, for example, in order to load a new plugin I must first download it (and, depending on the cluster type, first find a partition to which I can write), then copy the plugin binary into the container (a bit hacky), and only then can I load the plugin from inside the container; the workflow is sketched below. Alternatively I could attach to the docker container running snapd and snapctl, then download and load the plugin there, but this also seems too complicated. If instead of all those steps I could just point snapctl at a URL, it would be much simpler.
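As a rough sketch of that multi-step workflow (pod name, container name, and paths are placeholders, and the exact commands will vary with the cluster setup):

```
# 1. Download the plugin binary somewhere writable
curl -sSL -o /tmp/snap-plugin-publisher-file \
  http://snap.ci.snap-telemetry.io/plugins/snap-plugin-publisher-file/master/latest/linux/x86_64/snap-plugin-publisher-file

# 2. Copy the binary into the container running snapd/snapctl
kubectl cp /tmp/snap-plugin-publisher-file snap-pod:/opt/snap/plugins/snap-plugin-publisher-file -c snap

# 3. Load the plugin with the snapctl inside the container
kubectl exec snap-pod -c snap -- snapctl plugin load /opt/snap/plugins/snap-plugin-publisher-file
```

With the requested feature, those three steps collapse into a single `snapctl plugin load <url>` call.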
@edhenry: I'm not familiar with Kafka/Avro, but it seems like adding the ability to point snapctl at remote URLs as a convenience might make sense, especially if a lot of people think it would be helpful. This is a little different from having a central registry that can be listed, etc. Is the registry more along the lines of what you were thinking?
This part I understand, but the change being asked for basically amounts to snapctl doing the curl call (or equivalent) before sending that file to snapd? I guess it would hold the plugin in memory instead of writing to disk?
If you can access the snapd HTTP API then this step is not required, since snapctl can target a remote snapd instance.
@andrzej-k: You can see my reply to @edhenry above about adding a URL option to snapctl. It also seems to me that all of these requested changes are limited to snapctl, but in the original issue you mentioned supporting it in the REST API as well. Can you expand on that?
@IRCody Regarding the REST API: according to the documentation (https://github.com/intelsdi-x/snap/blob/master/docs/REST_API.md#plugin-apis-and-examples), loading a plugin is done like this:
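Roughly, per that document, this is a multipart POST of the plugin binary to the plugins endpoint, along these lines (exact paths may differ):

```
# Upload a local plugin binary to snapd via the REST API
curl -X POST http://localhost:8181/v1/plugins \
  -F plugin=@snap-plugin-publisher-file
```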
So, if anyone would like to use the REST API instead of snapctl, then the REST API should also support this feature.
It seems like, if someone is already scripting the API calls, they should be able to deal with getting the binary to the API. Do we really want to complicate the API implementation when you can already do this with curl?
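For example, under the same assumptions as the snippet above, the remote-URL case can already be scripted as two calls:

```
# Fetch the plugin from the remote location, then hand it to snapd;
# this is the pair of steps a URL-aware snapctl or API would collapse into one.
curl -sSL -o /tmp/snap-plugin-publisher-file \
  http://snap.ci.snap-telemetry.io/plugins/snap-plugin-publisher-file/master/latest/linux/x86_64/snap-plugin-publisher-file
curl -X POST http://localhost:8181/v1/plugins -F plugin=@/tmp/snap-plugin-publisher-file
```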
@IRCody I don't mind complicating the implementation to simplify the (user) interface ;)
Currently both `snapctl plugin load` and `snapctl task create` require passing a file path to either a plugin or a task. Sometimes that may not be feasible (or is cumbersome), and having the possibility to call `plugin load` and `task create` like this may be useful:

```
snapctl plugin load http://snap.ci.snap-telemetry.io/plugins/snap-plugin-publisher-file/master/latest/linux/x86_64/snap-plugin-publisher-file
snapctl task create -u http://snap.ci.snap-telemetry.io/tasks/some-example-task.json
```

(`-u` standing for URL.) The REST API should also support this.
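One possible shape for the REST side, purely as an illustration (the `uri` query parameter below is hypothetical and not part of the current API):

```
# Hypothetical: ask snapd itself to fetch the plugin or task from a URL
curl -X POST "http://localhost:8181/v1/plugins?uri=http://snap.ci.snap-telemetry.io/plugins/snap-plugin-publisher-file/master/latest/linux/x86_64/snap-plugin-publisher-file"
curl -X POST "http://localhost:8181/v1/tasks?uri=http://snap.ci.snap-telemetry.io/tasks/some-example-task.json"
```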