
CLI Peering Export Command #14547

Closed
DanStough opened this issue Sep 9, 2022 · 9 comments · Fixed by #15654
Assignees
Labels
good first issue (A well-defined bug or improvement with sufficient context which should be approachable for new contributors) · help-wanted (We encourage community PRs for these issues!) · theme/cli (Flags and documentation for the CLI interface) · theme/cluster-peering (Related to Consul's cluster peering feature)

Comments

@DanStough
Contributor

Feature Description

Create a CLI command, consul peering export, that automates the process of creating the config entries needed to export services from a cluster to a peer. It will create or update an exported-services config entry linking the requested service to the peer names given in the command.
The initial design could be something like the following:

consul peering export -service=<service name> [-namespace=<service ns>] -peers=<peer[,peers...]>

Validation should include checking for the existence of the service and peerings.

Good references for this work include the other peering CLI commands and the consul connect expose command, which has a similar flow of modifying config entries.

Use Case

A hypothetical example: a service web in the current cluster that we would like to export to a peering named other-cluster. The peering connection already exists. The following command should allow traffic to route from that peer cluster by creating the necessary exported-services config entry:

consul peering export -service=web -peers=other-cluster
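For illustration, the config entry this example command would write might look like the following sketch (HCL, using the service and peer names from the example; the Consumers/Peer shape follows the documented exported-services schema):

```hcl
Kind = "exported-services"
Name = "default"
Services = [
  {
    Name = "web"
    Consumers = [
      { Peer = "other-cluster" }
    ]
  }
]
```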
@DanStough DanStough added the theme/cli, good first issue, help-wanted, and theme/cluster-peering labels Sep 9, 2022
@jkirschner-hashicorp
Contributor

Hi @DanStough,

Thanks for writing this up! I have a few follow-up questions.

  • How many exported-services object(s) exist per OSS datacenter / enterprise partition? If it's just 1, then this command would need to modify the existing entry, not create a new one, right?
    • If we're modifying an existing config entry, we'd need to read the config entry first, merge the changes into that config, then write. Maybe we could use the check-and-set pattern to confirm we're not overwriting something that's been changed since we performed the read?
    • If we're modifying an existing config entry, what should happen if service web has already been exported? Would this just append new peers to export to?
  • We also need to support exporting services to a local partition (not just a cluster peer), right? I assume that would happen in the enterprise code.
  • The -namespace arg would be implemented in the enterprise code (stating this in case any OSS contributors are interested in contributing).
  • Thoughts on this alternate syntax to make the directionality of arguments clearer? I'm not sure I like it more than the original proposal, but still wanted to raise to get your perspective. consul peering export -service=<service name> [-from-namespace=<service ns>] -to-peers=<peer[,peers...]>

@DanStough
Contributor Author

Good questions!

How many exported-services object(s) exist per OSS datacenter / enterprise partition? If it's just 1, then this command would need to modify the existing entry, not create a new one, right?

Yes, there should only be one, so we would have to go the check-and-set route as you mentioned. My thought is that, since this command modifies existing entries, it should be append-only, leaving finer-grained operations to consul config write.
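As a sketch of the append-only merge semantics described above (names and entry shape are illustrative, mirroring the exported-services config entry rather than Consul's actual Go types):

```python
# Hypothetical sketch: append peers as consumers of a service in an
# exported-services-style entry, never removing existing consumers.

def merge_export(entry, service, peers):
    """Append `peers` as consumers of `service` (append-only, idempotent)."""
    services = entry.setdefault("Services", [])
    for svc in services:
        if svc["Name"] == service:
            existing = {c.get("Peer") for c in svc.setdefault("Consumers", [])}
            for peer in peers:
                if peer not in existing:
                    svc["Consumers"].append({"Peer": peer})
            return entry
    # Service not exported yet: add a new stanza for it.
    services.append({"Name": service,
                     "Consumers": [{"Peer": p} for p in peers]})
    return entry

entry = {"Kind": "exported-services", "Name": "default", "Services": []}
merge_export(entry, "web", ["other-cluster"])
merge_export(entry, "web", ["other-cluster", "second-peer"])  # appends only the new peer
```

The actual command would wrap this read-merge logic with a check-and-set write so a concurrent edit between the read and the write fails safely instead of being silently overwritten.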

We also need to support exporting services to a local partition (not just a cluster peer), right? I assume that would happen in the enterprise code.

I think we would either need to have a general consul service export command that supported both or have a separate consul partition export command. I like to have single-purpose commands with clearly required flags, so I prefer the latter. This would need to be in the enterprise codebase.

The -namespace arg would be implemented in the enterprise code (stating this in case any OSS contributors are interested in contributing).

Since this is just writing a config entry, I think this can be implemented in the CLI in OSS and the necessary validation will happen server-side.

Thoughts on this alternate syntax to make the directionality of arguments clearer? I'm not sure I like it more than the original proposal, but still wanted to raise to get your perspective. consul peering export -service=<service name> [-from-namespace=<service ns>] -to-peers=<peer[,peers...]>

No strong opinion! I like to type less, but we can see if we get any community feedback.

@nathancoleman
Member

nathancoleman commented Sep 30, 2022

@DanStough @jkirschner-hashicorp If no one is currently working on this, @20sr20 and I would like to pick it up.

Discussed with Jared outside the PR. We're good to go and starting work on this next week 🛠️

@jkirschner-hashicorp
Contributor

@nathancoleman , @20sr20 : something else I just thought of as a possible extension of this task. As I tried writing it out, I realized this was probably a much thornier suggestion than it seemed on the surface, and maybe not worth it at all (much less in the first pass). Let me know what you think. (CC: @DanStough )


Ideally, we'd help users understand that exporting a service isn't sufficient to allow access; intentions are also needed*.

Communication from the importing peer to the exported service won't work if there are no intentions allowing traffic from the importing side to the exported service. If there are no intentions* allowing anything on the importing side to actually access the exported services, it might be nice to give the user a heads up. (We'd also need to inform the user if the ACL token in use lacks the permissions to check the intentions.)

That said, I'm not sure what to do if someone exports all services using the wildcard (*)... Would we then check for a wildcard intention?

*Note that intentions are effectively bypassed unless one of the following is true:

  • ACLs are enabled with default deny, and there's no wildcard allow intention
  • A wildcard deny intention is in place

@nathancoleman
Member

nathancoleman commented Nov 7, 2022

The -namespace arg would be implemented in the enterprise code (stating this in case any OSS contributors are interested in contributing).
Since this is just writing a config entry, I think this can be implemented in the CLI in OSS and the necessary validation will happen server-side.

If I'm understanding correctly, -namespace would be used to specify the namespace of the service that we're exporting. This would result in an exported-services config entry that looks like this pseudocode:

...
Services: [
  {
    Name: "myService",
    Namespace: "myNamespace",
    Consumers: [
      ...
    ]
  }
]

However, the Consul server does not actually return an error when you write a config entry like that from OSS.

Am I missing anything @DanStough , or does this indicate that the namespace code does need to live in the enterprise code after all since the server wouldn't return a validation error like we were expecting?

@jkirschner-hashicorp
Contributor

jkirschner-hashicorp commented Nov 8, 2022

@nathancoleman : Out of curiosity, what happens if you just add arbitrary key/values to the objects in the Services array? I suspect there might just not be validation on adding fields that are ignored.

My understanding matches yours: that -namespace would specify the namespace of the service being exported. In other words, service myService is in namespace myNamespace. This is also what the docs suggest.

I think the implementation of the namespace field would need to live in the enterprise code (though I defer to @DanStough / engineering). Only enterprise can have services in a (non-default) namespace to begin with, so only enterprise would even need the ability to specify the namespace field.

@DanStough
Contributor Author

@nathancoleman I think we're a little inconsistent with where some of the enterprise code lives for CLI commands.

  1. For commands with enterprise modifiers, we usually implement them in OSS and depend on the server to do the right thing. e.g. Service Registration w/ Namespaces and Partitions
  2. For subcommands only relevant to ENT, we have those in the enterprise repo behind build flags. e.g. consul partition.

I like the pattern of including everything in OSS CLI commands because it allows some interoperability between the Consul binary as a client (the thing installed on an operator's workstation) and many versions of Consul that they may be managing.

IMHO we should treat this as case 1, which is the same as if the operator had incorrectly added the field to a file and tried to write the config entry. The improvement is that we should validate this server-side so the user gets feedback in both cases. As part of this task or a future one, we could add validations to the config entry struct for exported-services here. A new function would need to be implemented in both OSS and ENT. Let me know what you think.

@nathancoleman
Member

nathancoleman commented Dec 2, 2022

@DanStough I'm in favor of the approach you described. Since we're using this as a learning/pairing opportunity, I think it would be best to split the server-side validation of the config entry out into a separate task.

@nathancoleman nathancoleman linked a pull request Dec 12, 2022 that will close this issue
@nathancoleman
Member

Update: After much discussion, the team has landed on the following format:

$ consul services export -name=my-service -consumer-peers=peer1,peer2 -consumer-partitions=part1

Notably, Consul Enterprise will include support for the standard -namespace=foo and -partition=bar flags.
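For illustration, such a command might produce a config entry along these lines (a sketch assuming the documented Consumers shape, where each consumer names either a Peer or a Partition; partition consumers are Enterprise-only):

```hcl
Kind = "exported-services"
Name = "default"
Services = [
  {
    Name = "my-service"
    Consumers = [
      { Peer = "peer1" },
      { Peer = "peer2" },
      { Partition = "part1" }
    ]
  }
]
```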
