Can't launch clusters in availability zones that aren't in your current knife[:region] #91
Do you know if it's OK if I just don't refer to Chef::Config[:knife][:region]? Here are the options:
I lean towards #2. -- flip
I lean towards #2 as well, but from my experiments the defaults do inject a region (us-east-1). A nice way to make it more transparent would be to put some info in the
Good, let's go with #2, and remove "us-east-1d" from the defaults. I think the rest of the cloud.rb defaults are reasonable. While we're on defaults, I'd value any feedback on whether the ones in volume.rb are sound; the most controversial one, I think, is specifying 'xfs' in the volumes. The request for an announcement is a good one. Something I've thought about is putting in a "pause" -- launching wouldn't
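One way to remove the hard-coded "us-east-1d" would be to derive the default AZ from whatever region is configured. This is a sketch, not the actual cluster_chef code; `default_availability_zone` is a hypothetical helper name:

```ruby
# Hypothetical helper: derive a default availability zone from the region
# instead of hard-coding 'us-east-1d' in cloud.rb. Callers can still
# override the AZ explicitly if they want a specific one.
def default_availability_zone(region, suffix = 'a')
  # An EC2 AZ name is the region name plus a one-letter suffix,
  # e.g. 'us-west-2' -> 'us-west-2a'.
  "#{region}#{suffix}"
end
```

This keeps the defaults region-agnostic: whatever region the user sets, the default AZ follows it rather than silently pointing back at us-east-1.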
I don't have an opinion (yet) regarding it -- I'm just about to get to the part of configuring my cluster with volumes (mostly for static, non-HDFS data). Is there any point in specifying the filesystem if the volume is snapshot-based?
Mostly no... here are two ways it's still somewhat relevant:
The strong argument for being opinionated here: I don't like the idea of having people type out
We just had internal reason to examine this issue more closely. It turns out several things are more deeply tied to region than previously realized, making this difficult to tackle without changes to the structure of homebases (inclusive-or the way they interact with the AWS APIs).

Removing the region declaration (and its associated deletion) doesn't seem to materially affect the ability to launch. Switching to an AZ outside of the region does, which leads to the conclusion that the underlying calls rely on that knife variable being set. Wrapping them with something that sets and then unsets that knife variable should let us isolate those calls; I'm hopeful that once they're isolated, we will find better ways to call them that don't rely on that shim.

Caveat: this doesn't address how to multiplex (or force cross-region identity) for things like AWS key pairs, which comes along with this overall problem. That will almost certainly force enough breaking changes to warrant a major version bump, with all the pain that comes with that. For now, the workaround is likely to be (holds nose) branching the credentials repository by region, and throwing errors if there's a region/AZ mismatch.

There's also the issue of AMIs per region: the EC2 tools provided by Amazon can't migrate EBS-backed images, and the best-looking third-party tool runs to completion, but the resulting image is inaccessible. The obvious and easiest solution is to burn an image in each expected region; the more correct solution is to move away from image-based deployment entirely, so we can use stock AMIs (etc.) wherever we choose.
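The set-then-unset wrapper described above could look something like the following. This is a sketch under assumptions: `with_knife_region` is a hypothetical name, and `config` here is a plain hash standing in for `Chef::Config[:knife]`:

```ruby
# Hypothetical shim: temporarily pin the knife region around a block of
# AWS calls, restoring (or deleting) the previous value afterwards, even
# if the block raises. 'config' stands in for Chef::Config[:knife].
def with_knife_region(config, region)
  previous = config[:region]
  config[:region] = region
  yield
ensure
  if previous.nil?
    config.delete(:region)   # there was no prior setting; leave none behind
  else
    config[:region] = previous
  end
end
```

Usage would be `with_knife_region(knife_config, 'eu-west-1') { launch_instances }`, which bounds the region override to exactly the calls that need it and makes the remaining region-dependent call sites easy to find.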
This simply causes an opaque fog error such as:
It traces back to line #125 in lib/cluster_chef/discovery.rb:
AWS doesn't let you launch instances through the API in an availability zone outside the region the connection is for.
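Because an EC2 availability zone name is just its region name plus a one-letter suffix, the mismatch can be caught before the API call instead of surfacing as an opaque fog error. An illustrative check (these helper names are not from the codebase):

```ruby
# 'eu-west-1b' -> 'eu-west-1': strip the trailing zone letter to get
# the region an availability zone belongs to.
def region_for_zone(az)
  az.sub(/[a-z]\z/, '')
end

# Raise a readable error up front rather than letting the AWS API
# reject the launch with an opaque failure deep inside fog.
def assert_zone_in_region!(az, region)
  return if region_for_zone(az) == region
  raise ArgumentError,
        "availability zone #{az} is not in region #{region}"
end
```

A guard like this at the point where the connection is built would turn the confusing failure into an immediate, explicit region/AZ mismatch error.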
IMHO a good resolution would be: