FAQ: AWS private regions #396
Conversation
Didn't test this e2e, but it's how it should work.
Some people are hitting:
Offhand, I don't know why our pipeline isn't hitting this. One theory is that it's somehow specific to the …
This all said, maybe we should just delete the UEFI partition in our EC2 images. Part of me feels that doing so unnecessarily breaks the uniformity we have, and it's also explicitly counter to where we want to go in the future (using UEFI more across the board), but...
Uniformity for the sake of a dead UEFI partition that we know won't be used doesn't really buy us much; in this case, the uniformity is academic. Dropping the UEFI partition seems like a minor change: we can just delete the partition and not change anything else, and with GPT partitioning we can still keep root on part 4.
I think the property of having each platform image be a simple transform step away from the others is really nice, but I'm not strongly opposed to it. There's something subtle going on here, though, if neither RHCOS nor FCOS hits this. Probably worth investigating a bit before using the nuclear option? Note also that if we do this, we'll probably have to adapt the mount generator too.
Due to the UEFI issue described by @cgwalters, I've been working to find a workaround for getting an RHCOS image into AWS manually (especially in private AWS regions). Here are the details of how I used the RHCOS bare metal BIOS raw image and modified it to work: https://github.com/jaredhocutt/openshift4-aws/tree/master/rhcos#how-we-got-it-to-work It's not ideal, and I wouldn't expect anyone to actually do it that way if they want a supported cluster, but I did want to pass along what I've figured out.
Eek, no, please don't do it that way. By snapshotting a booted system, you've saved things like SSH keys (so each machine will have the same host key, random seed, etc.). What you want to do is zap the partition offline. You should be able to do this by getting the raw vmdk file and using any partition program. You'll then have a failed systemd unit on startup looking for it, as mentioned above, so you'd probably need to do something like actually replace the partition with a non-FAT one. (Or disable the unit, but that's a bit awkward to do in a way that persists across upgrades.) We're discussing potential upstream fixes here.
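For anyone landing here later, a minimal sketch of that offline approach. It's untested, the filenames are placeholders, and the ESP's partition number is an assumption, so inspect the table with `sgdisk -p` before deleting anything:

```sh
# Work on a raw copy of the image rather than a booted system.
qemu-img convert -f vmdk -O raw rhcos-aws.vmdk rhcos-aws.raw
loopdev=$(sudo losetup --find --show --partscan rhcos-aws.raw)
sudo sgdisk -p "$loopdev"      # print the partition table and locate the EFI System Partition
sudo sgdisk -d 2 "$loopdev"    # delete it (partition 2 is illustrative, not the real number)
# Alternatively, change its GPT type code so it is no longer marked as an
# EFI System Partition instead of deleting it outright:
#   sudo sgdisk -t 2:8300 "$loopdev"
sudo losetup -d "$loopdev"
qemu-img convert -f raw -O vmdk rhcos-aws.raw rhcos-aws-no-esp.vmdk
```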
I mounted it as a secondary disk and did not boot it. So I did exactly what you said, just by attaching it to an EC2 instance instead of doing it on my laptop.
Got it, sorry. Yes, that's fine.
I was also able to figure out how to get the 4.3 AWS VMDK image to work, which I've added to the same GitHub page just below my details for the bare metal image in 4.2. The big issue is that with the current 4.3 AWS VMDK, you cannot use `aws ec2 import-image`. However, I was able to import the image just as a simple snapshot using `aws ec2 import-snapshot`. So this works for now, but we really need to have an image that we can use with `import-image`.
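Roughly, the snapshot path looks like the sketch below. The bucket, key, and task IDs are placeholders, it assumes the account already has the standard `vmimport` service role configured for VM Import/Export, and you still need to register an AMI from the resulting snapshot afterwards:

```sh
# Upload the VMDK to S3, then import it as a plain EBS snapshot; this is the
# step that succeeds where import-image does not.
aws s3 cp rhcos-4.3-aws.vmdk s3://my-bucket/rhcos-4.3-aws.vmdk
aws ec2 import-snapshot \
  --description "rhcos-4.3" \
  --disk-container "Format=vmdk,UserBucket={S3Bucket=my-bucket,S3Key=rhcos-4.3-aws.vmdk}"
# Poll the task until its status is "completed" and note the resulting SnapshotId.
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0
```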
Ahh yup, this matches up with what …
Hmm, might be worth asking AWS to refine that API to not erroneously reject images that have EFI partitions if they also have a BIOS boot partition. Or barring that, some kind of "I know what I'm doing" flag.
If that works consistently, then I think it's much simpler to just document it. I'll update this PR. Further, for OpenShift, the installer should have a high-level command for this.
This has come up a few times.
It may be simpler to document, but it's not how users of AWS expect it to work. The AWS documentation describes using `import-image` to bring VM images into EC2.
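For reference, this is roughly the documented call users expect to work, and it's the one that currently rejects the RHCOS VMDK because of the ESP (bucket and key names are placeholders):

```sh
# The documented VM Import flow; currently fails for RHCOS images that carry an ESP.
aws ec2 import-image \
  --description "rhcos-4.3" \
  --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-bucket,S3Key=rhcos-4.3-aws.vmdk}"
```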
I've opened a support case with AWS to fix the problematic `import-image` behavior.
/lgtm
Got a response from AWS about this. Essentially, the `ImportImage` API is meant to take an existing VM and make it bootable on EC2, not simply to copy a disk image as-is. As such, the API is much more invasive. For example, for Windows images, it'll detect UEFI boot partitions and convert them to MBR. It doesn't support Linux UEFI images. But the point is that there's a mismatch of intent: its goal is to implement automatic conversion heuristics, which I don't think we want. So overall, I think we should stick with the `import-snapshot` and `register-image` approach.
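To round out the snapshot flow sketched above, the final step is registering an AMI from the imported snapshot, roughly as follows (the snapshot ID, AMI name, and device name are placeholders):

```sh
# Register an AMI backed by the imported snapshot.
aws ec2 register-image \
  --name "rhcos-4.3" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --ena-support \
  --root-device-name /dev/xvda \
  --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0,DeleteOnTermination=true,VolumeType=gp2}"
```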
@jlebon Thanks for the update. In that case, when we document this method, it would be nice to do 2 things.
Flesh things out a bit more based on discussions in openshift#396.
@jaredhocutt I posted a follow-up here: #398.
Awesome! Thanks @jlebon :)
The export process for AMIs fails for the same reason: the UEFI partition. This means that it is not possible to get RHCOS images onto AWS Snowball Edge devices. I've spoken with the AWS TAMs at the NGA, and they have said it is not possible to import snapshots and register images against Snowball Edge devices the way it can be done for standard AWS, as described in this issue.
Nothing in the OS touches the ESP by default, so there's no reason to mount it by default, particularly writable. This is good for avoiding wear and tear on the filesystem, but I am specifically doing this as preparation for potentially removing the ESP from AWS images, because AWS `ImportImage` chokes on its presence: openshift/os#396
Preparation for potentially removing the ESP from AWS images, because AWS `ImportImage` chokes on its presence: openshift/os#396
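A quick way to sanity-check that change on a booted node; the `/boot/efi` path and the columns shown here are assumptions for illustration, not necessarily what the mount generator uses:

```sh
# Confirm the ESP is not mounted and inspect the partition layout.
findmnt /boot/efi || echo "ESP not mounted"
lsblk -o NAME,PARTLABEL,FSTYPE,MOUNTPOINT
```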
Re: Snowball, it looks like the EFI partition is no longer an issue now, as Dan mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1794157#c14.