
Feature/ubuntu server 20.04 #61

Open
wants to merge 6 commits into master

Conversation


@rubenst2013 commented Apr 29, 2020

Hi Jeff,

@JulyIghor and I spent all night trying to figure this out and were successful. YAY

This PR switches to the release version of Ubuntu Server 20.04, using the "new" Subiquity installer and cloud-init-style descriptions instead of "preseeding". More details are in the commit messages.

I tested them locally and they worked fine.
Please consider these changes so we are on a stable track forward with the latest Ubuntu LTS.

Thanks for your hard work & best regards,

Ruben

This is the one that now uses Subiquity instead of the debian-installer.
The boot command is now a tad shorter.
Thanks to @nickcharlton for finding this out: the keyword "autoinstall" needs to be present.

The boot_wait value needs to be long enough for the VM to have actually started, but short enough to keep the ISO's bootloader from launching the installer on its own.
We may eventually need a more robust way of dropping into the GRUB command line...
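For reference, with Subiquity the boot command boils down to dropping to the bootloader prompt and passing the autoinstall keyword plus a cloud-init datasource. A minimal sketch of the relevant builder settings follows; the exact key sequence and the boot_wait value are assumptions of mine, not copied from this branch:

```json
{
  "boot_wait": "5s",
  "boot_command": [
    "<enter><enter><f6><esc><wait> ",
    "autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
    "<enter><wait>"
  ]
}
```

The autoinstall keyword tells Subiquity to run unattended, and ds=nocloud-net points cloud-init at the user-data served by Packer's built-in HTTP server.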
Instead of a preseed file, Subiquity / cloud-init now wants a YAML-formatted file with #cloud-config as a sort of shebang on the first line.

@nickcharlton did an awesome blog post about the general setup of these files:
https://nickcharlton.net/posts/automating-ubuntu-2004-installs-with-packer.html
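For a rough idea of the shape of such a file, here is an illustrative sketch; the identity values and the exact set of sections are assumptions of mine rather than this branch's actual user-data:

```yaml
#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: us
  identity:
    hostname: ubuntu-server
    username: vagrant
    # SHA-512 crypted hash (placeholder), e.g. generated with `mkpasswd -m sha-512`
    password: "<sha512-crypted-password-hash>"
```

The #cloud-config line on top is mandatory; without it cloud-init will not pick up the file as user data.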

The old preseed file is no longer needed.
The new Ubuntu Server installer starts an SSH server of its own.
Its credentials are installer:<random_pw>.
Packer mistakenly tries to connect to this SSH server, thinking the VM is ready for further provisioning steps, which it is NOT.

Thanks to @JulyIghor we found a workaround.
We simply change the port Packer expects the SSH server to run on, AND during the cloud-init late-commands we override the server's port accordingly. That way, once cloud-init finishes and reboots the VM, the SSH server runs on the new port; Packer picks up on that and continues provisioning as we are used to.

As a last step during provisioning, we remove the conf file, essentially resetting the SSH server port back to the default 22.
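Concretely, the Packer builder gets an "ssh_port" pointing at the alternative port, and the autoinstall user-data writes that port into an sshd drop-in via a late-command. The port number and drop-in file name below are assumptions of mine; Ubuntu 20.04's stock sshd_config includes /etc/ssh/sshd_config.d/*.conf, which is why a drop-in is enough:

```yaml
# fragment of the autoinstall user-data; /target is where the freshly
# installed system is mounted while the installer is still running
autoinstall:
  late-commands:
    - "echo 'Port 2222' > /target/etc/ssh/sshd_config.d/10-packer.conf"
```

The clean-up during provisioning is then just removing /etc/ssh/sshd_config.d/10-packer.conf again, so the finished box listens on the default port 22.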

@SwampDragons:
hashicorp/packer#9115
Please check the logic behind the communicator setting "pause_before_connecting".
That setting actually still tries to connect ONCE and then waits, instead of waiting for the specified duration and only then trying to connect. Thanks!
- use ssh: install-server: true instead of adding it manually
- unify syntax from mixed YAML and JSON to just YAML
- uniformly quote late-commands (see the sketch after this list)
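Put together, the relevant portion of the user-data then reads roughly like this (same assumptions as above regarding the port and drop-in file name):

```yaml
#cloud-config
autoinstall:
  version: 1
  ssh:
    # let Subiquity install and enable the OpenSSH server
    install-server: true
  late-commands:
    - "echo 'Port 2222' > /target/etc/ssh/sshd_config.d/10-packer.conf"
```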
@rubenst2013 (Author)

@nickcharlton Thank you for taking the time to review my PR.
I implemented your suggestions as discussed above and tested them successfully on my local machine. Please give them a spin and let me know if you need more info / other adjustments. :)

This admittedly adds less noise to the actual provisioning steps.

Simply increase the allowed number of failed SSH connection attempts to make it through the initial setup until the reboot, as sketched below.

This may, however, add a bunch of false positives to the Packer log if anyone looks in there.

+ change the ssh_wait_timeout value to be more ...
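On the template side this boils down to a couple of communicator settings on the builder; the builder type and the concrete numbers below are placeholders I picked for illustration, not values taken from this branch:

```json
{
  "type": "virtualbox-iso",
  "ssh_username": "vagrant",
  "ssh_password": "vagrant",
  "ssh_handshake_attempts": 100,
  "ssh_wait_timeout": "30m"
}
```

With enough attempts allowed, the failed handshakes against the installer's temporary SSH server just show up as retries in the log until the installed system's SSH comes up with the real credentials.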
@rubenst2013 (Author) commented May 2, 2020

Hi @SwampDragons, hi @nickcharlton,

I guess I did need some convincing. :)

Admittedly using ssh_handshake_attempts introduces less noise to the "user side" provisioning steps and, with a high enough value, simply lets the initial setup run through.

As for my proposed other way of handling this on the Packer side, I'll take up the challenge and learn some Golang to make a PR out of that. Once we have a tangible proof of concept we can tackle that piece again in the future. 😃

Thank you both very much for your valued input and time.

Best regards,

Ruben

@nickcharlton

No worries, @rubenst2013, glad to be able to chip in.

I tried this out and it finishes building a box. I did notice this error twice in the output though:

E: The repository 'http://ppa.launchpad.net/ansible/ansible/ubuntu focal Release' does not have a Release file.

There are also a few Python syntax warnings with Ansible, but that seems common at the moment.

@rubenst2013 (Author)

> I tried this out and it finishes building a box. I did notice this error twice in the output though:
>
> E: The repository 'http://ppa.launchpad.net/ansible/ansible/ubuntu focal Release' does not have a Release file.
>
> There are also a few Python syntax warnings with Ansible, but that seems common at the moment.

Yes, these come from the Ansible roles that @geerlingguy created and hosts on Ansible Galaxy. Not sure if updating them is possible at the moment, since they are shared between several different boxes and OS versions, some of which require the older syntax. Perhaps Jeff can have a look at these.

@geerlingguy (Owner)

I'd like to finish moving over to this new install method at some point soon... has anyone tested this and made sure it's working in the past year+? I know a few others have gotten past this hurdle too, but I just haven't had the time in the past year to dig in :P
