
Possibility to run CRC without deploying OpenShift (for use of podman runtime) #1097

Closed
gbraad opened this issue Mar 11, 2020 · 13 comments
Labels
kind/spike Investigation to provide direction and workable tasks status/stale Issue went stale; did not receive attention or no reply from the OP

Comments

@gbraad
Contributor

gbraad commented Mar 11, 2020

Currently we decided not to change the default behaviour of CRC, which means we deploy OpenShift and expose Podman upon request. Details are in the spike (copied to the end of this message).

However, people have expressed, as expected, that they do not want to start OpenShift when only Podman is wanted. We could expose this as a start option: --no-provision (as done in Minishift), or a specific --podman option?


[We] spoke about the possible strategies to expose this to the user as part of the user story. We identified the following three possibilities:

  1. A lean VM approach: re-use the current RHCOS image but do not start the kubelet service. This means we can re-use the current setup (as prepared during the build phase), but we might have to handle starting the OpenShift cluster afterwards. This enables a scenario in which we do not need to share resources between Podman and OpenShift initially. The problem, however, is handling that start situation, as it impacts our current flow of interaction. While not impossible, it might look like:
$ crc start --podman
Starting CRC VM
Podman access available
$ crc status
CRC VM started
OpenShift cluster not available
Podman access available
$ crc start  # default --openshift is assumed
CRC VM started
Starting OpenShift cluster.
$ crc status
CRC VM started
OpenShift cluster started
...
Podman access available
  2. Start the full VM and allow access to Podman only after specifically requesting it. This simplifies the start process and avoids existence checks, etc. But in this situation, the VM consumes resources to maintain both the OpenShift cluster and Podman. In a Podman-only situation this might waste a significant amount of memory.
$ crc start
CRC VM starting
OpenShift cluster started
$ crc podman-env
# CRC VM started, so we only need to expose access
Podman access available
$ crc status
OpenShift cluster started 
...
Podman access available
  3. A dedicated VM. While this is a leaner approach, it introduces the additional complexity of maintaining another VM, allowing VMs to co-exist, etc. While not impossible (our codebase supports this), it does introduce a resource overhead when both are used, but it solves separation, etc.
$ crc start --podman
Podman access available
$ crc start --openshift    # default
OpenShift started
$ crc status
Podman VM started
OpenShift VM started
...

At the moment we decided to go for option 2, and to work toward the situation described in option 1 over time.

@gbraad
Contributor Author

gbraad commented Mar 11, 2020

The idea is to also use the systray configuration option for this, as it allows this choice to stand out in a dialog.

@gbraad
Contributor Author

gbraad commented Mar 30, 2020

@zeenix would you be able to work on this? What would be needed and how much time?

@zeenix
Contributor

zeenix commented Mar 31, 2020

@zeenix would you be able to work on this?

Yes. :)

What would be needed and how much time?

Not sure about that yet. We didn't investigate this part (not starting the cluster) during the spike.

@zeenix
Contributor

zeenix commented Apr 1, 2020

From what I can tell, for option 1 we'll need to:

  • modify the SNC to disable the auto-launch of the kubelet service.
  • modify crc to:
    • launch the kubelet service explicitly via SSH after starting the VM.
    • add new --podman and --openshift options to the start subcommand.

@praveenkumar does this sound correct? How do we handle backwards compatibility, i.e. an existing crc VM?
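For reference, the two pieces above could be sketched roughly like this. Only the kubelet.service unit name comes from the discussion; the SSH target, variable name, and control flow are placeholders for illustration, not the actual implementation:

```shell
# Build phase (SNC side, sketch): keep kubelet from starting on boot,
# so the VM comes up without the OpenShift cluster.
sudo systemctl disable kubelet.service

# crc side (sketch): after the VM boots, start the cluster only on request.
# 'core@crc.testing' and START_OPENSHIFT are hypothetical placeholders.
if [ "$START_OPENSHIFT" = "true" ]; then
    ssh core@crc.testing 'sudo systemctl start kubelet.service'
fi
```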

@cfergeau
Contributor

cfergeau commented Apr 1, 2020

From what I can tell, for option 1 we'll need to:

  • modify the SNC to disable the auto-launch of the kubelet service.
  • modify crc to:
    • launch the kubelet service explicitly via SSH after starting the VM.

This is already done:
https://github.com/code-ready/snc/blob/master/createdisk.sh#L283-L284
https://github.com/code-ready/crc/blob/master/pkg/crc/machine/machine.go#L374-L378

@zeenix
Contributor

zeenix commented Apr 1, 2020

This is already done:

Oh, I somehow missed that in the code. That makes this a much easier task, then.

@praveenkumar
Member

add new --podman and --openshift options to start subcommand.

I think we only need a --podman option for start, since we don't want to change the default behavior, but we need to add another command, crc openshift start, to start the cluster and wait until it is started.
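Put together, the proposed flow might look something like this (hypothetical transcript; command names and output are sketches, not final):

```shell
$ crc start --podman        # boot the VM, skip the cluster (default unchanged)
$ crc podman-env            # expose podman access on the host
$ crc openshift start       # later: start the cluster and wait until it is up
```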

zeenix added a commit to zeenix/crc that referenced this issue Apr 8, 2020
Without the Openshift cluster.

Fixes crc-org#1097.
zeenix added a commit to zeenix/crc that referenced this issue Apr 14, 2020
Without the Openshift cluster.

Fixes crc-org#1097.
zeenix added a commit to zeenix/crc that referenced this issue Apr 15, 2020
Without the Openshift cluster.

Fixes crc-org#1097.
@stale

stale bot commented Jun 2, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the status/stale Issue went stale; did not receive attention or no reply from the OP label Jun 2, 2020
@stale stale bot closed this as completed Jun 16, 2020
@afbjorklund

I was trying to find this issue during KubeCon, but didn't (because the bot had closed it).

So now I am referring people to use Vagrant rather than CRC, as the replacement for podman-machine:

https://boot2podman.github.io/2020/07/22/machine-replacement.html

This also lowers the disk footprint, from 10G (crc) to 1G (fedora cloud).
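For comparison, a minimal version of that Vagrant-based setup could look like the following; the box name and the package-install step are assumptions, see the linked post for the actual details:

```shell
# Sketch: bring up a small Fedora Cloud VM with libvirt and use podman inside it.
vagrant init fedora/32-cloud-base      # box name is an assumption
vagrant up --provider=libvirt
vagrant ssh -c 'sudo dnf -y install podman && podman info'
```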

@cfergeau
Contributor

virt-builder would probably be usable as an alternative to vagrant

@afbjorklund

virt-builder would probably be usable as an alternative to vagrant

vagrant works fine with the libvirt provider, so I am not sure why, though...

Is it to make it a one-liner rather than a Vagrantfile, or what is the main attraction?

The upside with Vagrant was that it also worked with VirtualBox and on Mac/Win.

I'm not sure if virt-builder would be as portable, or where to direct users.

@cfergeau
Contributor

Oh, I'm not saying it's better; from your initial message, I was not sure whether you were satisfied with using vagrant for that, so I suggested an alternative :) If vagrant does the job for you, I don't think virt-builder is going to be particularly better.

@afbjorklund

afbjorklund commented Aug 31, 2020

Oh, I'm not saying it's better; from your initial message, I was not sure whether you were satisfied with using vagrant for that, so I suggested an alternative :) If vagrant does the job for you, I don't think virt-builder is going to be particularly better.

I haven't automated the setup yet, and the footprint is 10x bigger, so in that sense it is not the same as "podman-machine"*...
But it is 10x smaller than CRC (for running podman only), and the setup is simple enough ("vagrant up"), so it will do for now.

* https://podman.io/blogs/2019/01/14/podman-machine-and-boot2podman.html

It would of course be possible to make a small script with the most basic features from docker-machine and podman-machine.
But users are looking for an integrated solution like Docker Desktop, not something glued together like Docker Toolbox was.

https://www.docker.com/blog/docker-mac-windows-public-beta/
