
Add a deferred topology unbuild in pytest plugin #3

Open
carlos-jenkins opened this issue Jan 9, 2016 · 2 comments

@carlos-jenkins (Contributor)

Currently, the unbuild of a topology is linear: before a new suite starts, the previous one needs to be unbuilt. Depending on the topology being built, the unbuild step can be a large part of the time spent executing the suite.

Given that topology_docker can run multiple suites in parallel, we can defer the unbuild of the topologies until some heuristic or event triggers it. In this way, the data of the test execution is available sooner.

If this mode is enabled, the unbuild will be deferred to a specific "garbage collection" subprocess that will trigger the stop and removal of all the containers upon request, or upon activation by an event or heuristic.

Regarding implementation details, the plugin can submit to the garbage collector all the topology objects to be discarded. The garbage collector process will query resource usage from time to time, and decide to kick in if some threshold on disk space, number of containers, RAM, or CPU is reached.
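As a rough sketch of what such a heuristic could look like (the threshold values, the `psutil` calls, and the use of the Docker Python client are illustrative assumptions, not existing plugin code):

```python
import docker
import psutil

# Hypothetical thresholds; in practice these would be configurable.
MAX_CONTAINERS = 50
MIN_FREE_DISK = 10 * 1024 ** 3  # 10 GiB
MIN_FREE_RAM = 2 * 1024 ** 3    # 2 GiB


def collection_required(client):
    """Return True if the garbage collector should kick in."""
    num_containers = len(client.containers.list(all=True))
    disk_free = psutil.disk_usage('/').free
    ram_free = psutil.virtual_memory().available
    return (
        num_containers > MAX_CONTAINERS
        or disk_free < MIN_FREE_DISK
        or ram_free < MIN_FREE_RAM
    )
```

The garbage collector subprocess would poll this check periodically (or on an event) and trigger the collection whenever it returns True.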

As part of this optimization, it is important to consider that the current implementation of the stop command is blocking:

https://github.com/HPENetworking/topology_docker/blob/master/lib/topology_docker/node.py#L188
https://github.com/HPENetworking/topology_docker/blob/master/lib/topology_docker/platform.py#L211

An optimization could be that once the topology is submitted to the garbage collector, the garbage collector subprocess requests the stop of all the containers, but does not wait or block until it is done. Once the event triggers the removal, the garbage collector will then wait for any containers that have not yet stopped, and then remove them.
Waiting for and removing the containers can be done in different threads (the GIL is not a problem here, since the work is I/O-bound), and the garbage collector is done only once all of them finish.
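A minimal sketch of that threaded wait-and-remove, assuming the garbage collector is handed container IDs and uses the Docker Python client (the names here are illustrative, not the project's actual API). Each container gets its own thread, so the blocking stop/wait/remove calls only hold up that thread, and the collector finishes once every thread joins:

```python
import threading

import docker


def _reap(client, container_id):
    """Stop, wait for, and remove a single container."""
    container = client.containers.get(container_id)
    container.stop()   # blocks this thread only; GIL is released on I/O
    container.wait()
    container.remove()


def collect(container_ids):
    """Reap all submitted containers in parallel; return when all are done."""
    client = docker.from_env()
    threads = [
        threading.Thread(target=_reap, args=(client, cid))
        for cid in container_ids
    ]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()  # the collection is complete once every thread finishes
```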

@carlos-jenkins (Contributor, Author)

Thinking about the implementation of this enhancement, one option could be:

  1. Move the wait and remove actions of the DockerNode to a __del__ method.
  2. Add a flag to the topology pytest plugin that will enable this feature.
  3. At the finalizer callback, perform the following logic:
    1. If the flag is NOT enabled, unbuild the topology.
    2. Else, store the topology in a session-scoped garbage collector instance.
  4. Create a garbage collector class that will have a method to add topologies, one method to perform the garbage collection, and one method to determine if a garbage collection is required. The garbage collector class must register its cleanup method with an atexit call in the constructor. The class may have a helper method that receives a topology, calls the checking method and, if it returns True, performs the garbage collection (see the sketch after this list).
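A minimal sketch of point 4, assuming the stored topology objects expose an unbuild() method and reusing a resource heuristic like the one sketched in the issue description (all names here are hypothetical, not existing plugin code):

```python
import atexit


class TopologyGarbageCollector:
    """Session-scoped store for topologies whose unbuild is deferred."""

    def __init__(self):
        self._topologies = []
        # Guarantee cleanup at interpreter exit even if the heuristic
        # never triggers during the session.
        atexit.register(self.collect)

    def submit(self, topology):
        """Helper: store a topology, then collect if required."""
        self._topologies.append(topology)
        if self.collection_required():
            self.collect()

    def collection_required(self):
        """Decide if a collection is needed (disk, containers, RAM, ...)."""
        # Placeholder heuristic; see the resource check sketched above.
        return len(self._topologies) >= 10

    def collect(self):
        """Perform the garbage collection: unbuild everything stored."""
        while self._topologies:
            self._topologies.pop().unbuild()
```

The finalizer callback would then either call topology.unbuild() directly (flag disabled) or collector.submit(topology) (flag enabled).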

@carlos-jenkins (Contributor, Author)

Another option would be to split destroy() by adding a new cleanup(). In this way, the topology can first be stopped (in this case, the containers stopped), and then all the containers removed later in the cleaning process.
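A sketch of what that split could look like on the node class, assuming a low-level Docker client in self._client and a stored container ID (both are assumptions about the node's internals, not the current code):

```python
class DockerNode(object):

    def stop(self):
        """First phase: request the container to stop, without reaping it."""
        self._client.stop(self.container_id)

    def cleanup(self):
        """Second phase: ensure it is stopped, then remove the container."""
        self._client.wait(self.container_id)
        self._client.remove_container(self.container_id)

    def destroy(self):
        """Original blocking behavior, now expressed as the two phases."""
        self.stop()
        self.cleanup()
```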
