Allow to start the same container configuration concurrently #65
Hmmm, it is strange, because the container is removed once it is no longer required, so the next start can be executed without any problem. But it is true that if you are trying to run in parallel you can hit this problem. I think that @aslakknutsen has been working on forking executions. Let's see if he found a workaround, and if not we will need to do something. Thanks for your incredibly useful feedback.
This is true, the container is stopped and removed. My use case is that the same project, with a different code state, is executed in parallel. In our company, a CI build is a different task than a release build. It is the same project, it uses the same container configuration, and the tests should run independently of each other.
Yes, this use case sadly is not covered. We will need to see what we can do (probably adding some random suffix to the Docker container name), but we must see how this affects things globally.
I think there are at least two different issues here:
In my test setup I've used the surefire.forkNumber system property as part of the container key and Docker image key. This is a system property provided by Surefire that represents the current forked VM number when using forkCount > 1. This will prevent two Surefire forks colliding on the image name, but it does nothing for multiple Maven processes. See the example config below.
Running multiple images with the same portBinding will of course cause port conflicts on the host machine. It's a bit easier to deal with this with Docker, as opposed to just a 'normal' container, since we know all the ports that are being exposed. In my test setup I've simply appended the surefire.forkNumber to the host-bound port so they end up unique between the forked VMs. But again, using forkNumber won't do anything for multiple Maven processes. Depending on the environment, I guess you could use any other 'unique' number from the build pipeline? SVN revision number? Jenkins build id? Example:
<?xml version="1.0"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://jboss.org/schema/arquillian"
xsi:schemaLocation="http://jboss.org/schema/arquillian
http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
<extension qualifier="docker">
<property name="serverVersion">1.12</property>
<property name="serverUri">http://localhost:2375</property>
<property name="shouldAllowToConnectToRunningContainers">true</property>
<property name="dockerContainers">
wildfly_${surefire.forkNumber:1}:
buildImage:
dockerfileLocation: src/test/resources/wildfly
noCache: false
remove: false
portBindings: [3000${surefire.forkNumber:1}->8080/tcp, 4000${surefire.forkNumber:1}->9990/tcp]
await:
strategy: polling
</property>
</extension>
<container qualifier="wildfly_${surefire.forkNumber:1}" default="true">
<configuration>
<property name="username">admin</property>
<property name="password">Admin#70365</property>
</configuration>
</container>
</arquillian>
Regardless of those two issues, I think it would be fairly easy for Cube to add support for defining -1 as a portBinding, -1 meaning any port that is currently available > 1024.
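The "-1 meaning any currently available port" idea can be sketched in plain Java: binding a socket to port 0 asks the OS to pick a free ephemeral port (which is always above 1024 on mainstream systems). The class and method names below are illustrative, not part of Cube:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortFinder {

    /** Asks the OS for a currently free ephemeral port. */
    public static int findFreePort() throws IOException {
        // Binding to port 0 lets the kernel pick any available port;
        // the socket is closed immediately, freeing the port for reuse.
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }
}
```

There is a small race window between closing the probe socket and the container actually binding the port, which is one reason delegating the choice to Docker itself (as discussed below) is more robust.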
Yes, this is a good workaround, but maybe we could provide something more direct to cover this use case. I am not sure how to handle both cases (-1 in the port binding is a good idea), but how about the container name?
@lordofthejars Why are we naming the images again?
@lordofthejars Well, the approach used in the example for the names seems the best one to avoid any collision.
@lordofthejars I mean, why are we naming the images in Docker? Why are we purposely making them unique? With no name defined, the only thing we need is a mapping between the 'arq.xml' name and the image id. Not sure if not naming it will affect e.g. the caching, etc.?
Mmm, yes, it is true: we could avoid giving a name and manage the name internally only for starting a container. Yes, we could close this issue and add two new ones: one for removing the name from creation and adding a new field to store the id, so that every call to the Docker server uses that id instead of the name; and another for adding the portBinding -1 approach. WDYT?
Sounds good to me.. :)
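The "random suffix" idea mentioned earlier could look something like the following sketch, which maps each arquillian.xml qualifier to a collision-free container name. This is a hypothetical illustration, not Cube's actual implementation; CubeNameRegistry is an invented name:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class CubeNameRegistry {

    // Maps the arquillian.xml qualifier to the unique Docker container name
    // chosen for this test run.
    private final ConcurrentHashMap<String, String> qualifierToName =
            new ConcurrentHashMap<>();

    /** Returns a stable, collision-free container name for the qualifier. */
    public String uniqueNameFor(String qualifier) {
        return qualifierToName.computeIfAbsent(
                qualifier,
                q -> q + "_" + UUID.randomUUID().toString().substring(0, 8));
    }
}
```

Each concurrent build process would get its own suffix, so two runs of the same configuration no longer collide on the container name, while within one run the qualifier always resolves to the same name.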
The port binding should be solved by the dynamic port binding feature Docker already provides: -P assigns the exposed ports to free ports on the Docker server. Cube can just pick this up, which then allows multiple runs. Perhaps we mean the same thing: the "port binding -1" is what Docker already provides with -P instead of -p 8080:8080.
Cool, then we can use -1 as dynamic binding. We must also check whether this feature is supported by the API, but yes, it is a good point. Thanks for the info.
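As a hypothetical sketch of the convention being discussed, a Cube-style portBindings entry with -1 as the host port could be parsed and flagged as "dynamic" (i.e. delegate to Docker's -P) like this. PortBinding here is an invented illustration, not Cube's real class:

```java
public class PortBinding {

    /** Host port of -1 means "any free port", i.e. docker run -P. */
    public final int hostPort;
    public final int containerPort;
    public final String protocol;

    PortBinding(int hostPort, int containerPort, String protocol) {
        this.hostPort = hostPort;
        this.containerPort = containerPort;
        this.protocol = protocol;
    }

    /** Parses entries like "30001->8080/tcp" or "-1->8080/tcp". */
    public static PortBinding parse(String spec) {
        String[] sides = spec.split("->");
        String[] portAndProto = sides[1].split("/");
        return new PortBinding(
                Integer.parseInt(sides[0].trim()),
                Integer.parseInt(portAndProto[0]),
                portAndProto.length > 1 ? portAndProto[1] : "tcp");
    }

    /** True when Docker should assign the host port dynamically. */
    public boolean isDynamic() {
        return hostPort == -1;
    }
}
```

A dynamic binding would then be sent to the Docker server with publish-all-ports, and the actual host port looked up after the container starts.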
@rbattenfeld Yeah, -1 was just a way to define it in arquillian.xml. If Docker supports that out of the box, that's even better.. Hopefully not knowing the port before start won't mess with the Arquillian container configuration mapping.. or at least we can look for another way around that.
Yes, it should, and I think it will allow us to do exactly what is described in the issue. Not sure if we should close it now, or wait until we have implemented both issues and tried it.
Label it with 'question' or 'verify'?
verify.
My limited experience with Arquillian Cube showed me that Cube doesn't allow running the same container configuration twice. Maybe you have already foreseen this and I don't know how to configure it. Currently, I can't.
Use case: it is necessary to run multiple test runs of the same code base. A CI build triggers a test run, and an official build triggers a test run. The first one wins and the latter fails. It should be possible to configure Cube to allow concurrent Docker instances.