
Unable to run a test more than once when using a dependent service #79

Open

tnine opened this issue Jan 29, 2015 · 11 comments
tnine commented Jan 29, 2015

I have the following arquillian.xml file:


<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://jboss.org/schema/arquillian"
  xsi:schemaLocation="http://jboss.org/schema/arquillian
  http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

  <extension qualifier="docker">
      <property name="serverVersion">1.16</property>
      <property name="serverUri">https://192.168.59.105:2376</property>

     <property name="dockerContainers">
        cassandra:
          image: spotify/cassandra
          exposedPorts: [9160/tcp]
          await:
            strategy: polling
          env: []
          portBindings: [9160/tcp]
     </property>
    <property name="autoStartContainers">cassandra</property>
    <property name="shouldAllowToConnectToRunningContainers">true</property>
  </extension>

</arquillian>

When I run my trivial test below, it passes the first time:

import static org.junit.Assert.assertTrue;

import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class StartupTest {

    @Test
    public void iStartCassandra() {
        assertTrue("stuff done", true);
    }
}

Subsequent invocations result in the following error:

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.609 sec <<< FAILURE! - in StartupTest
StartupTest  Time elapsed: 1.609 sec  <<< ERROR!
org.arquillian.cube.spi.CubeControlException: Could not create cassandra
    at org.arquillian.cube.spi.CubeControlException.failedCreate(CubeControlException.java:19)
    at org.arquillian.cube.impl.model.DockerCube.create(DockerCube.java:69)
    at org.arquillian.cube.impl.client.CubeLifecycleController.create(CubeLifecycleController.java:14)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
    at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
    at org.arquillian.cube.impl.client.CubeSuiteLifecycleController.startAutoContainers(CubeSuiteLifecycleController.java:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
    at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:65)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
    at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.beforeSuite(EventTestRunnerAdaptor.java:74)
    at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:113)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: com.github.dockerjava.api.ConflictException: Conflict, The name cassandra is already assigned to 71edf8e14632. You have to delete (or rename) that container to be able to assign cassandra to a container again.

    at com.github.dockerjava.jaxrs.util.ResponseStatusExceptionFilter.filter(ResponseStatusExceptionFilter.java:51)
    at org.glassfish.jersey.client.ClientFilteringStages$ResponseFilterStage.apply(ClientFilteringStages.java:134)
    at org.glassfish.jersey.client.ClientFilteringStages$ResponseFilterStage.apply(ClientFilteringStages.java:123)
    at org.glassfish.jersey.process.internal.Stages.process(Stages.java:171)
    at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:251)
    at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:683)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:424)
    at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:679)
    at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:435)
    at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:338)
    at com.github.dockerjava.jaxrs.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:31)
    at com.github.dockerjava.jaxrs.CreateContainerCmdExec.execute(CreateContainerCmdExec.java:14)
    at com.github.dockerjava.jaxrs.AbstrDockerCmdExec.exec(AbstrDockerCmdExec.java:47)
    at com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:26)
    at com.github.dockerjava.core.command.CreateContainerCmdImpl.exec(CreateContainerCmdImpl.java:351)
    at org.arquillian.cube.impl.docker.DockerClientExecutor.createContainer(DockerClientExecutor.java:199)
    at org.arquillian.cube.impl.model.DockerCube.create(DockerCube.java:63)
    at org.arquillian.cube.impl.client.CubeLifecycleController.create(CubeLifecycleController.java:14)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
    at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
    at org.arquillian.cube.impl.client.CubeSuiteLifecycleController.startAutoContainers(CubeSuiteLifecycleController.java:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
    at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:65)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:94)
    at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:145)
    at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:116)
    at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.beforeSuite(EventTestRunnerAdaptor.java:74)
    at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:113)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

I'm not sure where to start resolving this. I'm on Arquillian Cube 1.0.0.Alpha3 and Arquillian 1.1.6.Final.

@lordofthejars
Member

Hi. The first thing to note is that shouldAllowToConnectToRunningContainers has a bug that keeps it from working properly, so I suggest not using it until we fix it :). Also, let me explain: using shouldAllowToConnectToRunningContainers means that you started the container yourself, not Arquillian Cube. That is, we expect the container to have been started manually; if it was started by Cube instead, the parameter is ignored. The reason is that at start time we can check whether a container with that name is already running and act accordingly, but at stop time we cannot tell whether a given container should be shut down or not. To avoid confusion (or at least that is what we tried), this attribute only takes effect if you start the container manually.

About your question: the problem is that Cube always destroys the containers after the execution, so that they can be started again in the next test run. If your test doesn't finish, for example because you halt it manually, the container is not destroyed and therefore cannot be created again afterwards. What you need to do is remove it manually. Also, until we fix the issue with autostart, I recommend you remove the container manually between runs.
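For reference, assuming the Docker CLI is pointed at the same daemon, the leftover container can be listed and removed by name:

docker ps -a            # list all containers, including stopped ones
docker rm -f cassandra  # force-remove the leftover container by name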

I'll leave the issue open so we can improve this situation, for example by destroying the container before start in case it is already there.

@aslakknutsen
Member

@lordofthejars Destroying the container before start sounds a bit scary. A little misconfiguration and stuff gets lost.

@lordofthejars
Member

@aslakknutsen I totally agree with you. Maybe it is better to leave it as is, but we could also capture the exception, and if it says the container is already started, either simply skip the creation and start, or destroy and recreate it. As I said, the best option is probably to leave it as is, but we can consider other alternatives as well.

@tnine
Author

tnine commented Jan 29, 2015

@aslakknutsen and @lordofthejars As a layman user, I would expect this to be an idempotent operation. I would envision the following semantics to make it a bit more user-friendly and declarative (see the sketch after the list):

  1. Does the container exist? If so, simply use it.
  2. If not, pull the image and start a container.
  3. If it already exists and is running, what then? Maybe just throw an exception? This can get tricky, since every user might have a different strategy for handling it. Perhaps a configuration option, or an interface with some hooks users can implement for their specific cases?
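A sketch of that flow, purely as an illustration; ContainerGateway and its methods are hypothetical, not part of the Cube or docker-java APIs:

// Illustrative sketch only: ContainerGateway and its methods are hypothetical,
// not part of the Cube or docker-java APIs.
interface ContainerGateway {
    boolean exists(String name);
    boolean isRunning(String name);
    void pullImageIfMissing(String image);
    void createAndStart(String name, String image);
    void startExisting(String name);
}

class IdempotentStarter {
    static void ensureContainer(ContainerGateway docker, String name, String image) {
        if (!docker.exists(name)) {
            docker.pullImageIfMissing(image);   // 2. pull the image and start a container
            docker.createAndStart(name, image);
        } else if (docker.isRunning(name)) {
            // 3. exists and is running: strategy is user-specific; failing fast shown here
            throw new IllegalStateException("Container " + name + " is already running");
        } else {
            docker.startExisting(name);         // 1. exists but stopped: simply use it
        }
    }
}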

Then, for cleaning:

Is it possible to allow users to invoke a clean command to remove existing containers? IMHO this is kind of an edge case. I think it's safer and easier to simply document this and let users remove their own containers. Since there's a strong toolset around this, and it's a one-off sort of scenario, I can't imagine it's something users would need to run on every build.

Thoughts?

@lordofthejars
Member

So we can do two things: document this, including the command that must be run to destroy the container; or create an attribute that tells Cube whether it should connect to an existing running container, throw an exception, or remove it. Basically, that means changing shouldAllowToConnectToRunningContainers to a new name and a new type instead of a boolean (see the sketch below).
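Purely as an illustration (the property name and values here are hypothetical, not an existing Cube option), such an attribute could look like this in arquillian.xml:

<!-- hypothetical values: connect | fail | removeAndRecreate -->
<property name="existingContainerPolicy">connect</property>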

WDYT?

@aslakknutsen
Member

This is slightly different, though. It's not "allow to connect to" but "allow to use an existing container with this name".

Whether the container is running is a different concern.


@aslakknutsen
Member

I'm a bit new to this... but couldn't an existing container have a different state than when it was created? Or would we need a commit step here for that to happen?

Basically, we need to track: has our configuration for creating this container changed? If not, then on to reuse. If it has, then.....

@tnine
Author

tnine commented Jan 29, 2015

Hey guys, I'm working on a fix for this now. It's actually a trivial fix and test. However, there are no tests for the DockerClientExecutor class. I would like to set up a few tests with a small "hello world" type of Docker image. Is there an image you would prefer I use for this test?

@lordofthejars
Member

We don't have any predefined image. If you want to use a stub instead of a real Docker server, you can use the docker-server-stub to create the test, but feel free to choose whatever works for you.

https://github.com/arquillian/arquillian-cube/tree/master/docker-server-stub

@aslakknutsen
Member

On one hand, we have the reconnect feature to allow reusing a running container to save startup time during development; it could also be used to connect to existing containers that are running/started by a third party for whatever reason.

On the other, we have crashed builds. Being able to clean up an existing container would be a nice feature, though I don't think it should be the default.

Take the build server scenario, for instance. Jenkins is running, the build crashes for whatever reason, and you're now left with x number of "stuck" containers. The build will continue to fail until someone manually cleans up the Docker server used by the job.

In this scenario I think both reconnect and reuse are wrong:

  • Build failed and the container is still running; reconnect = unknown container state, likely to fail the build. The next build will be OK because the container is gone by then and will be recreated.
  • Build failed and the container is not running but still exists; restart = unknown container state, likely to fail the build. The next build will be OK because the container is gone by then and will be recreated.

@lordofthejars
Member

After talking with @aslakknutsen, we are going to close this issue because it is the same as #67, in the sense that it talks about reusing an existing running container. We will continue the discussion there.

@tnine Does it make sense to you to discuss the problem of reusing containers on that issue?
