
Don't set container name #67

Open
lordofthejars opened this issue Jan 20, 2015 · 22 comments

Comments

@lordofthejars
Member

To allow developers to run tests in parallel, we should reference containers by their generated id instead of by name. The container name would only be used by Arquillian Cube to know which container to start.

@lordofthejars
Member Author

Not setting the name will have an impact on the reconnect feature. Currently we use the name to know if the container is already running so we can reuse it. If we remove the name, we have the problem of identifying currently running containers.

We cannot do it by id because we don't know it in a later execution, and if we match on the image name, for example jboss/wildfly, we can get it wrong because two running containers may be based on the same image. So maybe we should relax the conditions on when reconnect can be used.

@rbattenfeld

IMO, reusing a docker container instance is not a use case for Cube. It should be treated as a simple remote connection, and that is already handled by the various Arquillian remote containers. In that case, of course, the docker container instance is started manually.

@lordofthejars
Member Author

Yes, I have thought about this a lot, and I think the best option is not to touch anything here. But maybe we can find a compromise solution.

@rbattenfeld

Compromise is good. We really need this feature, so let's find a compromise ;-)
Is there a way to internally add a timestamp, say in nanoseconds, to the container name, controlled by a configuration flag? It would then be unique, ok, almost unique :-) But the chance of a collision is very small.

@tnine

tnine commented Jan 30, 2015

I'd argue a Type 1 time-based UUID would be best. I'm working on making this work for boot2docker as well as remote docker servers, which means multiple clients could potentially get the same timestamp. If it were a time UUID, the odds of collision are nearly non-existent. (In six years of using them in production on very high-throughput systems, I have yet to see a collision.)
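For illustration, a minimal version-1 (time-based) UUID generator can be sketched in plain Java. `java.util.UUID` does not ship a generator for version 1, and a real project would typically use a library such as java-uuid-generator, so this only sketches the bit layout under those assumptions:

```java
import java.security.SecureRandom;
import java.util.UUID;

// Minimal sketch of a version-1 (time-based) UUID generator.
// Layout per RFC 4122: 60-bit timestamp + clock sequence + 48-bit node.
public final class TimeUuid {
    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
    private static final long UUID_EPOCH_OFFSET = 0x01B21DD213814000L;
    private static final SecureRandom RANDOM = new SecureRandom();

    public static UUID next() {
        // Millisecond precision only; a real generator tracks 100-ns ticks
        // and bumps the clock sequence on timestamp collisions.
        long ts = System.currentTimeMillis() * 10_000L + UUID_EPOCH_OFFSET;
        long msb = (ts << 32)                      // time_low  -> bits 63..32
                 | ((ts >>> 16) & 0xFFFF0000L)     // time_mid  -> bits 31..16
                 | ((ts >>> 48) & 0x0FFFL)         // time_hi   -> bits 11..0
                 | 0x1000L;                        // version 1 -> bits 15..12
        long clockSeq = RANDOM.nextLong() & 0x3FFFL;       // 14 random bits
        long node = RANDOM.nextLong() & 0xFFFFFFFFFFFFL;   // 48 random bits
        long lsb = 0x8000000000000000L             // RFC 4122 variant bits
                 | (clockSeq << 48)
                 | node;
        return new UUID(msb, lsb);
    }
}
```

With random node and clock-sequence bits per call, two JVMs generating names in the same millisecond still get distinct values with overwhelming probability.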

I'm also wondering if we should pick a format like prefix + user-supplied name + unique label. That way, if users Ctrl+C their tests, the JVM crashes, etc., cleaning up orphaned cube instances is easy. Something like this could be done:

docker stop $(docker ps | grep [PREFIX] | awk '{print $1}')
docker rm $(docker ps -a | grep [PREFIX] | awk '{print $1}')

And all containers are stopped and removed. Thoughts?

@lordofthejars
Member Author

Today I talked with Aslak about this problem, and I think we have arrived at a good compromise. Let me explain.
First, when we create a container we are going to hash the configuration file and tag the container with that hash.
Then, if a ConflictException is thrown (which means the container is already running), we inspect the container and read the tag. If the hash is exactly the same as that of the current configuration file, it means the container uses the same configuration and we can reconnect to it. If not, we rethrow the exception, because it means the user wants to do something different.

@aslakknutsen @rbattenfeld and @tnine Do you think this is a good strategy? Would you add anything else?
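The hashing side of this compromise could be sketched as follows. All class and method names here are hypothetical, not actual Cube API; only the idea of tagging with a digest and comparing on reconnect comes from the proposal above:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Sketch: tag the container with a digest of the Cube configuration,
// and only reconnect when the current configuration's digest matches.
public final class ConfigHash {

    // Hex-encoded SHA-256 of the raw configuration bytes.
    public static String of(byte[] config) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(config);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Reconnect only if the running container was created from a
    // byte-identical configuration; otherwise the caller should fail fast.
    public static boolean canReconnect(String runningContainerTag, Path currentConfig)
            throws Exception {
        return runningContainerTag.equals(of(Files.readAllBytes(currentConfig)));
    }
}
```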

@rbattenfeld

Thanks Alex, I am not quite sure how the proposal solves the problem. Two identical configuration files would still clash, right? Meaning, developers with the same configuration file are not able to execute tests at the same time? I like the idea @tnine proposed. But if your proposal solves this as well, then I am happy either way.

@lordofthejars
Member Author

Well, if the hash is identical then we can reconnect to that container, for example.

@aslakknutsen
Member

Technically it doesn't solve the 'run multiple instances of the test suite' problem, though it should to some extent address the concerns around recreating stale containers after a failure.

We would know that we created the container and based on what configuration. That avoids stopping random other containers due to misconfiguration. And since we created it, and it is based on our current config, it should be safe to recreate it.

@rbattenfeld

If test runs cannot be executed in parallel, then the proposal is IMO not a solution. The great thing about using docker is the controlled isolation of resources, tests, etc. Isolation is the key word. Docker allows you to run the same image as many times as you like, with each test running in its own container instance. THIS is the ultimate cube use case. You certainly may have other opinions and requirements, which is fine. But it should be possible to use docker in the docker way, and not to make the restriction the default behaviour.

@aslakknutsen I fully understand that the current behaviour makes sense for certain things, but not in all cases. I believe that Cube loses too much if this is not possible to change through configuration. Let the user decide whether he wants this behaviour or the other.

@tnine

tnine commented Jan 30, 2015

I know for us personally on UG, running in parallel is a must. Without parallelism, our tests take upwards of 40 minutes to run. We do run methods in parallel, but we've found that defining too many threads in a single JVM can cause issues. We use a combination of 4 forked JVMs with 4 threads each. If we followed any format of name + unique value per JVM, each JVM would have its own isolated docker instance and would run. I like this idea quite a bit.

Regarding reconnect semantics, I think a hash should work. IMHO, simply documenting that you shouldn't use reconnect in a multi-threaded environment would be sufficient for users. I envision the following use cases for us.

Full suite
We want a non-colliding naming scheme. We want to run a new instance with every JVM, and allow our JVMs to run in parallel, each with its own docker instance.

Single developer mode
To save startup time as we build our integration tests, we'll leave our Cassandra and ElasticSearch images running.

Single developer mode is a lower priority for me than the full suite. To alleviate startup time, we can configure very fast docker images. The full suite is what validates our build, and if we can run it faster via parallelism, that would be fantastic.

@aslakknutsen
Member

@rbattenfeld Well, it's not like you can't run tests in parallel now; you just have to specify the name yourself, e.g. via system property replacement in arquillian.xml: my_container_${unique.id}

This issue was created as a 'wouldn't it be nice if Cube could do that automatically'... But it's hardly impossible as is.
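A sketch of what that replacement might look like in arquillian.xml, assuming Cube's dockerContainers property and a unique.id system property passed to the JVM (the container definition here is only illustrative):

```xml
<extension qualifier="docker">
    <!-- ${unique.id} is substituted from a JVM system property,
         e.g. -Dunique.id=abc123, so each run gets a distinct name. -->
    <property name="dockerContainers">
        wildfly_${unique.id}:
          image: jboss/wildfly
    </property>
</extension>
```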

@tnine tnine mentioned this issue Jan 30, 2015
@rbattenfeld

Is #85 a solution for this?

I am experimenting with this: http://stackoverflow.com/questions/3984794/generating-uuid-through-maven

It seems to work, but it also seems difficult to set up for command-line and Eclipse execution. As a non-maven hero ;-)

@rbattenfeld

It is probably not the nicest solution, but the gmaven plugin generating a UUID as described above is a compromise and should work.
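For reference, the gmaven approach from that Stack Overflow question boils down to something like the following; plugin version and the property name (unique.id here) are assumptions that may need adjusting:

```xml
<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <executions>
    <execution>
      <!-- Run early so the property is available to later plugins. -->
      <phase>initialize</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>
          project.properties.setProperty('unique.id', UUID.randomUUID().toString())
        </source>
      </configuration>
    </execution>
  </executions>
</plugin>
```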

@lordofthejars
Member Author

Finally, we are going to document how to achieve this using the Maven plugin. Issue #87

@tnine

tnine commented Feb 2, 2015

Hey guys,
I'm working on a fix for this today. I'm going to create a strategy for it, so users can pick the implementation they want in the Arquillian configuration. By default I'll leave the current strategy, and I'll add a unique-per-instance strategy as well.

I don't think the unique-id Maven plugin solution would work in this configuration:

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <executions>
    <execution>
      <id>default-test</id>
      <phase>test</phase>
      <goals>
        <goal>test</goal>
      </goals>
      <configuration>
        <parallel>methods</parallel>
        <forkCount>.5C</forkCount>
        <threadCount>2</threadCount>
        <reuseForks>true</reuseForks>
      </configuration>
    </execution>
  </executions>
</plugin>

Each forked JVM would get the same variable, and therefore would still conflict on the name, correct? If I've missed something, please let me know.
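One possible workaround for exactly this: if I remember correctly, Surefire replaces ${surefire.forkNumber} with a per-fork value inside systemPropertyVariables, so each forked JVM could receive a distinct id to use in the container name; the property name unique.id below is only illustrative:

```xml
<configuration>
  <parallel>methods</parallel>
  <forkCount>.5C</forkCount>
  <threadCount>2</threadCount>
  <reuseForks>true</reuseForks>
  <systemPropertyVariables>
    <!-- ${surefire.forkNumber} differs per forked JVM, unlike a
         property generated once at the start of the build. -->
    <unique.id>${surefire.forkNumber}</unique.id>
  </systemPropertyVariables>
</configuration>
```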

@tnine

tnine commented Feb 2, 2015

So, thinking about this a bit more: do we want to support a scenario with both unique instances and container reuse? This feels complicated, and I'm not sure it's something that should be supported, but I wanted to throw it out there. Aside from a single reusable DB scenario with multiple clients, I can't envision a topology where this would be beneficial. I would also argue for true test isolation, where each JVM gets its own stack of docker instances. Thoughts?

@aslakknutsen
Member

@tnine Database-related stuff is the only thing that pops to mind here as well. I'd say start with the simple requirement. If a need comes up, we can implement it then.
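The pluggable naming strategy discussed above could be sketched roughly like this; all names are hypothetical, not Cube's actual API:

```java
import java.util.UUID;

// Sketch of the pluggable naming idea: Cube would pick one of these
// implementations via the Arquillian configuration.
interface ContainerNamingStrategy {
    String name(String configuredName);
}

// Current behaviour: use the configured name as-is (reconnect-friendly,
// but parallel runs clash).
class StaticNamingStrategy implements ContainerNamingStrategy {
    public String name(String configuredName) {
        return configuredName;
    }
}

// Unique-per-JVM behaviour: append a random suffix so parallel runs cannot
// clash, plus a fixed prefix that makes orphaned containers easy to clean up
// with the docker stop/rm one-liners shown earlier in the thread.
class UniqueNamingStrategy implements ContainerNamingStrategy {
    private static final String PREFIX = "cube_";
    private final String suffix = UUID.randomUUID().toString().substring(0, 8);

    public String name(String configuredName) {
        return PREFIX + configuredName + "_" + suffix;
    }
}
```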

@lordofthejars lordofthejars removed this from the 1.0.0.Alpha5 milestone Apr 2, 2015
@iocanel
Contributor

iocanel commented Jul 20, 2016

A little late to the party ...

I'm not sure what the status of this issue is, but I am seeing some tests treat the name as an id, and in many cases this causes failures. My most recent experience was with:

public void should_execute_top(@ArquillianResource CubeController cubeController, @ArquillianResource CubeID cubeID) {

@lordofthejars
Member Author

I will check tomorrow, but it is probably not related, because dynamic ids are only enabled when the container name ends with *.

Thanks for your efforts on this. In the past I was the one who ran the tests, so all of them passed, but running on Linux has let us see some errors in the tests. Tomorrow I will execute these tests to see if they are reproducible.

@lordofthejars
Member Author

In my case, running on docker machine, it worked.
