Contributing and Branching Strategy
Anyone with an interest in what Dasein Cloud is doing is welcome to contribute.
As noted elsewhere, Dasein Cloud is broken into a number of Git sub-modules for the core API and the individual cloud implementations. What you fork depends on whether you are interested in working on a specific cloud or on the whole thing.
The simplest approach is to fork the "develop" branch of the target sub-module and edit within your own GitHub project. Make your changes, run any relevant tests, and then submit a pull request to have them merged back into Dasein Cloud.
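In sketch form, that workflow looks like the following (YOUR_USER, the sub-module name, and the topic-branch name are placeholders, not real repositories):

```shell
# Fork the sub-module on GitHub first, then clone your fork:
git clone git@github.com:YOUR_USER/dasein-cloud-core.git
cd dasein-cloud-core
git checkout develop              # development happens on develop
git checkout -b my-fix            # topic branch for your change
# ...edit, then run any relevant tests...
git commit -am "Describe the fix"
git push origin my-fix
# finally, open a pull request against the upstream develop branch
```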
Each sub-module consists of at least two branches: "master" and "develop". Work with the "master" branch when you are looking for the latest production-ready code. In general, if you are doing development, you'll work with the "develop" branch. The exception is when you are fixing bugs for a specific release. Each release has its own branch.
For example, Dasein Cloud Core has three branches right now:
- master
- develop
- 2012.09

The master and 2012.09 branches are currently the same thing because 2012.09 is the current production release. The develop branch represents the current stage of 2013.01 development.
As bugs come up in 2012.09, the fixes are developed in the 2012.09 branch and then merged into master. At that point, a formal 2012.09.1 release is created: master is tagged 2012.09.1 and the release is merged back into the 2012.09 branch. No 2012.09.1 branch is ever created.
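Expressed as git commands, that dot-release flow looks roughly like this (a sketch; in practice the release itself is cut by the release process, not by hand):

```shell
# Hypothetical 2012.09.1 point release, following the flow described above:
git checkout 2012.09            # develop the bug fix on the release branch
# ...commit the fix...
git checkout master
git merge 2012.09               # merge the fix into master
git tag 2012.09.1               # tag the release on master
git checkout 2012.09
git merge master                # merge the release back; no 2012.09.1 branch
```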
So, if you are looking for:
- the current production code -> the current master
- 2012.09 as it was released -> the 2012.09 tag on master
- 2012.09.1 as it was released -> the 2012.09.1 tag on master
- Current work on 2012.09.2 -> the current 2012.09 branch
- Current work on the next major release (2013.01) -> the current develop
In general, you should not edit the Maven versions when making your changes. If you check out develop, the version will already be set to the proper SNAPSHOT value. Furthermore, the release process automatically sets the proper versions in pom.xml for both the release (non-SNAPSHOT) and the subsequent SNAPSHOT.
The only exception is for the person setting things up in develop for the next major release.
As noted above, we welcome anyone to contribute. A wide degree of latitude is given to accepting cloud-specific sub-module contributions. The constraints around Dasein Cloud Core, however, are stricter. In particular, if you are making changes to Dasein Cloud Core, please follow these principles.
We basically don't make "dot release" changes to core unless they are simple bug fixes. There is rarely a bug in core since it consists largely of interfaces. The really important thing is this: we don't roll out contract changes in dot releases. Period. No changes to interfaces, method signatures, or enums.
If you want to contribute something to the core, you should create an issue for it with your proposed solution. Core changes really should benefit from many eyes ahead of time.
Core interfaces should not (necessarily) reflect the way any specific cloud handles a problem. We want core to be able to support any functionality in any cloud, but in a way that consuming code can discover.
Here's a recent example:
A contributor noted that Dasein Cloud does not have explicit support for AWS tagging. Tagging is a mechanism that AWS has for enabling you to tag different AWS resources with meta-data through an independent tagging service. The lack of explicit support for this is indeed a hole in Dasein Cloud and thus core should be modified to address it.
One way to support it would be to model the problem the way AWS does. That is, create a TagSupport class and run tag management through it. This approach would, however, run into some difficulties in other clouds. First, the AWS mechanism works the way it does because AWS embeds the resource type in a resource ID. AWS is able to understand what you want to tag based solely on the object ID. Most other clouds do not share this design. We would therefore have to develop a rich syntax for describing the resource type along with the ID in a centralized service.
A more cloud-independent approach would be to have each support service implement tagging for the resources it manages. In other words, VirtualMachineSupport would handle tagging of virtual machines. Of course, that means more work in this case. But it is more flexible for supporting the different ways different clouds handle meta-data.
Everything that is in any way optional for a cloud for a specific kind of support should be reflected in meta-data. If it's binary, there should be a boolean supportsXXX() call. If it is something that may be considered not supported, optional, or required, use a Requirement identifyXXXRequirement() call.
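As a sketch of that convention (the Requirement enum here mirrors the idea behind org.dasein.cloud.Requirement, but ExampleSupport and its methods are purely hypothetical, not actual Dasein Cloud API):

```java
// Hypothetical illustration of the meta-data convention described above.
// Requirement mirrors the idea behind org.dasein.cloud.Requirement.
enum Requirement { NONE, OPTIONAL, REQUIRED }

// Hypothetical support service exposing its capabilities as meta-data.
interface ExampleSupport {
    // Binary capability -> boolean supportsXXX()
    boolean supportsResourceSharing();

    // Not supported/optional/required -> Requirement identifyXXXRequirement()
    Requirement identifyNameRequirement();
}

// Consuming code discovers capabilities before acting on them.
class CapabilityCheck {
    static String describe(ExampleSupport support) {
        if (!support.supportsResourceSharing()) {
            return "sharing unsupported";
        }
        return "name is " + support.identifyNameRequirement();
    }
}
```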
Using the tags example above, there are three conditional pieces to the tagging puzzle:
- A cloud may or may not support runtime modification of tags
- A cloud may or may not support runtime modification of tags for all resources
- A cloud may limit the number of tags associated with a resource
The annoying thing is that you really have to have a lot of experience in operating across clouds to appreciate the third one. Still, the proposed solution was to add the following methods to VirtualMachineSupport to describe virtual machine tagging:
- boolean supportsTagModifications()
- int getMaxTags()
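In sketch form, those additions look like the following (VirtualMachineSupport is a real Dasein Cloud interface, but this trimmed-down slice and the consuming TagPlanner class are purely illustrative):

```java
// Illustrative slice of VirtualMachineSupport showing only the proposed
// tagging meta-data; the real interface has many more methods.
interface VirtualMachineSupport {
    // Can tags on a virtual machine be modified after launch?
    boolean supportsTagModifications();

    // Maximum number of tags the cloud allows per virtual machine.
    int getMaxTags();
}

class TagPlanner {
    // Hypothetical consumer: trims a requested tag count to what the cloud allows.
    static int plannedTagCount(VirtualMachineSupport vm, int requested) {
        if (!vm.supportsTagModifications()) {
            return 0;
        }
        return Math.min(requested, vm.getMaxTags());
    }
}
```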
The test suite is in dasein-cloud-test. A full description of this test suite is beyond the scope of this Wiki item. The important thing is that every core feature should have a validation test in dasein-cloud-test. When you are adding meta-data, make sure that change is reflected in the MetaData test for the specified support service. If you are adding functionality, make sure you have a test specific to that piece of functionality.