Configuration
Configuration files define the behaviour of Watai, either for a specific suite or for multiple ones.
The file format may be one of:
- JSON (config.json): in such a case, it must be valid JSON, i.e. without comments, with double quotes only, and without any evaluation.
- JavaScript as a RequireJS module (config.js): if you need code evaluation (or comments), write your configuration file as a JavaScript file whose module.exports object is set to a configuration hash. Simply assign your hash to the magic variable module.exports.
These declarations are thus exactly equivalent, and the format is yours to choose:
// example/DuckDuckGo/config.json
{
"baseURL": "https://duckduckgo.com/?kad=en_GB",
"browser": "firefox"
}
// example/DuckDuckGo/config.js
module.exports = { // comments can be added in a JS file, not JSON
baseURL: 'https://duckduckgo.com/?kad=en_GB',
browser: 'firefox'
}
Configuration is handled in a cascade, which means you can define details at different levels, and share some setup between different test suites.
Configuration files are loaded, if found, in the following order:
- CLI-passed values with the --config option;
- test suite root (the folder you pass to Watai as a parameter);
- …all the way up to your $HOME folder;
- in the $HOME/.watai folder;
- then, defaults are loaded.
For more details, see the configuration management package homepage.
Thanks to this configuration system, it is easy to properly separate concerns. Let's consider the following setup:
- A folder holding all test suites for a given webapp stores the app-specific elements in a config.json file, such as the base URL and the browser to use for testing.
- If you have many tests, you may spread them across several subfolders. Those would probably redefine the base URL of the page at which they start their tests.
- Then, host-specific configuration should be written in $HOME/.watai/config.json and not versioned. “Host-specific” configuration covers things such as the location of browser binaries, and whether browsers should be killed in case of a failure. This allows you to clone the same code and, for example, use different binary locations and policies on your continuous integration server than on your development machines.
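As an illustration of such a cascade, consider two files, one versioned with the app and one local to the machine (the paths and values below are hypothetical):
// <your app>/test/integration/config.js: shared, versioned app-level settings
module.exports = {
  baseURL: 'http://0.0.0.0:3000/',
  browser: 'firefox'
}
// $HOME/.watai/config.js: host-specific settings, not versioned
module.exports = {
  quit: 'on success' // keep the browser open on failures on this machine (see the quit key below)
}
Following the loading order above, the host-level file fills in whatever the app-level file does not define, and values passed on the command line with --config override both.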
The following keys may be set in a configuration file. Each description below explains the value the corresponding key should map to.
In its simplest form, a string containing the URL that the spawned browser will load first, before starting to execute the scenarios.
Example:
baseURL: 'http://0.0.0.0:3000/'
The value may also be a URL object, as specified by the Node core url module, instead of a string. This means you can override some parts of the target URL through cascades, or at call time with the --config option (see the Options page).
Example:
// test/integration/config.js
baseURL: 'http://0.0.0.0:3000/'
// test/integration/components/config.js
baseURL: { path: '/catalog/index' } // will target all suites in the components folder at http://0.0.0.0:3000/catalog/index
Or, at call time:
watai test/integration --config '{ "baseURL": { "port": 3030 } }' # execute the test suite on a different target
A string defining which browser should be used for the test.
You may set this to firefox (default), chrome, ie, safari, and so on… As long as the named browser is installed on the machine that runs the Selenium server, and has a driver Selenium can use, it will be accessible.
This is actually a shortcut for predefined sets of driverCapabilities hashes, so if you need anything more specific, such as a version number, a platform, deactivating JavaScript… check out the driverCapabilities config key documentation below.
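Example (the value is only illustrative; any browser installed alongside your Selenium server would do):
browser: 'chrome'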
An array of strings, each of which specifies a view that should be used to report on tests advancement. Views are not mutually exclusive, they should be seen as “pieces” that you can mix to best fit your workflow and expected level of detail.
Currently, the following views are available:
- CLI: animated and UTF-nicey (with fallback for Windows) output. Best for development machines, when you stare at your screen hoping all your tests pass.
- Verbose: realtime view of all steps and evaluations.
- Growl: if you have a notification system supported by the Growl package, will notify you with test results after they are all executed.
- Dots: outputs a series of characters, one per test: . for a pass, F for a failure, and detailed reports on stderr if a failure arises. Useful for CI.
- Instafail: logs failures as they arrive. Useful in combination with the Dots view for long tests on CI environments.
- PageDump: dumps the whole page source at the end of the test in case the first scenario fails. Useful if you often have server failures.
- SauceLabs: if you run your tests on SauceLabs, use this view to send their status (pass/fail) to SauceLabs. You will need to npm install saucelabs first.
The default is [ 'CLI', 'Growl' ], which means both the interactive console view and Growl notifications are used.
A good default for CI environments could be [ 'Dots', 'Instafail' ].
If you specify only one view, Watai can wrap a string in an array for you: views: 'CLI' is the same as views: ['CLI'].
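Example, for a CI environment (purely illustrative):
views: ['Dots', 'Instafail']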
A number of milliseconds that tells Watai how long it should keep trying whenever a page seems to fail the described scenarios, before considering it an actual failure.
It may be tempting to set this to a really small amount, but remember that you don't want all your tests to fail whenever your internet connectivity is a bit worse than usual… :)
Defines under which conditions the driver and browser should be quit.
Can be one of:
- "always" (default): exits the browser under any conditions (except when Watai is killed manually [discuss this behavior here]).
- "on success": will leave the browser open if failures occur, giving you the opportunity to interact with the browser, possibly open an inspector, and investigate the problem further.
- "never": will always leave the browser open. This is probably not a good idea unless you have some automated cleanup that killalls browsers after some time.
If you're using SauceLabs, not quitting the driver at the end of a test will give you 90 seconds (or the amount you specified in idle-timeout) to take control of the browser. If you don't intend to use this feature (e.g. your tests run in continuous integration), avoid wasting minutes by setting quit: "always".
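For instance, a development machine could keep failed sessions open while CI always cleans up; a possible sketch, relying on the cascade described earlier (file location and override are illustrative):
// $HOME/.watai/config.js on a development machine
module.exports = {
  quit: 'on success' // leave the browser open after a failure, to investigate
}
# on the CI server, override at call time
watai test/integration --config '{ "quit": "always" }'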
The name of the suite defaults to the containing folder's name. If you want to set it explicitly, simply specify it as a string in this configuration key.
A hash that more or less precisely defines which browser you want to run the test in. See all possible values on the Selenium documentation.
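A sketch using standard Selenium capability names (the specific values are only illustrative and depend on your Selenium setup):
driverCapabilities: {
  browserName: 'chrome',
  version: '27',
  platform: 'LINUX',
  javascriptEnabled: true // deactivate JavaScript by setting this to false, if your Selenium driver supports it
}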
The URL at which your Selenium server can be found, as a String or as a URL object, as specified by the Node core url module (see the baseURL key documentation for more details).
You don't need to set this if you use the default setup described in the installation procedure.
Authentication for connection to the Selenium server is handled straight in the seleniumServerURL key, through the http://<username:password>@<server>/… syntax, or through an auth key if you're using Node's URL object syntax.
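A sketch of both forms, with placeholder credentials and host (adapt them to your own Selenium server):
// as a string
seleniumServerURL: 'http://myuser:mysecret@selenium.example.com:4444/'
// as a Node URL object
seleniumServerURL: {
  hostname: 'selenium.example.com',
  port: 4444,
  auth: 'myuser:mysecret'
}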
The tags and build keys are parsed and sent to the Selenium server.
That means, if you run your tests with a Selenium server that logs runs and supports tags, such as SauceLabs, you can tag a suite with an array of strings to make it easier to find run results.
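For example (the values are only illustrative, and build would typically come from your CI environment; this assumes a config.js file, so that code evaluation is available):
tags: ['smoke', 'checkout'],
build: process.env.BUILD_NUMBER // e.g. set by Jenkins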
If set to true, tests will stop after the first failing scenario.
This can be useful if you have long tests where failing at one step has a high probability of preventing all later tests from bringing any useful information.
You can provide an array of scenario indices that will be skipped at runtime.
watai your/suite --config '{"ignore":[4,5,17]}' # will act as if scenarios 4, 5 and 17 did not exist
The best way to use this option is through the command-line, for local runs. If you want to deactivate scenarios and share that modification, you should make this visible by modifying the scenarios’ filenames.
If you need to fetch some configuration elements from somewhere else, for example with filesystem interaction, you might encounter a situation where you need to specify configuration elements asynchronously.
Any configuration element may also be a function, ready to compute one of the values documented above. Such a configuration function may either synchronously return the value to use in its place, or compute it asynchronously. In the asynchronous case, it takes a Q deferred object (you can think of this as a promise controller) as its first parameter, passes it the result once it's ready, and you'll have to make sure you return the associated promise.
Example:
// tags a run with the name of the current Git branch
tags: function(deferred) {
  require('child_process').exec('git rev-parse --abbrev-ref HEAD', deferred.makeNodeResolver()); // easily hook to Node-style callbacks
  return deferred.promise; // make sure you return the promise!
}
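A synchronous configuration function simply returns its value, as mentioned above; a minimal sketch (the file read here is hypothetical):
// reads the target URL from a file generated by a previous build step
baseURL: function() {
  return require('fs').readFileSync('./target-url.txt', 'utf8').trim();
}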