There are several kinds of tests for the VS Code extensions. This document describes them and gives pointers on how to run and debug them.
These tests are described in the order in which we prefer to write them. Always prefer unit tests, integration tests, and system tests, in that order.
Ensure that you have set a default dev hub through the `sfdx force:auth:*` commands, passing in `--setdefaultdevhubusername`.
To run all tests, execute `npm run compile && npm run test` from the top-level folder.
We have several modules that do not have any dependencies on VS Code, such as the salesforce-apex-debugger and salesforce-utils-vscode modules. Write tests for these using Mocha and Chai as you normally would for NPM modules.
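For example, a plain Mocha and Chai test for such a module might look like the following (the module under test here is Node's own `path`, purely for illustration):

```typescript
// A minimal Mocha + Chai unit test; it exercises Node's path module purely
// as a stand-in for one of our VS Code-free modules.
import { expect } from 'chai';
import * as path from 'path';

describe('path utilities', () => {
  it('joins segments with the platform separator', () => {
    expect(path.join('force-app', 'main')).to.equal(
      ['force-app', 'main'].join(path.sep)
    );
  });
});
```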
VS Code provides its own special way to run tests that require access to the extension development host. Basically, it launches your tests in another instance of itself. You control which extensions are launched, which tests are run, and the workspace they run in. This gives you quite a bit of flexibility.
More information can be found at the doc site.
While the test runner is highly configurable, there are certain assumptions that will help make writing the tests easier.
- Ensure that your tests go into the `test` folder.
- Ensure that you have an `index.ts` file in the `test` folder that follows what is in the standard configuration (copy from an existing one if you don't have it).
- Ensure that your test files follow the `<name>.test.ts` naming pattern. The `.test.` in the middle is essential.
- Ensure that your `.js` test files are compiled into the `out/test` directory.
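The standard configuration is simply a file that configures and exports the VS Code test runner. Below is a minimal sketch based on the pattern from the VS Code docs; the packages in this repository ship their own `testrunner.ts`, so copy an existing `index.ts` rather than this sketch.

```typescript
// A minimal sketch of a test/index.ts for the VS Code test runner
// (the pattern from the VS Code docs, not this repository's actual file).
import * as testRunner from 'vscode/lib/testrunner';

testRunner.configure({
  ui: 'bdd',      // the Mocha interface; some packages may use 'tdd'
  useColors: true // colored output in the extension host console
});

module.exports = testRunner;
```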
There are configurations already created for you at the top level .vscode/launch.json file. Here's a sample.
```json
{
  "name": "Launch Salesforce DX VS Code Apex Tests",
  "type": "extensionHost",
  "request": "launch",
  "runtimeExecutable": "${execPath}",
  "args": [
    // <path-to-a-folder>
    "--extensionDevelopmentPath=${workspaceRoot}/packages",
    "--extensionTestsPath=${workspaceRoot}/packages/salesforcedx-vscode-apex/out/test"
  ],
  "stopOnEntry": false,
  "sourceMaps": true,
  "outFiles": ["${workspaceRoot}/packages/*/out/test/**/*.js"],
  "preLaunchTask": "Compile"
}
```
The important args are:
- The first, optional, parameter is a path to a folder that will serve as the workspace in which to run the tests. If you omit this, it just uses a clean workspace (which is usually what you want).
- `--extensionDevelopmentPath` governs what extensions are loaded.
- `--extensionTestsPath` governs what tests are actually run. This seems to need to be a specific folder, so you cannot add a wildcard.
You should add an entry like `"test": "node ./node_modules/vscode/bin/test"` to your package.json. When you run `npm test`, it fetches an instance of VS Code into the `.vscode-test` folder and runs your tests with that instance. Thus, the `.vscode-test` folder should be added to your `.gitignore` (and to other ignore files such as `.npmignore` and `.vscodeignore`).
There are some optional environment variables to configure the test runner:
| Name | Description |
| --- | --- |
| CODE_VERSION | Version of VS Code to run the tests against (e.g. `0.10.10`) |
| CODE_DOWNLOAD_URL | Full URL of a VS Code drop to use for running tests against |
| CODE_TESTS_PATH | Location of the tests to execute (default is `process.cwd()/out/test` or `process.cwd()/test`) |
| CODE_EXTENSIONS_PATH | Location of the extensions to load (default is `process.cwd()`) |
| CODE_TESTS_WORKSPACE | Location of a workspace to open for the test instance (default is `CODE_TESTS_PATH`) |
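For example, you could pin the VS Code version for a local run with something like `CODE_VERSION=1.26.0 npm test` (the version number here is only an illustration).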
If you are running this from the top-level root folder, you can issue `npm run test:without-system-tests`.
See VS Code's doc site for more information.
See this repository for the actual vscode/bin/test source.
There are tests that require integration with the Salesforce server. These tests require prior authentication to have occurred and a default dev hub to be set. They show up in several packages, are put under `test/integration`, and are named in the standard `.test.ts` pattern. The package.json should have an entry like `"test:integration": "node ./node_modules/vscode/bin/test/integration"`. These can be run in the same way from the CLI using `npm run test:integration`. Running `npm run test` will also run these.
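For orientation, an integration test is just an ordinary Mocha test that assumes the authenticated setup described above. The sketch below is illustrative; the CLI command and assertion are assumptions, not code from this repository.

```typescript
// A hedged sketch of an integration-style test that assumes a default dev hub
// has already been authorized; the CLI command and assertion are illustrative.
import { expect } from 'chai';
import * as cp from 'child_process';

describe('default dev hub', function() {
  this.timeout(120000); // server round-trips can be slow

  it('is visible to the Salesforce CLI', () => {
    const output = cp.execSync('sfdx force:org:list --json', {
      encoding: 'utf8'
    });
    const result = JSON.parse(output);
    expect(result.status).to.equal(0);
  });
});
```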
A module can also have an entry like `"test:unit": "node ./node_modules/vscode/bin/test/unit"`. This is used for pure unit tests and for the VS Code based tests discussed above. It is a good pattern to have an entry of `"test:unit": "node ./node_modules/vscode/bin/test"` in the package.json for modules that don't have separate integration tests. These can be run using `npm run test:unit` for quick testing that doesn't require a scratch org or the server.
Modules that have separate unit & integration tests can provide top level launch configurations for running those tests as well. See the examples in launch.json for Apex Debugger configurations.
Since some modules have a dependency on VS Code and others do not, the tests are run differently for each. The modules without a dependency on VS Code run Mocha directly, while the VS Code packages run their tests programmatically. In order to produce the JUnit and xUnit files, we configure Mocha to use the mocha-multi-reporters and mocha-junit-reporter packages. The packages that run Mocha directly are configured by pointing the config file option to the top-level Mocha config file. For the VS Code packages, the `testrunner.ts` file sets the reporters if they have not already been set.
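Setting those reporters programmatically boils down to something like the following sketch; the option values and paths are assumptions for illustration, not the repository's actual configuration.

```typescript
// A sketch of configuring Mocha programmatically with mocha-multi-reporters
// and mocha-junit-reporter; paths and options are illustrative assumptions.
import Mocha = require('mocha');

const mocha = new Mocha({
  ui: 'bdd',
  reporter: 'mocha-multi-reporters',
  reporterOptions: {
    reporterEnabled: 'spec, mocha-junit-reporter',
    mochaJunitReporterReporterOptions: {
      mochaFile: 'junit-custom.xml' // per-package output file
    }
  }
});

mocha.addFile('./out/test/example.test.js'); // hypothetical compiled test file
mocha.run(failures => {
  process.exitCode = failures ? 1 : 0;
});
```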
In order to upload and store the results in Appveyor, the `junit-custom.xml` files in each package are aggregated into a single folder and renamed to include the relevant package name by `aggregate-junit-xml.js`. The Appveyor config file points to that directory so that a zip with all the JUnit files is uploaded.
We have several system (end-to-end) tests written using Spectron. These tests exercise the end-to-end flow from the perspective of the user. These tests borrow heavily from how the VS Code team does its smoke tests.
- Ensure that port 7777 is available on your machine. If it is not, you need to change `application.ts` to use a different port for webdriver.
- Ensure that system tests go into `packages/system-tests`.
- Ensure that tests are in the `packages/system-tests/scenarios` folder.
- Ensure that your test files follow the `<name>.test.ts` naming pattern. The `.test.` in the middle is essential.
The general flow of the system tests is as follows (see the sketch below):

- Pre-populate a folder with one of the sample projects from the `packages/system-tests/assets` folder.
- Open an instance of VS Code on that workspace.
- Execute a series of commands against that workspace and verify the results.
Because system tests take a long time to execute, it's important to group them together to minimize the setup time.
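In Spectron terms, that flow boils down to something like this hedged sketch, which uses Spectron's `Application` API directly; the binary path, workspace folder, and environment variable are illustrative assumptions rather than the repository's actual helpers.

```typescript
// A hedged sketch of the system-test flow using Spectron's Application API;
// the binary path, workspace folder, and env var are illustrative assumptions.
import { Application } from 'spectron';

async function runScenario(): Promise<void> {
  const app = new Application({
    path: process.env.VSCODE_BINARY_PATH as string, // assumed env var for the VS Code binary
    args: ['/tmp/sample-workspace'],                // folder pre-populated from packages/system-tests/assets
    port: 7777                                      // the webdriver port mentioned above
  });

  await app.start(); // open VS Code on the workspace
  try {
    const windows = await app.client.getWindowCount();
    if (windows < 1) {
      throw new Error('VS Code did not open a window');
    }
    // ...drive commands against the workspace and verify the results here...
  } finally {
    await app.stop();
  }
}

runScenario().catch(err => {
  console.error(err);
  process.exit(1);
});
```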
From the top-level folder, execute `npm run test:system-tests` to execute only the system tests.
To debug the system tests:

- Run `DEBUG_SPECTRON=1 npm run test:system-tests`.
- Use the "Attach to Process (Inspector Protocol)" debug configuration to attach to the process (uses port 9229).
System tests exercise a wide range of modules and, thus, are well-suited for providing an upper bound on our coverage numbers. However, they are harder to instrument since they are not isolated to one module; all modules need to be instrumented.
The dynamic loading technique that we use with `testrunner.ts` is not sufficient: it clears the require cache and forces modules to be loaded as dynamically instrumented files, which only works for that particular module. To instrument all modules, you would need a parent module that can load everything. Such a module is not impossible, but it might deviate from usual user interactions, i.e., there would be so much magic going on that we might not be able to trust the coverage numbers.
Thus, instead of dynamic instrumentation, we opt for static instrumentation.
1. Build all the modules, like normal, and write their JS files and source maps to disk.
2. Use `istanbul` (version 1.0 and above), which supports source maps, to instrument the newly built modules from step 1 and write them out to disk. `istanbul` works by source instrumentation and maintains a global map of files to covered locations called `__coverage__`.
3. At the end of our extension deactivation, we grab the value of the `__coverage__` variable and write it out into `coverage/coverage.json` (see the sketch after this list).
4. The `coverage/coverage.json` file still uses the JS line numbers. We then use `remap-istanbul` to remap the line numbers to their TypeScript counterparts.
5. Then we replace the original `coverage/coverage.json` with this new version and generate the corresponding `coverage.lcov`.
6. `coverage/coverage.json` and `lcov.info` are uploaded to codecov.io.
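Step 3 is roughly the following, as a minimal sketch rather than the actual implementation; the output path handling is an assumption.

```typescript
// A minimal sketch (not the actual implementation) of dumping istanbul's
// global coverage map when the extension is deactivated.
import * as fs from 'fs';
import * as path from 'path';

// istanbul's instrumented code populates this global at runtime.
declare const __coverage__: { [file: string]: unknown } | undefined;

export function deactivate(): void {
  if (typeof __coverage__ === 'undefined') {
    return; // the build was not instrumented, so there is nothing to write
  }
  const outDir = path.join(process.cwd(), 'coverage'); // assumed output location
  if (!fs.existsSync(outDir)) {
    fs.mkdirSync(outDir);
  }
  fs.writeFileSync(
    path.join(outDir, 'coverage.json'),
    JSON.stringify(__coverage__)
  );
}
```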
Because we are changing the built modules and writing to file, the steps above have to be explicitly requested using the command `npm run coverage:system-tests`.
Tip: Depending on what you are trying to do, it is likely that you need to rebuild the modules after the coverage tests to get to a pristine state.