
Commit

Signed-off-by: miguelgfierro <[email protected]>
miguelgfierro committed Sep 17, 2023
1 parent 54b171b commit 1c1e1e4
Showing 8 changed files with 54 additions and 79 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -135,13 +135,13 @@ This project adheres to [Microsoft's Open Source Code of Conduct](CODE_OF_CONDUC

## Build Status

-These tests are the nightly builds, which compute the smoke and integration tests. `main` is our principal branch and `staging` is our development branch. We use [pytest](https://docs.pytest.org/) for testing python utilities in [recommenders](recommenders) and [Papermill](https://github.com/nteract/papermill) and [Scrapbook](https://nteract-scrapbook.readthedocs.io/en/latest/) for the [notebooks](examples).
+These tests are the nightly builds, which compute the asynchronous tests. `main` is our principal branch and `staging` is our development branch. We use [pytest](https://docs.pytest.org/) for testing python utilities in [recommenders](recommenders) and [Papermill](https://github.com/nteract/papermill) and [Scrapbook](https://nteract-scrapbook.readthedocs.io/en/latest/) for the [notebooks](examples).

For more information about the testing pipelines, please see the [test documentation](tests/README.md).

### AzureML Nightly Build Status

-Smoke and integration tests are run daily on AzureML.
+The nightly build tests are run daily on AzureML.

| Build Type | Branch | Status | | Branch | Status |
| --- | --- | --- | --- | --- | --- |
4 changes: 2 additions & 2 deletions SETUP.md
@@ -156,9 +156,9 @@ First make sure that the tag that you want to add, e.g. `0.6.0`, is added in [`r
1. Make sure that the code in main passes all the tests (unit and nightly tests).
1. Create a tag with the version number: e.g. `git tag -a 0.6.0 -m "Recommenders 0.6.0"`.
1. Push the tag to the remote server: `git push origin 0.6.0`.
-1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (unit, smoke and integration), generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
+1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (PR gate and nightly builds), generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
1. Fill up the draft release with all the recent changes in the code.
1. Download the wheel and tar.gz locally; these files shouldn't have any bugs, since they passed all the tests.
1. Install twine: `pip install twine`
-1. Publish the wheel and tar.gz to pypi: `twine upload recommenders*`
+1. Publish the wheel and tar.gz to PyPI: `twine upload recommenders*`
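The tagging steps above can be exercised end-to-end in a scratch repository. A minimal sketch, assuming `git` is installed (the temporary path and user identity are illustrative, and the push/PyPI steps are omitted since they need a real remote):

```shell
# Create a throwaway repository with one commit.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email="[email protected]" -c user.name="demo" \
    commit -q --allow-empty -m "initial commit"

# Create the annotated tag with the version number, as in the steps above.
git -C "$repo" -c user.email="[email protected]" -c user.name="demo" \
    tag -a 0.6.0 -m "Recommenders 0.6.0"

# In the real flow `git push origin 0.6.0` would now trigger the release
# pipeline; here we only confirm the tag exists locally.
git -C "$repo" tag --list
```

Annotated tags (`-a`) carry their own message and tagger, which is what release tooling typically inspects.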

3 changes: 0 additions & 3 deletions pyproject.toml
@@ -14,10 +14,7 @@ build-backend = "setuptools.build_meta"
[tool.pytest.ini_options]
markers = [
"experimental: tests that will not be executed and may need extra dependencies",
-"flaky: flaky tests that can fail unexpectedly",
"gpu: tests running on GPU",
-"integration: integration tests",
"notebooks: tests for notebooks",
-"smoke: smoke tests",
"spark: tests that requires Spark",
]
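Markers registered under `[tool.pytest.ini_options]` can then be used to select test subsets with `pytest -m`. A self-contained sketch, assuming `pytest` is installed (the scratch files and demo tests are illustrative, not from the repo):

```shell
# Scratch project with one marked and one unmarked test.
workdir=$(mktemp -d)
cat > "$workdir/pytest.ini" <<'EOF'
[pytest]
markers =
    spark: tests that require Spark
EOF
cat > "$workdir/test_demo.py" <<'EOF'
import pytest

@pytest.mark.spark
def test_with_spark_marker():
    assert True

def test_unmarked():
    assert True
EOF

# -m filters by marker expression: only the marked test is selected here.
pytest -q "$workdir" -m spark
```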
3 changes: 1 addition & 2 deletions tests/README.md
@@ -124,7 +124,7 @@ The first step is to tag the parameters that we are going to inject. For it we n

The way papermill works to inject parameters is very simple, it generates a copy of the notebook (in our code we call it `OUTPUT_NOTEBOOK`), and creates a new cell with the injected variables.
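Conceptually, the injection step clones the notebook and prepends a cell that assigns the injected variables. A stdlib-only sketch of the idea (papermill's real implementation also handles cell tags, kernels, and execution; `inject` is a hypothetical helper, not papermill's API):

```python
import copy

def inject(notebook, parameters):
    # Clone the notebook (papermill writes the copy to OUTPUT_NOTEBOOK) and
    # prepend a code cell assigning the injected variables.
    out = copy.deepcopy(notebook)
    lines = [f"{name} = {value!r}" for name, value in parameters.items()]
    cell = {
        "cell_type": "code",
        "metadata": {"tags": ["injected-parameters"]},
        "source": "\n".join(lines),
    }
    out["cells"].insert(0, cell)
    return out

nb = {"cells": [{"cell_type": "code", "metadata": {}, "source": "print(MOVIELENS_DATA_SIZE)"}]}
injected = inject(nb, {"MOVIELENS_DATA_SIZE": "100k"})
print(injected["cells"][0]["source"])
# → MOVIELENS_DATA_SIZE = '100k'
```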

-The second modification that we need to do to the notebook is to record the metrics we want to test using `sb.glue("output_variable", python_variable_name)`. We normally use the last cell of the notebook to record all the metrics. These are the metrics that we are going to control in the smoke and integration tests.
+The second modification that we need to do to the notebook is to record the metrics we want to test using `sb.glue("output_variable", python_variable_name)`. We normally use the last cell of the notebook to record all the metrics. These are the metrics that we are going to control in the smoke and functional tests.
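On the test side, the glued metrics are read back and compared to expected values within a tolerance. A minimal sketch of that check pattern (the numbers are illustrative, `pytest.approx` stands in for the comparison, and `pytest` is assumed to be installed):

```python
import pytest

TOL = 0.05  # relative tolerance, mirroring the smoke-test settings

def check_metric(observed, expected, rel=TOL):
    # Fail if the notebook's recorded metric drifted beyond the tolerance.
    assert observed == pytest.approx(expected, rel=rel)

# 0.41 is within 5% of 0.40, so this check passes.
check_metric(0.41, 0.40)
```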

This is an example on how we do a smoke test. The complete code can be found in [smoke/examples/test_notebooks_python.py](./smoke/examples/test_notebooks_python.py):

@@ -136,7 +136,6 @@ import scrapbook as sb
TOL = 0.05
ABS_TOL = 0.05

-@pytest.mark.smoke
def test_sar_single_node_smoke(notebooks, output_notebook, kernel_name):
notebook_path = notebooks["sar_single_node"]
pm.execute_notebook(
14 changes: 0 additions & 14 deletions tests/smoke/recommenders/recommender/test_deeprec_model.py
@@ -30,9 +30,7 @@
pass # disable error while collecting tests for non-gpu environments


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
def test_FFM_iterator(deeprec_resource_path):
data_path = os.path.join(deeprec_resource_path, "xdeepfm")
yaml_file = os.path.join(data_path, "xDeepFM.yaml")
@@ -52,9 +50,7 @@ def test_FFM_iterator(deeprec_resource_path):
assert isinstance(res, tuple)


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
def test_model_xdeepfm(deeprec_resource_path):
data_path = os.path.join(deeprec_resource_path, "xdeepfm")
yaml_file = os.path.join(data_path, "xDeepFM.yaml")
@@ -79,9 +75,7 @@ def test_model_xdeepfm(deeprec_resource_path):
assert model.predict(data_file, output_file) is not None


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
def test_model_dkn(deeprec_resource_path):
data_path = os.path.join(deeprec_resource_path, "dkn")
yaml_file = os.path.join(data_path, r"dkn.yaml")
@@ -116,10 +110,7 @@ def test_model_dkn(deeprec_resource_path):
assert model.run_eval(valid_file) is not None


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
-@pytest.mark.sequential
def test_model_slirec(deeprec_resource_path, deeprec_config_path):
data_path = os.path.join(deeprec_resource_path, "slirec")
yaml_file = os.path.join(deeprec_config_path, "sli_rec.yaml")
@@ -182,10 +173,7 @@ def test_model_slirec(deeprec_resource_path, deeprec_config_path):
assert model.predict(test_file, output_file) is not None


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
-@pytest.mark.sequential
def test_model_sum(deeprec_resource_path, deeprec_config_path):
data_path = os.path.join(deeprec_resource_path, "slirec")
yaml_file = os.path.join(deeprec_config_path, "sum.yaml")
@@ -248,9 +236,7 @@ def test_model_sum(deeprec_resource_path, deeprec_config_path):
assert model.predict(valid_file, output_file) is not None


-@pytest.mark.smoke
@pytest.mark.gpu
-@pytest.mark.deeprec
def test_model_lightgcn(deeprec_resource_path, deeprec_config_path):
data_path = os.path.join(deeprec_resource_path, "dkn")
yaml_file = os.path.join(deeprec_config_path, "lightgcn.yaml")
2 changes: 0 additions & 2 deletions tests/smoke/recommenders/recommender/test_deeprec_utils.py
@@ -23,7 +23,6 @@
pass # disable error while collecting tests for non-gpu environments


-@pytest.mark.smoke
@pytest.mark.gpu
def test_DKN_iterator(deeprec_resource_path):
data_path = os.path.join(deeprec_resource_path, "dkn")
@@ -82,7 +81,6 @@ def test_DKN_iterator(deeprec_resource_path):
break


-@pytest.mark.smoke
@pytest.mark.gpu
def test_Sequential_Iterator(deeprec_resource_path, deeprec_config_path):
data_path = os.path.join(deeprec_resource_path, "slirec")
68 changes: 32 additions & 36 deletions tests/smoke/recommenders/recommender/test_newsrec_model.py
@@ -17,33 +17,32 @@
pass # disable error while collecting tests for non-gpu environments


-@pytest.mark.smoke
@pytest.mark.gpu
def test_model_nrms(mind_resource_path):
-train_news_file = os.path.join(mind_resource_path, "train", r"news.tsv")
-train_behaviors_file = os.path.join(mind_resource_path, "train", r"behaviors.tsv")
-valid_news_file = os.path.join(mind_resource_path, "valid", r"news.tsv")
-valid_behaviors_file = os.path.join(mind_resource_path, "valid", r"behaviors.tsv")
+train_news_file = os.path.join(mind_resource_path, "train", "news.tsv")
+train_behaviors_file = os.path.join(mind_resource_path, "train", "behaviors.tsv")
+valid_news_file = os.path.join(mind_resource_path, "valid", "news.tsv")
+valid_behaviors_file = os.path.join(mind_resource_path, "valid", "behaviors.tsv")
wordEmb_file = os.path.join(mind_resource_path, "utils", "embedding.npy")
userDict_file = os.path.join(mind_resource_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(mind_resource_path, "utils", "word_dict.pkl")
-yaml_file = os.path.join(mind_resource_path, "utils", r"nrms.yaml")
+yaml_file = os.path.join(mind_resource_path, "utils", "nrms.yaml")

if not os.path.exists(train_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "train"),
"MINDdemo_train.zip",
)
if not os.path.exists(valid_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "valid"),
"MINDdemo_dev.zip",
)
if not os.path.exists(yaml_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "utils"),
"MINDdemo_utils.zip",
)
@@ -69,35 +68,34 @@ def test_model_nrms(mind_resource_path):
)


-@pytest.mark.smoke
@pytest.mark.gpu
def test_model_naml(mind_resource_path):
-train_news_file = os.path.join(mind_resource_path, "train", r"news.tsv")
-train_behaviors_file = os.path.join(mind_resource_path, "train", r"behaviors.tsv")
-valid_news_file = os.path.join(mind_resource_path, "valid", r"news.tsv")
-valid_behaviors_file = os.path.join(mind_resource_path, "valid", r"behaviors.tsv")
+train_news_file = os.path.join(mind_resource_path, "train", "news.tsv")
+train_behaviors_file = os.path.join(mind_resource_path, "train", "behaviors.tsv")
+valid_news_file = os.path.join(mind_resource_path, "valid", "news.tsv")
+valid_behaviors_file = os.path.join(mind_resource_path, "valid", "behaviors.tsv")
wordEmb_file = os.path.join(mind_resource_path, "utils", "embedding_all.npy")
userDict_file = os.path.join(mind_resource_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(mind_resource_path, "utils", "word_dict_all.pkl")
vertDict_file = os.path.join(mind_resource_path, "utils", "vert_dict.pkl")
subvertDict_file = os.path.join(mind_resource_path, "utils", "subvert_dict.pkl")
-yaml_file = os.path.join(mind_resource_path, "utils", r"naml.yaml")
+yaml_file = os.path.join(mind_resource_path, "utils", "naml.yaml")

if not os.path.exists(train_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "train"),
"MINDdemo_train.zip",
)
if not os.path.exists(valid_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "valid"),
"MINDdemo_dev.zip",
)
if not os.path.exists(yaml_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "utils"),
"MINDdemo_utils.zip",
)
@@ -123,33 +121,32 @@ def test_model_naml(mind_resource_path):
)


-@pytest.mark.smoke
@pytest.mark.gpu
def test_model_lstur(mind_resource_path):
-train_news_file = os.path.join(mind_resource_path, "train", r"news.tsv")
-train_behaviors_file = os.path.join(mind_resource_path, "train", r"behaviors.tsv")
-valid_news_file = os.path.join(mind_resource_path, "valid", r"news.tsv")
-valid_behaviors_file = os.path.join(mind_resource_path, "valid", r"behaviors.tsv")
+train_news_file = os.path.join(mind_resource_path, "train", "news.tsv")
+train_behaviors_file = os.path.join(mind_resource_path, "train", "behaviors.tsv")
+valid_news_file = os.path.join(mind_resource_path, "valid", "news.tsv")
+valid_behaviors_file = os.path.join(mind_resource_path, "valid", "behaviors.tsv")
wordEmb_file = os.path.join(mind_resource_path, "utils", "embedding.npy")
userDict_file = os.path.join(mind_resource_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(mind_resource_path, "utils", "word_dict.pkl")
-yaml_file = os.path.join(mind_resource_path, "utils", r"lstur.yaml")
+yaml_file = os.path.join(mind_resource_path, "utils", "lstur.yaml")

if not os.path.exists(train_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "train"),
"MINDdemo_train.zip",
)
if not os.path.exists(valid_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "valid"),
"MINDdemo_dev.zip",
)
if not os.path.exists(yaml_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "utils"),
"MINDdemo_utils.zip",
)
@@ -175,33 +172,32 @@ def test_model_lstur(mind_resource_path):
)


-@pytest.mark.smoke
@pytest.mark.gpu
def test_model_npa(mind_resource_path):
-train_news_file = os.path.join(mind_resource_path, "train", r"news.tsv")
-train_behaviors_file = os.path.join(mind_resource_path, "train", r"behaviors.tsv")
-valid_news_file = os.path.join(mind_resource_path, "valid", r"news.tsv")
-valid_behaviors_file = os.path.join(mind_resource_path, "valid", r"behaviors.tsv")
+train_news_file = os.path.join(mind_resource_path, "train", "news.tsv")
+train_behaviors_file = os.path.join(mind_resource_path, "train", "behaviors.tsv")
+valid_news_file = os.path.join(mind_resource_path, "valid", "news.tsv")
+valid_behaviors_file = os.path.join(mind_resource_path, "valid", "behaviors.tsv")
wordEmb_file = os.path.join(mind_resource_path, "utils", "embedding.npy")
userDict_file = os.path.join(mind_resource_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(mind_resource_path, "utils", "word_dict.pkl")
-yaml_file = os.path.join(mind_resource_path, "utils", r"lstur.yaml")
+yaml_file = os.path.join(mind_resource_path, "utils", "lstur.yaml")

if not os.path.exists(train_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "train"),
"MINDdemo_train.zip",
)
if not os.path.exists(valid_news_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "valid"),
"MINDdemo_dev.zip",
)
if not os.path.exists(yaml_file):
download_deeprec_resources(
-r"https://recodatasets.z20.web.core.windows.net/newsrec/",
+"https://recodatasets.z20.web.core.windows.net/newsrec/",
os.path.join(mind_resource_path, "utils"),
"MINDdemo_utils.zip",
)
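A recurring change in this file is dropping the `r` (raw-string) prefix from path and URL literals. For literals without backslash escapes the prefix is purely cosmetic, which a stdlib-only check confirms (the paths are illustrative):

```python
import os

# Raw strings only change how backslash escapes are parsed; these literals
# contain no backslashes, so the resulting objects are identical.
assert r"news.tsv" == "news.tsv"
assert (r"https://recodatasets.z20.web.core.windows.net/newsrec/"
        == "https://recodatasets.z20.web.core.windows.net/newsrec/")

# os.path.join is likewise unaffected by the prefix.
assert os.path.join("train", r"behaviors.tsv") == os.path.join("train", "behaviors.tsv")
print("raw-string prefix removal is behavior-preserving here")
```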
