Commit 787ae30

Merge pull request #1940 from microsoft/staging
Staging to main: fix tests and update documentation
miguelgfierro authored Jun 9, 2023
2 parents fdac1bc + 3cea4d5 commit 787ae30
Showing 5 changed files with 49 additions and 40 deletions.
9 changes: 3 additions & 6 deletions .github/actions/azureml-test/action.yml
@@ -82,9 +82,6 @@ runs:
     - name: Install wheel package
       shell: bash
       run: pip install --quiet wheel
-    - name: Create wheel from setup.py
-      shell: bash
-      run: python setup.py --quiet bdist_wheel
     - name: Submit CPU tests to AzureML
       shell: bash
       if: contains(inputs.TEST_GROUP, 'cpu')
@@ -94,7 +91,7 @@ runs:
         --rg ${{inputs.RG}} --wsname ${{inputs.WS}} --expname ${{inputs.EXP_NAME}}_${{inputs.TEST_GROUP}}
         --testlogs ${{inputs.TEST_LOGS_PATH}} --testkind ${{inputs.TEST_KIND}}
         --conda_pkg_python ${{inputs.PYTHON_VERSION}} --testgroup ${{inputs.TEST_GROUP}}
-        --disable-warnings
+        --disable-warnings --sha "${GITHUB_SHA}"
     - name: Submit GPU tests to AzureML
       shell: bash
       if: contains(inputs.TEST_GROUP, 'gpu')
@@ -104,7 +101,7 @@ runs:
         --rg ${{inputs.RG}} --wsname ${{inputs.WS}} --expname ${{inputs.EXP_NAME}}_${{inputs.TEST_GROUP}}
         --testlogs ${{inputs.TEST_LOGS_PATH}} --add_gpu_dependencies --testkind ${{inputs.TEST_KIND}}
         --conda_pkg_python ${{inputs.PYTHON_VERSION}} --testgroup ${{inputs.TEST_GROUP}}
-        --disable-warnings
+        --disable-warnings --sha "${GITHUB_SHA}"
     - name: Submit PySpark tests to AzureML
       shell: bash
       if: contains(inputs.TEST_GROUP, 'spark')
@@ -114,7 +111,7 @@ runs:
         --rg ${{inputs.RG}} --wsname ${{inputs.WS}} --expname ${{inputs.EXP_NAME}}_${{inputs.TEST_GROUP}}
         --testlogs ${{inputs.TEST_LOGS_PATH}} --add_spark_dependencies --testkind ${{inputs.TEST_KIND}}
         --conda_pkg_python ${{inputs.PYTHON_VERSION}} --testgroup ${{inputs.TEST_GROUP}}
-        --disable-warnings
+        --disable-warnings --sha "${GITHUB_SHA}"
     - name: Print test logs
       shell: bash
       run: cat ${{inputs.TEST_LOGS_PATH}}
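For context, each submit step now forwards the workflow's commit SHA to the submission script. Assuming the step invokes the script directly (the start of the `run` command is collapsed above), the CPU call looks roughly like this sketch, with placeholder values:

```bash
# Illustrative only: flags taken from the hunks above, all values are placeholders
python tests/ci/azureml_tests/submit_groupwise_azureml_pytest.py \
    --rg my-resource-group --wsname my-workspace \
    --expname nightly_group_cpu_001 --testgroup group_cpu_001 \
    --testlogs test_logs.log --testkind nightly \
    --conda_pkg_python "python=3.8" \
    --disable-warnings --sha "${GITHUB_SHA}"
```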
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -19,7 +19,7 @@
 1. Use [open issues](https://github.com/Microsoft/Recommenders/issues) to discuss the proposed changes. Create an issue describing the changes if necessary to collect feedback. Also, please use the provided labels to tag issues so everyone can easily sort issues of interest.
 1. [Fork the repo](https://help.github.com/articles/fork-a-repo/) so you can make and test local changes.
 1. Create a new branch **from staging branch** for the issue (please do not create a branch from main). We suggest prefixing the branch with your username and then a descriptive title (e.g. gramhagen/update_contributing_docs).
-1. Install the reco-utils package locally using the right optional dependency for your test and the dev option (e.g. gpu test: `pip install -e .[gpu,dev]`).
+1. Install the recommenders package locally using the right optional dependency for your test and the dev option (e.g. gpu test: `pip install -e .[gpu,dev]`).
 1. Create a test that replicates the issue.
 1. Make code changes.
 1. Ensure unit tests pass and code style / formatting is consistent (see [wiki](https://github.com/Microsoft/Recommenders/wiki/Coding-Guidelines#python-and-docstrings-style) for more details).
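A minimal sketch of the branch-and-install flow above (the branch name and extras are just examples):

```bash
git checkout staging                                 # branch off staging, never main
git checkout -b gramhagen/update_contributing_docs   # username/descriptive-title
pip install -e .[gpu,dev]                            # extras matching the tests you plan to run
```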
26 changes: 21 additions & 5 deletions SETUP.md
@@ -1,6 +1,6 @@
 # Setup Guide
 
-The repo, including this guide, is tested on Linux. Where applicable, we document differences in [Windows](#windows-specific-instructions) and [macOS](#macos-specific-instructions) although
+The repo, including this guide, is tested on Linux. Where applicable, we document differences in [Windows](#windows-specific-instructions) and [macOS](#macos-specific-instructions) although
 such documentation may not always be up to date.
 
 ## Extras
@@ -91,7 +91,6 @@
 </details>
 
 
-
 ## Setup for Experimental
 <!-- FIXME FIXME 23/04/01 move to experimental. Have not tested -->
 The `xlearn` package has a dependency on `cmake`. If one uses the `xlearn` related notebooks or scripts, make sure `cmake` is installed in the system. The easiest way to install it on Linux is with apt-get: `sudo apt-get install -y build-essential cmake`. Detailed instructions for installing `cmake` from source can be found [here](https://cmake.org/install/).
@@ -109,6 +108,24 @@
 For Spark features to work, make sure Java and Spark are installed first. Also make sure environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` are set to the same python executable.
 <!-- TO DO: Pytorch m1 mac GPU support -->
 
+## Setup for Developers
+
+If you want to contribute to Recommenders, please first read the [Contributing Guide](./CONTRIBUTING.md). You will notice that our development branch is `staging`.
+
+To start developing, you need to install the latest `staging` branch locally, with the `dev` extra and any other extras you want. For example, to start developing with GPU models, you can use the following commands:
+
+```bash
+git checkout staging
+pip install -e .[dev,gpu]
+```
+
+You can decide which extras you want to install; if you want to install all of them, you can use the following commands:
+
+```bash
+git checkout staging
+pip install -e .[all]
+```
+
 ## Test Environments
 
 Depending on the type of recommender system and the notebook that needs to be run, there are different computational requirements.
@@ -121,15 +138,14 @@
 
 ## Setup for Making a Release
 
-The process of making a new release and publishing it to pypi is as follows:
+The process of making a new release and publishing it to [PyPI](https://pypi.org/project/recommenders/) is as follows:
 
 First make sure that the tag that you want to add, e.g. `0.6.0`, is added in [`recommenders/__init__.py`](recommenders/__init__.py). Follow the [contribution guideline](CONTRIBUTING.md) to add the change.
 
 1. Make sure that the code in main passes all the tests (unit and nightly tests).
 1. Create a tag with the version number: e.g. `git tag -a 0.6.0 -m "Recommenders 0.6.0"`.
 1. Push the tag to the remote server: `git push origin 0.6.0`.
-1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (unit, smoke and integration),
-generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
+1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (unit, smoke and integration), and generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
 1. Fill in the draft release with all the recent changes in the code.
 1. Download the wheel and tar.gz locally; these files shouldn't have any bugs, since they passed all the tests.
 1. Install twine: `pip install twine`
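On the `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` variables mentioned in the macOS hunk above, a minimal sketch, assuming the interpreter of the currently active environment is the one Spark should use:

```bash
# Point both the Spark workers and the driver at the same python executable
export PYSPARK_PYTHON=$(which python)
export PYSPARK_DRIVER_PYTHON=$(which python)
```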
15 changes: 7 additions & 8 deletions setup.py
@@ -40,12 +40,11 @@
     "nltk>=3.4,<4",
     "seaborn>=0.8.1,<1",
     "transformers>=2.5.0,<5",
-    "bottleneck>=1.2.1,<2",
-    "category_encoders>=1.3.0,<2",
     "jinja2>=2,<3.1",
     "pyyaml>=5.4.1,<6",
     "requests>=2.0.0,<3",
-    "cornac>=1.1.2,<2",
+    "cornac>=1.1.2,<1.15.2;python_version<='3.7'",
+    "cornac>=1.15.2,<2;python_version>='3.8'",  # After 1.15.2, Cornac requires python 3.8
     "retrying>=1.3.3",
     "pandera[strategies]>=0.6.5",  # For generating fake datasets
     "scikit-surprise>=1.0.6",
@@ -55,7 +54,6 @@
 # shared dependencies
 extras_require = {
     "examples": [
-        "azure.mgmt.cosmosdb>=0.8.0,<1",
         "hyperopt>=0.1.2,<1",
         "ipykernel>=4.6.1,<7",
         "jupyter>=1,<2",
@@ -72,7 +70,6 @@
         "fastai>=1.0.46,<2",
     ],
     "spark": [
-        "databricks_cli>=0.8.6,<1",
         "pyarrow>=0.12.1,<7.0.0",
         "pyspark>=2.4.5,<3.3.0",
     ],
@@ -81,7 +78,6 @@
         "pytest>=3.6.4",
         "pytest-cov>=2.12.1",
         "pytest-mock>=3.6.1",  # for access to mock fixtures in pytest
-        "pytest-rerunfailures>=10.2",  # to mark flaky tests
     ],
 }
 # for the brave of heart
@@ -137,6 +133,9 @@
     install_requires=install_requires,
     package_dir={"recommenders": "recommenders"},
     python_requires=">=3.6, <3.10",
-    packages=find_packages(where=".", exclude=["contrib", "docs", "examples", "scenarios", "tests", "tools"]),
-    setup_requires=["numpy>=1.15"]
+    packages=find_packages(
+        where=".",
+        exclude=["contrib", "docs", "examples", "scenarios", "tests", "tools"],
+    ),
+    setup_requires=["numpy>=1.19"],
 )
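The split cornac pins rely on PEP 508 environment markers (`python_version<='3.7'` vs. `python_version>='3.8'`), which pip evaluates against the installing interpreter, so only one of the two requirements is kept at install time. A quick, illustrative way to see how such a marker resolves locally (the `packaging` library may need to be installed first):

```bash
pip install packaging
# Prints True on Python 3.8+, False on older interpreters
python -c "from packaging.markers import Marker; print(Marker(\"python_version >= '3.8'\").evaluate())"
```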
37 changes: 17 additions & 20 deletions tests/ci/azureml_tests/submit_groupwise_azureml_pytest.py
@@ -152,7 +152,7 @@ def create_run_config(
     add_spark_dependencies,
     conda_pkg_jdk,
     conda_pkg_python,
-    reco_wheel_path,
+    commit_sha,
 ):
     """
     AzureML requires the run environment to be setup prior to submission.
@@ -172,6 +172,7 @@ def create_run_config(
                                         added to the conda environment, else False
         add_spark_dependencies (bool) : True if PySpark packages should be
                                         added to the conda environment, else False
+        commit_sha (str)              : the commit that triggers the workflow
 
     Return:
         run_azuremlcompute : AzureML run config
@@ -188,32 +189,28 @@ def create_run_config(
     # True means the user will manually configure the environment
     run_azuremlcompute.environment.python.user_managed_dependencies = False
 
-    # install local version of recommenders on AzureML compute using .whl file
-    whl_url = run_azuremlcompute.environment.add_private_pip_wheel(
-        workspace=workspace,
-        file_path=reco_wheel_path,
-        exist_ok=True,
-    )
     conda_dep = CondaDependencies()
     conda_dep.add_conda_package(conda_pkg_python)
-    conda_dep.add_pip_package(whl_url)
     conda_dep.add_pip_package(
         "pymanopt@https://github.com/pymanopt/pymanopt/archive/fb36a272cdeecb21992cfd9271eb82baafeb316d.zip"
     )
 
-    # install extra dependencies
+    # install recommenders
+    reco_extras = "dev,examples"
     if add_gpu_dependencies and add_spark_dependencies:
         conda_dep.add_channel("conda-forge")
         conda_dep.add_conda_package(conda_pkg_jdk)
-        conda_dep.add_pip_package("recommenders[dev,examples,spark,gpu]")
+        reco_extras = reco_extras + ",spark,gpu"
     elif add_gpu_dependencies:
-        conda_dep.add_pip_package("recommenders[dev,examples,gpu]")
+        reco_extras = reco_extras + ",gpu"
     elif add_spark_dependencies:
         conda_dep.add_channel("conda-forge")
         conda_dep.add_conda_package(conda_pkg_jdk)
-        conda_dep.add_pip_package("recommenders[dev,examples,spark]")
-    else:
-        conda_dep.add_pip_package("recommenders[dev,examples]")
+        reco_extras = reco_extras + ",spark"
+
+    conda_dep.add_pip_package(
+        f"recommenders[{reco_extras}]@git+https://github.com/microsoft/recommenders.git@{commit_sha}"
+    )
 
     run_azuremlcompute.environment.python.conda_dependencies = conda_dep
     return run_azuremlcompute
@@ -286,6 +283,11 @@ def create_arg_parser():
     """
 
     parser = argparse.ArgumentParser(description="Process some inputs")
+    parser.add_argument(
+        "--sha",
+        action="store",
+        help="the commit that triggers the workflow",
+    )
     # script to run pytest
     parser.add_argument(
         "--test",
@@ -448,11 +450,6 @@
         max_nodes=args.maxnodes,
     )
 
-    wheel_list = glob.glob("./dist/*.whl")
-    if not wheel_list:
-        logger.error("Wheel not found!")
-    logger.info("Found wheel at " + wheel_list[0])
-
     run_config = create_run_config(
         cpu_cluster=cpu_cluster,
         docker_proc_type=docker_proc_type,
@@ -461,7 +458,7 @@
         add_spark_dependencies=args.add_spark_dependencies,
         conda_pkg_jdk=args.conda_pkg_jdk,
         conda_pkg_python=args.conda_pkg_python,
-        reco_wheel_path=wheel_list[0],
+        commit_sha=args.sha,
     )
 
     logger.info("exp: In Azure, look for experiment named {}".format(args.expname))
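The net effect of this last change: instead of building a wheel locally and registering it as a private pip package, the AzureML environment now installs recommenders straight from GitHub at the commit that triggered the workflow. It is roughly equivalent to the following sketch, where `<commit-sha>` is a placeholder:

```bash
# Direct-reference install pinned to an exact commit (PEP 508 name@url syntax)
pip install "recommenders[dev,examples,spark,gpu]@git+https://github.com/microsoft/recommenders.git@<commit-sha>"
```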
