diff --git a/.github/workflows/update_documentation.yml b/.github/workflows/update_documentation.yml
new file mode 100644
index 0000000000..9779e5eed1
--- /dev/null
+++ b/.github/workflows/update_documentation.yml
@@ -0,0 +1,67 @@
+# ---------------------------------------------------------
+# Copyright (c) Recommenders contributors.
+# Licensed under the MIT License.
+# ---------------------------------------------------------
+
+name: Update Documentation
+
+on:
+  push:
+    branches:
+      - main
+
+jobs:
+  build:
+    runs-on: ubuntu-22.04
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+
+      - name: Install dependencies
+        run: |
+          pip install -q --upgrade pip setuptools wheel
+          pip install -q --no-use-pep517 lightfm
+          pip install -q .[all]
+          pip install -q -r docs/requirements-doc.txt
+
+      - name: List dependencies
+        run: |
+          pip list
+
+      - name: Build documentation
+        run: |
+          jupyter-book config sphinx docs/
+          sphinx-build docs docs/_build/html -b html
+
+      - name: Configure Git
+        run: |
+          git config --global user.email "actions@github.com"
+          git config --global user.name "GitHub Actions"
+
+      - name: Create and switch to gh-pages branch
+        run: |
+          git checkout -b gh-pages
+          git pull origin gh-pages || true
+
+      - name: Copy built documentation
+        run: cp -r docs/_build/html/* .
+
+      - name: Add and commit changes
+        run: |
+          git add * -f
+          git commit -m "Update documentation"
+
+      - name: Configure pull strategy (rebase)
+        run: git config pull.rebase true
+
+      - name: Pull latest changes from remote gh-pages branch
+        run: git pull -Xtheirs origin gh-pages
+
+      - name: Push changes to gh-pages
+        run: git push origin gh-pages
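> Reviewer note: the two build steps in this workflow can be reproduced locally before pushing to `main`. A minimal sketch, assuming `jupyter-book` and the docs requirements are installed and the script is run from the repository root (`build_docs` is a hypothetical helper name, not part of this PR):

```python
# Local sketch of the CI "Build documentation" step above. Assumes:
#   pip install -r docs/requirements-doc.txt
# and execution from the repository root.
import subprocess


def build_docs():
    # Generate a Sphinx conf.py from docs/_config.yml ...
    subprocess.run(["jupyter-book", "config", "sphinx", "docs/"], check=True)
    # ... then build the HTML site exactly as the workflow does
    subprocess.run(
        ["sphinx-build", "docs", "docs/_build/html", "-b", "html"], check=True
    )


if __name__ == "__main__":
    build_docs()
    print("Open docs/_build/html/index.html to preview the site")
```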
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
deleted file mode 100644
index c9b3305a62..0000000000
--- a/.readthedocs.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-version: 2
-
-# Add necessary apt-get packages
-build:
-  apt_packages:
-    - cmake
-
-# Explicitly set the version of Python and its requirements
-# The flat extra_requirements all is equivalent to: pip install .[all]
-python:
-  version: "3.7"
-  install:
-    - method: pip
-      path: .
-      extra_requirements:
-        - all
-
-# Build from the docs/ directory with Sphinx
-sphinx:
-  configuration: docs/source/conf.py
diff --git a/README.md b/README.md
index 87b9ef986a..bdc82c96a7 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ Licensed under the MIT License.
 
 # Recommenders
 
-[![Documentation Status](https://readthedocs.org/projects/microsoft-recommenders/badge/?version=latest)](https://microsoft-recommenders.readthedocs.io/en/latest/?badge=latest)
+[![Documentation status](https://github.com/recommenders-team/recommenders/actions/workflows/pages/pages-build-deployment/badge.svg)](https://github.com/recommenders-team/recommenders/actions/workflows/pages/pages-build-deployment)
 
@@ -25,10 +25,10 @@ Recommenders is a project under the [Linux Foundation of AI and Data](https://lf
 
 This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples detail our learnings on five key tasks:
 
-- [Prepare Data](examples/01_prepare_data): Preparing and loading data for each recommender algorithm.
-- [Model](examples/00_quick_start): Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
+- [Prepare Data](examples/01_prepare_data): Preparing and loading data for each recommendation algorithm.
+- [Model](examples/00_quick_start): Building models using various classical and deep learning recommendation algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
 - [Evaluate](examples/03_evaluate): Evaluating algorithms with offline metrics.
-- [Model Select and Optimize](examples/04_model_select_and_optimize): Tuning and optimizing hyperparameters for recommender models.
+- [Model Select and Optimize](examples/04_model_select_and_optimize): Tuning and optimizing hyperparameters for recommendation models.
 - [Operationalize](examples/05_operationalize): Operationalizing models in a production environment on Azure.
 
 Several utilities are provided in [recommenders](recommenders) to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications. See the [Recommenders documentation](https://readthedocs.org/projects/microsoft-recommenders/).
@@ -73,7 +73,7 @@ In addition to the core package, several extras are also provided, including:
 
 ## Algorithms
 
-The table below lists the recommender algorithms currently available in the repository. Notebooks are linked under the Example column as Quick start, showcasing an easy to run example of the algorithm, or as Deep dive, explaining in detail the math and implementation of the algorithm.
+The table below lists the recommendation algorithms currently available in the repository. Notebooks are linked under the Example column as Quick start, showcasing an easy-to-run example of the algorithm, or as Deep dive, explaining in detail the math and implementation of the algorithm.
 
 | Algorithm | Type | Description | Example |
 |-----------|------|-------------|---------|
diff --git a/docs/Makefile b/docs/Makefile
deleted file mode 100644
index 2fe93422bd..0000000000
--- a/docs/Makefile
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Recommenders contributors.
-# Licensed under the MIT License.
-
-# You can set these variables from the command line.
-SPHINXOPTS    =
-SPHINXBUILD   = sphinx-build
-SOURCEDIR     = source
-BUILDDIR      = build
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index 4f4a6b2b90..0000000000
--- a/docs/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Documentation
-
-To setup the documentation, first you need to install the dependencies of the full environment. For it please follow the [SETUP.md](../SETUP.md). Then type:
-
-    conda create -n reco_full -c conda-forge python=3.7 cudatoolkit=11.2 cudnn=8.1
-    conda activate reco_full
-
-    pip install numpy
-    pip install "pymanopt@https://github.com/pymanopt/pymanopt/archive/fb36a272cdeecb21992cfd9271eb82baafeb316d.zip"
-    pip install sphinx_rtd_theme
-
-
-To build the documentation as HTML:
-
-    cd docs
-    make html
-
-To contribute to this repository, please follow our [coding guidelines](https://github.com/Microsoft/Recommenders/wiki/Coding-Guidelines). See also the [reStructuredText documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html) for the syntax of docstrings.
diff --git a/docs/_config.yml b/docs/_config.yml
new file mode 100644
index 0000000000..efdd892209
--- /dev/null
+++ b/docs/_config.yml
@@ -0,0 +1,55 @@
+# Copyright (c) Recommenders contributors.
+# Licensed under the MIT License.
+
+# Book settings
+# Learn more at https://jupyterbook.org/customize/config.html
+
+# To build the Jupyter Book:
+# $ jupyter-book clean docs
+# $ jupyter-book build docs
+
+
+title: Recommenders documentation
+author: Recommenders contributors
+copyright: "2018-2024"
+logo: https://raw.githubusercontent.com/recommenders-team/artwork/main/color/recommenders_color.svg
+
+
+# Short description about the book
+description: >-
+  Recommenders - Python utilities for building recommendation systems
+
+execute:
+  execute_notebooks: off
+
+# Interact link settings
+notebook_interface: "notebook"
+
+# Launch button settings
+repository:
+  url: https://github.com/recommenders-team/recommenders
+  path_to_book: /docs
+  branch: main
+
+launch_buttons:
+  notebook_interface: classic
+
+# HTML-specific settings
+html:
+  favicon: https://raw.githubusercontent.com/recommenders-team/artwork/main/icon/recommenders_color_icon.svg
+  home_page_in_navbar: false
+  use_repository_button: true
+  use_issues_button: true
+  baseurl: https://recommenders-team.github.io/recommenders/
+
+sphinx:
+  extra_extensions:
+    - sphinx.ext.autodoc
+    - sphinx.ext.doctest
+    - sphinx.ext.intersphinx
+    - sphinx.ext.ifconfig
+    - sphinx.ext.napoleon   # To render Google format docstrings
+    - sphinx.ext.viewcode   # Add links to highlighted source code
+
+
diff --git a/docs/_toc.yml b/docs/_toc.yml
new file mode 100644
index 0000000000..90e4fe0e17
--- /dev/null
+++ b/docs/_toc.yml
@@ -0,0 +1,18 @@
+# Copyright (c) Recommenders contributors.
+# Licensed under the MIT License.
+
+# Table of contents
+# Learn more at https://jupyterbook.org/customize/toc.html
+
+format: jb-book
+root: intro
+defaults:
+  numbered: false
+parts:
+  - caption: Recommenders API Documentation
+    chapters:
+      - file: datasets
+      - file: evaluation
+      - file: models
+      - file: tuning
+      - file: utils
diff --git a/docs/source/datasets.rst b/docs/datasets.rst
similarity index 97%
rename from docs/source/datasets.rst
rename to docs/datasets.rst
index 9cc88735ff..448b965222 100644
--- a/docs/source/datasets.rst
+++ b/docs/datasets.rst
@@ -75,7 +75,6 @@ this impression. To protect user privacy, each user was de-linked from the produ
     and Ming Zhou, "MIND: A Large-scale Dataset for News Recommendation", ACL, 2020.
 
-
 .. automodule:: recommenders.datasets.mind
     :members:
 
@@ -106,47 +105,41 @@ It comes with several sizes:
 
 Download utilities
 ******************
-
 .. automodule:: recommenders.datasets.download_utils
     :members:
 
-Cosmos CLI utilities
-*********************
-
-.. automodule:: recommenders.datasets.cosmos_cli
-    :members:
-
-
 Pandas dataframe utilities
 ***************************
-
 .. automodule:: recommenders.datasets.pandas_df_utils
     :members:
 
 
 Splitter utilities
 ******************
-
+Python splitters
+================
 .. automodule:: recommenders.datasets.python_splitters
     :members:
 
+PySpark splitters
+=================
 .. automodule:: recommenders.datasets.spark_splitters
     :members:
 
+Other splitter utilities
+========================
 .. automodule:: recommenders.datasets.split_utils
     :members:
 
 
 Sparse utilities
 ****************
-
 .. automodule:: recommenders.datasets.sparse
     :members:
 
 
 Knowledge graph utilities
 *************************
-
 .. automodule:: recommenders.datasets.wikidata
     :members:
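> Reviewer note: the new Python/PySpark split of the splitter docs maps onto utilities such as `python_random_split`. An illustrative sketch, assuming `recommenders` is installed and that the function keeps its documented `(data, ratio, seed)` signature:

```python
# Illustrative use of the splitter utilities documented above.
import pandas as pd
from recommenders.datasets.python_splitters import python_random_split

ratings = pd.DataFrame(
    {
        "userID": [1, 1, 2, 2, 3, 3],
        "itemID": [10, 11, 10, 12, 11, 12],
        "rating": [4.0, 3.0, 5.0, 2.0, 4.5, 3.5],
    }
)

# 75/25 train/test split; passing a list of ratios returns multiple splits
train, test = python_random_split(ratings, ratio=0.75, seed=42)
print(len(train), len(test))
```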
diff --git a/docs/source/evaluation.rst b/docs/evaluation.rst
similarity index 87%
rename from docs/source/evaluation.rst
rename to docs/evaluation.rst
index 21fba4f7bf..1bd465c37f 100644
--- a/docs/source/evaluation.rst
+++ b/docs/evaluation.rst
@@ -13,4 +13,5 @@ PySpark evaluation
 ===============================
 
 .. automodule:: recommenders.evaluation.spark_evaluation
-    :members:
\ No newline at end of file
+    :members:
+    :special-members: __init__
\ No newline at end of file
diff --git a/docs/intro.md b/docs/intro.md
new file mode 100644
index 0000000000..cad0c74c27
--- /dev/null
+++ b/docs/intro.md
@@ -0,0 +1,34 @@
+
+
+# Welcome to Recommenders
+
+Recommenders' objective is to assist researchers, developers and enthusiasts in prototyping, experimenting with and bringing to production a range of classic and state-of-the-art recommendation systems.
+
+````{margin}
+```sh
+pip install recommenders
+```
+Star Us
+````
+
+Recommenders is a project under the [Linux Foundation of AI and Data](https://lfaidata.foundation/projects/).
+
+This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks.
+
+The examples detail our learnings on five key tasks:
+
+- Prepare Data: Preparing and loading data for each recommendation algorithm.
+- Model: Building models using various classical and deep learning recommendation algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
+- Evaluate: Evaluating algorithms with offline metrics.
+- Model Select and Optimize: Tuning and optimizing hyperparameters for recommendation models.
+- Operationalize: Operationalizing models in a production environment.
+
+Several utilities are provided in the `recommenders` library to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications.
+
+
+
diff --git a/docs/source/models.rst b/docs/models.rst
similarity index 58%
rename from docs/source/models.rst
rename to docs/models.rst
index 4b5080869c..58712da51b 100644
--- a/docs/source/models.rst
+++ b/docs/models.rst
@@ -5,260 +5,317 @@ Recommender algorithms module
 
 Recommender system algorithms and utilities.
 
-Cornac
+Cornac utilities
 ******************************
-
 .. automodule:: recommenders.models.cornac.cornac_utils
     :members:
 
-DeepRec
-******************************
+DeepRec utilities
+******************************
 
 Base model
-==================
+==============================
 .. automodule:: recommenders.models.deeprec.models.base_model
     :members:
+    :special-members: __init__
+
+Sequential base model
+==============================
+.. automodule:: recommenders.models.deeprec.models.sequential.sequential_base_model
+    :members:
+    :special-members: __init__
+
+Iterators
+==============================
+.. automodule:: recommenders.models.deeprec.io.iterator
+    :members:
+    :special-members: __init__
+.. automodule:: recommenders.models.deeprec.io.dkn_iterator
+    :members:
+    :special-members: __init__
+.. automodule:: recommenders.models.deeprec.io.dkn_item2item_iterator
+    :members:
+    :special-members: __init__
+.. automodule:: recommenders.models.deeprec.io.nextitnet_iterator
+    :members:
+    :special-members: __init__
+.. automodule:: recommenders.models.deeprec.io.sequential_iterator
+    :members:
+    :special-members: __init__
+
+Data processing utilities
+==============================
+.. automodule:: recommenders.models.deeprec.DataModel.ImplicitCF
+    :members:
+    :special-members: __init__
+
+Utilities
+==============================
+.. automodule:: recommenders.models.deeprec.deeprec_utils
+    :members:
+    :special-members: __init__, __repr__
+
+
 DKN
-==================
+******************************
 .. automodule:: recommenders.models.deeprec.models.dkn
     :members:
+    :special-members: __init__
+
 
 DKN item-to-item
-==================
+******************************
 .. automodule:: recommenders.models.deeprec.models.dkn_item2item
     :members:
+    :special-members: __init__
 
-LightGCN
-==================
-.. automodule:: recommenders.models.deeprec.models.graphrec.lightgcn
-    :members:
 
 xDeepFM
-==============
+******************************
 .. automodule:: recommenders.models.deeprec.models.xDeepFM
     :members:
+    :special-members: __init__
 
-Sequential models
-==================
-Sequential base model
----------------------------
-.. automodule:: recommenders.models.deeprec.models.sequential.sequential_base_model
+
+LightGCN
+******************************
+.. automodule:: recommenders.models.deeprec.models.graphrec.lightgcn
     :members:
+    :special-members: __init__
+
 
 A2SVD
--------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.asvd
     :members:
+    :special-members: __init__
+
 
 Caser
------------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.caser
     :members:
+    :special-members: __init__
+
 
 GRU
---------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.gru
     :members:
+    :special-members: __init__
+
 
 NextItNet
---------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.nextitnet
     :members:
+    :special-members: __init__
+
 
 RNN Cells
-------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.rnn_cell_implement
     :members:
+    :special-members: __init__
+
 
 SUM
--------------------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.sum
     :members:
-
+    :special-members: __init__
 .. automodule:: recommenders.models.deeprec.models.sequential.sum_cells
     :members:
+    :special-members: __init__
+
 
 SLIRec
--------------
+******************************
 .. automodule:: recommenders.models.deeprec.models.sequential.sli_rec
     :members:
+    :special-members: __init__
 
-Iterators
-===========
-
-.. automodule:: recommenders.models.deeprec.io.iterator
-    :members:
-
-.. automodule:: recommenders.models.deeprec.io.dkn_iterator
-    :members:
-
-.. automodule:: recommenders.models.deeprec.io.dkn_item2item_iterator
-    :members:
-
-.. automodule:: recommenders.models.deeprec.io.nextitnet_iterator
-    :members:
-
-.. automodule:: recommenders.models.deeprec.io.sequential_iterator
-    :members:
-
-Data processing utilities
-===========================
-
-.. automodule:: recommenders.models.deeprec.DataModel.ImplicitCF
-    :members:
-
-Utilities
-============
-
-.. automodule:: recommenders.models.deeprec.deeprec_utils
-    :members:
 
-FastAI
+FastAI utilities
 ******************************
-
 .. automodule:: recommenders.models.fastai.fastai_utils
     :members:
 
-GeoIMC
-******************************
-.. automodule:: recommenders.models.geoimc.geoimc_algorithm
-    :members:
-
-.. automodule:: recommenders.models.geoimc.geoimc_data
-    :members:
-
-.. automodule:: recommenders.models.geoimc.geoimc_predict
-    :members:
-
-.. automodule:: recommenders.models.geoimc.geoimc_utils
-    :members:
 
-LightFM
+LightFM utilities
 ******************************
-
 .. automodule:: recommenders.models.lightfm.lightfm_utils
     :members:
 
-LightGBM
-******************************
+LightGBM utilities
+******************************
 .. automodule:: recommenders.models.lightgbm.lightgbm_utils
     :members:
 
+
 NCF
 ******************************
-
 .. automodule:: recommenders.models.ncf.dataset
     :members:
-
+    :special-members: __init__
 .. automodule:: recommenders.models.ncf.ncf_singlenode
     :members:
+    :special-members: __init__
 
-NewsRec
-******************************
-.. automodule:: recommenders.models.newsrec.io.mind_all_iterator
+
+NewsRec utilities
+******************************
+Base model
+==============================
+.. automodule:: recommenders.models.newsrec.models.base_model
     :members:
+    :special-members: __init__
 
+Iterators
+==============================
 .. automodule:: recommenders.models.newsrec.io.mind_iterator
     :members:
-
-.. automodule:: recommenders.models.newsrec.models.base_model
+    :special-members: __init__
+.. automodule:: recommenders.models.newsrec.io.mind_all_iterator
     :members:
+    :special-members: __init__
+
+Utilities
+==============================
 .. automodule:: recommenders.models.newsrec.models.layers
     :members:
+    :special-members: __init__
+.. automodule:: recommenders.models.newsrec.newsrec_utils
+    :members:
+    :special-members: __init__
+
+LSTUR
+******************************
 .. automodule:: recommenders.models.newsrec.models.lstur
     :members:
+    :special-members: __init__
+
+NAML
+******************************
 .. automodule:: recommenders.models.newsrec.models.naml
     :members:
+    :special-members: __init__
+
+NPA
+******************************
 .. automodule:: recommenders.models.newsrec.models.npa
     :members:
+    :special-members: __init__
+
+NRMS
+******************************
 .. automodule:: recommenders.models.newsrec.models.nrms
     :members:
+    :special-members: __init__
 
-.. automodule:: recommenders.models.newsrec.newsrec_utils
-    :members:
 
 RBM
 ******************************
-
 .. automodule:: recommenders.models.rbm.rbm
     :members:
+    :special-members: __init__
+
+.. FIXME: Fix Pymanopt dependency. Issue #2038
+.. GeoIMC
+.. ******************************
+.. .. automodule:: recommenders.models.geoimc.geoimc_algorithm
+..     :members:
+..     :special-members: __init__
+.. .. automodule:: recommenders.models.geoimc.geoimc_data
+..     :members:
+..     :special-members: __init__
+.. .. automodule:: recommenders.models.geoimc.geoimc_predict
+..     :members:
+.. .. automodule:: recommenders.models.geoimc.geoimc_utils
+..     :members:
+
+
+.. FIXME: Fix Pymanopt dependency. Issue #2038
+.. RLRMC
+.. ******************************
+.. .. automodule:: recommenders.models.rlrmc.RLRMCdataset
+..     :members:
+..     :special-members: __init__
+.. .. automodule:: recommenders.models.rlrmc.RLRMCalgorithm
+..     :members:
+..     :special-members: __init__
+.. .. automodule:: recommenders.models.rlrmc.conjugate_gradient_ms
+..     :members:
+..     :special-members: __init__
 
-RLRMC
-******************************
-
-.. automodule:: recommenders.models.rlrmc.RLRMCalgorithm
-    :members:
-
-.. automodule:: recommenders.models.rlrmc.RLRMCdataset
-    :members:
-
-.. automodule:: recommenders.models.rlrmc.conjugate_gradient_ms
-    :members:
 
 SAR
 ******************************
-
 .. automodule:: recommenders.models.sar.sar_singlenode
     :members:
+    :special-members: __init__
+
 
 SASRec
 ******************************
-
 .. automodule:: recommenders.models.sasrec.model
     :members:
-
+    :special-members: __init__
 .. automodule:: recommenders.models.sasrec.sampler
     :members:
-
+    :special-members: __init__
 .. automodule:: recommenders.models.sasrec.util
     :members:
 
 
 SSE-PT
 ******************************
-
 .. automodule:: recommenders.models.sasrec.ssept
     :members:
+    :special-members: __init__
 
-Surprise
+
+Surprise utilities
 ******************************
-
 .. automodule:: recommenders.models.surprise.surprise_utils
     :members:
 
-TF-IDF
+
+TF-IDF utilities
 ******************************
-
 .. automodule:: recommenders.models.tfidf.tfidf_utils
     :members:
 
-VAE
+
+Standard VAE
 ******************************
+.. automodule:: recommenders.models.vae.standard_vae
+    :members:
+    :special-members: __init__
+
+Multinomial VAE
+******************************
 .. automodule:: recommenders.models.vae.multinomial_vae
     :members:
+    :special-members: __init__
 
-.. automodule:: recommenders.models.vae.standard_vae
+
+Vowpal Wabbit utilities
+******************************
+.. automodule:: recommenders.models.vowpal_wabbit.vw
     :members:
 
 Wide & Deep
 ******************************
-
 .. automodule:: recommenders.models.wide_deep.wide_deep_utils
-    :members:
\ No newline at end of file
+    :members:
+    :special-members: __init__
\ No newline at end of file
diff --git a/docs/requirements-doc.txt b/docs/requirements-doc.txt
new file mode 100644
index 0000000000..9810dc3c59
--- /dev/null
+++ b/docs/requirements-doc.txt
@@ -0,0 +1 @@
+jupyter-book>=1.0.0
diff --git a/docs/source/conf.py b/docs/source/conf.py
deleted file mode 100644
index caed50ab7b..0000000000
--- a/docs/source/conf.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Recommenders contributors.
-# Licensed under the MIT License.
-
-# -*- coding: utf-8 -*-
-#
-# Configuration file for the Sphinx documentation builder.
-#
-# This file does only contain a selection of the most common options. For a
-# full list see the documentation:
-# http://www.sphinx-doc.org/en/master/config
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-
-sys.path.insert(0, os.path.abspath(os.path.join("..", "..")))
-sys.setrecursionlimit(1500)
-
-from recommenders import TITLE, VERSION, COPYRIGHT, AUTHOR
-
-# -- Project information -----------------------------------------------------
-
-project = TITLE
-copyright = COPYRIGHT
-author = AUTHOR
-
-# The short X.Y version
-version = VERSION
-# The full version, including alpha/beta/rc tags
-release = VERSION
-
-
-# -- General configuration ---------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
-    "sphinx.ext.autodoc",
-    "sphinx.ext.doctest",
-    "sphinx.ext.intersphinx",
-    "sphinx.ext.ifconfig",
-    "sphinx.ext.viewcode",  # Add links to highlighted source code
-    "sphinx.ext.napoleon",  # to render Google format docstrings
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ["_templates"]
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-#
-# source_suffix = ['.rst', '.md']
-source_suffix = ".rst"
-
-# The master toctree document.
-master_doc = "index"
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
-language = None
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ["Thumbs.db", ".DS_Store"]
-
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = None
-
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = "sphinx_rtd_theme"
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-#
-# html_theme_options = {}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = ["images"]
-
-# Custom sidebar templates, must be a dictionary that maps document names
-# to template names.
-#
-# The default sidebars (for documents that don't match any pattern) are
-# defined by theme itself. Builtin themes are using these templates by
-# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
-# 'searchbox.html']``.
-#
-# html_sidebars = {}
-
-
-# -- Options for HTMLHelp output ---------------------------------------------
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = "Recommendersdoc"
-
-
-# -- Options for LaTeX output ------------------------------------------------
-
-latex_elements = {
-    "papersize": "letterpaper",
-    "pointsize": "10pt",
-    "figure_align": "htbp",
-    "preamble": r"""
-    %% Adding source listings https://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings
-    \usepackage{listings}
-    \usepackage{color}
-
-    \definecolor{mygreen}{rgb}{0,0.6,0}
-    \definecolor{mygray}{rgb}{0.5,0.5,0.5}
-    \definecolor{mymauve}{rgb}{0.58,0,0.82}
-
-    \lstset{
-    backgroundcolor=\color{white},   % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}; should come as last argument
-    basicstyle=\footnotesize,        % the size of the fonts that are used for the code
-    breakatwhitespace=false,         % sets if automatic breaks should only happen at whitespace
-    breaklines=true,                 % sets automatic line breaking
-    captionpos=b,                    % sets the caption-position to bottom
-    commentstyle=\color{mygreen},    % comment style
-    deletekeywords={...},            % if you want to delete keywords from the given language
-    escapeinside={\%*}{*)},          % if you want to add LaTeX within your code
-    extendedchars=true,              % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
-    firstnumber=1000,                % start line enumeration with line 1000
-    frame=single,                    % adds a frame around the code
-    keepspaces=true,                 % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
-    keywordstyle=\color{blue},       % keyword style
-    language=Python,                 % the language of the code
-    morekeywords={*,...},            % if you want to add more keywords to the set
-    numbers=left,                    % where to put the line-numbers; possible values are (none, left, right)
-    numbersep=5pt,                   % how far the line-numbers are from the code
-    numberstyle=\tiny\color{mygray}, % the style that is used for the line-numbers
-    rulecolor=\color{black},         % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
-    showspaces=false,                % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
-    showstringspaces=false,          % underline spaces within strings only
-    showtabs=false,                  % show tabs within strings adding particular underscores
-    stepnumber=2,                    % the step between two line-numbers. If it's 1, each line will be numbered
-    stringstyle=\color{mymauve},     % string literal style
-    tabsize=2,                       % sets default tabsize to 2 spaces
-    title=\lstname                   % show the filename of files included with \lstinputlisting; also try caption instead of title
-    }
-
-    """,
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-#  author, documentclass [howto, manual, or own class]).
-latex_documents = [
-    (
-        master_doc,
-        "Recommenders.tex",
-        "Recommenders Documentation",
-        "Recommenders",
-        "manual",
-    )
-]
-
-
-# -- Options for manual page output ------------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "recommenders", "Recommenders Documentation", [author], 1)]
-
-
-# -- Options for Texinfo output ----------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-#  dir menu entry, description, category)
-texinfo_documents = [
-    (
-        master_doc,
-        "Recommenders",
-        "Recommenders Documentation",
-        author,
-        "Recommenders",
-        "One line description of project.",
-        "Miscellaneous",
-    )
-]
-
-
-# -- Options for Epub output -------------------------------------------------
-
-# Bibliographic Dublin Core info.
-epub_title = project
-
-# The unique identifier of the text. This can be a ISBN number
-# or the project homepage.
-#
-# epub_identifier = ''
-
-# A unique identification for the text.
-#
-# epub_uid = ''
-
-# A list of files that should not be packed into the epub file.
-epub_exclude_files = ["search.html"]
-
-
-# -- Extension configuration -------------------------------------------------
-
-# -- Options for intersphinx extension ---------------------------------------
-
-# Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {"https://docs.python.org/": None}
-
-##################################################
-# Other options
-# html_favicon = os.path.join(html_static_path[0], "favicon.ico")
-
-
-# Ensure that __init__() is always documented
-# source: https://stackoverflow.com/a/5599712
-def skip(app, what, name, obj, would_skip, options):
-    if name == "__init__":
-        return False
-    return would_skip
-
-
-def setup(app):
-    app.connect("autodoc-skip-member", skip)
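> Editor note on the deletion above: the removed conf.py forced `__init__` docstrings to render via an autodoc-skip-member hook, while the new Jupyter Book pages request this explicitly with `:special-members: __init__` on each `automodule` directive. For anyone maintaining a standalone Sphinx config alongside this setup, the equivalent hook (copied verbatim from the deleted file) is:

```python
# Ensure that __init__() is always documented (from the removed conf.py);
# in this PR the same effect comes from :special-members: __init__ in the .rst pages.
def skip(app, what, name, obj, would_skip, options):
    if name == "__init__":
        return False
    return would_skip


def setup(app):
    app.connect("autodoc-skip-member", skip)
```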
diff --git a/docs/source/index.rst b/docs/source/index.rst
deleted file mode 100644
index 624d2cf63d..0000000000
--- a/docs/source/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-
-Recommender Utilities
-===================================================
-
-The `Recommenders repository `_ provides examples and best practices for building recommendation systems, provided as Jupyter notebooks.
-
-The module `recommenders `_ contains functions to simplify common tasks used when developing and
-evaluating recommender systems.
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Contents:
-
-   Utils
-   Datasets
-   Evaluation
-   Recommender algorithms
-   Hyperparameter tuning
-
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
diff --git a/docs/source/tuning.rst b/docs/tuning.rst
similarity index 100%
rename from docs/source/tuning.rst
rename to docs/tuning.rst
index 24f0fab9bf..35f1dce1c5 100644
--- a/docs/source/tuning.rst
+++ b/docs/tuning.rst
@@ -4,8 +4,8 @@ Hyperparameter tuning module
 *********************************
 
 Hyperparameter tuning module from Recommenders utilities.
+
 Parameter sweep utils
 ===============================
-
 .. automodule:: recommenders.tuning.parameter_sweep
     :members:
\ No newline at end of file
diff --git a/docs/source/utils.rst b/docs/utils.rst
similarity index 94%
rename from docs/source/utils.rst
rename to docs/utils.rst
index b0f8b166e8..1e168a146a 100644
--- a/docs/source/utils.rst
+++ b/docs/utils.rst
@@ -6,65 +6,57 @@ Common utilities module
 
 General utilities
 ===============================
-
 .. automodule:: recommenders.utils.general_utils
     :members:
 
 
 GPU utilities
 ===============================
-
 .. automodule:: recommenders.utils.gpu_utils
     :members:
 
 
 Kubernetes utilities
 ===============================
-
 .. automodule:: recommenders.utils.k8s_utils
     :members:
 
 
 Notebook utilities
 ===============================
-
 .. automodule:: recommenders.utils.notebook_utils
     :members:
-
 .. automodule:: recommenders.utils.notebook_memory_management
     :members:
 
 
 Python utilities
 ===============================
-
 .. automodule:: recommenders.utils.python_utils
     :members:
 
 
 Spark utilities
 ===============================
-
 .. automodule:: recommenders.utils.spark_utils
     :members:
 
 
 Tensorflow utilities
 ===============================
-
 .. automodule:: recommenders.utils.tf_utils
     :members:
+    :special-members: __init__
 
 
 Timer
 ===============================
-
 .. automodule:: recommenders.utils.timer
     :members:
+    :special-members: __init__
 
 
 Plot utilities
 ===============================
-
 .. automodule:: recommenders.utils.plot
     :members:
\ No newline at end of file
diff --git a/recommenders/__init__.py b/recommenders/__init__.py
index 75648c36e0..e28bf197ff 100644
--- a/recommenders/__init__.py
+++ b/recommenders/__init__.py
@@ -3,7 +3,7 @@
 
 __title__ = "Recommenders"
 __version__ = "1.1.1"
-__author__ = "RecoDev Team at Microsoft"
+__author__ = "Recommenders contributors"
 __license__ = "MIT"
 __copyright__ = "Copyright 2018-present Recommenders contributors."
 
diff --git a/recommenders/datasets/pandas_df_utils.py b/recommenders/datasets/pandas_df_utils.py
index 74327392c4..50bd83dd8a 100644
--- a/recommenders/datasets/pandas_df_utils.py
+++ b/recommenders/datasets/pandas_df_utils.py
@@ -87,7 +87,7 @@ class LibffmConverter:
     """Converts an input dataframe to another dataframe in libffm format. A text file of the converted
     Dataframe is optionally generated.
 
-    .. note::
+    Note:
 
         The input dataframe is expected to represent the feature data in the following schema:
 
diff --git a/recommenders/datasets/split_utils.py b/recommenders/datasets/split_utils.py
index 1ee6f4064b..da409292b3 100644
--- a/recommenders/datasets/split_utils.py
+++ b/recommenders/datasets/split_utils.py
@@ -138,8 +138,7 @@ def _get_column_name(name, col_user, col_item):
 def split_pandas_data_with_ratios(data, ratios, seed=42, shuffle=False):
     """Helper function to split pandas DataFrame with given ratios
 
-    .. note::
-
+    Note:
         Implementation referenced from `this source `_.
 
     Args:
diff --git a/recommenders/evaluation/python_evaluation.py b/recommenders/evaluation/python_evaluation.py
index 7569c7246c..e9adf621aa 100644
--- a/recommenders/evaluation/python_evaluation.py
+++ b/recommenders/evaluation/python_evaluation.py
@@ -33,10 +33,28 @@
 
 
 class ColumnMismatchError(Exception):
+    """Exception raised when there is a mismatch in columns.
+
+    This exception is raised when an operation involving columns
+    encounters a mismatch or inconsistency.
+
+    Attributes:
+        message (str): Explanation of the error.
+    """
+
     pass
 
 
 class ColumnTypeMismatchError(Exception):
+    """Exception raised when there is a mismatch in column types.
+
+    This exception is raised when an operation involving column types
+    encounters a mismatch or inconsistency.
+
+    Attributes:
+        message (str): Explanation of the error.
+    """
+
     pass
 
 
@@ -63,7 +81,7 @@ def check_column_dtypes_wrapper(
         col_item=DEFAULT_ITEM_COL,
         col_prediction=DEFAULT_PREDICTION_COL,
         *args,
-        **kwargs
+        **kwargs,
     ):
         """Check columns of DataFrame inputs
 
@@ -81,12 +99,16 @@ def check_column_dtypes_wrapper(
             expected_true_columns.add(kwargs["col_rating"])
         if not has_columns(rating_true, expected_true_columns):
             raise ColumnMismatchError("Missing columns in true rating DataFrame")
-
+
         if not has_columns(rating_pred, {col_user, col_item, col_prediction}):
             raise ColumnMismatchError("Missing columns in predicted rating DataFrame")
-
-        if not has_same_base_dtype(rating_true, rating_pred, columns=[col_user, col_item]):
-            raise ColumnTypeMismatchError("Columns in provided DataFrames are not the same datatype")
+
+        if not has_same_base_dtype(
+            rating_true, rating_pred, columns=[col_user, col_item]
+        ):
+            raise ColumnTypeMismatchError(
+                "Columns in provided DataFrames are not the same datatype"
+            )
 
         return func(
             rating_true=rating_true,
@@ -95,7 +117,7 @@ def check_column_dtypes_wrapper(
             col_item=col_item,
             col_prediction=col_prediction,
             *args,
-            **kwargs
+            **kwargs,
         )
 
     return check_column_dtypes_wrapper
@@ -750,7 +772,9 @@ def map_at_k(
     if df_merge is None:
         return 0.0
     else:
-        return (df_merge["rr"] / df_merge["actual"].apply(lambda x: min(x, k))).sum() / n_users
+        return (
+            df_merge["rr"] / df_merge["actual"].apply(lambda x: min(x, k))
+        ).sum() / n_users
 
 
 def get_top_k_items(
@@ -837,7 +861,7 @@ def check_column_dtypes_diversity_serendipity_wrapper(
         col_sim=DEFAULT_SIMILARITY_COL,
         col_relevance=None,
         *args,
-        **kwargs
+        **kwargs,
     ):
         """Check columns of DataFrame inputs
 
@@ -904,7 +928,7 @@ def check_column_dtypes_diversity_serendipity_wrapper(
             col_sim=col_sim,
             col_relevance=col_relevance,
             *args,
-            **kwargs
+            **kwargs,
         )
 
     return check_column_dtypes_diversity_serendipity_wrapper
@@ -933,7 +957,7 @@ def check_column_dtypes_novelty_coverage_wrapper(
         col_user=DEFAULT_USER_COL,
         col_item=DEFAULT_ITEM_COL,
         *args,
-        **kwargs
+        **kwargs,
     ):
         """Check columns of DataFrame inputs
 
@@ -969,7 +993,7 @@ def check_column_dtypes_novelty_coverage_wrapper(
             col_user=col_user,
             col_item=col_item,
             *args,
-            **kwargs
+            **kwargs,
         )
 
     return check_column_dtypes_novelty_coverage_wrapper
@@ -1006,7 +1030,6 @@ def _get_cosine_similarity(
     col_item=DEFAULT_ITEM_COL,
     col_sim=DEFAULT_SIMILARITY_COL,
 ):
-
     if item_sim_measure == "item_cooccurrence_count":
         # calculate item-item similarity based on item co-occurrence count
         df_cosine_similarity = _get_cooccurrence_similarity(
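> Reviewer note: the new exception docstrings describe what callers of the decorated metrics actually see. A small sketch, assuming `recommenders` is installed, the default column names (`userID`, `itemID`, `prediction`), and that `precision_at_k` is one of the wrapped metrics:

```python
# Sketch of the column check surfacing ColumnMismatchError to a caller.
import pandas as pd
from recommenders.evaluation.python_evaluation import (
    ColumnMismatchError,
    precision_at_k,
)

rating_true = pd.DataFrame({"userID": [1, 1], "itemID": [1, 2], "rating": [5, 4]})
# "item" instead of the expected "itemID" column
rating_pred = pd.DataFrame(
    {"userID": [1, 1], "item": [1, 2], "prediction": [5.0, 4.0]}
)

try:
    precision_at_k(rating_true, rating_pred, k=2)
except ColumnMismatchError as e:
    print(f"caught: {e}")  # Missing columns in predicted rating DataFrame
```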
+ """ + pass @@ -63,7 +81,7 @@ def check_column_dtypes_wrapper( col_item=DEFAULT_ITEM_COL, col_prediction=DEFAULT_PREDICTION_COL, *args, - **kwargs + **kwargs, ): """Check columns of DataFrame inputs @@ -81,12 +99,16 @@ def check_column_dtypes_wrapper( expected_true_columns.add(kwargs["col_rating"]) if not has_columns(rating_true, expected_true_columns): raise ColumnMismatchError("Missing columns in true rating DataFrame") - + if not has_columns(rating_pred, {col_user, col_item, col_prediction}): raise ColumnMismatchError("Missing columns in predicted rating DataFrame") - - if not has_same_base_dtype(rating_true, rating_pred, columns=[col_user, col_item]): - raise ColumnTypeMismatchError("Columns in provided DataFrames are not the same datatype") + + if not has_same_base_dtype( + rating_true, rating_pred, columns=[col_user, col_item] + ): + raise ColumnTypeMismatchError( + "Columns in provided DataFrames are not the same datatype" + ) return func( rating_true=rating_true, @@ -95,7 +117,7 @@ def check_column_dtypes_wrapper( col_item=col_item, col_prediction=col_prediction, *args, - **kwargs + **kwargs, ) return check_column_dtypes_wrapper @@ -750,7 +772,9 @@ def map_at_k( if df_merge is None: return 0.0 else: - return (df_merge["rr"] / df_merge["actual"].apply(lambda x: min(x, k))).sum() / n_users + return ( + df_merge["rr"] / df_merge["actual"].apply(lambda x: min(x, k)) + ).sum() / n_users def get_top_k_items( @@ -837,7 +861,7 @@ def check_column_dtypes_diversity_serendipity_wrapper( col_sim=DEFAULT_SIMILARITY_COL, col_relevance=None, *args, - **kwargs + **kwargs, ): """Check columns of DataFrame inputs @@ -904,7 +928,7 @@ def check_column_dtypes_diversity_serendipity_wrapper( col_sim=col_sim, col_relevance=col_relevance, *args, - **kwargs + **kwargs, ) return check_column_dtypes_diversity_serendipity_wrapper @@ -933,7 +957,7 @@ def check_column_dtypes_novelty_coverage_wrapper( col_user=DEFAULT_USER_COL, col_item=DEFAULT_ITEM_COL, *args, - **kwargs + **kwargs, ): """Check columns of DataFrame inputs @@ -969,7 +993,7 @@ def check_column_dtypes_novelty_coverage_wrapper( col_user=col_user, col_item=col_item, *args, - **kwargs + **kwargs, ) return check_column_dtypes_novelty_coverage_wrapper @@ -1006,7 +1030,6 @@ def _get_cosine_similarity( col_item=DEFAULT_ITEM_COL, col_sim=DEFAULT_SIMILARITY_COL, ): - if item_sim_measure == "item_cooccurrence_count": # calculate item-item similarity based on item co-occurrence count df_cosine_similarity = _get_cooccurrence_similarity( diff --git a/recommenders/evaluation/spark_evaluation.py b/recommenders/evaluation/spark_evaluation.py index b4d6ea6891..2e376edc28 100644 --- a/recommenders/evaluation/spark_evaluation.py +++ b/recommenders/evaluation/spark_evaluation.py @@ -150,7 +150,7 @@ def rsquared(self): def exp_var(self): """Calculate explained variance. - .. note:: + Note: Spark MLLib's implementation is buggy (can lead to values > 1), hence we use var(). Returns: @@ -161,7 +161,7 @@ def exp_var(self): if var1 is None or var2 is None: return -np.inf - else: + else: # numpy divide is more tolerant to var2 being zero return 1 - np.divide(var1, var2) @@ -187,7 +187,7 @@ def __init__( precision. The implementations of precision@k, ndcg@k, and mean average precision are referenced from Spark MLlib, which - can be found at `here `_. + can be found at `the link `_. Args: rating_true (pyspark.sql.DataFrame): DataFrame of true rating data (in the @@ -203,7 +203,7 @@ def __init__( values are "top_k", "by_time_stamp", and "by_threshold". 
diff --git a/recommenders/models/deeprec/models/sequential/nextitnet.py b/recommenders/models/deeprec/models/sequential/nextitnet.py
index ddf62b7ed0..3ba8a7912b 100644
--- a/recommenders/models/deeprec/models/sequential/nextitnet.py
+++ b/recommenders/models/deeprec/models/sequential/nextitnet.py
@@ -16,8 +16,7 @@ class NextItNetModel(SequentialBaseModel):
     Yuan, Fajie, et al. "A Simple Convolutional Generative Network for Next Item
     Recommendation", in Web Search and Data Mining, 2019.
 
-    .. note::
-
+    Note:
         It requires strong sequence with dataset.
     """
 
@@ -45,7 +44,6 @@ def _build_seq_graph(self):
         )
 
         with tf.compat.v1.variable_scope("nextitnet", reuse=tf.compat.v1.AUTO_REUSE):
-
             dilate_input = tf.concat(
                 [item_history_embedding, cate_history_embedding], 2
             )
diff --git a/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py b/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py
index df7ea906fa..8d8f4c7827 100644
--- a/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py
+++ b/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py
@@ -59,7 +59,6 @@ def __init__(
         activation=None,
         reuse=None,
     ):
-
         super(Time4LSTMCell, self).__init__(_reuse=reuse)
         if not state_is_tuple:
             logging.warn(
@@ -127,6 +126,17 @@ def output_size(self):
         return self._output_size
 
     def call(self, inputs, state):
+        """Call method for the Time4LSTMCell.
+
+        Args:
+            inputs: A 2D Tensor of shape [batch_size, input_size].
+            state: A 2D Tensor of shape [batch_size, state_size].
+
+        Returns:
+            A tuple containing:
+            - A 2D Tensor of shape [batch_size, output_size].
+            - A 2D Tensor of shape [batch_size, state_size].
+        """
         time_now_score = tf.expand_dims(inputs[:, -1], -1)
         time_last_score = tf.expand_dims(inputs[:, -2], -1)
         inputs = inputs[:, :-2]
@@ -314,7 +324,6 @@ def __init__(
         activation=None,
         reuse=None,
     ):
-
         super(Time4ALSTMCell, self).__init__(_reuse=reuse)
         if not state_is_tuple:
             logging.warn(
@@ -382,6 +391,17 @@ def output_size(self):
         return self._output_size
 
     def call(self, inputs, state):
+        """Call method for the Time4ALSTMCell.
+
+        Args:
+            inputs: A 2D Tensor of shape [batch_size, input_size].
+            state: A 2D Tensor of shape [batch_size, state_size].
+
+        Returns:
+            A tuple containing:
+            - A 2D Tensor of shape [batch_size, output_size].
+            - A 2D Tensor of shape [batch_size, state_size].
+        """
         att_score = tf.expand_dims(inputs[:, -1], -1)
         time_now_score = tf.expand_dims(inputs[:, -2], -1)
         time_last_score = tf.expand_dims(inputs[:, -3], -1)
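> Reviewer note: the added `call()` docstrings document an input convention rather than new behavior — the trailing columns of `inputs` carry time features that the cell slices off (`inputs[:, -1]` is `time_now`, `inputs[:, -2]` is `time_last`) before the LSTM arithmetic. A numpy sketch of packing a batch that way, with illustrative shapes only:

```python
# Packing [features | time_last | time_now] the way Time4LSTMCell expects.
import numpy as np

batch_size, feature_dim = 4, 8
features = np.random.rand(batch_size, feature_dim).astype(np.float32)
time_last = np.random.rand(batch_size, 1).astype(np.float32)
time_now = np.random.rand(batch_size, 1).astype(np.float32)

# Shape (4, 10); the cell strips the two trailing columns with inputs[:, :-2]
inputs = np.concatenate([features, time_last, time_now], axis=1)
print(inputs.shape)
```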
""" @@ -45,7 +44,6 @@ def _build_seq_graph(self): ) with tf.compat.v1.variable_scope("nextitnet", reuse=tf.compat.v1.AUTO_REUSE): - dilate_input = tf.concat( [item_history_embedding, cate_history_embedding], 2 ) diff --git a/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py b/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py index df7ea906fa..8d8f4c7827 100644 --- a/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py +++ b/recommenders/models/deeprec/models/sequential/rnn_cell_implement.py @@ -59,7 +59,6 @@ def __init__( activation=None, reuse=None, ): - super(Time4LSTMCell, self).__init__(_reuse=reuse) if not state_is_tuple: logging.warn( @@ -127,6 +126,17 @@ def output_size(self): return self._output_size def call(self, inputs, state): + """Call method for the Time4LSTMCell. + + Args: + inputs: A 2D Tensor of shape [batch_size, input_size]. + state: A 2D Tensor of shape [batch_size, state_size]. + + Returns: + A tuple containing: + - A 2D Tensor of shape [batch_size, output_size]. + - A 2D Tensor of shape [batch_size, state_size]. + """ time_now_score = tf.expand_dims(inputs[:, -1], -1) time_last_score = tf.expand_dims(inputs[:, -2], -1) inputs = inputs[:, :-2] @@ -314,7 +324,6 @@ def __init__( activation=None, reuse=None, ): - super(Time4ALSTMCell, self).__init__(_reuse=reuse) if not state_is_tuple: logging.warn( @@ -382,6 +391,17 @@ def output_size(self): return self._output_size def call(self, inputs, state): + """Call method for the Time4ALSTMCell. + + Args: + inputs: A 2D Tensor of shape [batch_size, input_size]. + state: A 2D Tensor of shape [batch_size, state_size]. + + Returns: + A tuple containing: + - A 2D Tensor of shape [batch_size, output_size]. + - A 2D Tensor of shape [batch_size, state_size]. + """ att_score = tf.expand_dims(inputs[:, -1], -1) time_now_score = tf.expand_dims(inputs[:, -2], -1) time_last_score = tf.expand_dims(inputs[:, -3], -1) diff --git a/recommenders/models/newsrec/models/layers.py b/recommenders/models/newsrec/models/layers.py index 669040cc5c..862996459c 100644 --- a/recommenders/models/newsrec/models/layers.py +++ b/recommenders/models/newsrec/models/layers.py @@ -56,7 +56,7 @@ def build(self, input_shape): super(AttLayer2, self).build(input_shape) # be sure you call this somewhere! def call(self, inputs, mask=None, **kwargs): - """Core implemention of soft attention + """Core implementation of soft attention. Args: inputs (object): input tensor. @@ -84,7 +84,7 @@ def call(self, inputs, mask=None, **kwargs): return K.sum(weighted_input, axis=1) def compute_mask(self, input, input_mask=None): - """Compte output mask value + """Compte output mask value. Args: input (object): input tensor. @@ -96,7 +96,7 @@ def compute_mask(self, input, input_mask=None): return None def compute_output_shape(self, input_shape): - """Compute shape of output tensor + """Compute shape of output tensor. Args: input_shape (tuple): shape of input tensor. @@ -112,7 +112,7 @@ class SelfAttention(layers.Layer): Args: multiheads (int): The number of heads. - head_dim (object): Dimention of each head. + head_dim (object): Dimension of each head. mask_right (boolean): whether to mask right words. Returns: @@ -313,6 +313,14 @@ def __init__(self, **kwargs): super(ComputeMasking, self).__init__(**kwargs) def call(self, inputs, **kwargs): + """Call method for ComputeMasking. + + Args: + inputs (object): input tensor. + + Returns: + bool tensor: True for values not equal to zero. 
+ """ mask = K.not_equal(inputs, 0) return K.cast(mask, K.floatx()) @@ -321,7 +329,7 @@ def compute_output_shape(self, input_shape): class OverwriteMasking(layers.Layer): - """Set values at spasific positions to zero. + """Set values at specific positions to zero. Args: inputs (list): value tensor and mask tensor. @@ -337,6 +345,14 @@ def build(self, input_shape): super(OverwriteMasking, self).build(input_shape) def call(self, inputs, **kwargs): + """Call method for OverwriteMasking. + + Args: + inputs (list): value tensor and mask tensor. + + Returns: + object: tensor after setting values to zero. + """ return inputs[0] * K.expand_dims(inputs[1]) def compute_output_shape(self, input_shape): diff --git a/recommenders/models/sar/sar_singlenode.py b/recommenders/models/sar/sar_singlenode.py index 5cc2bc854b..570e0d04a3 100644 --- a/recommenders/models/sar/sar_singlenode.py +++ b/recommenders/models/sar/sar_singlenode.py @@ -226,9 +226,8 @@ def set_index(self, df): def fit(self, df): """Main fit method for SAR. - .. note:: - - Please make sure that `df` has no duplicates. + Note: + Please make sure that `df` has no duplicates. Args: df (pandas.DataFrame): User item rating dataframe (without duplicates). diff --git a/recommenders/utils/general_utils.py b/recommenders/utils/general_utils.py index 115e185328..92bc61b114 100644 --- a/recommenders/utils/general_utils.py +++ b/recommenders/utils/general_utils.py @@ -8,7 +8,7 @@ def invert_dictionary(dictionary): """Invert a dictionary - .. note:: + Note: If the dictionary has unique keys and unique values, the inversion would be perfect. However, if there are repeated values, the inversion can take different keys diff --git a/recommenders/utils/notebook_utils.py b/recommenders/utils/notebook_utils.py index 5b9fd9f5e8..ae9d4be2fb 100644 --- a/recommenders/utils/notebook_utils.py +++ b/recommenders/utils/notebook_utils.py @@ -57,10 +57,7 @@ def _update_parameters(parameter_cell_source, new_parameters): new_value = f'"{new_value}"' # Define a regular expression pattern to match parameter assignments and ignore comments - pattern = re.compile( - rf"(\b{param})\s*=\s*([^#\n]+)(?:#.*$)?", - re.MULTILINE - ) + pattern = re.compile(rf"(\b{param})\s*=\s*([^#\n]+)(?:#.*$)?", re.MULTILINE) modified_cell_source = pattern.sub(rf"\1 = {new_value}", modified_cell_source) return modified_cell_source @@ -71,11 +68,10 @@ def execute_notebook( ): """Execute a notebook while passing parameters to it. - .. note:: - - Ensure your Jupyter Notebook is set up with parameters that can be - modified and read. Use Markdown cells to specify parameters that need - modification and code cells to set parameters that need to be read. + Note: + Ensure your Jupyter Notebook is set up with parameters that can be + modified and read. Use Markdown cells to specify parameters that need + modification and code cells to set parameters that need to be read. Args: input_notebook (str): Path to the input notebook. diff --git a/recommenders/utils/python_utils.py b/recommenders/utils/python_utils.py index 44d802913b..6efdedfed4 100644 --- a/recommenders/utils/python_utils.py +++ b/recommenders/utils/python_utils.py @@ -234,7 +234,7 @@ def rescale(data, new_min=0, new_max=1, data_min=None, data_max=None): If data_min and data_max are explicitly provided, they will be used as the old min/max values instead of taken from the data. - .. note:: + Note: This is same as the `scipy.MinMaxScaler` with the exception that we can override the min/max of the old scale. 
diff --git a/recommenders/utils/python_utils.py b/recommenders/utils/python_utils.py
index 44d802913b..6efdedfed4 100644
--- a/recommenders/utils/python_utils.py
+++ b/recommenders/utils/python_utils.py
@@ -234,7 +234,7 @@ def rescale(data, new_min=0, new_max=1, data_min=None, data_max=None):
     If data_min and data_max are explicitly provided, they will be used
     as the old min/max values instead of taken from the data.
 
-    .. note::
+    Note:
         This is same as the `scipy.MinMaxScaler` with the exception that we can override
         the min/max of the old scale.
 
diff --git a/recommenders/utils/tf_utils.py b/recommenders/utils/tf_utils.py
index e50f659d0b..86b1cb6152 100644
--- a/recommenders/utils/tf_utils.py
+++ b/recommenders/utils/tf_utils.py
@@ -61,8 +61,7 @@ def pandas_input_fn(
     """Pandas input function for TensorFlow high-level API Estimator.
     This function returns a `tf.data.Dataset` function.
 
-    .. note::
-
+    Note:
         `tf.estimator.inputs.pandas_input_fn` cannot handle array/list column properly.
         For more information, see
         https://www.tensorflow.org/api_docs/python/tf/estimator/inputs/numpy_input_fn
@@ -199,7 +198,7 @@ def evaluation_log_hook(
 ):
     """Evaluation log hook for TensorFlow high-level API Estimator.
 
-    .. note::
+    Note:
         TensorFlow Estimator model uses the last checkpoint weights for evaluation or prediction.
         In order to get the most up-to-date evaluation results while training,
         set model's `save_checkpoints_steps` to be equal or greater than hook's `every_n_iter`.
@@ -216,7 +215,7 @@ def evaluation_log_hook(
         batch_size (int): Number of samples fed into the model at a time.
             Note, the batch size doesn't affect on evaluation results.
         eval_fns (iterable of functions): List of evaluation functions that have signature of
-            (true_df, prediction_df, **eval_kwargs)->(float). If None, loss is calculated on true_df.
+            `(true_df, prediction_df, **eval_kwargs)`->`float`. If None, loss is calculated on `true_df`.
         eval_kwargs: Evaluation function's keyword arguments.
             Note, prediction column name should be 'prediction'
 
diff --git a/setup.py b/setup.py
index 3ce8b5d4bf..bb9db16883 100644
--- a/setup.py
+++ b/setup.py
@@ -95,7 +95,7 @@
 setup(
     name="recommenders",
     version=version,
-    description="Recommenders - Python utilities for building recommender systems",
+    description="Recommenders - Python utilities for building recommendation systems",
     long_description=LONG_DESCRIPTION,
     long_description_content_type="text/markdown",
     url="https://github.com/recommenders-team/recommenders",
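> Closing note on the `rescale` docstring change above: the ability to override the old min/max is the point of the function. A usage sketch, assuming `recommenders` is installed and the published signature `rescale(data, new_min=0, new_max=1, data_min=None, data_max=None)`:

```python
# Rescaling against a fixed old range, which a plain min-max scaler cannot do.
import numpy as np
from recommenders.utils.python_utils import rescale

ratings = np.array([1.0, 3.0, 5.0])
# Map the known 1-5 rating scale onto [0, 1], even if this sample does not
# cover the full range
print(rescale(ratings, new_min=0, new_max=1, data_min=1, data_max=5))
# [0.  0.5 1. ]
```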