chore: update package config #1448

Open · wants to merge 6 commits into base: develop
9 changes: 2 additions & 7 deletions .github/workflows/pull-request.yml
@@ -52,9 +52,7 @@ jobs:
- name: Install pip dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -r requirements-test.txt
python -m pip install ".[dev,test]"

- name: Install the package
run: make install
@@ -103,10 +101,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -r requirements-test.txt
python -m pip install -r requirements-docs.txt
python -m pip install ".[dev,test,docs]"

- name: Install the package
run: make install
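For reference, the pattern applied throughout these workflows swaps per-file installs for a single extras-based install. Roughly, and assuming the new `dev` and `test` extras in pyproject.toml mirror the deleted requirements files:

```console
# Before: one pip invocation per requirements file
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -r requirements-test.txt

# After: one install that resolves the extras declared in pyproject.toml
python -m pip install ".[dev,test]"
```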
4 changes: 1 addition & 3 deletions .github/workflows/release-deprecated.yml
@@ -34,9 +34,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -r requirements-test.txt
python -m pip install ".[dev,test]"

- name: Install
run: make install
4 changes: 1 addition & 3 deletions .github/workflows/release.yml
@@ -35,9 +35,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -r requirements-test.txt
python -m pip install ".[dev,test]"

- name: Install
run: make install
14 changes: 5 additions & 9 deletions .github/workflows/tests.yml
@@ -81,8 +81,7 @@ jobs:
${{ runner.os }}-${{ matrix.pandas }}-pip-
- run: |
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt "${{ matrix.pandas }}" "${{ matrix.numpy }}"
pip install -r requirements-test.txt
pip install ".[test]" "${{ matrix.pandas }}" "${{ matrix.numpy }}"
- run: echo "YDATA_PROFILING_NO_ANALYTICS=False" >> $GITHUB_ENV
- run: make install

@@ -130,8 +129,7 @@ jobs:
${{ runner.os }}-${{ matrix.pandas }}-pip-
- run: |
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt "${{ matrix.pandas }}" "${{ matrix.numpy }}"
pip install -r requirements-test.txt
pip install ".[test]" "${{ matrix.pandas }}" "${{ matrix.numpy }}"
echo "YDATA_PROFILING_NO_ANALYTICS=False" >> $GITHUB_ENV
- run: make install

@@ -146,8 +144,7 @@ jobs:
${{ runner.os }}-${{ matrix.pandas }}-pip-
- run: |
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt "${{ matrix.pandas }}" "${{ matrix.numpy }}"
pip install -r requirements-test.txt
pip install ".[test]" "${{ matrix.pandas }}" "${{ matrix.numpy }}"
- run: make install
- run: make test_cov
- run: codecov -F py${{ matrix.python-version }}-${{ matrix.os }}-${{ matrix.pandas }}-${{ matrix.numpy }}
@@ -205,14 +202,13 @@ jobs:
- run: |
pip install --upgrade pip setuptools wheel
pip install pytest-spark>=0.6.0 pyarrow==1.0.1 pyspark=="${{ matrix.spark }}"
pip install -r requirements.txt
pip install -r requirements-test.txt
pip install ".[test]"
pip install "${{ matrix.pandas }}" "${{ matrix.numpy }}"
- if: ${{ matrix.spark != '3.0.1' }}
run: echo "ARROW_PRE_0_15_IPC_FORMAT=1" >> $GITHUB_ENV
- run: echo "SPARK_LOCAL_IP=127.0.0.1" >> $GITHUB_ENV
- run: make install
- run: make install-spark-ci
- run: pip install -r requirements-spark.txt # Make sure the proper version of pandas is installed after everything
- run: pip install ".[spark]" # Make sure the proper version of pandas is installed after everything
- run: make test_spark

4 changes: 2 additions & 2 deletions Makefile
@@ -29,10 +29,10 @@ package:
twine check dist/*

install:
pip install -e .[notebook]
pip install -e ".[notebook]"

install-docs: install ### Installs regular and docs dependencies
pip install -r requirements-docs.txt
pip install -e ".[docs]"

install-spark-ci:
sudo apt-get update
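A note on the added quotes in these Makefile targets: in shells such as zsh, an unquoted `.[notebook]` is interpreted as a filename glob and aborts with "no matches found", while bash happens to pass it through literally. Quoting makes the target portable:

```console
pip install -e .[notebook]     # zsh: no matches found: .[notebook]
pip install -e ".[notebook]"   # works in bash and zsh alike
```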
2 changes: 1 addition & 1 deletion docs/support-contribution/contribution_guidelines.md
@@ -33,7 +33,7 @@ To activate the local mechanisms (created using pre-commit hooks), run the
following commands:

``` console
pip install -r requirements-dev.txt
pip install ".[dev]"
pre-commit install --hook-type commit-msg --hook-type pre-commit
```

135 changes: 135 additions & 0 deletions pyproject.toml
@@ -0,0 +1,135 @@
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "ydata-profiling"
authors = [
{name = "YData Labs Inc", email = "[email protected]"},
]
description="Generate profile report for pandas DataFrame"
readme = "README.md"
requires-python=">=3.7, <3.13"
keywords=["pandas", "data-science", "data-analysis", "python", "jupyter", "ipython"]
license = {text = "MIT"}
classifiers=[
"Development Status :: 5 - Production/Stable",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Topic :: Scientific/Engineering",
"Framework :: IPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]

dependencies = [
"scipy>=1.4.1, <1.14",
"pandas>1.1, <3.0, !=1.4.0",
"matplotlib>=3.5, <=3.10",
"pydantic>=2",
"PyYAML>=5.0.0, <6.1",
"jinja2>=2.11.1, <3.2",
"visions[type_image_path]>=0.7.5, <0.7.7",
"numpy>=1.16.0,<2.2",
# Could be optional
# Related to HTML report
"htmlmin==0.1.12",
# Correlations
"phik>=0.11.1,<0.13",
# Examples
"requests>=2.24.0, <3",
# Progress bar
"tqdm>=4.48.2, <5",
"seaborn>=0.10.1, <0.14",
"multimethod>=1.4, <2",
# metrics
"statsmodels>=0.13.2, <1",
# type checking
"typeguard>=3, <5",
"imagehash==4.3.1",
"wordcloud>=1.9.3",
"dacite>=1.8",
"numba>=0.56.0, <1",
]

dynamic = ["version"]

[project.optional-dependencies]
# dependencies for development and testing
dev = [
"black>=20.8b1",
"isort>=5.0.7",
"pre-commit>=2.8.2",
"virtualenv>=20.0.33",
"twine",
"wheel",
"myst-parser>=0.18.1",
"sphinx_rtd_theme>=0.4.3",
"sphinx-autodoc-typehints>=1.10.3",
"sphinx-multiversion>=0.2.3",
"autodoc_pydantic",
]
# this provides the recommended pyspark and pyarrow versions for spark to work on pandas-profiling
# note that if you are using pyspark 2.3 or 2.4 and pyarrow >= 0.15, you might need to
# set ARROW_PRE_0_15_IPC_FORMAT=1 in your conf/spark-env.sh for toPandas functions to work properly
spark = [
"pyspark>=2.3.0",
"pyarrow>=2.0.0",
"pandas>1.1, <2, !=1.4.0",
"numpy>=1.16.0,<1.24",
"visions[type_image_path]==0.7.5",
]
test = [
"pytest",
"coverage>=6.5, <8",
"codecov",
"pytest-cov",
"pytest-spark",
"nbval",
"pyarrow",
"twine>=3.1.1",
"kaggle",
]
notebook = [
"jupyter>=1.0.0",
"ipywidgets>=7.5.1",
]
docs = [
"mkdocs>=1.6.0,<1.7.0",
"mkdocs-material>=9.0.12,<10.0.0",
"mkdocs-material-extensions>=1.1.1,<2.0.0",
"mkdocs-table-reader-plugin<=2.2.0",
"mike>=2.1.1,<2.2.0",
"mkdocstrings[python]>=0.20.0,<1.0.0",
"mkdocs-badges",
]
unicode= [
"tangled-up-in-unicode==0.2.0",
]

[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.package-data]
ydata_profiling = ["py.typed"]

[tool.setuptools]
include-package-data = true

[project.scripts]
ydata_profiling = "ydata_profiling.controller.console:main"
pandas_profiling = "ydata_profiling.controller.console:main"

[project.urls]
homepage = "https://github.com/ydataai/ydata-profiling"
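With this file in place, any combination of the extras above can be installed in one step. Illustrative examples, using the extras exactly as declared in this pyproject.toml:

```console
# Editable install plus documentation tooling, e.g. for `make install-docs`
pip install -e ".[docs]"

# Published package with the spark and unicode extras
pip install "ydata-profiling[spark,unicode]"
```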
11 changes: 0 additions & 11 deletions requirements-dev.txt

This file was deleted.

7 changes: 0 additions & 7 deletions requirements-docs.txt

This file was deleted.

9 changes: 0 additions & 9 deletions requirements-spark.txt

This file was deleted.

9 changes: 0 additions & 9 deletions requirements-test.txt

This file was deleted.

27 changes: 0 additions & 27 deletions requirements.txt

This file was deleted.

56 changes: 1 addition & 55 deletions setup.py
@@ -1,16 +1,12 @@
from pathlib import Path

from setuptools import find_packages, setup
from setuptools import setup

# Read the contents of README file
source_root = Path(".")
with (source_root / "README.md").open(encoding="utf-8") as f:
long_description = f.read()

# Read the requirements
with (source_root / "requirements.txt").open(encoding="utf8") as f:
requirements = f.readlines()

try:
version = (source_root / "VERSION").read_text().rstrip("\n")
except FileNotFoundError:
@@ -20,58 +16,8 @@
version_file.write(f"__version__ = '{version}'")

setup(
name="ydata-profiling",
version=version,
author="YData Labs Inc",
author_email="[email protected]",
packages=find_packages("src"),
package_dir={"": "src"},
url="https://github.com/ydataai/ydata-profiling",
license="MIT",
description="Generate profile report for pandas DataFrame",
python_requires=">=3.7, <3.13",
install_requires=requirements,
extras_require={
"notebook": [
"jupyter>=1.0.0",
"ipywidgets>=7.5.1",
],
"unicode": [
"tangled-up-in-unicode==0.2.0",
],
},
package_data={
"ydata_profiling": ["py.typed"],
},
include_package_data=True,
classifiers=[
"Development Status :: 5 - Production/Stable",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Topic :: Scientific/Engineering",
"Framework :: IPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
],
keywords="pandas data-science data-analysis python jupyter ipython",
long_description=long_description,
long_description_content_type="text/markdown",
entry_points={
"console_scripts": [
"ydata_profiling = ydata_profiling.controller.console:main",
"pandas_profiling = ydata_profiling.controller.console:main",
]
},
options={"bdist_wheel": {"universal": True}},
)
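After these deletions, setup.py keeps only what pyproject.toml cannot express statically: the dynamic version handling. A sketch of the surviving file, reconstructed from the unchanged lines above (the FileNotFoundError fallback and the version-file path sit in collapsed parts of the diff, so those two details are assumptions):

```python
from pathlib import Path

from setuptools import setup

# Read the contents of README file
source_root = Path(".")
with (source_root / "README.md").open(encoding="utf-8") as f:
    long_description = f.read()

try:
    version = (source_root / "VERSION").read_text().rstrip("\n")
except FileNotFoundError:
    version = "0.0.0"  # assumption: the real fallback is hidden in the collapsed hunk

# assumption: the version-file path is also collapsed in the diff
with open("src/ydata_profiling/version.py", "w") as version_file:
    version_file.write(f"__version__ = '{version}'")

setup(
    version=version,
    options={"bdist_wheel": {"universal": True}},
)
```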