[MINOR][PYTHON] Remove some unused third-party library imports
### What changes were proposed in this pull request?
Remove some unused third-party library imports.

### Why are the changes needed?
These imports are never used.

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
CI

### Was this patch authored or co-authored using generative AI tooling?
no
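
For background on the `types.py` change below: a module that is referenced only in type annotations can be imported inside an `if TYPE_CHECKING:` block, so no runtime availability flag such as `has_numpy` is needed. A minimal sketch of that pattern, assuming a hypothetical helper (`to_float_array` is illustrative and not part of this commit):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers (e.g. mypy); never executed at
    # runtime, so NumPy need not be installed just to import this module.
    import numpy as np


def to_float_array(values: list) -> "np.ndarray":
    # Hypothetical helper for illustration: the annotation is a string, so the
    # type checker resolves it lazily; the actual import happens only when the
    # function is called.
    import numpy as np

    return np.asarray(values, dtype=float)
```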

Closes #48954 from zhengruifeng/fix_has_numpy.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
zhengruifeng committed Nov 25, 2024
1 parent 7b4922e commit da4bcb7
Showing 3 changed files with 1 addition and 17 deletions.
7 changes: 0 additions & 7 deletions python/pyspark/sql/connect/session.py
@@ -113,13 +113,6 @@
 from pyspark.sql.connect.shell.progress import ProgressHandler
 from pyspark.sql.connect.datasource import DataSourceRegistration
 
-try:
-    import memory_profiler  # noqa: F401
-
-    has_memory_profiler = True
-except Exception:
-    has_memory_profiler = False
-
 
 class SparkSession:
     # The active SparkSession for the current thread
5 changes: 1 addition & 4 deletions python/pyspark/sql/pandas/types.py
@@ -53,14 +53,11 @@
 )
 from pyspark.errors import PySparkTypeError, UnsupportedOperationException, PySparkValueError
 from pyspark.loose_version import LooseVersion
-from pyspark.sql.utils import has_numpy
-
-if has_numpy:
-    import numpy as np
 
 if TYPE_CHECKING:
     import pandas as pd
     import pyarrow as pa
+    import numpy as np
 
     from pyspark.sql.pandas._typing import SeriesLike as PandasSeriesLike
     from pyspark.sql.pandas._typing import DataFrameLike as PandasDataFrameLike
6 changes: 0 additions & 6 deletions python/pyspark/sql/session.py
@@ -90,12 +90,6 @@
 from pyspark.sql.connect.client import SparkConnectClient
 from pyspark.sql.connect.shell.progress import ProgressHandler
 
-try:
-    import memory_profiler  # noqa: F401
-
-    has_memory_profiler = True
-except Exception:
-    has_memory_profiler = False
 
 __all__ = ["SparkSession"]
 
