
Updated read_excel docstring to match style guide formatting #53953

Conversation

Opened by GarrettDaniel.
@rhshadrach rhshadrach left a comment


Thanks for the PR! Overall looks good - some requests.

@@ -79,7 +79,7 @@
)
_read_excel_doc = (
"""
-Read an Excel file into a pandas DataFrame.
+Read an Excel file into a ``pandas`` ``DataFrame``.
rhshadrach (Member):

I'm thinking we shouldn't wrap DataFrame throughout. In my opinion, it doesn't need the highlighting, and it can serve as a distraction.

GarrettDaniel (Author):

Good call. I wasn't sure on that one so I'm glad you provided some clarity on it. It could go either way, but I agree that it gets distracting with how often DataFrame is referenced in the docs.

Comment on lines 155 to 158
- ``"xlrd"`` supports old-style Excel files (.xls).
- ``"openpyxl"`` supports newer Excel file formats.
- ``"odf"`` supports OpenDocument file formats (.odf, .ods, .odt).
- ``"pyxlsb"`` supports Binary Excel files.
rhshadrach (Member):

I would suggest no double quotes here (so just e.g. ``xlrd``). We can just be referring to the engine itself, rather than the string argument.

GarrettDaniel (Author):

Good call! I'll make sure to change that in the commit I'm working on.
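As an aside for readers: the engine names above map to optional dependencies. A minimal sketch (assuming `openpyxl` is installed) of naming the engine explicitly rather than relying on extension detection:

```python
import io

import pandas as pd

# Build a small workbook in memory, then read it back with the engine
# named explicitly instead of letting pandas infer it from the file name.
buf = io.BytesIO()
pd.DataFrame({"a": [1, 2]}).to_excel(buf, index=False)
buf.seek(0)

df = pd.read_excel(buf, engine="openpyxl")
```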

-Detect missing value markers (empty strings and the value of na_values). In
-data without any NAs, passing na_filter=False can improve the performance
+Detect missing value markers (empty strings and the value of ``na_values``). In
+data without any NAs, ``passing na_filter=False`` can improve the performance
rhshadrach (Member):

`passing` shouldn't be included (just `na_filter=False`).

GarrettDaniel (Author):

Nice catch!
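For readers, a sketch of the behaviour being documented (shown with `read_csv`, which shares the `na_filter` parameter):

```python
import io

import pandas as pd

csv = "a,b\n1,\n2,x\n"

# Default: empty strings are detected as missing values.
with_filter = pd.read_csv(io.StringIO(csv))

# na_filter=False skips missing-value detection entirely, which can
# speed up reading data that is known to contain no NAs; the empty
# field stays an empty string.
without_filter = pd.read_csv(io.StringIO(csv), na_filter=False)
```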

arrays, nullable dtypes are used for all dtypes that have a nullable
implementation when "numpy_nullable" is set, pyarrow is used for all
dtypes if "pyarrow" is set.
dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to ``numpy`` backed ``DataFrames``
rhshadrach (Member):

NumPy is the correct capitalization, now?

GarrettDaniel (Author):

It is, I just wasn't sure if it would be better to go with NumPy or numpy, but like the comment above with DataFrame and pandas, I agree that it would be distracting to make every instance of NumPy a code block.
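For context, a sketch of what the `dtype_backend` parameter controls (shown with `read_csv`, which shares it; requires pandas 2.0+):

```python
import io

import pandas as pd

# "numpy_nullable" selects pandas' nullable extension dtypes, so a
# missing integer becomes pd.NA instead of forcing the column to float64.
df = pd.read_csv(io.StringIO("a,b\n1,\n2,3\n"), dtype_backend="numpy_nullable")
```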

@@ -356,7 +356,7 @@
1 NaN 2
2 #Comment 3

-Comment lines in the excel input file can be skipped using the `comment` kwarg
+Comment lines in the excel input file can be skipped using the ``comment`` ``kwarg``
rhshadrach (Member):

`kwarg` here should not be included. Can you write out the full phrase here: keyword argument?

GarrettDaniel (Author):

Good call. Should comment still be in a code block in this case since it's referencing the name of a parameter?

rhshadrach (Member):

Yes - I think so.
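For readers, the keyword argument under discussion behaves like this (sketched with `read_csv`, which shares the `comment` parameter):

```python
import io

import pandas as pd

data = "a,b\n1,2\n#Comment,3\n"

# Lines beginning with the comment character are skipped entirely,
# so only the first data row survives.
df = pd.read_csv(io.StringIO(data), comment="#")
```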

-parse some cells as date just change their type in Excel to "Text".
-For non-standard datetime parsing, use ``pd.to_datetime`` after ``pd.read_excel``.
+parse some cells as date, just change their type in Excel to "Text".
+For non-standard ``datetime`` parsing, use ``pd.to_datetime`` after ``pd.read_excel``.
rhshadrach (Member):

I don't think datetime here is being used in a technical sense (e.g. specifying a package or snippet of code), and so shouldn't be highlighted.

GarrettDaniel (Author), Jul 3, 2023:

Good point. If it was referencing the datetime package or dtype, that would be a more appropriate time to put it in a code block.
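The pattern the docstring recommends, sketched for a non-standard (day-first) format that default parsing could misread:

```python
import pandas as pd

# Parse explicitly with pd.to_datetime after reading, supplying the
# day-first format instead of relying on default inference.
raw = pd.Series(["03/07/2023", "15/07/2023"])
parsed = pd.to_datetime(raw, format="%d/%m/%Y")
```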

@@ -345,7 +345,7 @@
1 string2 2.0
2 #Comment 3.0

-True, False, and NA values, and thousands separators have defaults,
+``True``, ``False``, ``NaN`` values, and thousands of separators have defaults,
rhshadrach (Member):

The docs use NA as opposed to NaN in various places because it can refer to `pd.NA`. Can you revert this change?

GarrettDaniel (Author):

On it.
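For context on NA vs. NaN: `pd.NA` is the marker used by pandas' nullable dtypes, distinct from float `NaN`, which is why the docs prefer the broader term:

```python
import pandas as pd

# With a nullable integer dtype, the missing entry is pd.NA rather than
# float NaN, and the column keeps its integer type.
s = pd.Series([1, None], dtype="Int64")
```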

* If callable, then evaluate each column name against it and parse the
column if the callable returns ``True``.

Returns a subset of the columns according to behavior above.
dtype : Type name or dict of column -> type, default None
-    Data type for data or columns. E.g. {{'a': np.float64, 'b': np.int32}}
+    Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32}``
rhshadrach (Member):

I believe you need to leave the double curly braces {{ and }} for jinja-style templating.

GarrettDaniel (Author), Jul 3, 2023:

I do agree that we need that for jinja-style templating, but from what I can tell, we aren't using jinja-style templating in this case. If we were, wouldn't we be passing in parameters or referencing an external file to be read in and rendered like a formatted string? (https://realpython.com/primer-on-jinja-templating/). At the moment, it renders like a regular string, but this seems like a perfect use case for a code block.

[screenshot of the rendered docstring]

rhshadrach (Member):

Agreed on using the code-block for this, but I believe the change to the curly braces is making the CI fail; e.g.:

https://github.com/pandas-dev/pandas/actions/runs/5426779826/jobs/9869261284?pr=53953#step:9:54

GarrettDaniel (Author), Jul 3, 2023:

Ok, that's good to know. In that case, should we just leave it to render as a string with the double curly braces so that CI won't fail? I suppose we could put that in a code block, but it might look a little strange (i.e. `{{'a': np.float64, 'b': np.int32}}`).

rhshadrach (Member):

> (i.e. {{'a': np.float64, 'b': np.int32}})

When you do this, do both curly braces render in the docs? I would expect only one renders.

GarrettDaniel (Author), Jul 3, 2023:

I've been trying to get the docs to render as per the guide (https://pandas.pydata.org/docs/development/contributing_documentation.html#building-the-documentation), but I keep running into `ModuleNotFoundError: No module named 'pandas.__libs.pandas_parser'` when running `python3 make.py html` in the doc directory. This is most likely because of the version of pandas I have on my laptop, but I tried upgrading to the newest version and uninstalling and reinstalling, and the error still persists. At this point, I'll commit it with the code block around the braces, and if the CI build still fails I'll just remove the code block and leave it to render as standard text.

rhshadrach (Member):

Did you create a development environment and compile/install pandas?

https://pandas.pydata.org/docs/development/contributing_environment.html

GarrettDaniel (Author), Jul 13, 2023:

I've tried a few of the different methods on that doc, namely with Docker, DevContainers, and Mamba, and none of them were able to successfully import pandas. I believe it has something to do with a corporate proxy or security issue, but I either run into that same `ModuleNotFoundError`, or I run into `ERROR: Disabling PEP 517 processing is invalid: project specifies a build backend of mesonpy in pyproject.toml` when trying to run `python -m pip install -e . --no-build-isolation --no-use-pep517`. If you have any ideas on this, let me know. Otherwise, I've made all of the other requested changes in the other comments.

rhshadrach (Member), Jul 15, 2023:

I believe you're going off the stable (e.g. 2.0.3) version of the docs for building pandas. When reading docs on development, it's best to read the dev docs as we will break dev-specific things well before releasing 😆. See here:

https://pandas.pydata.org/pandas-docs/dev/development/contributing_environment.html#step-3-build-and-install-pandas

rhshadrach (Member), Jul 15, 2023:

> Otherwise, I've made all of the other requested changes in the other comments.

The CI is failing; this will need to be addressed (which is easiest when you can build locally).
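On the curly-brace question above: pandas builds shared docstrings with str.format-style substitution, which is why literal braces in a docstring template must be doubled. A sketch of the assumed mechanism (not pandas' actual code):

```python
# In a template processed by str.format, "{{" renders as a literal "{",
# while "{name}" is substituted; single braces around non-placeholder
# text would raise an error during formatting.
template = (
    "dtype : Type name or dict of column -> type, default None\n"
    "    Data type for data or columns. E.g. ``{{'a': np.float64, 'b': np.int32}}``\n"
    "{storage_options}\n"
)
rendered = template.format(storage_options="storage_options : dict, optional")
```

After formatting, the doubled braces collapse to single braces in the rendered docstring, so only one brace appears in the built docs.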

@mroeschke mroeschke added the IO Excel read_excel, to_excel label Jul 6, 2023
mroeschke (Member):
Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen.

@mroeschke mroeschke closed this Aug 11, 2023
@linus-md linus-md mentioned this pull request Dec 11, 2023