Merge branch 'master' into wrap-grd2xyz
willschlitzer authored May 25, 2021
2 parents cd26ff2 + 7c29e60 commit ccd27c0
Showing 17 changed files with 356 additions and 144 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/cache_data.yaml
Original file line number Diff line number Diff line change
@@ -26,7 +26,7 @@ jobs:

# Install GMT
- name: Install GMT
run: conda install -c conda-forge gmt=6.1.1
run: conda install -c conda-forge/label/dev gmt=6.2.0rc2

# Download remote files
- name: Download remote data
2 changes: 1 addition & 1 deletion .github/workflows/ci_docs.yml
@@ -66,7 +66,7 @@ jobs:
# Install GMT and other required dependencies from conda-forge
- name: Install dependencies
run: |
conda install conda-forge/label/dev::gmt=6.2.0rc1 \
conda install conda-forge/label/dev::gmt=6.2.0rc2 \
numpy pandas xarray netCDF4 packaging \
ipython make myst-parser \
sphinx sphinx-copybutton sphinx-gallery sphinx_rtd_theme
4 changes: 2 additions & 2 deletions .github/workflows/ci_tests.yaml
@@ -52,7 +52,7 @@ jobs:
optional-packages: ''
- python-version: 3.9
numpy-version: '1.20'
optional-packages: 'geopandas'
optional-packages: '' # 'geopandas'
defaults:
run:
shell: bash -l {0}
@@ -89,7 +89,7 @@ jobs:
# Install GMT and other required dependencies from conda-forge
- name: Install dependencies
run: |
conda install conda-forge/label/dev::gmt=6.2.0rc1 \
conda install conda-forge/label/dev::gmt=6.2.0rc2 \
numpy=${{ matrix.numpy-version }} \
pandas xarray netCDF4 packaging \
${{ matrix.optional-packages }} \
2 changes: 1 addition & 1 deletion .github/workflows/ci_tests_dev.yaml
@@ -83,7 +83,7 @@ jobs:
# Install dependencies from conda-forge
- name: Install dependencies
run: |
conda install ninja cmake libblas libcblas liblapack fftw gdal \
conda install ninja cmake libblas libcblas liblapack fftw gdal=3.2 geopandas \
ghostscript libnetcdf hdf5 zlib curl pcre make dvc
pip install --pre numpy pandas xarray netCDF4 packaging \
ipython pytest-cov pytest-mpl pytest>=6.0 sphinx-gallery
134 changes: 73 additions & 61 deletions CONTRIBUTING.md
@@ -49,7 +49,6 @@ read it carefully.
- [Testing your code](#testing-your-code)
- [Testing plots](#testing-plots)
- [Documentation](#documentation)
- [Code Review](#code-review)


## What Can I Do?
@@ -205,36 +204,73 @@ hesitate to [ask questions](#how-can-i-talk-to-you)):

### General guidelines

We follow the [git pull request workflow](http://www.asmeurer.com/git-workflow/) to
make changes to our codebase.
We follow the [git pull request workflow](http://www.asmeurer.com/git-workflow)
to make changes to our codebase.
Every change made goes through a pull request, even our own, so that our
[continuous integration](https://en.wikipedia.org/wiki/Continuous_integration) services
have a change to check that the code is up to standards and passes all our tests.
[continuous integration](https://en.wikipedia.org/wiki/Continuous_integration)
services have a chance to check that the code is up to standards and passes all
our tests.
This way, the *master* branch is always stable.

General guidelines for pull requests (PRs):

* **Open an issue first** describing what you want to do. If there is already an issue
that matches your PR, leave a comment there instead to let us know what you plan to
do.
* Each pull request should consist of a **small** and logical collection of changes.
* Larger changes should be broken down into smaller components and integrated
separately. For example, break the wrapping of aliases into multiple pull requests.
* Bug fixes should be submitted in separate PRs.
* Use underscores for all Python (*.py) files as per [PEP8](https://www.python.org/dev/peps/pep-0008/),
not hyphens. Directory names should also use underscores instead of hyphens.
* Describe what your PR changes and *why* this is a good thing. Be as specific as you
can. The PR description is how we keep track of the changes made to the project over
time.
* Do not commit changes to files that are irrelevant to your feature or bugfix (eg:
`.gitignore`, IDE project files, etc).
* Write descriptive commit messages. Chris Beams has written a
[guide](https://chris.beams.io/posts/git-commit/) on how to write good commit
messages.
* Be willing to accept criticism and work on improving your code; we don't want to break
other users' code, so care must be taken not to introduce bugs.
* Be aware that the pull request review process is not immediate, and is generally
proportional to the size of the pull request.
General guidelines for making a Pull Request (PR):

* What should be included in a PR
- Have a quick look at the titles of all the existing issues first. If there
is already an issue that matches your PR, leave a comment there to let us
know what you plan to do. Otherwise, **open an issue** describing what you
want to do.
- Each pull request should consist of a **small** and logical collection of
changes; larger changes should be broken down into smaller parts and
integrated separately.
- Bug fixes should be submitted in separate PRs.
* How to write and submit a PR
- Use underscores for all Python (*.py) files as per
[PEP8](https://www.python.org/dev/peps/pep-0008/), not hyphens. Directory
names should also use underscores instead of hyphens.
- Describe what your PR changes and *why* this is a good thing. Be as
specific as you can. The PR description is how we keep track of the changes
made to the project over time.
- Do not commit changes to files that are irrelevant to your feature or
bugfix (e.g.: `.gitignore`, IDE project files, etc).
- Write descriptive commit messages. Chris Beams has written a
[guide](https://chris.beams.io/posts/git-commit/) on how to write good
commit messages.
* PR review
- Be willing to accept criticism and work on improving your code; we don't
want to break other users' code, so care must be taken not to introduce
bugs.
- Be aware that the pull request review process is not immediate, and is
generally proportional to the size of the pull request.

#### Code Review

After you've submitted a pull request, you should expect to hear at least a
comment within a couple of days. We may suggest some changes, improvements or
alternative implementation details.

To increase the chances of getting your pull request accepted quickly, try to:

* Submit a friendly PR
- Write a good and detailed description of what the PR does.
- Write some documentation for your code (docstrings) and leave comments
explaining the *reason* behind non-obvious things.
- Write tests for the code you wrote/modified if needed.
Please refer to [Testing your code](#testing-your-code) or
[Testing plots](#testing-plots).
- Include an example of new features in the gallery or tutorials.
Please refer to [Gallery plots](#gallery-plots) or [Tutorials](#tutorials).
* Have a good coding style
- Use readable code, as it is better than clever code (even with comments).
- Follow the [PEP8](http://pep8.org) style guide for code and the
[numpy style guide](https://numpydoc.readthedocs.io/en/latest/format.html)
for docstrings. Please refer to [Code style](#code-style).

Pull requests will automatically have tests run by GitHub Actions.
This includes running both the unit tests as well as code linters.
GitHub will show the status of these checks on the pull request.
Try to get them all passing (green).
If you have any trouble, leave a comment in the PR or
[get in touch](#how-can-i-talk-to-you).

### Setting up your environment

@@ -510,11 +546,17 @@ def test_my_plotting_case():

### Documentation

Most documentation sources are in Python `*.py` files under the `examples/`
folder, and the code docstrings can be found e.g. under the `pygmt/src/` and
`pygmt/datasets/` folders. The documentation is written in
[reStructuredText](https://docutils.sourceforge.io/rst.html) and
built by [Sphinx](http://www.sphinx-doc.org/). Please refer to
[reStructuredText Cheatsheet](https://docs.generic-mapping-tools.org/latest/rst-cheatsheet.html)
if you are new to reStructuredText.

#### Building the documentation

Most documentation sources are in the `doc` folder.
We use [sphinx](http://www.sphinx-doc.org/) to build the web pages from these sources.
To build the HTML files:
To build the HTML files from sources:

```bash
cd doc
@@ -560,33 +602,3 @@ https://docs.generic-mapping-tools.org/latest/gmt.conf.html#term-COLOR_FOREGROUN

Sphinx will create a link to the automatically generated page for that
function/class/module.

**All docstrings** should follow the
[numpy style guide](https://numpydoc.readthedocs.io/en/latest/format.html).
All functions/classes/methods should have docstrings with a full description of all
arguments and return values.

### Code Review

After you've submitted a pull request, you should expect to hear at least a comment
within a couple of days.
We may suggest some changes or improvements or alternatives.

Some things that will increase the chance that your pull request is accepted quickly:

* Write a good and detailed description of what the PR does.
* Write tests for the code you wrote/modified.
* Readable code is better than clever code (even with comments).
* Write documentation for your code (docstrings) and leave comments explaining the
*reason* behind non-obvious things.
* Include an example of new features in the gallery or tutorials.
* Follow the [PEP8](http://pep8.org) style guide for code and the
[numpy guide](https://numpydoc.readthedocs.io/en/latest/format.html)
for documentation.

Pull requests will automatically have tests run by GitHub Actions.
This includes running both the unit tests as well as code linters.
GitHub will show the status of these checks on the pull request.
Try to get them all passing (green).
If you have any trouble, leave a comment in the PR or
[get in touch](#how-can-i-talk-to-you).
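The paragraph kept above notes that Sphinx turns references to functions, classes, and modules into links to their auto-generated API pages. A minimal sketch of what that looks like in a numpy-style docstring (the function name here is hypothetical, chosen only to illustrate the cross-reference roles):

```python
def gradient_stub(grid):
    """
    Illustrative docstring (hypothetical function, not part of PyGMT).

    Sphinx resolves the roles below into links to the generated pages:

    See Also
    --------
    :func:`pygmt.grdgradient` : The wrapped GMT module.
    :class:`xarray.DataArray` : The grid container type.
    """
    return grid

# The roles are plain text until Sphinx processes them:
print(":func:`pygmt.grdgradient`" in gradient_stub.__doc__)
```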
1 change: 1 addition & 0 deletions doc/api/index.rst
@@ -93,6 +93,7 @@ Operations on grids:
grdcut
grdfill
grdfilter
grdgradient
grdtrack

Crossover analysis with x2sys:
2 changes: 1 addition & 1 deletion environment.yml
@@ -6,7 +6,7 @@ channels:
dependencies:
# Required dependencies
- pip
- gmt=6.2.0rc1
- gmt=6.2.0rc2
- numpy>=1.17
- pandas
- xarray
1 change: 1 addition & 0 deletions pygmt/__init__.py
@@ -38,6 +38,7 @@
grdcut,
grdfill,
grdfilter,
grdgradient,
grdinfo,
grdtrack,
info,
31 changes: 31 additions & 0 deletions pygmt/helpers/decorators.py
@@ -40,6 +40,37 @@
color : str or 1d array
Select color or pattern for filling of symbols or polygons. Default
is no fill.""",
"I": r"""
spacing : str
*xinc*\ [**+e**\|\ **n**][/\ *yinc*\ [**+e**\|\ **n**]].
*x_inc* [and optionally *y_inc*] is the grid spacing.
- **Geographical (degrees) coordinates**: Optionally, append an
increment unit. Choose among **m** to indicate arc minutes or
**s** to indicate arc seconds. If one of the units **e**, **f**,
**k**, **M**, **n** or **u** is appended instead, the increment
is assumed to be given in meter, foot, km, mile, nautical mile or
US survey foot, respectively, and will be converted to the
equivalent degrees longitude at the middle latitude of the region
(the conversion depends on :gmt-term:`PROJ_ELLIPSOID`). If
*y_inc* is given but set to 0 it will be reset equal to *x_inc*;
otherwise it will be converted to degrees latitude.
- **All coordinates**: If **+e** is appended then the corresponding
max *x* (*east*) or *y* (*north*) may be slightly adjusted to fit
exactly the given increment [by default the increment may be
adjusted slightly to fit the given domain]. Finally, instead of
giving an increment you may specify the *number of nodes* desired
by appending **+n** to the supplied integer argument; the
increment is then recalculated from the number of nodes, the
*registration*, and the domain. The resulting increment value
depends on whether you have selected a gridline-registered or
pixel-registered grid; see :gmt-docs:`GMT File Formats
<cookbook/file-formats.html#gmt-file-formats>` for details.
**Note**: If ``region=grdfile`` is used then the grid spacing and
the registration have already been initialized; use ``spacing`` and
``registration`` to override these values.""",
"V": """\
verbose : bool or str
Select verbosity level [Default is **w**], which modulates the messages
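The shared option text added above (the `"I"`/spacing entry) is substituted into each module's docstring by PyGMT's `fmt_docstring` decorator. Below is a simplified, self-contained sketch of that templating mechanism — a stand-in for illustration, not the actual `pygmt.helpers.decorators` implementation:

```python
# Shared option descriptions are stored once and injected into each
# module docstring, so the spacing text above is written in one place.
COMMON_OPTIONS = {
    "I": """\
spacing : str
    *xinc*[/*yinc*] is the grid spacing, with optional unit/modifiers.""",
}


def fmt_docstring(func):
    # Replace {I}-style placeholders with the shared option text.
    func.__doc__ = func.__doc__.format(**COMMON_OPTIONS)
    return func


@fmt_docstring
def blockmean_stub(table, **kwargs):
    """
    Block average (x, y, z) data tables by mean estimation.

    Parameters
    ----------
    {I}
    """
```

After decoration, `blockmean_stub.__doc__` contains the full `spacing : str` description instead of the `{I}` placeholder.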
1 change: 1 addition & 0 deletions pygmt/src/__init__.py
@@ -16,6 +16,7 @@
from pygmt.src.grdcut import grdcut
from pygmt.src.grdfill import grdfill
from pygmt.src.grdfilter import grdfilter
from pygmt.src.grdgradient import grdgradient
from pygmt.src.grdimage import grdimage
from pygmt.src.grdinfo import grdinfo
from pygmt.src.grdtrack import grdtrack
58 changes: 21 additions & 37 deletions pygmt/src/blockm.py
@@ -3,12 +3,9 @@
"""
import pandas as pd
from pygmt.clib import Session
from pygmt.exceptions import GMTInvalidInput
from pygmt.helpers import (
GMTTempFile,
build_arg_string,
data_kind,
dummy_context,
fmt_docstring,
kwargs_to_strings,
use_alias,
@@ -41,29 +38,24 @@ def _blockm(block_method, table, outfile, **kwargs):
set by ``outfile``)
"""

kind = data_kind(table)
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
if kind == "matrix":
if not hasattr(table, "values"):
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")
file_context = lib.virtualfile_from_matrix(table.values)
elif kind == "file":
if outfile is None:
raise GMTInvalidInput("Please pass in a str to 'outfile'")
file_context = dummy_context(table)
else:
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")

with file_context as infile:
# Choose how data will be passed into the module
table_context = lib.virtualfile_from_data(check_kind="vector", data=table)
# Run blockm* on data table
with table_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module=block_method, args=arg_str)

# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
result = pd.read_csv(tmpfile.name, sep="\t", names=table.columns)
try:
column_names = table.columns.to_list()
result = pd.read_csv(tmpfile.name, sep="\t", names=column_names)
except AttributeError: # 'str' object has no attribute 'columns'
result = pd.read_csv(tmpfile.name, sep="\t", header=None, comment=">")
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None

@@ -95,23 +87,19 @@ def blockmean(table, outfile=None, **kwargs):
Parameters
----------
table : pandas.DataFrame or str
Either a pandas dataframe with (x, y, z) or (longitude, latitude,
elevation) values in the first three columns, or a file name to an
ASCII data table.
table : str or {table-like}
Pass in (x, y, z) or (longitude, latitude, elevation) values by
providing a file name to an ASCII data table, a 2D
{table-classes}.
spacing : str
*xinc*\[\ *unit*\][**+e**\|\ **n**]
[/*yinc*\ [*unit*][**+e**\|\ **n**]].
*xinc* [and optionally *yinc*] is the grid spacing.
{I}
region : str or list
*xmin/xmax/ymin/ymax*\[\ **+r**\][**+u**\ *unit*].
Specify the region of interest.
outfile : str
Required if ``table`` is a file. The file name for the output ASCII
file.
The file name for the output ASCII file.
{V}
{a}
@@ -156,23 +144,19 @@ def blockmedian(table, outfile=None, **kwargs):
Parameters
----------
table : pandas.DataFrame or str
Either a pandas dataframe with (x, y, z) or (longitude, latitude,
elevation) values in the first three columns, or a file name to an
ASCII data table.
table : str or {table-like}
Pass in (x, y, z) or (longitude, latitude, elevation) values by
providing a file name to an ASCII data table, a 2D
{table-classes}.
spacing : str
*xinc*\[\ *unit*\][**+e**\|\ **n**]
[/*yinc*\ [*unit*][**+e**\|\ **n**]].
*xinc* [and optionally *yinc*] is the grid spacing.
{I}
region : str or list
*xmin/xmax/ymin/ymax*\[\ **+r**\][**+u**\ *unit*].
Specify the region of interest.
outfile : str
Required if ``table`` is a file. The file name for the output ASCII
file.
The file name for the output ASCII file.
{V}
{a}
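The `try`/`except AttributeError` added to `_blockm` above decides whether the output table gets column names: a `pandas.DataFrame` input carries its `columns` into the result, while a plain filename (`str`) has none, so the CSV is read header-less. A standalone sketch of that fallback (with hypothetical stub classes standing in for pandas objects):

```python
def output_column_names(table):
    # Mirrors the new fallback in _blockm: use the input's column names
    # when it has them; a filename string triggers AttributeError.
    try:
        return table.columns.to_list()
    except AttributeError:  # 'str' object has no attribute 'columns'
        return None


class FakeColumns:
    """Stand-in for a pandas Index with a to_list() method."""

    def to_list(self):
        return ["x", "y", "z"]


class FakeDataFrame:
    """Stand-in for a pandas.DataFrame carrying column labels."""

    columns = FakeColumns()


print(output_column_names(FakeDataFrame()))  # → ['x', 'y', 'z']
print(output_column_names("table.txt"))      # → None
```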