
xmitgcm dev branch: possible bug/broken dependency #224

Closed
urielz opened this issue Sep 25, 2020 · 2 comments
Comments
urielz commented Sep 25, 2020

First off, thank you devs for this extremely useful library.

I have 0.4.1 running fine on my main workstation. I came across this issue while trying to install the xmitgcm dev branch from scratch on a new machine. If I run this simple code:

import numpy as np
import xarray as xr
from xmitgcm import llcreader
import os

model = llcreader.ECCOPortalLLC4320Model()

fprefix = 'Eta'
outdir = '.'

klev = 1
for ii in range(0, klev):

    ds = model.get_dataset(varnames=[fprefix], k_levels=[ii])
    region_slice = {'face': slice(10, 11)}
    region = ds.isel(**region_slice, k=0, time=0)
    iters, datasets = zip(*region.groupby('k_l'))

    fname = os.path.join(outdir, '%s.%03d.nc' % (fprefix, ii))
    print(fname)
    xr.save_mfdataset(datasets, [fname], engine='netcdf4')
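(For readers unfamiliar with the `zip(*…groupby(…))` idiom in the loop above: it splits grouped results into two parallel tuples, one of group keys and one of groups. A minimal pure-Python sketch of the same unpacking pattern, using `itertools.groupby` on toy records rather than xarray:)

```python
from itertools import groupby

# toy (key, value) records standing in for the grouped dataset
pairs = [("a", 1), ("a", 2), ("b", 3)]

# group by key, then unzip into parallel tuples of keys and groups,
# mirroring the shape of `iters, datasets = zip(*region.groupby('k_l'))`
keys, groups = zip(
    *((k, [v for _, v in g]) for k, g in groupby(pairs, key=lambda p: p[0]))
)
print(keys)    # ('a', 'b')
print(groups)  # ([1, 2], [3])
```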

I get this error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-2-1d330d174849> in <module>
     20     fname = os.path.join(outdir, '%s.%03d.nc' % (fprefix, ii))
     21     print(fname)
---> 22     xr.save_mfdataset(datasets, [fname], engine='netcdf4')
     23 
     24 

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/xarray/backends/api.py in save_mfdataset(datasets, paths, mode, format, groups, engine, compute)
   1237 
   1238     try:
-> 1239         writes = [w.sync(compute=compute) for w in writers]
   1240     finally:
   1241         if compute:

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/xarray/backends/api.py in <listcomp>(.0)
   1237 
   1238     try:
-> 1239         writes = [w.sync(compute=compute) for w in writers]
   1240     finally:
   1241         if compute:

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/xarray/backends/common.py in sync(self, compute)
    153             # targets = [dask.delayed(t) for t in self.targets]
    154 
--> 155             delayed_store = da.store(
    156                 self.sources,
    157                 self.targets,

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/array/core.py in store(sources, targets, lock, regions, compute, return_stored, **kwargs)
    967 
    968         if compute:
--> 969             result.compute(**kwargs)
    970             return None
    971         else:

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/base.py in compute(self, **kwargs)
    165         dask.base.compute
    166         """
--> 167         (result,) = compute(self, traverse=False, **kwargs)
    168         return result
    169 

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/base.py in compute(*args, **kwargs)
    450         postcomputes.append(x.__dask_postcompute__())
    451 
--> 452     results = schedule(dsk, keys, **kwargs)
    453     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    454 

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/threaded.py in get(dsk, result, cache, num_workers, pool, **kwargs)
     74                 pools[thread][num_workers] = pool
     75 
---> 76     results = get_async(
     77         pool.apply_async,
     78         len(pool._pool),

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs)
    484                         _execute_task(task, data)  # Re-execute locally
    485                     else:
--> 486                         raise_exception(exc, tb)
    487                 res, worker_id = loads(res_info)
    488                 state["cache"][key] = res

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/local.py in reraise(exc, tb)
    314     if exc.__traceback__ is not tb:
    315         raise exc.with_traceback(tb)
--> 316     raise exc
    317 
    318 

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
    220     try:
    221         task, data = loads(task_info)
--> 222         result = _execute_task(task, data)
    223         id = get_id()
    224         result = dumps((result, id))

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/dask/core.py in _execute_task(arg, cache, dsk)
    119         # temporaries by their reference count and can execute certain
    120         # operations in-place.
--> 121         return func(*(_execute_task(a, cache) for a in args))
    122     elif not ishashable(arg):
    123         return arg

~/.local/share/virtualenvs/LLC_d2-j1O87tS3/lib/python3.8/site-packages/xmitgcm/llcreader/llcmodel.py in _get_1d_chunk(store, varname, klevels, nz, dtype)
    452 
    453     # now subset
--> 454     return data[klevels]
    455 
    456 class BaseLLCModel:

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
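(For context, this message is NumPy's standard fancy-indexing error: it is raised whenever an array is indexed with something other than integers, slices, ellipsis, `newaxis`, or integer/boolean arrays — a float index, for instance. A minimal sketch of the error class, unrelated to xmitgcm's actual internals:)

```python
import numpy as np

data = np.arange(5)
try:
    data[2.0]  # float index: not a valid NumPy index type
except IndexError as e:
    # message: "only integers, slices (`:`), ellipsis (`...`),
    # numpy.newaxis (`None`) and integer or boolean arrays are valid indices"
    print(type(e).__name__)
```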

I should emphasize that the code runs fine on xmitgcm 0.4.1 with fsspec 0.4.1, dask 2.4.0, and xarray 0.15.1. The error above occurs when running the current dev branch with fsspec 0.8.3, dask 2.27.0, and xarray 0.16.1.
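(One common remedy for this class of failure — a hypothetical sketch only, not necessarily the patch xmitgcm ultimately adopted — is to coerce the requested levels to an integer index array before fancy indexing:)

```python
import numpy as np

def subset_klevels(data, klevels):
    # coerce the requested levels to an integer index array so that
    # NumPy fancy indexing accepts them even if they arrive as floats
    idx = np.asarray(klevels, dtype=int)
    return data[idx]

print(subset_klevels(np.arange(10) * 10, [1.0, 3.0]))  # [10 30]
```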

I can provide pipenv lock files of both virtual environments to facilitate reproduction of the issue if that helps.

rabernat (Member) commented

Uriel! ❤️ ❤️ ❤️ Lovely to hear from you after a long time!

Our CI contains an environment which will hopefully reproduce this error. I have opened #225, which we can use to dig deeper.


urielz commented Oct 1, 2020

Hi Ryan! Indeed, it's been too long! :) Hope things are going well.

Great. I'll leave this open in the meantime.

rabernat added a commit to rabernat/xmitgcm that referenced this issue Nov 12, 2020
fraserwg pushed a commit to fraserwg/xmitgcm that referenced this issue Nov 23, 2021
* empty commit to trigger CI

* add test for k_levels=[1]

* add explicit test for MITgcm#233

* test for bug in llc90

* fixes MITgcm#224

* resolve final bugs