
First version of SPDHG #1194

Merged: 5 commits merged into odlgroup:master from the issue985_spdhg branch on Apr 11, 2018
Conversation

@mehrhardt (Contributor) commented Oct 13, 2017

This is a first version of SPDHG, as requested by @ozanoktem in #985. It provides a couple of algorithms that were analyzed in https://arxiv.org/abs/1706.04957 and shows their use in a way similar to the paper. It also provides a few new tools, some of which may hopefully be replaced by ODL core functionality eventually.

Although this is part of the "contrib" section, I would be very grateful if you had a look and provided feedback on style, but more importantly on whether something could be implemented with other ODL functionality or just in a smarter way.

@adler-j (Member) commented Oct 14, 2017

Great that you are getting this done, and I'll try to give this a review asap.

Thanks!

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
# ----------------------------------------------------------------------
Member:

Just a comment upfront, before someone starts reviewing.

In the interest of avoiding infinite hassle, we can't have mixed licenses in one package, even if it's under contrib. The code is shipped with ODL either way, and anybody who relies on the code being MPL-licensed could no longer use it without worry, since they would always have to look through all files to check whether a different license is lurking somewhere.

It's also written in the guidelines.

So if you need to keep the Apache License, that's perfectly fine, but then the code needs to be in its own repository.

@mehrhardt force-pushed the issue985_spdhg branch 2 times, most recently from 71eb159 to cd7bab8 on October 24, 2017
@mehrhardt (Author)

I rebased this pull request onto the wrong branch, which made me do a push --force. This messed up GitHub's history of the discussion, but all files and changes are there as we discussed.

@mehrhardt (Author)

Not sure what happened here, but the error messages don't look like they come from my code:

Failed building wheel for scipy
Failed cleaning build dir for scipy

@adler-j (Member) commented Oct 26, 2017

Still haven't been able to review all of this, but a major comment is that we try to keep the repository free of binary files (like images). If you want to add them, I suggest you host them somewhere externally and then add them to odl.contrib.datasets, where they can be downloaded when needed.

In general, binary files do not play well at all with version control and cause the repository to swell uncontrollably.

@kohr-h (Member) commented Oct 26, 2017

> Still haven't been able to review all of this, but a major comment is that we try to keep the repository free of binary files (like images). If you want to add them, I suggest you host them somewhere externally and then add them to odl.contrib.datasets, where they can be downloaded when needed.
>
> In general, binary files do not play well at all with version control and cause the repository to swell uncontrollably.

Same here. That said, the images are really nice.

@adler-j (Member) commented Oct 26, 2017

Yes, I agree that they would be a tremendous addition; it's simply that they should go through the datasets package instead of the git repo!

@mehrhardt (Author)

I can add them, that's no problem. Currently, the ODL data functionality only works for .mat files, so I would need to extend it to be able to load images. I suggest I make a new pull request for that, and then the two of you can comment. Alternatively, you could provide this functionality.

@adler-j (Member) commented Oct 26, 2017

My suggestion would be to simply convert these to .mat files and upload them. Mat files are nice for several reasons: for one, they can be read from both Python and MATLAB (and I guess Julia etc. are adding support), and second, it is a compressed format, so we save some space.

Long term, I guess we could add other formats, but I'm not too stressed about it since converting only needs to be done once.

@mehrhardt (Author) commented Oct 26, 2017

I converted my images into .mat files, and for me the .mat files are much bigger. For small files it does not matter, but there is one image that is 2.3 MB as a JPEG and blows up to 17.3 MB as a .mat file. The images are now available, see #1211.

@adler-j (Member) commented Oct 26, 2017

Are you sure you used do_compression=True? It is an optional flag.

See e.g.: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.io.savemat.html
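
For reference, a minimal sketch of writing a compressed .mat file with scipy (the array and file names here are made up for illustration):

import numpy as np
import scipy.io

image = np.random.rand(512, 512)  # hypothetical image data
# do_compression=False is the default, so compression must be requested
scipy.io.savemat('image.mat', {'image': image}, do_compression=True)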

@mehrhardt (Author)

I created these .mat files with MATLAB. There, too, there is an additional compression flag that you can set, but it would not reduce the data size much: 16 MB instead of 17 MB. In addition, the current get_data function cannot read MATLAB's v7.3 compressed .mat files.

@mehrhardt (Author)

Updated this PR to remove the binary images, see #1211. In addition, I cleaned it up into a single commit.

@adler-j (Member) commented Nov 2, 2017

Did you rebase away the images? We don't want them in the commit history, since they would still take up space there.

@mehrhardt (Author)

Yes, I reset the commits and overwrote this branch with one clean commit.

@kohr-h (Member) left a comment

I didn't look at the examples. For those, I would suggest that you make an examples subfolder and remove the example_ prefix from the names.

The comments I have are partly small things, partly suggestions for alternative implementations. I didn't really consider the big picture yet, but apart from code that is now in core ODL (like Huber) and can be removed, the code looks good. It actually looks really good, and the methods look cool as well.

from scipy.signal import convolve2d as scipy_convolve2d

__author__ = "Matthias J. Ehrhardt"
__copyright__ = "Copyright 2016, University of Cambridge"
Member:

Contradicts the header 😛

Author:

done



def exists(path):
return os.path.exists(path)
Member:

This is not worth a function. You get exactly the same with from os.path import exists if you don't want to type os.path. every time. I'd use the full path if that's not extremely painful.

Author:

done

import os
import numpy as np
import odl
from scipy.signal import convolve2d as scipy_convolve2d
Member:

I'd avoid these import aliases; they just obscure where a function comes from. Just go with import scipy.signal and then use scipy.signal.convolve2d (or just convolve, for that matter).

Author:

done


def mkdir(path):
if not os.path.exists(path):
os.makedirs(path)
Member:

Also here, you can just use os.makedirs(path, exist_ok=True).

Author:

done

for j in subset2ind[i]:
ind2subset[j].append(i)

return (subset2ind, ind2subset)
@kohr-h (Member) commented Nov 6, 2017

This can be written as

def divide_1d_array_equally(ind, nsub):
    sub2ind = list(list(ind[i::nsub]) for i in range(nsub))
    ind2sub = (list(np.tile(np.arange(nsub), len(ind) // nsub)) +
               list(range(len(ind) % nsub)))
    return sub2ind, ind2sub

(without the wrapping of each index into a list in the second return value).

Author:

What is "a" here?

Member:

That was a copy-paste error, I edited it away.

Member:

What's wrong with np.split here?

Author:

The point of this function is to create a mapping from subsets to indices and from indices to subsets. In general, a subset will contain many indices, and an index may be in many subsets. I am not sure your function provides the latter in general. Any ideas how to achieve what my code does, but as elegantly as yours?

@kohr-h (Member) commented Nov 7, 2017

> The point of this function is to create a mapping from subsets to indices and from indices to subsets. In general, a subset will contain many indices, and an index may be in many subsets. I am not sure your function provides the latter in general. Any ideas how to achieve what my code does, but as elegantly as yours?

In that case, I would make it two functions, one doing the partitioning and one for the inverse. A more generic API for splitting would be something like

def partition_1d(arr, slices):
    return tuple(arr[slc] for slc in slices)


def partition_equally_1d(arr, n_parts, order='block'):
    if order == 'block':
        stride = int(np.ceil(arr.size / n_parts))
        slc_list = [slice(i * stride, (i + 1) * stride) for i in range(n_parts)]
    elif order == 'interlaced':
        slc_list = [slice(i, arr.size, n_parts) for i in range(n_parts)]
    else:
        raise ValueError

    return partition_1d(arr, slc_list)

You could go fancier than that, of course. Also, "equally" is so-so here, since for ind = np.arange(10) you get

>>> partition_equally_1d(ind, 4, order='block')
(array([0, 1, 2]), array([3, 4, 5]), array([6, 7, 8]), array([9]))
>>> partition_equally_1d(ind, 4, order='interlaced')
(array([0, 4, 8]), array([1, 5, 9]), array([2, 6]), array([3, 7]))

For the inverse I can't think of an elegant way other than the naive implementation. The problem seems to me that the result is a list of lists with varying sizes, so it's a bit hard to use a Numpy array for that.
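
For completeness, a naive inverse along those lines could look like this (a sketch; sub2ind follows the partition functions above, and n_ind, the total number of indices, is a name introduced here):

def inverse_partition(sub2ind, n_ind):
    # Map each index back to the list of subsets that contain it.
    ind2sub = [[] for _ in range(n_ind)]
    for subset, indices in enumerate(sub2ind):
        for ind in indices:
            ind2sub[ind].append(subset)
    return ind2sub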

\\end{cases}

where all variables on the right hand side of the equation have a subscript
i which is omitted for readability.
Member:

:math:`i`

# x + r - y + y * log(y / (x + r)) = x - y * log(x + r) + c1
# with c1 = r - y + y * log y
i = x.ufuncs.greater_equal(0)
obj = (i * (x + r - y + y * (y / (x + r)).ufuncs.log()))
Member:

I can already give you a heads-up here that this code will break after #1088 is merged. The reason is that i will be in a space with dtype=bool, and the subsequent operations will be illegal without a cast to float. Boolean indexing will work, though.
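
Sketched in plain NumPy terms (where the implicit bool-to-float upcast still works, unlike in a strict bool space), the two workarounds would be an explicit cast or boolean indexing:

import numpy as np

x = np.array([-1.0, 0.5, 2.0])
mask = np.greater_equal(x, 0)        # dtype=bool
obj = mask.astype(float) * (x + 1)   # explicit cast before arithmetic
sel = x[mask]                        # boolean indexing keeps working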

Author:

Will #1088 also work on the GPU? It would be good if one did not need to distinguish the two cases.

Member:

Not with odlcuda, but there are a bunch of other backends almost ready to go in. At least cupy does support everything we need.

Author:

That sounds really good. I guess it is best for me to hang tight and leave the code the way it is. Once we have a good option we should change it accordingly.

Member:

Yes, the plan is to push in #1088 first and then add a GPU backend (sorry for the intermediate breakage, but otherwise the PR never goes in).

Member:

Is this now broken?

Author:

It seems so:

Traceback (most recent call last):
  File "", line 131, in <module>
    callback=callback)
  File "/home/ehrhardt/repositories/github_myodl/odl/solvers/nonsmooth/primal_dual_hybrid_gradient.py", line 274, in pdhg
    proximal_dual_sigma(dual_tmp, out=y)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 678, in __call__
    result = self._call_in_place(x, out=out, **kwargs)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/pspace_ops.py", line 229, in _call
    op(x[j], out=out[i])
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 678, in __call__
    result = self._call_in_place(x, out=out, **kwargs)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 1364, in _call
    return self.left(tmp, out=out)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 678, in __call__
    result = self._call_in_place(x, out=out, **kwargs)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 1363, in _call
    self.right(x, out=tmp)
  File "/home/ehrhardt/repositories/github_myodl/odl/operator/operator.py", line 678, in __call__
    result = self._call_in_place(x, out=out, **kwargs)
  File "/home/ehrhardt/repositories/github_myodl/odl/contrib/solvers/spdhg/misc.py", line 842, in _call
    4 * s * y).ufuncs.sqrt()))
TypeError: unsupported operand type(s) for *: 'DiscreteLpElement' and 'DiscreteLpElement'

Author:

Interestingly, I also can't cast it manually:

Traceback (most recent call last):
  File "", line 1, in <module>
    x_opt = x_opt.astype('float')
  File "/home/ehrhardt/repositories/github_myodl/odl/discr/discretization.py", line 361, in astype
    return self.space.astype(dtype).element(self.tensor.astype(dtype))
  File "/home/ehrhardt/repositories/github_myodl/odl/space/base_tensors.py", line 259, in astype
    return self._astype(dtype)
  File "/home/ehrhardt/repositories/github_myodl/odl/discr/lp_discr.py", line 375, in _astype
    fspace = self.fspace.astype(dtype)
  File "/home/ehrhardt/repositories/github_myodl/odl/space/fspace.py", line 706, in astype
    if out_dtype == self.real_out_dtype:
  File "/home/ehrhardt/repositories/github_myodl/odl/space/fspace.py", line 282, in real_out_dtype
    raise TypeError('no real variant of output dtype defined')
TypeError: no real variant of output dtype defined

Member:

That's an actual bug. See #1285

Member:

Fixed now

# Dual variable
y = kwargs.pop('y', None)

def fun_select(k): return [0]
Member:

return statement on extra line
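
That is, the conventional form:

def fun_select(k):
    return [0]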

Author:

done

y[i].lincomb(1, y_old[i], sigma_i, y[i])

# hack for the prox operator
yy = y[i].copy()
Member:

This issue will be solved with #1201. Maybe make a TODO to remember removing the hack.

Author:

done, this is now gone.

opts = self.prox_options

sigma_ = np.copy(sigma)
z_ = z.copy()
Member:

Do this in the else case a bit further down. I also don't see a reason to copy sigma; the old values are never used.

Author:

I don't want to modify the old values, which happens in the following lines. The code is intended to be used for both scalar and vector-valued sigma.

Member:

Okay, good point. You can also alias the old name by doing sigma = <some computation involving sigma>; that also doesn't change the input and uses one less new name.
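
A sketch of that rebinding pattern (the scaling factor is just a placeholder):

import numpy as np

def prox_step(sigma):
    sigma = sigma * 0.5   # rebinds the local name; a new array is created
    return sigma

s = np.array([1.0, 2.0])
prox_step(s)
print(s)                  # still [1. 2.]: the caller's array is unchanged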

@mehrhardt (Author)

All changes made, please comment :)

@kohr-h (Member) commented Nov 7, 2017

> All changes made, please comment :)

Nice! You didn't go for the extraglobs thing in the doctests, why?

I'll check the examples next, OK?

@mehrhardt (Author) commented Nov 7, 2017

Yes, please check them. For me, your time would be most valuably spent on the odlcuda bug, though.

I think the imports we discussed are actually not needed for automatic doctesting.

@adler-j (Member) commented Jan 21, 2018

Ok, I've been somewhat away from this but would love to have it in. What is there left to do? Should I review?

@mehrhardt (Author)

It has been a while, but if I remember correctly, this PR was at the review stage. I think I have incorporated all changes @adler-j and @kohr-h requested so far.

@adler-j (Member) left a comment

Overall it looks good. The only thing I'd like to know more about is how the data is handled. If you can somehow share it, it would be great if you made it available to the users!

for j in subset2ind[i]:
ind2subset[j].append(i)

return (subset2ind, ind2subset)
Member:

What's wrong with np.split here?

# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at https://mozilla.org/MPL/2.0/.

"""Contributed code for the stochastic PDHG."""
Member:

Please add a reference to the article here, it would be nice for users to see where it comes from! (Also free PR for you 😄)

Author:

done

tol_norm = 1.05

# save images and data
file_data = '{}/data.npy'.format(folder_main)
Member:

This should now be loadable separately, no?

Author:

What do you mean? The idea is that one does not need to recompute things if the script is executed twice, so some results and data are stored.

# x + r - y + y * log(y / (x + r)) = x - y * log(x + r) + c1
# with c1 = r - y + y * log y
i = x.ufuncs.greater_equal(0)
obj = (i * (x + r - y + y * (y / (x + r)).ufuncs.log()))
Member:

Is this now broken?

positive infinity.
"""

# TODO: implement more efficiently in terms of memory and CPU/GPU
Member:

Do you want to fix these todos or leave them for now?

@adler-j (Member) commented Feb 2, 2018

Also, terribly sorry that this review process is taking forever! I'm actually looking forward to this PR quite a lot :)

@mehrhardt (Author)

I encountered something weird; perhaps you can make sense of it. The following code works fine:

import odl.contrib.datasets.images as images
import numpy as np
import odl

# create geometry of operator
X = odl.uniform_discr(min_pt=[-20, -20], max_pt=[20, 20], shape=[30, 30])
geometry = odl.tomo.parallel_beam_geometry(X, num_angles=200, det_shape=200)
G = odl.BroadcastOperator(*[odl.tomo.RayTransform(X, g, impl='astra_cpu')
                            for g in geometry])

# create ground truth
Y = G.range
groundtruth = X.element(images.resolution_phantom(shape=X.shape))

sinogram = G(groundtruth)
np.save('./data.npy', (sinogram, sinogram, sinogram, groundtruth))

However,

np.save('./data.npy', (sinogram, sinogram, sinogram))

causes the following error:

Traceback (most recent call last):
  File "", line 1, in <module>
    np.save('./data.npy', (sinogram, sinogram, sinogram))
  File "/home/ehrhardt/anaconda2/envs/odl-py35/lib/python3.5/site-packages/numpy/lib/npyio.py", line 489, in save
    arr = np.asanyarray(arr)
  File "/home/ehrhardt/anaconda2/envs/odl-py35/lib/python3.5/site-packages/numpy/core/numeric.py", line 533, in asanyarray
    return array(a, dtype, copy=False, order=order, subok=True)
ValueError: cannot copy sequence with size 200 to array axis with dimension 1

This does not make any sense to me.

@kohr-h (Member) commented Feb 12, 2018

This has a lot to do with the fact that G.range is not a power space, so asarray on a single sinogram will fail (perhaps we should refine our __array__ implementation for that case). For some reason, NumPy gets confused and can't make sense of the shapes anymore. Hard to tell, because it's in the C code.

The variant with the groundtruth mixed in works because then Numpy just gives up interpreting the contents and uses dtype=object (super general, super slow). If it's a workable variant for you, this works:

arr = np.empty(3, dtype=object)
for i in range(3):
    arr[i] = sinogram
np.save('dump.npy', arr)
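
One caveat worth noting: reading such an object array back requires opting into pickling, which later NumPy releases disable by default:

loaded = np.load('dump.npy', allow_pickle=True)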

@mehrhardt (Author)

I think the "errors" flagged by Travis are timeouts; the tests took too much time. Can you help here, or do we ignore this?

@kohr-h (Member) commented Feb 19, 2018

> I think the "errors" flagged by Travis are timeouts; the tests took too much time. Can you help here, or do we ignore this?

Did you rebase on master recently? The fix #1236 dramatically reduced test times on master.

@mehrhardt (Author)

Yes, I rebased about a week ago.

@kohr-h (Member) commented Feb 19, 2018

Ah, sorry, I linked to the wrong PR. It's #1294 from 5 days ago.

@mehrhardt (Author)

Yes, I have that one already, too.

@kohr-h (Member) commented Feb 19, 2018

> Yes, I have that one already, too.

According to the network graph, your branch is based on the merge commit of PR #1293, one before the one I mentioned above. So I don't think you have the bugfix yet.

@mehrhardt (Author)

Sorry for all the mess so far. Now I removed all previous commits and put the current version into a single commit.

@mehrhardt (Author) commented Apr 10, 2018

As the checks have passed, @adler-j, @kohr-h, please review so that we can finally have SPDHG in ODL.

@kohr-h (Member) left a comment

Remark upfront: A README.md on the top level like here would be a good way to introduce your contribution. I'd also favor putting in a text like "Original contribution by: @mehrhardt" or similar.


I've focused mainly on the first example; comments there likely apply to all others as well. Some minor comments also on the "backend".

All in all I feel we shouldn't make this an ever-more exhausting loop (many iterative methods slow down close to convergence, don't they? 😉 ).
The most important part, the implementation, looks very good to me, and other things like stylistic questions are less important here.

import numpy as np
from scipy.ndimage.filters import gaussian_filter
import odl
import brewer2mpl
Member:

This would be worth mentioning as additional dependency in the module docstring, like ASTRA.

problem with total variation prior. As we do not exploit any smoothness here
we only expect a 1/k convergence of the ergodic sequence in a Bregman sense.

Note that this example uses the astra toolbox.
Member:

ASTRA

Maybe add a link to the installation doc about ASTRA?

folder_out = '.' # to be changed
filename = 'PET_1k'

nepoch = 300 # set number of epochs
Member:

comment states the obvious

# change filename with problem size
filename = '{}_{}x{}'.format(filename, nvoxelx, nvoxelx)

# create output folders
Member:

Suggestion: Reduce the "comment noise" by treating this whole setting of parameters and directory/file names as one block with one explaining comment.

n = nsub[alg]
(sub2ind, ind2sub) = spdhg.divide_1Darray_equally(range(len(A)), n)

if alg == 'pdhg' or alg[0:5] == 'spdhg':
Member:

alg.startswith('spdhg') is probably more robust and readable

proximal_primal_tau = g.proximal(tau)

if callback is not None:
callback([x, y])
Member:

This code is really nice and clear!

import scipy.signal
import matplotlib
import matplotlib.pyplot as plt
from skimage.io import imsave
Member:

My recommendation here is to use imageio -- it's pretty lightweight, has lots of supported formats, and depends only on NumPy and Pillow (by default). It's one of the possible providers of skimage.io.imsave, but the latter comes with a lot of extra weight.
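
A minimal sketch of the imageio route (the array and file names here are made up for illustration):

import numpy as np
import imageio

arr = np.random.rand(128, 128)  # hypothetical image with values in [0, 1]
imageio.imwrite('result.png', (255 * arr).astype(np.uint8))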

Author:

Good comment. I think I'll leave it for now but will consider this in the future.

Returns
-------
extended float (with infinity)
Is the input in the non-negative orthant?
Member:

What's this?

Parameters
----------
x : np.array
vector / image
Member:

Bump


class Blur2D(odl.Operator):
"""Blur operator
"""
Member:

""" on same line

@kohr-h (Member) commented Apr 10, 2018

> A README.md on the top level

Small addition: since the implementation depends on a third-party image I/O library, a README would be a good place to mention that.

@mehrhardt (Author)

Many thanks for your comments, @kohr-h. I agree with almost all of them and have made changes accordingly.

@kohr-h (Member) commented Apr 11, 2018

Nice! If you're happy with it, I would merge it as it is now.

@mehrhardt (Author)

Yes, that sounds good!

@kohr-h merged commit 0cf0d8b into odlgroup:master on Apr 11, 2018
@kohr-h (Member) commented Apr 11, 2018

Thanks a lot for the effort, @mehrhardt!

@mehrhardt (Author)

Thanks @kohr-h for all your time and input.

@mehrhardt deleted the issue985_spdhg branch on July 6, 2019