don't overtype #2
Comments
Similarly for types, like your Krylov subspace type.
Yes, these should be untyped so that they can allow anything that supports * with a vector.
It's not clear to me that we need to support passing a function.
On the other hand, closures are cleaner for implementing things like shifts, since you can just wrap the shift inside the closure. One possibility is to write everything in terms of closures as the "low-level" interface, but have a higher-level interface with duck-typed matrices, of the form sketched below.
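A minimal sketch of what such a split might look like, using a toy conjugate-gradient routine (the names cg_core, cg, and cg_shifted are purely illustrative, not this package's API):

using LinearAlgebra

# Low-level interface: the operator is just a closure that applies A to a vector.
function cg_core(applyA, b::AbstractVector; tol=1e-8, maxiter=length(b))
    x = zeros(eltype(b), length(b))
    r = copy(b)                         # residual b - A*x for x = 0
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:maxiter
        Ap = applyA(p)
        alpha = rs / dot(p, Ap)
        x .+= alpha .* p
        r .-= alpha .* Ap
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && break
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x
end

# Higher-level interface: any duck-typed operator with * is wrapped in a closure,
# and shifts are just another closure around the same low-level routine.
cg(A, b; kwargs...) = cg_core(x -> A*x, b; kwargs...)
cg_shifted(A, sigma, b; kwargs...) = cg_core(x -> A*x .- sigma .* x, b; kwargs...)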
There is also the question of memory allocation: it might be nice to support functions that write the product into a preallocated output vector instead of allocating a new vector on every application.
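For instance, the low-level routine could take a mutating apply function that fills a preallocated output vector; the power_iteration! name and signature below are only an illustration of the idea:

using LinearAlgebra

# applyA! writes A*x into the preallocated vector y and returns y.
function power_iteration!(applyA!, x::AbstractVector; iters=100)
    y = similar(x)
    for _ in 1:iters
        applyA!(y, x)            # y = A*x without allocating a fresh vector
        x .= y ./ norm(y)        # reuse x as the next iterate
    end
    return x
end

# A dense or sparse matrix can be passed through mul!, which works in place.
A = [2.0 1.0; 1.0 3.0]
v = power_iteration!((y, x) -> mul!(y, A, x), rand(2))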
That is what I ended up settling on inside the current implementation.
Why is the higher-level interface based on AbstractMatrix rather than duck-typing?
That is the higher-level interface.
Viral,
There are two things that I would love for the iterative solvers to support: plain matrices (AbstractMatrix and friends), and general linear operators given as functions.
One way to achieve this is to have:
# Implement the method for linear operators
itermethod(linop::Function, args...)
# Handle AbstractMatrix this way
itermethod(A::AbstractMatrix, args...) = itermethod(x->A*x, args...)
# For the case where the user has a different type that defines *, the user creates
# the closure and passes it as a linear operator, and duck typing makes the
# matrix-vector product work:
itermethod(x->A*x, args...)
This is the approach I have currently taken in the existing code.
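As a concrete (hypothetical) illustration of that pattern, with a trivial placeholder in place of a real iterative method:

# Placeholder "solver" just to make the dispatch pattern runnable; a real method
# would iterate using only linop(x).
itermethod(linop::Function, b) = linop(b)
itermethod(A::AbstractMatrix, b) = itermethod(x -> A*x, b)

# A user type that is not an AbstractMatrix but defines * goes through a closure:
struct UserOperator
    d::Vector{Float64}                  # a diagonal operator stored as a vector
end
Base.:*(op::UserOperator, v::AbstractVector) = op.d .* v

op = UserOperator([2.0, 3.0, 5.0])
b  = [1.0, 1.0, 1.0]
y  = itermethod(v -> op * v, b)         # user writes the closure by hand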
Viral, why not just have the second method without the AbstractMatrix type declaration?
This way, you support any type that defines *.
It seems that I have been talking past you too, or I am not sure what I am missing. There are often cases where people just want to apply an operator by simply calling a function. Your suggestion requires a user to always define a new type with * on it. Let me provide a concrete example: it is nice to be able to just pass a function that applies the operator, without first wrapping it in a type.
No, no, no, no. My suggestion is to define the pair of methods sketched below.
It is exactly the same as your proposal, except with the type declaration removed in the second method. It gives more functionality with less code.
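Presumably that amounts to something like this, reusing the hypothetical itermethod from above (the algorithm body is elided):

# Core algorithm: the operator is anything callable on a vector.
function itermethod(linop::Function, args...)
    # ... iterative algorithm that only ever applies the operator via linop(x) ...
end

# Fallback with no AbstractMatrix restriction: any type defining * with a vector
# (dense, sparse, factorization, user-defined operator) now works automatically.
itermethod(A, args...) = itermethod(x -> A*x, args...)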
My understanding was that you were proposing not having the first definition at all. This clarifies things, and it does give more functionality with less code.
For most of these you need two linear operators, one to compute A*x and one to compute A'*x.
@timholy, what do you mean? The most popular iterative algorithms for non-symmetric problems (GMRES, BiCGSTAB, Arnoldi, Jacobi-Davidson) only require A*x.
I didn't realize that. I guess I usually work with overdetermined least-squares problems, and those algorithms do require both.
Ah, right, I was thinking of square-matrix problems; Golub–Kahan-like algorithms like LSMR for non-square problems require both A*x and A'*x.
As far as I can tell from the literature, for most applications LSMR isn't obviously better than LSQR, but eventually we should have both (I already have LSQR). Does this change the thinking about the API? I don't think it does, since for users it would just emphasize the need to implement both products, but it's worth checking.
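For those least-squares methods, the natural extension of the closure interface is to accept the forward and adjoint applications as two separate functions; the normal_equations_step below is only a sketch of that calling convention, not LSQR itself:

using LinearAlgebra

# The solver only touches the operator through applyA(x) = A*x and applyAt(y) = A'*y,
# so rectangular and matrix-free operators work equally well.
function normal_equations_step(applyA, applyAt, b, x)
    r = b - applyA(x)                  # residual in the "tall" space
    g = applyAt(r)                     # steepest-descent direction for ||b - A*x||^2
    dot(g, g) < eps() && return x      # already at a stationary point
    Ag = applyA(g)
    alpha = dot(g, g) / dot(Ag, Ag)    # exact line search
    return x .+ alpha .* g
end

A = randn(50, 10); b = randn(50)
x1 = normal_equations_step(v -> A * v, w -> A' * w, b, zeros(10))
x2 = normal_equations_step(v -> A * v, w -> A' * w, b, x1)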
It's not hard to understand, I just disagree with you.
I guess out of fairness I should elaborate on the latter a bit more. In security/content-filtering, there are two approaches: whitelisting and blacklisting. Both have their places. Let's think about these in relation to Julia code. When somebody asks me to "turn on" access to a particular function for a particular type, it's pretty easy to "whitelist" it via a type annotation in the method signature.
@timholy, no one is arguing that duck typing should always be used in all contexts. The question is, should it be used in this context? Here there are obvious benefits to duck typing, because many, many objects can represent linear operators that are not array-like container types (and in fact may not provide efficient random access to the matrix elements at all), and Julia already includes such objects.
In this context, I don't see any obvious drawbacks to duck typing, nor have you or Viral made any specific arguments to that effect that I can see.
Just to be clear, everyone now agrees that the iterative methods in this package should apply to arbitrary duck-typed arguments A supporting A*x.
I will say that when I wrote the initial code, I thought that AbstractMatrix was the right abstraction for a linear operator.
Correction: an AbstractMatrix is not really the right abstraction here. Furthermore, remember that Julia does not support multiple inheritance, so it is not necessarily practical to require all types that represent linear operators to subclass AbstractMatrix.
(If you define an …)
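To make the multiple-inheritance point concrete (the types here are hypothetical):

abstract type AbstractPDEProblem end      # the user's own type hierarchy

# This type already has a supertype, so it could not also be declared <: AbstractMatrix.
struct PoissonProblem <: AbstractPDEProblem
    h::Float64
end

# Duck typing still lets it act as a linear operator for the solvers:
Base.:*(p::PoissonProblem, v::AbstractVector) = v ./ p.h^2   # stand-in for a real stencil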
I didn't mean to say that, nor did I disagree with it.
However, I will agree on that point.
Whoops, sorry, you're right about what I should have used there.
So, does everyone now agree that the iterative methods in this package should apply to arbitrary duck-typed arguments A supporting A*x?
I will agree that there is no single type that currently exists that covers the full abstraction of a linear operator.
I am fully on board with duck typing as the solution. It seems to be more general than the AbstractMatrix-based approach.
I tried to do this for as many routines as I could manage in the follow-up commit (87635c4). Perhaps you can check whether I missed any. I think only a few routines remain that need to be fixed.
Sorry, I missed that commit!
Because of krylov.jl line 28, it seems some of the solvers are still restricted in what they accept.
Dear all,

It is strange that I missed the startup of this package. I am very interested in this and potentially willing to contribute. I am not that experienced, although I did (carefully) read large parts of Saad and of these lecture notes (http://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf) recently.

Regarding the typing issue, I was involved in the AbstractMatrix versus duck typing discussion a while ago and was in the end convinced that duck typing is the way to go. I do have one suggestion that might be relevant if the plan is to also provide high-level user functions (e.g. similar to eigs) that try to select the best low-level solver based on the element type, the symmetry/hermiticity/positive definiteness, and other properties of the linear operator. It could be OK to ask a user to overload *(A::UserType, v::Vector), but not to ask that he also overload all these other methods. Even for the built-in AbstractMatrix types, issym etc. do not always produce the results that a user might expect (i.e. they check element-wise for exact equality, without taking floating-point precision into account). However, having to specify all these properties via arguments for every solver (which is what Matlab does for eigs) also doesn't sound like a good programming strategy.

While I was philosophizing about a Krylov-type package, I figured that it might be a good idea to have a wrapper type LinearOperator that accepts a general A (duck typed) and a list of arguments via which the user can specify the properties of the linear operator. This way, the actual solver methods would just call eltype, issym, isherm, isposdef, size, A*v, etc., and the code would look very readable and intuitive. These high-level solver methods should also be duck typed, and can then be used with plain matrices, wrapped LinearOperator objects, or any other duck-typed operator, along the lines sketched below.
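A rough sketch of such a wrapper (the LinearOperator name and its fields are illustrative; this is essentially the idea that later grew into LinearMaps.jl):

import Base: *, size, eltype
import LinearAlgebra: issymmetric, ishermitian, isposdef

# Wrap any duck-typed A together with user-asserted properties of the operator.
struct LinearOperator{T, F}
    apply::F                  # anything callable as apply(v) = A*v
    sz::Tuple{Int, Int}
    issym::Bool
    isherm::Bool
    isposdef::Bool
end

function LinearOperator(A; issym=false, isherm=false, isposdef=false)
    f = v -> A * v
    LinearOperator{eltype(A), typeof(f)}(f, size(A), issym, isherm, isposdef)
end

*(L::LinearOperator, v::AbstractVector) = L.apply(v)
size(L::LinearOperator) = L.sz
eltype(::LinearOperator{T}) where {T} = T
issymmetric(L::LinearOperator) = L.issym
ishermitian(L::LinearOperator) = L.isherm
isposdef(L::LinearOperator) = L.isposdef

A = [4.0 1.0; 1.0 3.0]
L = LinearOperator(A; issym=true, isposdef=true)
L * [1.0, 2.0]                # duck-typed application

# A high-level driver can then pick a solver from the declared traits, e.g.
# issymmetric(L) && isposdef(L) ? cg(L, b) : gmres(L, b)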
This issue is also resolved by compatibility with LinearMaps.
:-). I didn't even remember this one. I guess this was before (and why) I started LinearMaps.jl.
The arguments should not be AbstractMatrix types, since that implies access to the elements. The argument should be untyped ("duck-typed"), and work for any type T supporting *(A::T, v::AbstractVector) and possibly size(A::T) (although for linear solvers this is not needed, since the size can be inferred from the right-hand-side vector).
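For instance, a minimal duck-typed operator satisfying just that contract, together with a solver written only against * (a hypothetical example, not code from this package):

# A "matrix-free" operator: applies a tridiagonal stencil without storing a matrix.
struct StencilOperator
    n::Int
end
Base.size(S::StencilOperator) = (S.n, S.n)
function Base.:*(S::StencilOperator, v::AbstractVector)
    w = 2 .* v
    w[1:end-1] .-= v[2:end]
    w[2:end]   .-= v[1:end-1]
    return w
end

# A solver that only ever uses A*x works for matrices and for this operator alike.
function richardson(A, b; omega=0.3, iters=500)
    x = zero(b)
    for _ in 1:iters
        x = x .+ omega .* (b .- A*x)
    end
    return x
end

x = richardson(StencilOperator(10), ones(10))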