
check sizes of arguments in dot; fixes #28617 #28666

Open · wants to merge 5 commits into base: master
Conversation

@ranocha (Member) commented Aug 15, 2018

As discussed in JuliaLang/LinearAlgebra.jl#549 and https://discourse.julialang.org/t/efficient-trace-of-product-of-matrices/13313/12, dot(A::AbstractArray, B::AbstractArray) should check whether size(A) == size(B) instead of length(A) == length(B).
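For illustration (not part of the PR), a minimal example of the pre-PR behavior being discussed:

```julia
using LinearAlgebra

A = randn(4)     # length 4, size (4,)
B = randn(2, 2)  # length 4, size (2, 2)

# Before this PR only the lengths are compared, so this succeeds silently
# even though size(A) != size(B).
dot(A, B)
```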

@andreasnoack (Member)

I think the check should actually be axes(A) == axes(B) to handle non-standard indices correctly.

@KristofferC KristofferC added the breaking This change will break code label Aug 15, 2018
@ranocha (Member, Author) commented Aug 15, 2018

> I think the check should actually be axes(A) == axes(B) to handle non-standard indices correctly.

I thought custom indices should only be some kind of simplification for special purposes. Thus, I would not forbid having two arrays of the same size but with different axes and computing their dot product. Could you please explain why we should disallow something like the following?

julia> using LinearAlgebra, OffsetArrays
                                                                                    
julia> A = [1 2; 3 4]; B = OffsetArray(A, (-1,-1))
OffsetArray(::Array{Int64,2}, 0:1, 0:1) with eltype Int64 with indices 0:1×0:1:
 1  2
 3  4

julia> dot(A, A)
30

julia> dot(A, B)
30

@andreasnoack (Member)

The idea is that the indices of A and B should match. So for Array, we'd disallow dot(A, B) for A = randn(4) and B = randn(2,2) because vec(CartesianIndices(A)) != vec(CartesianIndices(B)). The generalization of this requirement to AbstractArray is to require axes(A) == axes(B). I'd also expect this to hold when working with OffsetArrays but I might, of course, be wrong. Eventually, I'd also think it would be reasonable to require that axes(A, 2) == axes(B, 1) in the matrix multiplication A*B.
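A hypothetical sketch of the proposed check (`checked_dot` is an illustrative name, not the PR's code), using the docstring's semantic definition of `dot`:

```julia
using LinearAlgebra

# Illustrative only; the PR adds an equivalent check inside the existing
# dot methods rather than defining a new function.
function checked_dot(A::AbstractArray, B::AbstractArray)
    axes(A) == axes(B) ||
        throw(DimensionMismatch("A has axes $(axes(A)) but B has axes $(axes(B))"))
    sum(dot(a, b) for (a, b) in zip(A, B))
end
```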

@ranocha (Member, Author) commented Aug 15, 2018

Thank you for the explanation. If this is the basic idea behind the checks (and A*B will work if axes(A, 2) == axes(B, 1)), then the checks should use axes, of course. I will update the PR accordingly.

Compute the dot product between two vectors. For complex vectors, the first
vector is conjugated. When the vectors have equal lengths, calling `dot` is
semantically equivalent to `sum(dot(vx,vy) for (vx,vy) in zip(x, y))`.
Compute the dot product between two arrays of the same size as if they were
Review comment (Member):
"of the same size" -> "with the same axes" ? The "equal sizes" language is also used a few lines below...

Review comment (Member, Author):
I will change that.

@@ -206,7 +206,9 @@ end
 # Frobenius dot/inner product: trace(A'B)
 function dot(A::SparseMatrixCSC{T1,S1},B::SparseMatrixCSC{T2,S2}) where {T1,T2,S1,S2}
     m, n = size(A)
-    size(B) == (m,n) || throw(DimensionMismatch("matrices must have the same dimensions"))
+    if size(B) != (m,n)
Review comment (Member):
Is axes purposely not used here for some reason?

Review comment (Member, Author):
I thought SparseMatrixCSC definitely uses classical (1-based) indexing, as in line 208. Thus, I've just used size instead of axes. Should I change that?

Review comment (Contributor):
For now, the SparseArrays code has not been modified to support arbitrary indices.
I suppose it would be generally less useful than for dense arrays.
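For reference, a quick illustration of why a size check suffices for SparseMatrixCSC (illustrative snippet, not from the PR):

```julia
using SparseArrays

S = sprand(3, 4, 0.5)
# SparseMatrixCSC is always 1-based, so checking size is equivalent to
# checking axes for this type.
axes(S) == (Base.OneTo(3), Base.OneTo(4))  # true
size(S) == (3, 4)                          # true
```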

@garrison garrison added the linear algebra Linear algebra label Aug 15, 2018
@ranocha (Member, Author) commented Aug 16, 2018

We should note somewhere that these changes are breaking. Since the NEWS.md file seems to be reserved for the official release of Julia v1.0, I don't know where I should add a comment. Could you help me, please?

@stevengj (Member)

I guess the question is whether we consider this to be a bugfix or a change in the documented API.

@ranocha (Member, Author) commented Aug 16, 2018

Following the discussion above, it seems that most people see this as a bugfix. Should I modify this PR in some way before it can be merged?

@KristofferC (Member)

This is not a simple bug fix.

> For any iterable containers x and y (including arrays of any dimension) of numbers (or any element
> type for which dot is defined), compute the dot product (or inner product or scalar product), i.e. the
> sum of dot(x[i],y[i]), as if they were vectors.

It works exactly as documented. We can't label things as bugfixes just because we'd prefer a different behavior. Also, master is currently closed for breaking changes.
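For context, the documented generic method also accepts plain iterators with no shape information at all; a small illustration (not from the PR):

```julia
using LinearAlgebra

# The generic fallback iterates both arguments in lockstep, so any iterable
# of numbers works, shape or no shape.
dot(1:3, [4, 5, 6])               # 32
dot((x for x in 1:3), (4, 5, 6))  # 32
```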

@ranocha (Member, Author) commented Aug 16, 2018

Okay. Do I have to do anything now in that case?

@KristofferC (Member)

This LGTM, so I think this can rest for a while until we open up master or implement versioning for the stdlibs.

@garrison (Member) commented Dec 2, 2018

Bump. Any chance this qualifies as a "minor change"?

@StefanKarpinski StefanKarpinski added minor change Marginal behavior change acceptable for a minor release triage This should be discussed on a triage call labels Dec 3, 2018
@StefanKarpinski StefanKarpinski added this to the 1.1 milestone Dec 3, 2018
@StefanKarpinski (Member)

I've marked it as a "minor change", put the "triage" label on it, and added the 1.1 milestone; we can discuss whether to include it on Thursday's triage call.

@JeffBezanson (Member)

I'm not sure this is useful at all --- it just adds errors to more cases. Very much not sure it's worth the potential breakage.

@JeffBezanson (Member)

Also not obvious to me whether comparing axes or just sizes is the right thing.

@StefanKarpinski (Member)

The general triage sentiment is that, by default, we need to lean heavily in favor of not breaking code. So this could happen, but the burden is on those proposing it to show (a) that this is the right thing to do and (b) that it won't break any packages or applications. Running PkgEval on this branch would be a good first step to inform that decision; if nothing breaks, we can consider it.

@JeffBezanson JeffBezanson modified the milestones: 1.1, 1.2 Dec 7, 2018
@JeffBezanson JeffBezanson added needs pkgeval Tests for all registered packages should be run with this change and removed triage This should be discussed on a triage call labels Feb 14, 2019
@JeffBezanson (Member)

Triage says 👍 but we need a PkgEval run.

@marius311 (Contributor)

I'm for the behavior in this PR. Also, it should be marked as fixing JuliaLang/LinearAlgebra.jl#641 (which I opened).

@andreasnoack (Member)

@ararslan Would you be able to run PkgEval here?

@ararslan (Member)

I likely won't be able to get to it for a while.

@JeffBezanson JeffBezanson modified the milestones: 1.3, 1.4 Aug 15, 2019
@JuliaLang JuliaLang deleted a comment from nanosoldier Dec 19, 2019
@JuliaLang JuliaLang deleted a comment from nanosoldier Dec 19, 2019
@JuliaLang JuliaLang deleted a comment from JeffBezanson Dec 19, 2019
@maleadt (Member) commented Dec 19, 2019

@nanosoldier runtests(ALL, vs = ":master")

@nanosoldier (Collaborator)

Your test job has completed - possible new issues were detected. A full report can be found here. cc @maleadt

@JeffBezanson (Member)

Ok, this does indeed cause some package test failures. That matches my intuition that, given that we allow passing arbitrary iterators to dot, being permissive here can be useful.

@JeffBezanson JeffBezanson removed this from the 1.4 milestone Dec 20, 2019
@andreasnoack (Member)

I just read through the error logs here and I think we have drawn the wrong conclusion from them. Most of the failures are unrelated to this PR: they are either segfaults or look intermittent. Then three of them fail because of an unfortunate sum method in StatsBase that was removed in JuliaStats/StatsBase.jl#526. Finally, it looks like there are just a few cases where somebody ends up dotting a vector and a one-column matrix. The latter might be something that people actually find convenient, but let's rebase and rerun PkgEval to get an updated view on this.

(Several intermediate comments from @andreasnoack, @nanosoldier, and @maleadt have been minimized.)
@maleadt (Member) commented Sep 29, 2021

Third time's the charm (and comes with a prevind fix):

@nanosoldier runtests(ALL, vs = ":master")

@nanosoldier (Collaborator)

Your package evaluation job has completed - possible new issues were detected. A full report can be found here.

@andreasnoack (Member)

I've tried to summarize the reasons for the failures in the table below. No packages rely on dotting arrays of different shape except for singleton dimensions. A few packages rely on dotting a one-column matrix with a vector, and a few packages dot a one-row matrix with a vector (a minimal reproduction of the one-column case is sketched after the table). There are also some numerical differences, which might be because some calls now use our dot instead of the dot from BLAS. We can revert that, but I think the tests might just be too strict and that it would be better to go through the packages and adjust the tests.

I think it's unfortunate that we don't provide an error in the motivating example in JuliaLang/LinearAlgebra.jl#549, but maybe it's not a sufficiently common case to warrant the effort required for introducing the check.

| Package | Reason | src or test |
| --- | --- | --- |
| AeroMDAO | One-column matrix | src |
| CompressiveLearning | One-column matrix | src |
| CopEnt | Unrelated | N/A |
| EnhancedGJK | One-column matrix | src |
| ExponentialUtilities | Unrelated | N/A |
| FaultDetectionTools | Unrelated | N/A |
| FilesystemDatastructures | Unrelated | N/A |
| GridArrays | Unrelated | N/A |
| ImageDistances | Unrelated | N/A |
| ImmersedLayers | Marginally different results | N/A |
| IndependentComponentAnalysis | Unrelated | N/A |
| KissMCMC | Marginally different results | N/A |
| LearningHorse | One-row(?!?) matrix | src |
| MultinomialRegression | Marginally different results | N/A |
| MutableArithmetics | One-column matrix | test |
| OpSel | One-column matrix | src |
| QuantumTomography | Unrelated | N/A |
| RegressionDiscontinuity | One-row(?!?) matrix | src |
| SolverTools | Marginally different results | N/A |
| Symbolic | Unrelated | N/A |
| TensorCore | One-row(?!?) matrix | test |
| UnbalancedOptimalTransport | One-column matrix | src |
| WaveFD | Marginally different results | N/A |
| Widgets | Unrelated | N/A |
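A minimal, hypothetical reproduction of the most common breaking pattern in the table (a one-column matrix dotted with a vector):

```julia
using LinearAlgebra

x = randn(3)
M = reshape(copy(x), 3, 1)  # one-column matrix with the same entries

# Before this PR this returns sum(abs2, x); with the axes check in place it
# throws a DimensionMismatch, since axes(x) != axes(M).
dot(x, M)
```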
