
Add methods to reduce, specialized for merge on Dict #26440

Merged: 4 commits, Apr 4, 2019

Conversation

@jlapeyre (Contributor)

This is discussed in #21672.
merge is not efficient when called with a large number of Dict arguments. This PR makes merge efficient in this case.

For example

a = [Dict(:a => 1 , :b=>2) for i in 1:2000];
@time merge(a...);  # run twice; the first call includes JIT compilation

master 46dcb35
0.262732 seconds (25.95 k allocations: 120.125 MiB, 23.43% gc time)

This commit
0.000516 seconds (13 allocations: 63.641 KiB)

This does not affect the other methods for merge that I found, for instance those for named tuples and for OrderedDict in DataStructures; they do not use the code that I changed.

The inefficiency is due to the performance of this kind of construction:

promoteK(K, d, ds...) = promoteK(promote_type(K, keytype(d)), ds...)

I first reimplemented the type promotion using reduce, which is efficient, but then replaced it with code copied from DataStructures, which uses a simple loop; that version is compact and very transparent.
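For reference, the loop-based approach reads roughly like this (a sketch in the spirit of the DataStructures code; the helper name is illustrative, not the exact PR code):

function promoted_keyval_types(d::AbstractDict, others::AbstractDict...)
    # Promote the key and value types across all arguments with a plain loop,
    # avoiding the recursive vararg dispatch shown above.
    K, V = keytype(d), valtype(d)
    for other in others
        K = promote_type(K, keytype(other))
        V = promote_type(V, valtype(other))
    end
    return K, V
end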

There are similar inefficiencies in other (important) methods. I have prepared a branch fixing these in a similar way. A more general solution, which I did not investigate, is to make the construction above efficient.

@jlapeyre jlapeyre changed the title WIP: make merge(arrayofdicts...) efficient (#21672) WIP: make merge(arrayofdicts...) efficient Mar 13, 2018
@nalimilan (Member)

nalimilan commented Mar 13, 2018

Thanks. Unfortunately, AFAICT merge is no longer inferrable with these changes (you can check that with @code_warntype).

I'm concerned that this change amounts to optimizing for a situation which should not really happen in the first place. Splatting shouldn't be used when the number of arguments is large. How about implementing the special reduce method discussed at #21672 instead?

@jlapeyre (Contributor, author)

That's strange, my change should not affect merge! at all.

@nalimilan (Member)

Sorry, I meant merge, not merge!.

@jlapeyre (Contributor, author)

Ok. Yes, that's not an acceptable loss. I see that merge for OrderedDict is also not inferrable.

I'll look into it.

@jlapeyre (Contributor, author)

I did not find a solution that is both efficient and inferable. The generic method for reduce(merge, array_of_dicts) is already reasonably fast and light on resources. However, I did write another method for reduce that reduces the time by 30% for some tests. I will change the pull request. Is this as simple as the suggestion from GitHub?

Add more commits by pushing to the gjl/merge branch on jlapeyre/julia

Will the commit for the PR be updated, or do I need to do something more? Of course, it would be best if the changes from the previous commit are not visible.

To be useful, the method for reduce with merge has to take an array of type Any containing Dicts of different types. I guess there is no way to write an inferable method with this constraint. (I did not look into the possibility of a Tuple of Dicts.)

@nalimilan (Member)

I did not find a solution that is both efficient and inferable. The generic method for reduce(merge, array_of_dicts) is already reasonably fast and light on resources. However, I did write another method for reduce that reduces the time by 30% for some tests. I will change the pull request. Is this as simple as the suggestion from GitHub?

Add more commits by pushing to the gjl/merge branch on jlapeyre/julia

Will the commit for the PR be updated, or do I need to do something more? Of course, it would be best if the changes from the previous commit are not visible.

Yes, that's it. You can amend the previous commit and force-push if you want to erase it.

To be useful, the method for reduce with merge has to take an array of type Any containing Dicts of different types. I guess there is no way to write an inferable method with this constraint. (I did not look into the possibility of a Tuple of Dicts.)

Actually, the method can accept any array, and if the array type is concrete (e.g. Vector{Dict{Symbol,Int}}), then the return type is inferrable.

@jlapeyre (Contributor, author)

Actually, the method can accept any array, and if the array type is concrete (e.g. Vector{Dict{Symbol,Int}}), then the return type is inferrable.

Yes, it should do the right thing for your example. I have been concentrating on the non-trivial cases. I just meant that it ought to do something better than return Dict{Any,Any} if the input is Vector{Any}. I have been looking at discussions of how to handle type promotion or inference in various cases, and it left my head spinning. It's hard to predict the typical use case.

In the case that led to this, I had a Vector{Any} all of whose elements were of the same type: concretely typed Dicts. The approach taken by map would be best in this case: start with an output Dict of the same type as that of the first element, and widen and copy if necessary. On the other hand, I know that in some cases a Dict with certain concrete key and value types is no more efficient than one with more general types, though I don't know the details. There is even precedent for making reduce return the best container, that is, not looking at the keytype but rather at the type of each key. This is similar to what [x for x in array] does.

Instead, I have been following the example of merge, that is compute the correct output type once, before doing the merge.

@nalimilan (Member)

Makes sense. I think you can just use Dict{mapreduce(keytype, promote_type, vec_of_dicts), mapreduce(valtype, promote_type, vec_of_dicts)} to compute the type of the Dict to return. When vec_of_dicts has a concrete type, inference is able to figure out the return type, but it also works when it's not concrete. So that's the best of both worlds.
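A REPL illustration of that suggestion (illustrative values):

julia> dicts = [Dict(:a => 1), Dict(:b => 2.0)];

julia> K = mapreduce(keytype, promote_type, dicts)
Symbol

julia> V = mapreduce(valtype, promote_type, dicts)
Float64

julia> Dict{K,V}()
Dict{Symbol,Float64} with 0 entries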

@jlapeyre (Contributor, author)

I'm questioning the value of continuing this PR. The arguments against continuing are:

  1. The existing method for reduce(merge,arr_of_dicts) is not inefficient in an algorithmic sense (AFAICT). So far, I can only decrease the execution time by roughly a constant factor, independent of the number of dictionaries. In most cases that I tried, the decrease in time is 30% or so. (But, for OrderedDicts it may be as much as 90%.)

  2. I compare a couple of new methods and the default reduce method for a variety of inputs. Which method is fastest depends strongly on the types of dictionaries and the type of container. I get the biggest gain in performance for OrderedDicts, roughly a factor of 10. In some cases, the existing method is faster even though it is more naive and does far more allocation.

  3. Because of ongoing major changes in the iteration protocol and in optimization, the best method for the various cases is likely to change before v0.7 or v1.0 is released.

  4. Because of item number 2, the logic for implementing the new methods adds a bit of complexity and fragility that must be weighed against the benefits.

In short, this may be premature optimization. But, I am willing to consider arguments for proceeding with the PR.

@nalimilan (Member)

AFAICT the time increases more than linearly, because merge allocates a new dict at each step, which needs to include all contents treated in all previous iterations. And indeed, in a simple benchmark doubling the number of dicts more than doubles the time. Below you can see that multiplying the number of dicts by 256 multiplies the time by... 27,000!

julia> a = [Dict(rand()=>i for i in 1:1000) for j in 1:512];

julia> for i in 1:9; @btime reduce(merge, $(a[1:2^i])); end
  32.840 μs (6 allocations: 68.42 KiB)
  183.019 μs (25 allocations: 681.52 KiB)
  432.450 μs (53 allocations: 1.73 MiB)
  1.588 ms (115 allocations: 8.90 MiB)
  4.604 ms (227 allocations: 25.91 MiB)
  25.088 ms (457 allocations: 127.92 MiB)
  87.035 ms (905 allocations: 399.95 MiB)
  256.262 ms (1807 allocations: 1.27 GiB)
  874.068 ms (3605 allocations: 4.75 GiB)

# Merging in-place is much faster
julia> @btime reduce(merge!, $(copy(a[1])), a)
  44.914 ms (0 allocations: 0 bytes)

That said, if you don't see the need for this, there's no hurry to implement this specialized method. The issue in which I mentioned this was about vectors of vectors, which are much more common and cannot be concatenated in-place as easily as dicts.

@jlapeyre (Contributor, author)

I see your point. In point 2 of my last post I neglected to mention that efficiency also varies with the content of the dictionaries. I have been testing mostly with many dictionaries and a small set of discrete keys and values, because that was the way the problem arose in an application.

@jlapeyre jlapeyre changed the title WIP: make merge(arrayofdicts...) efficient WIP: add methods to reduce, specialized for merge Mar 19, 2018
@jlapeyre (Contributor, author)

I changed the content and title of the PR. This PR implements methods for reduce with the operator merge.

I tested this on several cases including the one given in #26440 (comment).

@jlapeyre (Contributor, author)

The test suite fails. This PR introduces a method ambiguity: reduce(f, S::SharedArrays.SharedArray). This is a common problem. Is there a better way to solve this than to add a wrapper method for each ambiguity?

@nalimilan (Member)

Unfortunately not. Though you could implement only the method for AbstractVector{<:AbstractDict}. The drawback would be that people storing their dicts in a Vector{Any} wouldn't benefit from the optimization. But that would make the code simpler and more robust, so maybe that's a good idea. For example, with the current state of the PR, if the first element in the vector is a dict but the others are not, the code is going to throw an error about keytype while a valid merge method could work.

@jlapeyre (Contributor, author)

That's the price for multiple dispatch.

Disallowing Vector{Any} is not very palatable. It would be a more complicated API for a user who needs the optimized version. For instance, JSON.parse returns a Vector{Any} of dicts, even if they are all of the same type.

For example, with the current state of the PR if the first element in the vector is a dict but others are not, the code is going to throw an error about keytype while a valid merge method could work.

Yes, for instance, if the second element is a NamedTuple. But in this case there is no applicable method for merge either. And reduce(merge, [dict,namedtuple]) throws an error in the master branch as well. Nor is there a method for merge that merges a dict with an array of pairs.

@nalimilan (Member)

Yes, for instance, if the second element is a NamedTuple. But in this case there is no applicable method for merge either. And reduce(merge, [dict,namedtuple]) throws an error in the master branch as well. Nor is there a method for merge that merges a dict with an array of pairs.

That's the case for named tuples, but any package could create a custom type which is not an AbstractDict and yet supports merge with a Dict. Granted, that's not completely standard, but it should be possible. To be safe, the specialized reduce method should check that all inputs are AbstractDict, and if not, fall back to the generic method.

@jlapeyre (Contributor, author)

Good point. The optimized version in the current PR would cut off that opportunity. I'll work on fixing that.

@jlapeyre (Contributor, author)

jlapeyre commented Apr 7, 2018

I edited the PR to check that the types of all elements in an array of type Vector{Any} are subtypes of Dict. This required the addition of just one line.

The generic reduce method calls a chain of methods. If one of the items to be merged is not a Dict, then I call the first method in that chain, rather than using the method that is specialized for Dicts. IIRC, there is a way to call reduce while specifying that I do not want to specialize on the first argument, but I don't recall how this works.
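The mechanism half-remembered above is presumably invoke, which calls an explicitly chosen, less specific method. A minimal sketch, assuming the generic fallback has signature reduce(op, itr):

# Bypass the merge-specialized method and call the generic reduce:
invoke(reduce, Tuple{Any,Any}, merge, items)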

@nalimilan (Member) left a comment:

Thanks. I'm a bit worried about the number of new methods, which apart from duplicating a lot of features can also introduce ambiguities. A possible approach, sketched after this list, would be:

  1. Have reduce(m::typeof(merge), items::AbstractVector) call _merge!(reduce(typejoin, items), items).
  2. Have _merge!(::Type{<:AbstractDict}, items::AbstractVector) call reduce(merge!, _typeddictA(items), items), and a fallback _merge!(::Type, items::AbstractVector) call reduce(merge, first(items), view(items, firstindex(items)+1:end)).

This system would allow other types to define optimized methods without introducing ambiguities.
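A rough sketch of that scheme, with the typejoin computation written as corrected later in the thread (mapreduce(typeof, typejoin, items)); the helper names are hypothetical and the init-keyword form of reduce is assumed:

# Dispatch on the joined element type to pick a merge strategy.
reduce(m::typeof(merge), items::AbstractVector) =
    _merge!(mapreduce(typeof, typejoin, items), items)

# All elements are dicts: merge in place into a Dict with promoted key/value types.
function _merge!(::Type{<:AbstractDict}, items::AbstractVector)
    K = mapreduce(keytype, promote_type, items)
    V = mapreduce(valtype, promote_type, items)
    return reduce(merge!, items; init=Dict{K,V}())
end

# Fallback: pairwise merge; foldl avoids dispatching back into the method above.
_merge!(::Type, items::AbstractVector) =
    foldl(merge, view(items, firstindex(items)+1:lastindex(items)); init=first(items))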

return dto
end

function _merge!(dto::AbstractDict, dfrom::AbstractDict)
Member:

Isn't this just merge!?

Contributor (author):

Probably. In any case, it is never called. It should be removed.

@jlapeyre (Contributor, author), Apr 10, 2018:

Whoops! Now I see it is called. It looks like calling _merge!(dto::AbstractDict, dfrom::AbstractDict) gives the same result as merge!. But merge! takes splatted arguments: it would collect the single argument dfrom into a tuple and then iterate over the tuple. I don't recall if I benchmarked it, but prima facie it looks less efficient. My idea was to have a single method that merges two dicts, which any code (in my PR at least) would ultimately call.

But it does add complexity. I would not object to removing it and simply calling merge!. It's partly a question of practical philosophy: you could argue that this optimization is premature. The goal of the PR is to remove the gross inefficiencies.

Member:

I don't understand. merge! takes splatted arguments, but that's for multiple dictionaries, not for multiple pairs. Here the method only allows two arguments, so it's equivalent to merge!.

Contributor (author):

Neither do I! :) I wasn't thinking of pairs. I'll state how I see it; tell me whether you agree or see a mistake.

  1. We are talking about the methods merge!(d::AbstractDict, others::AbstractDict...) and _merge!(dto::AbstractDict, dfrom::AbstractDict).
  2. For Dicts a and b, there is no observable difference between merge!(a,b) and _merge!(a,b).
  3. For the call merge!(a,b), the argument b appears in the body in a Tuple of one element. The routine must iterate over this tuple. In _merge!(a,b), there is no tuple and no corresponding iteration, so the latter may be faster than the former.

I did some benchmarks and _merge! is indeed faster. How much faster depends, of course, on the characteristics of the input dicts. So you gain a bit of efficiency at the expense of writing another (short) method. One can argue for or against _merge!(a,b) by weighing the efficiency against the code complexity.
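For concreteness, the two-argument method under discussion is essentially this (a sketch; merge semantics keep the later value for duplicate keys):

function _merge!(dto::AbstractDict, dfrom::AbstractDict)
    # Copy every pair from dfrom into dto; later values overwrite earlier ones.
    for (k, v) in dfrom
        dto[k] = v
    end
    return dto
end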

Contributor (author):

A concrete example:

julia> print(d1)
Dict(7=>9,9=>5,4=>3,2=>8,3=>5,5=>6,1=>1)
julia> print(d2)
Dict(7=>6,9=>6,2=>3,3=>10,5=>9,8=>10,6=>2)
julia> @btime merge!($(copy(d1)),d2);
  133.953 ns (0 allocations: 0 bytes)

julia> @btime _merge!($(copy(d1)),d2);
  87.529 ns (0 allocations: 0 bytes)

Member:

Not sure whether the performance difference is significant given how short these timings are, but if it makes a significant difference in your concrete use case then it's worth filing a separate issue. If the difference is real a two-argument method could be added to merge!.

Contributor (author):

Well, the ratio is significant, and I think that's more important. But the point is moot. I just wrote a Discourse post about the general question. After thinking while composing the questions, I'm quite comfortable with implementing all your suggestions. I think it's the better choice. As you say, maybe further optimizations can be made in the future.

return dto
end

function _merge!(dto::AbstractDict, dicts::AbstractArray)
Member:

This can be simplified to _merge!(dto, dicts[1], dicts[2:end]).

Contributor (author):

Yes. But shouldn't the last argument be a view? I benchmarked the other use of view in this PR and the increased efficiency was not insignificant.

Member:

Yeah, a view might be slightly better.
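So the simplified method would read roughly as follows (a sketch, assuming the three-argument _merge!(dto, d, rest) from the PR):

# Peel off the first dict and pass the rest as a view to avoid a copy:
_merge!(dto::AbstractDict, dicts::AbstractArray) =
    _merge!(dto, first(dicts), view(dicts, firstindex(dicts)+1:lastindex(dicts)))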


function reduce(m::typeof(merge), items::AbstractVector)
isempty(items) && throw(ArgumentError("Cannot merge empty vector"))
length(items) == 1 && return copy(first(items))
Member:

reduce(merge, [x]) currently returns x without copying, so this isn't needed: you can pass an empty array when there's a single element.

Same below.

Contributor (author):

I don't understand the suggestion about an empty array. To agree with the current behavior, we could return first(items). But you are saying something else?

@nalimilan (Member), Apr 10, 2018:

I mean that reduce(merge, items) is equivalent to first(items) when there's only one element, so no need to add special code for that case.

EDIT: or equivalent to reduce(merge, first(items), []).

Contributor (author):

Ok. Got it. Simply remove the line and there is no observable difference. You are arguing again for simplicity over efficiency.

length(items) == 1 && return copy(first(items))
i1 = firstindex(items) + 1
i2 = lastindex(items)
return reduce(m,first(items), view(items,i1:i2))
Member:

i2 can be replaced with end.

Contributor (author):

Good. I wasn't sure about it.

Contributor (author):

Are you sure about replacing i2 with end?

julia> a = collect(1:10);
julia> view(a, 1:end);
ERROR: syntax: missing last argument in "1:" range expression 
julia> view(a, 1:lastindex(a));

julia> ex = Expr(:call,:getindex, :a, Expr(:call,:(:),1,:end))
:(getindex(a, 1:end))
julia> eval(ex)
ERROR: UndefVarError: end not defined

Now I recall having tried end first. Must be a good reason for this.

Member:

You need to use @view instead of view.
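For instance:

julia> a = collect(1:10);

julia> v = @view a[2:end];   # works: the macro expands `end` syntactically

julia> length(v)
9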

Contributor (author):

Thanks. That will work. Anyway, the method in question has been removed.
I just rebased and pushed all the changes from your last round of comments.

@jlapeyre (Contributor, author)

Thanks for reviewing. I agree with your ideas for reducing the number of methods and the ambiguity. But in point 1, shouldn't reduce(typejoin, items) be mapreduce(typeof, typejoin, items)?

What's the most efficient way to make the changes? Do you want to do it? It's easy for me to make the changes and rebase once more.

@jlapeyre (Contributor, author)

I did not refresh, so I had not seen your new comments. I'll read them now...

@nalimilan (Member)

You haven't removed the methods as I suggested in my review. Have you tried that?

@jlapeyre (Contributor, author)

I thought I did remove them. Maybe I misunderstood the comments and did not make all the changes.
I did implement points 1 and 2 in this comment, and these replaced other methods.

I see now that I did forget to replace _merge!(dto::AbstractDict, dfrom::AbstractDict) with merge!.

@jlapeyre jlapeyre force-pushed the gjl/merge branch 2 times, most recently from 6b575f2 to a499d9b Compare May 23, 2018 18:47
@jlapeyre (Contributor, author)

It looks like these build failures are not due to the PR. The PR is pretty minimal at this point. Can't be sure of course.

@jlapeyre (Contributor, author)

Is there a way to run this build again? It seems likely that the failure was not due to the PR.

@StefanKarpinski (Member)

If you rebase and force push the branch, CI will run again.

@nalimilan (Member)

Bump. Can you rebase?

@jlapeyre (Contributor, author)

Yes. But I have a deadline to meet tomorrow. Thanks.

@jlapeyre (Contributor, author)

jlapeyre commented Jun 29, 2018

EDIT: See edit at bottom. I think the rebuild is being done properly.

I'm not sure what I'm doing. I tried the following, which did nothing in the end. But I can see that fetch is fetching from my fork of julia master. My fork has not been updated.

lapeyre@ribot~/j/f/julia> git branch
* gjl/merge
  master
lapeyre@ribot~/j/f/julia> git fetch origin master
From github.com:jlapeyre/julia
 * branch                  master     -> FETCH_HEAD
lapeyre@ribot~/j/f/julia> git rebase --interactive origin/master   # I do not edit, just save the file that is opened
Successfully rebased and updated refs/heads/gjl/merge.
lapeyre@ribot~/j/f/julia> git push --force origin gjl/merge
Everything up-to-date

So, I try this:

lapeyre@ribot~/j/f/julia> git remote -v 
origin	[email protected]:jlapeyre/julia.git (fetch)
origin	[email protected]:jlapeyre/julia.git (push)
upstream	https://github.com/JuliaLang/julia (fetch)
upstream	https://github.com/JuliaLang/julia (push)
lapeyre@ribot~/j/f/julia> git fetch upstream  master 
remote: Counting objects: 7157, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 7157 (delta 3010), reused 3010 (delta 3010), pack-reused 4145
Receiving objects: 100% (7157/7157), 5.21 MiB | 9.94 MiB/s, done.
Resolving deltas: 100% (5382/5382), completed with 851 local objects.
From https://github.com/JuliaLang/julia
 * branch                  master     -> FETCH_HEAD
   a96d847592..9a4ecaa4e4  master     -> upstream/master

EDIT: The following has triggered a new build and test

lapeyre@ribot~/j/f/julia> git remote -v 
origin	[email protected]:jlapeyre/julia.git (fetch)
origin	[email protected]:jlapeyre/julia.git (push)
upstream	https://github.com/JuliaLang/julia (fetch)
upstream	https://github.com/JuliaLang/julia (push)

lapeyre@ribot~/j/f/julia> git checkout master
lapeyre@ribot~/j/f/julia> git pull upstream master
lapeyre@ribot~/j/f/julia> git checkout gjl/merge
lapeyre@ribot~/j/f/julia> git rebase master
lapeyre@ribot~/j/f/julia> git push --force origin gjl/merge

@jlapeyre (Contributor, author)

This test, added in this PR, fails:

/tmp/julia/share/julia/test/dict.jl:902
  Got exception ErrorException("`reduce(op, v0, itr)` is deprecated, use `reduce(op, itr; init=v0)` instead") outside of a @test
  `reduce(op, v0, itr)` is deprecated, use `reduce(op, itr; init=v0)` instead

I'll fix this and rebase.
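For reference, the mechanical fix moves the initial value to the init keyword (illustrative values):

# deprecated:  reduce(op, v0, itr)
# replacement: reduce(op, itr; init=v0)
reduce(merge, dicts; init=Dict{Symbol,Int}())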

@jlapeyre (Contributor, author)

@nalimilan @StefanKarpinski Unless someone else has time for a review, it looks like this PR is ready to merge.

@nalimilan nalimilan changed the title WIP: add methods to reduce, specialized for merge Add methods to reduce, specialized for merge on Dict Jul 2, 2018
test/dict.jl Outdated
@@ -900,7 +900,7 @@ end
end

@testset "Dict reduce merge" begin
f = (i::Vector{<:Dict}, o) -> begin
function f(i::Vector{<:Dict}, o)
Member:

Should be check_merge?

Member:

Argh, I knew editing online was risky...

Contributor (author):

I agree check_merge is better. I modeled the name f on a similar function in the file.

@AzamatB (Contributor)

AzamatB commented Jan 18, 2019

Bump.

@jlapeyre (Contributor, author)

It's been a while, but I think there is nothing for me to do at this point.
But reviewing the comments, it looks like Nanosoldier was never run. Maybe worth checking?

@nalimilan (Member)

Nanosoldier will only be useful if there are benchmarks for reduce(merge, dicts), which is probably not the case. It would be nice to add such benchmarks to BaseBenchmarks, though. Anyway, can you just show one or two benchmarks comparing with master?

@fredrikekre fredrikekre added performance Must go faster potential benchmark Could make a good benchmark in BaseBenchmarks collections Data structures holding multiple items, e.g. sets labels Jan 25, 2019
@nalimilan (Member)

I'll merge this tomorrow if nobody objects, as I don't want it to miss the release again.

@@ -691,6 +691,12 @@ end

filter!(f, d::Dict) = filter_in_one_pass!(f, d)

function reduce(::typeof(merge), items::Vector{<:Dict})
@rfourquet (Member), Apr 3, 2019:

Any reason not to allow AbstractVector{<:Dict}?
(Sorry for a last-minute comment!)

Member:

IIRC it triggered ambiguities (that's probably buried in a comment above). It would be useful to try to generalize, but for now I'd say it's already an improvement.
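For reference, a method consistent with the discussion above would look roughly like this (a sketch reconstructed from the thread, not necessarily the exact committed code):

function reduce(::typeof(merge), items::Vector{<:Dict})
    # Compute the promoted key and value types once, up front.
    K = mapreduce(keytype, promote_type, items)
    V = mapreduce(valtype, promote_type, items)
    # Then merge everything in place into a single fresh Dict.
    return reduce(merge!, items; init=Dict{K,V}())
end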

@nalimilan nalimilan merged commit 4105848 into JuliaLang:master Apr 4, 2019
@nalimilan (Member)

Thanks @jlapeyre! Sorry it took so long...
