Speed up map on tuples. Fixes #15695 #15702

Closed · wants to merge 1 commit into from

Conversation

@shashi (Contributor) commented Mar 30, 2016

No description provided.

@ViralBShah added the "performance (Must go faster)" label Mar 30, 2016
@timholy (Member) commented Mar 31, 2016

While it will be nice for big tuples, this seems likely to hammer performance on small tuples.

@shashi (Contributor, Author) commented Mar 31, 2016

It would probably be good to figure out approximately what "small" is, write generated functions (or just @eval in a for loop) up to that number, and then use this method for all other cases. From my experiments, performance takes a hit after about 8 elements. I'm also fine with holding off on this PR until there is the better consolidated, clever implementation that Jeff mentioned in #15695.
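A minimal sketch of what that layout could look like, assuming a hypothetical `tuplemap` helper and a cutoff of 8 (the number mentioned above); this is not the code from the PR itself:

```julia
# Illustrative only: unrolled methods for small tuples generated via @eval,
# with a generic fallback for everything larger.
const MAX_UNROLLED = 8  # cutoff suggested by the experiments above

for n in 1:MAX_UNROLLED
    calls = [:(f(t[$i])) for i in 1:n]
    @eval tuplemap(f, t::NTuple{$n,Any}) = ($(calls...),)
end

# Zero-length case handled explicitly.
tuplemap(f, t::Tuple{}) = ()

# Fallback for tuples longer than the cutoff (a "vector version" style round trip).
tuplemap(f, t::Tuple) = (map(f, collect(t))...,)
```

For example, `tuplemap(x -> x + 1, (1, 2.0))` would hit the unrolled two-element method and return `(2, 3.0)`.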

@KristofferC (Member) commented:

I get the following run times comparing the "Vector version" vs a @generated version up to N = 100:

[Figure: run time vs. N; y-axis is time in seconds]

The compilation time looks like this:

[Figure: compilation time vs. N]

Pretty interesting that the @generated compilation time is lower than the vector one for < 25 elements.
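For reference, a rough sketch of what the two variants being compared might look like; `map_vec` and `map_gen` are hypothetical names and may not match the benchmarked code exactly:

```julia
# "Vector version": round-trip through a Vector and splat back into a tuple.
map_vec(f, t::Tuple) = (map(f, collect(t))...,)

# "@generated version": emit a fully unrolled tuple expression for each tuple length.
@generated function map_gen(f, t::Tuple)
    N = length(t.parameters)            # number of elements, known from the type
    calls = [:(f(t[$i])) for i in 1:N]
    return :(($(calls...),))
end

# Example: both map_vec(x -> 2x, (1, 2.0, 3)) and map_gen(x -> 2x, (1, 2.0, 3))
# return (2, 4.0, 6).
```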

@JeffBezanson (Member) commented:

Nice data!

We also need to worry about the ability to infer the result type. We'll need some kind of cutoff, perhaps like this one:

function afoldl(op,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,qs...)
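To make the afoldl analogy concrete, here is a hedged sketch of what such a cutoff could look like for map on tuples (`tmap` is a hypothetical name, not the actual implementation):

```julia
# Explicit small-arity methods keep the result type easy to infer.
tmap(f, t::Tuple{}) = ()
tmap(f, t::Tuple{Any}) = (f(t[1]),)
tmap(f, t::Tuple{Any,Any}) = (f(t[1]), f(t[2]))
tmap(f, t::Tuple{Any,Any,Any}) = (f(t[1]), f(t[2]), f(t[3]))
# ...continue explicit definitions up to the chosen cutoff, then a catch-all
# that peels off one element at a time until a small method applies:
tmap(f, t::Tuple) = (f(t[1]), tmap(f, Base.tail(t))...)
```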
