later-stage macros? #3706
To me this syntax starts to explore the outer boundaries of readability. If a small number of features (genloops, tuple, sub) cover most cases, that raises hope that we could do this with a bit of new syntax as in #1917, and some compiler optimizations.

#1917 seems like cleaner syntax. If it can be generated cost-free, I'm all in favor of that variant. I was just trying to get there with what I know, and this seemed like it might be a small-ish change. To explore more whether that syntax really will do what we want, I'll ask a couple of questions over at that other issue.
@JeffBezanson, as you may have seen I've started finding "cleaner syntax" applications of this feature. And I suspect there will be more, I just haven't invested much effort into this yet. Today I realized another---very different---potential application of this idea: efficient operations when passing functions as arguments (see here, #3426, #3893 for a recent discussion). Consider a sorting operation with a comparison function:
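The code block that followed this sentence was lost from the thread. As a stand-in, here is a minimal sketch of the kind of call being discussed (`mycmp` and the example vector are illustrative, not from the original comment):

```julia
# Illustrative only: a comparison function passed as an argument to sort.
mycmp(a, b) = abs(a) < abs(b)      # hypothetical user-supplied comparison

v = sort([-3, 1, -2], lt=mycmp)    # == [1, -2, -3]
# Without specialization, every comparison goes through a generic
# (non-inlined) call to mycmp inside the sort kernel.
```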
If I'm thinking about this correctly, the key point is that having a late-stage macro lets you "trivially" generate a different version of

Thoughts? Obviously not a Julia 0.2 feature :-).
+1 Inlining/specializing on functions would be great--that's the largest bottleneck in the
FWIW, although it's an awful API and clearly transient, if you really need to sort with specialization you can define a custom ordering type:

```julia
immutable CustomOrder <: Sort.Ordering end
Base.isless(a, b) = customisless(a, b)
```

Then you can do things like
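The rest of that snippet was lost in extraction. A self-contained sketch of the same workaround in current Julia (`AbsOrder` and the example vector are illustrative names; the modern API is `Base.Order.Ordering` / `Base.Order.lt`, which postdates this comment):

```julia
# Encode the comparison in a singleton ordering type, so sort can
# specialize and inline it.
struct AbsOrder <: Base.Order.Ordering end
Base.Order.lt(::AbsOrder, a, b) = abs(a) < abs(b)

sort([-3, 1, -2], order=AbsOrder())   # == [1, -2, -3]
```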
While I see the value of a later-stage macro, specializing and inlining function arguments might not be the most appropriate application. I hope that function arguments can be inlined/specialized by the compiler, without the need to use a macro to do this. In other words, people can just write
Yes, that's much preferable.
@lindahua, that's a separate issue. Whether
The issue is the following: compilation in Julia is determined by the input types. As soon as you call

Now that I think it through carefully like this, for this proposal to work you'd still need to introduce a unique type signature for each distinct compiled function (i.e.,
@StefanKarpinski, I was mainly referring to the idea that by profiling, most of the time in sorting is in the comparison functions, and that manually inlining speeds things up about 30% (I just checked). Which is not amazing, but nothing to sneeze at. Either way, I'm looking forward to some way to inline things like comparison functions, whether by the compiler or some other method.
@timholy I understand the complexities here. To clarify, when I talked about specializing & inlining function arguments, I mean the compiler treating function arguments specially -- that is, specializing based on what function is passed in (or in other words, assigning a unique type signature to each distinct function).

So basically, every time a new function is passed into a higher-level function, compilation needs to be triggered again.
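This is, in fact, how the problem was eventually solved: since Julia 0.5, every function has its own concrete type, so a higher-order method is recompiled for each distinct function it receives. A small sketch (`apply2` is an illustrative name):

```julia
apply2(f, x) = f(x) + f(x)   # illustrative higher-order function

apply2(sin, 0.0)   # compiles a specialization for typeof(sin)
apply2(abs, -3)    # a separate specialization for typeof(abs); == 6

typeof(sin) === typeof(cos)  # false: each function is its own type
```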
@lindahua, I misread something you'd written earlier. My apologies. Upon reflection, this case isn't as clearly related to late-evaluating macros as I initially felt. Most of the work is elsewhere. So cartesian iteration remains the best application for late-evaluating macros (that I've thought of, anyway). I'm back to pecking at this issue in part because, in #1917, I just don't even know what
Late-stage macros seem like an orthogonal feature to specializing specific arguments. Specialization could work like e.g.:
where 'specialize' (or some other term) means "force this to always be a compile-time constant" (= compile for every new cmp value and stow the functions away in e.g. a hash table). You can do a lot of fun stuff with that; e.g. PyPy uses it to generate JIT compilers from interpreters (just specialize on the source-code argument to an interpreter, e.g. `regex_interpreter(specialize rex::String, data::String)` is a JITted regex matcher).

Late-stage macros also seem useful for implementing any custom optimizations one wants. One could e.g. implement aggressive constant folding by defining `ispure(f::Function)` and using a late-stage macro to recursively evaluate all pure subexpressions. This combines well with argument-value specialization, as there are more constants available for folding. Like that regex code you just got interactively from the user.
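The proposed `specialize` keyword is hypothetical and never existed, but the compile-per-value effect it describes can be emulated in today's Julia by lifting the value into the type domain with `Val` (a sketch; `step` is an illustrative name):

```julia
# Each distinct N yields its own compiled method body, with N a
# compile-time constant available for folding.
step(::Val{N}, x) where {N} = x + N

step(Val(3), 10)   # == 13; the `+ 3` is baked into this specialization
step(Val(7), 10)   # == 17; a separate specialization
```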
Superseded by #7311? |
Yes, I think so.
Would it be possible to have parametric macros whose evaluation is deferred until compile time?

This is treading on old ground (staged functions, etc.), but perhaps with a slightly different twist. It's a bit of a continuation of the ideas I pursued in Cartesian.jl, but it has the potential for cartesian iteration without any kind of performance penalty. The point behind this is (hopefully) illustrated in this gist. The "user-written" part of the code begins at `#### User-written code ####`; everything above that is library code.

Most of that code works fine as-is; it's only the last example, the arbitrary-dimension version of `showelements`, that requires a new language feature. If we had "parametric macros", e.g. `@genloops{N}`, that are not evaluated until compile time, this might be achievable.

Note that writing arbitrary-dimensional functions may always have some challenges, depending on the task, but perhaps a relatively small number of macros (illustrated by examples like `@tuple` and `@sub`) might suffice for many tasks. I deliberately picked `ispeak` as an example of something harder, which may encourage us to think broadly. But even this might be achieved fairly simply, something along the lines of

although this is not identical to the other one (in 2d it checks 9 neighbors, not 4, and does not short-circuit). Thinking a variety of algorithms through might give us a sense of which, and how many, macros would be needed to cover many tasks.

(Note: `@sub{N}` has to be smart enough to look for an expression in its first argument, i.e., only a bit smarter than `@sub` in the gist.)