merge specializations and tfunc #15918
Conversation
jl_symbol_name(name));
}
// suppress warning "replacing module Core.Inference" during bootstrapping
Looks like this comment can be removed?
Isn't there a chance this will still leave un-inferred code that we reach recursively during this inference process? Admittedly, there don't seem to be any regressions evident in test timing at least.
i couldn't think of anywhere this could miss, after setting although i'm thinking i should separate these two changes, since they aren't actually coupled.
👍
Force-pushed from 91b2866 to 958961f
👍 do want
Force-pushed from 958961f to f15e080
Force-pushed from f15e080 to ca05bdd
Important because:
TODO: @vtjnash, can you rebase this and see if the memory usage problems this previously caused are now improved so that this can be merged? @JeffBezanson did you have other reservations about this?
Rebase & merge as far as I'm concerned. @vtjnash has indicated that the memory problems have been solved.
Force-pushed from ca05bdd to 80f20bb
just fyi: this will increase the size of the system image by a few megabytes. that's the result of an unintentional bugfix for the creation of the system image, not the impact of this PR, nor a runtime change.
Could you please explain in more detail what the bug was and how this fixes it?
There was an unintended conditional check at Line 1459 in 31f5a63
On Travis was that possibly an OOM while compiling the system image? |
Yes, apparently that's the downside to fixing that bug :). I've been exploring various fixes to this issue over the past month, so I guess it's time to actually go with one of them.
Although we generally do need more precompilation, it would be nice to have some way to dial it down to find the right tradeoff.
hopefully #16835 will fix the OOM, let's see
Force-pushed from 80f20bb to ee8e9f8
Force-pushed from ee8e9f8 to 1ec5092
It looks to me like TypeMap is using
half a dozen 15% or worse regressions? 85d098c#commitcomment-17834416
Base.visit on
That's a feature of
I'm not sure what that means
It's intentional, although perhaps it would be more useful if visit returned the entry instead of the value:
What would be the use of having
Curious: why do you want to iterate over specializations?
For the debugger, it needs to set a breakpoint in each one.
Ah, of course. It's probably better to iterate over the method cache instead, since
I need everything that gets or could potentially get codegened.
I would use mt->cache. That plus jl_method_tracer should give you everything I would think.
Can you describe the semantics of that field?
It's a TypeMap just like
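The enumeration being discussed can be approximated from the user side. This is a minimal sketch (not the debugger's actual code): it uses the supported `methods` reflection API rather than walking `mt->cache` or the internal `TypeMap` via the unexported `Base.visit`, since those internals are version-dependent.

```julia
# Hedged sketch: enumerate every Method of a function with the stable
# reflection API. The thread's mt->cache walk additionally sees compiled
# specializations; `methods` only sees definitions, one per signature.
f(x::Int) = x + 1
f(x::AbstractString) = x * "!"

ms = collect(methods(f))   # one Method per definition
@show length(ms)           # two definitions here
```

The internal `Base.visit`-based walk visits `TypeMap` entries directly, which is what the comment about returning "the entry instead of the value" refers to.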
1) by resetting all method table caches and inferring everything in `specializations`, we can avoid needing to perform a two-stage inference (& the associated complexity with allowing a module to be replaced during compilation)
2) this merges the LambdaInfo `tfunc` and `specializations` fields, to slightly reduce duplication. it now contains any lambda (or rettype) that was considered worth creating by either dispatch or inference, without prejudice
3) intermediate type inference results are fetched from the `active` queue to accommodate
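The kind of inferred return type that the merged `specializations`/`tfunc` storage caches can be observed from the user side with `Base.return_types`. A sketch under that assumption, not a reproduction of this PR's internal field layout:

```julia
# Hedged sketch: ask type inference for the return type of a call
# signature. Results like this are what inference records per
# specialization; the internal storage merged by this PR is not
# accessed here.
g(x) = x + 1

rts = Base.return_types(g, (Int,))
@show rts   # inferred return type of g(::Int)
```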