From aca9be829737c3a4875c15ff4aa62e536fb333ca Mon Sep 17 00:00:00 2001 From: Shuhei Kadowaki Date: Wed, 11 Aug 2021 16:32:23 +0900 Subject: [PATCH 1/2] typo fixes --- docs/src/snoopi_deep.md | 2 +- docs/src/snoopi_deep_analysis.md | 2 +- docs/src/snoopr.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/src/snoopi_deep.md b/docs/src/snoopi_deep.md index a7604f52..c7159d6f 100644 --- a/docs/src/snoopi_deep.md +++ b/docs/src/snoopi_deep.md @@ -77,7 +77,7 @@ A non-empty list might indicate method invalidations, which can be checked (in a !!! tip Your workload may load packages and/or (re)define methods; these can be sources of invalidation and therefore non-empty output from `staleinstances`. - One trick that may cirumvent some invalidation is to load the packages and make the method definitions before launching `@snoopi_deep`, because it ensures the methods are in place + One trick that may circumvent some invalidation is to load the packages and make the method definitions before launching `@snoopi_deep`, because it ensures the methods are in place before your workload triggers compilation. ## Viewing the results diff --git a/docs/src/snoopi_deep_analysis.md b/docs/src/snoopi_deep_analysis.md index 93c38d5b..4ed4d11f 100644 --- a/docs/src/snoopi_deep_analysis.md +++ b/docs/src/snoopi_deep_analysis.md @@ -597,7 +597,7 @@ Inference triggered to call MethodInstance for show(::IOContext{Base.TTY}, ::MIM In this case we see that the method is `#38`. This is a `gensym`, or generated symbol, indicating that the method was generated during Julia's lowering pass, and might indicate a macro, a `do` block or other anonymous function, the generator for a `@generated` function, etc. !!! warning - It's particularly worth your while to improve inferrability for gensym-methods. 
The number assiged to a gensymmed-method may change as you or other developers modify the package (possibly due to changes at very difference source-code locations), and so any explicit `precompile` directives involving gensyms may not have a long useful life. + It's particularly worthwhile to improve inferrability for gensym-methods. The number assigned to a gensymmed-method may change as you or other developers modify the package (possibly due to changes at very different source-code locations), and so any explicit `precompile` directives involving gensyms may not have a long useful life. But not all methods with `#` in their name are problematic: methods ending in `##kw` or that look like `##funcname#39` are *keyword* and *body* methods, respectively, for methods that accept keywords. They can be obtained from the main method, and so `precompile` directives for such methods will not be outdated by incidental changes to the package. diff --git a/docs/src/snoopr.md b/docs/src/snoopr.md index b0edf2d3..0daa5fb6 100644 --- a/docs/src/snoopr.md +++ b/docs/src/snoopr.md @@ -477,7 +477,7 @@ Julia could not have returned the newly-correct answer without recompiling the c Aside from cases like these, most invalidations occur whenever new types are introduced, and some methods were previously compiled for abstract types. -In some cases, this is inevitable, and the resulting invalidations simply need to be accepted as a consequence of a dynamic, updateable language. +In some cases, this is inevitable, and the resulting invalidations simply need to be accepted as a consequence of a dynamic, updatable language. (As recommended above, you can often minimize invalidations by loading all your code at the beginning of your session, before triggering the compilation of more methods.) However, in many circumstances an invalidation indicates an opportunity to improve code.
In our first example, note that the call `call2f(c32)` did not get invalidated: this is because the compiler From dca6f99a8c6c39fcdf4256408ed454ec2fe19a83 Mon Sep 17 00:00:00 2001 From: Shuhei Kadowaki Date: Wed, 11 Aug 2021 16:33:43 +0900 Subject: [PATCH 2/2] minor documentation improvements While reading through the documentation, I found some words might look nicer if they are codified (i.e. by enclosing them by backticks), mainly: - make sure to codify `MethodInstance` - if a package module is actually used, codify its name, e.g. `SnoopCompile` Please tell me if you don't like these changes -- I don't mind simply reverting that part. Unrelated to above, I also added some typo fixes. --- docs/src/index.md | 2 +- docs/src/pgdsgui.md | 6 +++--- docs/src/snoopi_deep.md | 14 +++++++------- docs/src/snoopi_deep_analysis.md | 24 ++++++++++++------------ docs/src/snoopi_deep_parcel.md | 2 +- docs/src/snoopr.md | 22 +++++++++++----------- docs/src/tutorial.md | 4 ++-- 7 files changed, 37 insertions(+), 37 deletions(-) diff --git a/docs/src/index.md b/docs/src/index.md index 4b3c10d4..4eeacce7 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -86,7 +86,7 @@ For developers who can use Julia 1.6+, the recommended sequence is: 2. Record inference data with [`@snoopi_deep`](@ref). 
Analyze the data to: + adjust method specialization in your package or its dependencies + fix problems in type inference - + add precompile directives + + add `precompile` directives Under 2, the first two sub-points can often be done at the same time; the last item is best done as a final step, because the specific precompile directives needed depend on the state of your code, and a few fixes in specialization diff --git a/docs/src/pgdsgui.md b/docs/src/pgdsgui.md index b5cf4cc0..89ccfcd4 100644 --- a/docs/src/pgdsgui.md +++ b/docs/src/pgdsgui.md @@ -6,7 +6,7 @@ so while specialization often improves runtime performance, that has to be weigh There are also cases in which [overspecialization can hurt both run-time and compile-time performance](https://docs.julialang.org/en/v1/manual/performance-tips/#The-dangers-of-abusing-multiple-dispatch-(aka,-more-on-types-with-values-as-parameters)). Consequently, an analysis of specialization can be a powerful tool for improving package quality. -SnoopCompile ships with an interactive tool, [`pgdsgui`](@ref), short for "Profile-guided despecialization." +`SnoopCompile` ships with an interactive tool, [`pgdsgui`](@ref), short for "Profile-guided despecialization." The name is a reference to a related technique, [profile-guided optimization](https://en.wikipedia.org/wiki/Profile-guided_optimization) (PGO). Both PGO and PGDS use rutime profiling to help guide decisions about code optimization. PGO is often used in languages whose default mode is to avoid specialization, whereas PGDS seems more appropriate for @@ -130,7 +130,7 @@ julia> collect_for(mref[], tinf) So we can see that one `MethodInstance` for each type in `Ts` was generated. 
-If you see a list of MethodInstances, and the first is extremely costly in terms of inclusive time, but all the rest are not, then you might not need to worry much about over-specialization: +If you see a list of `MethodInstance`s, and the first is extremely costly in terms of inclusive time, but all the rest are not, then you might not need to worry much about over-specialization: your inference time will be dominated by that one costly method (often, the first time the method was called), and the fact that lots of additional specializations were generated may not be anything to worry about. However, in this case, the distribution of time is fairly flat, each contributing a small portion to the overall time. In such cases, over-specialization may be a problem. @@ -229,4 +229,4 @@ julia> methodinstances(m) # let's see what specializations we have MethodInstance for save(::String, ::Array) ``` -In this case we have 7 MethodInstances (some of which are clearly due to poor inferrability of the caller) when one might suffice. +In this case we have 7 `MethodInstance`s (some of which are clearly due to poor inferrability of the caller) when one might suffice. diff --git a/docs/src/snoopi_deep.md b/docs/src/snoopi_deep.md index c7159d6f..40d9ce07 100644 --- a/docs/src/snoopi_deep.md +++ b/docs/src/snoopi_deep.md @@ -8,7 +8,7 @@ For that reason, efforts at reducing latency should be informed by measuring the Moreover, because all code needs to be type-inferred before undergoing later stages of code generation, monitoring this "entry point" can give you an overview of the entire compile chain. On older versions of Julia, [`@snoopi`](@ref) allows you to make fairly coarse measurements on inference; -starting with Julia 1.6, the recommended tool is `@snoopi_deep`, which collects a much more detailed picture of type-inference's actions. 
+starting with Julia 1.6, the recommended tool is [`@snoopi_deep`](@ref), which collects a much more detailed picture of type-inference's actions. The rich data collected by `@snoopi_deep` are useful for several different purposes; on this page, we'll describe the basic tool and show how it can be used to profile inference. @@ -16,7 +16,7 @@ On later pages we'll show other ways to use the data to reduce the amount of typ ## Collecting the data -Like [`@snoopr`](@ref), `@snoopi_deep` is exported by both SnoopCompileCore and SnoopCompile, but in this case there is not as much reason to do the data collection by a very minimal package. Consequently here we'll just load SnoopCompile at the outset. +Like [`@snoopr`](@ref), `@snoopi_deep` is exported by both `SnoopCompileCore` and `SnoopCompile`, but in this case there is not as much reason to do the data collection by a very minimal package. Consequently here we'll just load `SnoopCompile` at the outset. To see `@snoopi_deep` in action, we'll use the following demo: @@ -55,7 +55,7 @@ InferenceTimingNode: 0.00932195/0.010080857 on InferenceFrameInfo for Core.Compi !!! tip Inference gets called only on the *first* invocation of a method with those specific types. You have to redefine the `FlattenDemo` module (by just re-executing the command we used to define it) if you want to collect data with `@snoopi_deep` on the same code a second time. - To make it easier to perform these demonstrations and use them for documentation purposes, SnoopCompile includes a function [`SnoopCompile.flatten_demo()`](@ref) that redefines the module and returns `tinf`. + To make it easier to perform these demonstrations and use them for documentation purposes, `SnoopCompile` includes a function [`SnoopCompile.flatten_demo()`](@ref) that redefines the module and returns `tinf`. This may not look like much, but there's a wealth of information hidden inside `tinf`. 
@@ -123,7 +123,7 @@ MethodInstance for FlattenDemo.packintype(::Int64) ``` Each node in this tree is accompanied by a pair of numbers. -The first number is the *exclusive* inference time (in seconds), meaning the time spent inferring the particular MethodInstance, not including the time spent inferring its callees. +The first number is the *exclusive* inference time (in seconds), meaning the time spent inferring the particular `MethodInstance`, not including the time spent inferring its callees. The second number is the *inclusive* time, which is the exclusive time plus the time spent on the callees. Therefore, the inclusive time is always at least as large as the exclusive time. @@ -133,7 +133,7 @@ Almost all of that was code-generation, but it also includes the time needed to Just 0.76ms was needed to run type-inference on this entire series of calls. As you will quickly discover, inference takes much more time on more complicated code. -We can also display this tree as a flame graph, using the [ProfileView](https://github.com/timholy/ProfileView.jl) package: +We can also display this tree as a flame graph, using the [ProfileView.jl](https://github.com/timholy/ProfileView.jl) package: ```jldoctest flatten-demo; filter=r":\d+" julia> fg = flamegraph(tinf) @@ -154,7 +154,7 @@ Users are encouraged to read the ProfileView documentation to understand how to - the horizontal axis is time (wide boxes take longer than narrow ones), the vertical axis is call depth - hovering over a box displays the method that was inferred -- left-clicking on a box causes the full MethodInstance to be printed in your REPL session +- left-clicking on a box causes the full `MethodInstance` to be printed in your REPL session - right-clicking on a box opens the corresponding method in your editor - ctrl-click can be used to zoom in - empty horizontal spaces correspond to activities other than type-inference @@ -162,7 +162,7 @@ Users are encouraged to read the ProfileView 
documentation to understand how to You can explore this flamegraph and compare it to the output from `display_tree`. -Finally, [`flatten`](@ref), on its own or together with [`accumulate_by_source`](@ref), allows you to get an sense for the cost of individual MethodInstances or Methods. +Finally, [`flatten`](@ref), on its own or together with [`accumulate_by_source`](@ref), allows you to get a sense for the cost of individual `MethodInstance`s or `Method`s. The tools here allow you to get an overview of where inference is spending its time. Sometimes, this information alone is enough to show you how to change your code to reduce latency: perhaps your code is spending a lot of time inferring cases that are not needed in practice and could be simplified. diff --git a/docs/src/snoopi_deep_analysis.md b/docs/src/snoopi_deep_analysis.md index 4ed4d11f..7adea85b 100644 --- a/docs/src/snoopi_deep_analysis.md +++ b/docs/src/snoopi_deep_analysis.md @@ -5,16 +5,16 @@ As indicated in the [workflow](@ref), the recommended steps to reduce latency ar - check for invalidations - adjust method specialization in your package or its dependencies - fix problems in type inference -- add precompile directives +- add `precompile` directives The importance of fixing "problems" in type-inference was indicated in the [tutorial](@ref): successful precompilation requires a chain of ownership, but runtime dispatch (when inference cannot predict the callee) results in breaks in this chain. By improving inferrability, you can convert short, unconnected call-trees into a smaller number of large call-trees that all link back to your package(s). In practice, it also turns out that opportunities to adjust specialization are often revealed by analyzing inference failures, so this page is complementary to the previous one. -Throughout this page, we'll use the `OptimizeMe` demo, which ships with SnoopCompile. +Throughout this page, we'll use the `OptimizeMe` demo, which ships with `SnoopCompile`. !!!
note - To understand what follows, it's essential to refer to [OptimizeMe source code](https://github.com/timholy/SnoopCompile.jl/blob/master/examples/OptimizeMe.jl) as you follow along. + To understand what follows, it's essential to refer to [`OptimizeMe` source code](https://github.com/timholy/SnoopCompile.jl/blob/master/examples/OptimizeMe.jl) as you follow along. ```julia julia> using SnoopCompile @@ -58,12 +58,12 @@ From the standpoint of precompilation, this has some obvious problems: - even though we called a single method, `OptimizeMe.main()`, there are many distinct flames separated by blank spaces. This indicates that many calls are being made by runtime dispatch: each separate flame is a fresh entrance into inference. - several of the flames are marked in red, indicating that they are not precompilable. While SnoopCompile does have the capability to automatically emit `precompile` directives for the non-red bars that sit on top of the red ones, in some cases the red extends to the highest part of the flame. In such cases there is no available precompile directive, and therefore no way to avoid the cost of type-inference. -Our goal will be to improve the design of OptimizeMe to make it more precompilable. +Our goal will be to improve the design of `OptimizeMe` to make it more precompilable. ## Analyzing inference triggers We'll first extract the "triggers" of inference, which is just a repackaging of part of the information contained within `tinf`. -Specifically an [`InferenceTrigger`](@ref) captures callee/caller relationships that straddle a fresh entrance to type-inference, allowing you to identify which calls were made by runtime dispatch and what MethodInstance they called. +Specifically an [`InferenceTrigger`](@ref) captures callee/caller relationships that straddle a fresh entrance to type-inference, allowing you to identify which calls were made by runtime dispatch and what `MethodInstance` they called. 
```julia julia> itrigs = inference_triggers(tinf) @@ -78,7 +78,7 @@ This indicates that a whopping 76 calls were (1) made by runtime dispatch and (2 (There was a 77th call that had to be inferred, the original call to `main()`, but by default [`inference_triggers`](@ref) excludes calls made directly from top-level. You can change that through keyword arguments.) !!! tip - In the REPL, SnoopCompile displays `InferenceTrigger`s with yellow coloration for the callee, red for the caller method, and blue for the caller specialization. This makes it easier to quickly identify the most important information. + In the REPL, `SnoopCompile` displays `InferenceTrigger`s with yellow coloration for the callee, red for the caller method, and blue for the caller specialization. This makes it easier to quickly identify the most important information. In some cases, this might indicate that you'll need to fix 76 separate callers; fortunately, in many cases fixing the origin of inference problems can fix a number of later callees. @@ -155,7 +155,7 @@ Inference triggered to call MethodInstance for (::Base.var"#cat_t##kw")(::NamedT ``` This is useful if you want to analyze a method via [`ascend`](@ref ascend-itrig). -Method-based triggers, which may aggregate many different individual triggers, are particularly useful mostly because tools like Cthulhu show you the inference results for the entire MethodInstance, allowing you to fix many different inference problems at once. +`Method`-based triggers, which may aggregate many different individual triggers, are particularly useful mostly because tools like [Cthulhu.jl](https://github.com/JuliaDebug/Cthulhu.jl) show you the inference results for the entire `MethodInstance`, allowing you to fix many different inference problems at once. ### Trigger trees @@ -193,7 +193,7 @@ We're going to march through these systematically. 
Let's start with the first of ### `suggest` and a fix involving manual `eltype` specification -Because the analysis of inference failures is somewhat complex, SnoopCompile attempts to `suggest` an interpretation and/or remedy for each trigger: +Because the analysis of inference failures is somewhat complex, `SnoopCompile` attempts to [`suggest`](@ref) an interpretation and/or remedy for each trigger: ``` julia> suggest(itree.children[1]) @@ -289,7 +289,7 @@ julia> suggest(itree.children[2]) lotsa_containers() at OptimizeMe.jl:14 ``` -While this tree is attributed to broadcast, you can see several references here to `OptimizeMe.jl:14`, which contains: +While this tree is attributed to `broadcast`, you can see several references here to `OptimizeMe.jl:14`, which contains: ```julia cs = Container.(list) @@ -331,7 +331,7 @@ cs = Container{Any}.(list) ``` This 5-character change ends up eliminating 45 of our original 76 triggers. -Not only did we eliminate the triggers from broadcasting, but we limited the number of different `show(::IO, ::Container{T})` MethodInstances we need from later calls in `main`. +Not only did we eliminate the triggers from broadcasting, but we limited the number of different `show(::IO, ::Container{T})`-`MethodInstance`s we need from later calls in `main`. When the `Container` constructor does more complex operations, in some cases you may find that `Container{Any}(args...)` still gets specialized for different types of `args...`. In such cases, you can create a special constructor that instructs Julia to avoid specialization in specific instances, e.g., @@ -623,12 +623,12 @@ end The generated method corresponds to the `do` block here. The call to `show` comes from `show(io, mime, x[])`. This implementation uses a clever trick, wrapping `x` in a `Ref{Any}(x)`, to prevent specialization of the method defined by the `do` block on the specific type of `x`. 
-This trick is designed to limit the number of MethodInstances inferred for this `display` method. +This trick is designed to limit the number of `MethodInstance`s inferred for this `display` method. Unfortunately, from the standpoint of precompilation we have something of a conundrum. It turns out that this trigger corresponds to the first of the big red flames in the flame graph. `show(::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, ::Vector{Main.OptimizeMe.Container{Any}})` is not precompilable because `Base` owns the `show` method for `Vector`; -we might own the element type, but we're leveraging the generic machinery in Base and consequently it owns the method. +we might own the element type, but we're leveraging the generic machinery in `Base` and consequently it owns the method. If these were all packages, you might request its developers to add a `precompile` directive, but that will work only if the package that owns the method knows about the relevant type. In this situation, Julia's `Base` module doesn't know about `OptimizeMe.Container{Any}`, so we're stuck. diff --git a/docs/src/snoopi_deep_parcel.md b/docs/src/snoopi_deep_parcel.md index 5bb71e10..c31f81ef 100644 --- a/docs/src/snoopi_deep_parcel.md +++ b/docs/src/snoopi_deep_parcel.md @@ -1,4 +1,4 @@ -# Using `@snoopi_deep` results to generate precompile directives +# Using `@snoopi_deep` results to generate `precompile` directives Improving inferrability, specialization, and precompilability may sometimes feel like "eating your vegetables": really good for you, but it sometimes feels like work. (Depending on tastes, of course; I love vegetables.) While we've already gotten some payoff, now we're going to collect an additional reward for our hard work: the "dessert" of adding `precompile` directives. 
diff --git a/docs/src/snoopr.md b/docs/src/snoopr.md index 0daa5fb6..40fec87e 100644 --- a/docs/src/snoopr.md +++ b/docs/src/snoopr.md @@ -39,9 +39,9 @@ DocTestSetup = quote end ``` -To record the invalidations caused by defining new methods, use `@snoopr`. -`@snoopr` is exported by SnoopCompile, but the recommended approach is to record invalidations using the minimalistic `SnoopCompileCore` package, and then load `SnoopCompile` to do the analysis. -**Remember** to run julia with the `--startup-file="no"` flag set, if you load packages such as [Revise](https://github.com/timholy/Revise.jl) in your startup file. +To record the invalidations caused by defining new methods, use [`@snoopr`](@ref). +`@snoopr` is exported by `SnoopCompile`, but the recommended approach is to record invalidations using the minimalistic `SnoopCompileCore` package, and then load `SnoopCompile` to do the analysis. +_**Remember**_ to run julia with the `--startup-file="no"` flag set, if you load packages such as [`Revise`](https://github.com/timholy/Revise.jl) in your startup file. Otherwise invalidations relating to those packages will also show up. ```julia @@ -54,7 +54,7 @@ using SnoopCompile # now that we've collected the data, load the complete pack !!! note `SnoopCompileCore` was split out from `SnoopCompile` to reduce the risk of invalidations from loading `SnoopCompile` itself. - Once a MethodInstance gets invalidated, it doesn't show up in future `@snoopr` results, so anything that + Once a `MethodInstance` gets invalidated, it doesn't show up in future `@snoopr` results, so anything that gets invalidated in order to provide `@snoopr` would be omitted from the results. `SnoopCompileCore` is a very small package with no dependencies and which avoids extending any of Julia's own functions, so it cannot invalidate any other code. 
@@ -112,7 +112,7 @@ julia> length(uinvalidated(invalidations)) # collect the unique MethodInstances The length of this set is your simplest insight into the extent of invalidations triggered by this method definition. -If you want to fix invalidations, it's crucial to know *why* certain MethodInstances were invalidated. +If you want to fix invalidations, it's crucial to know *why* certain `MethodInstance`s were invalidated. For that, it's best to use a tree structure, in which children are invalidated because their parents get invalidated: ```jldoctest invalidations @@ -230,7 +230,7 @@ julia> trees = invalidation_trees(invalidations) Your specific results may differ from this, depending on which version of Julia and of packages you are using. In this case, you can see that three methods (one for `all`, one for `any`, and one for `broadcasted`) triggered invalidations. -Perusing this list, you can see that methods in `Base`, `LoweredCodeUtils`, and `JuliaInterpreter` (the latter two were loaded by `Revise`) got invalidated by methods defined in FillArrays. +Perusing this list, you can see that methods in `Base`, `LoweredCodeUtils`, and `JuliaInterpreter` (the latter two were loaded by `Revise`) got invalidated by methods defined in `FillArrays`. The most consequential ones (the ones with the most children) are listed last, and should be where you direct your attention first. That last entry looks particularly problematic, so let's extract it: @@ -310,9 +310,9 @@ of the same package features. The video also walks through a real-world example fixing invalidations that stemmed from inference problems in some of `Pkg`'s code. -### ascend +### `ascend` -SnoopCompile, partnering with the remarkable [Cthulhu](https://github.com/JuliaDebug/Cthulhu.jl), +SnoopCompile, partnering with the remarkable [Cthulhu.jl](https://github.com/JuliaDebug/Cthulhu.jl), provides a tool called `ascend` to simplify diagnosing and fixing invalidations. 
To demonstrate this tool, let's use it on our test methods defined above. For best results, you'll want to copy those method definitions into a file: @@ -441,7 +441,7 @@ Choose a call for analysis (q to quit): Unfortunately for our investigations, none of these "top level" callers have defined backedges. (Overall, it's very fortunate that they don't, in that runtime dispatch without backedges avoids any need to invalidate the caller; the alternative would be extremely long chains of completely unnecessary invalidation, which would have many undesirable consequences.) -If you want to fix such "short chains" of invalidation, one strategy is to identify callers by brute force search enabled by the `MethodAnalysis` package. +If you want to fix such "short chains" of invalidation, one strategy is to identify callers by brute force search enabled by the [MethodAnalysis.jl](https://github.com/timholy/MethodAnalysis.jl) package. For example, one can discover the caller of `show(::IOBuffer, ::Sockets.IPAddr)` with ```julia @@ -508,7 +508,7 @@ Use of `ascend` is highly recommended for fixing `Core.Box` inference failures. In cases where invalidations occur, but you can't use concrete types (there are indeed many valid uses of `Vector{Any}`), you can often prevent the invalidation using some additional knowledge. 
-One common example is extracting information from an [IOContext](https://docs.julialang.org/en/v1/manual/networking-and-streams/#IO-Output-Contextual-Properties-1) structure, which is roughly defined as +One common example is extracting information from an [`IOContext`](https://docs.julialang.org/en/v1/manual/networking-and-streams/#IO-Output-Contextual-Properties-1) structure, which is roughly defined as ```julia struct IOContext{IO_t <: IO} <: AbstractPipe @@ -517,7 +517,7 @@ struct IOContext{IO_t <: IO} <: AbstractPipe end ``` -There are good reasons to use a value-type of `Any`, but that makes it impossible for the compiler to infer the type of any object looked up in an IOContext. +There are good reasons to use a value-type of `Any`, but that makes it impossible for the compiler to infer the type of any object looked up in an `IOContext`. Fortunately, you can help! For example, the documentation specifies that the `:color` setting should be a `Bool`, and since it appears in documentation it's something we can safely enforce. Changing diff --git a/docs/src/tutorial.md b/docs/src/tutorial.md index 9ba9da55..500f4b58 100644 --- a/docs/src/tutorial.md +++ b/docs/src/tutorial.md @@ -4,7 +4,7 @@ Certain concepts and types will appear repeatedly, so it's worth spending a little time to familiarize yourself at the outset. You can find a more expansive version of this page in [this blog post](https://julialang.org/blog/2021/01/precompile_tutorial/). -## MethodInstances, type-inference, and backedges +## `MethodInstance`s, type-inference, and backedges Our first goal is to understand how code connects together. We'll try some experiments using the following: @@ -181,7 +181,7 @@ julia> mi.def So even though the method is defined in `Base`, because `SnoopCompileDemo` needed this code it got stashed in `SnoopCompileDemo.ji`. 
-*The ability to cache MethodInstances from code defined in other packages or libraries is fundamental to latency reduction; however, it has significant limitations.* Most crucially, `*.ji` files can only hold code they "own," either: +*The ability to cache `MethodInstance`s from code defined in other packages or libraries is fundamental to latency reduction; however, it has significant limitations.* Most crucially, `*.ji` files can only hold code they "own," either: - to a method defined in the package - through a chain of backedges to methods owned by the package
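The two-package workflow these docs recommend (record with the minimal `SnoopCompileCore`, then load `SnoopCompile` for analysis) can be sketched as below. This is an illustrative sketch, not part of the patch: it assumes `SnoopCompileCore`, `SnoopCompile`, and `FillArrays` (the example package used in `snoopr.md` above) are installed, and that Julia was started with `--startup-file=no` so startup packages don't add noise.

```julia
# Record invalidations caused by loading a package, then analyze them.
using SnoopCompileCore                      # minimal package: records without itself invalidating

invalidations = @snoopr using FillArrays    # collect invalidations triggered by the load

using SnoopCompile                          # now load the full package to analyze

# Count the unique MethodInstances that were invalidated.
n = length(uinvalidated(invalidations))

# Group invalidations into trees by the triggering method definition;
# the most consequential entries (most children) are listed last.
trees = invalidation_trees(invalidations)
```

The exact counts and trees depend on your Julia and package versions, as the docs note; the point is only the ordering of steps: collect first with the small package, analyze afterwards with the full one.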