From b53c4bc33c97360536a22c7a9cd99384c02bf826 Mon Sep 17 00:00:00 2001 From: Dimi Racordon Date: Fri, 24 Jan 2025 11:58:54 +0100 Subject: [PATCH] Remove invisible whitespaces --- content/42.type.md | 2 +- content/alternative-bind-variables.md | 38 ++--- content/better-fors.md | 8 +- content/byname-implicits.md | 32 ++-- content/clause-interleaving.md | 2 +- content/drop-stdlib-forwards-bin-compat.md | 2 +- content/interpolation-quote-escape.md | 2 +- content/polymorphic-eta-expansion.md | 26 +-- .../priority-based-infix-type-precedence.md | 8 +- content/scala-cli.md | 42 ++--- content/unroll-default-arguments.md | 156 +++++++++--------- 11 files changed, 159 insertions(+), 159 deletions(-) diff --git a/content/42.type.md b/content/42.type.md index c35552cf..861505d4 100644 --- a/content/42.type.md +++ b/content/42.type.md @@ -582,7 +582,7 @@ terms. ### Byte and short literals -`Byte` and `Short` have singleton types, but lack any corresponding syntax either at the type or at the term level. +`Byte` and `Short` have singleton types, but lack any corresponding syntax either at the type or at the term level. These types are important in libraries which deal with low-level numerics and protocol implementation (see eg. [Spire](https://github.com/non/spire) and [Scodec](https://github.com/scodec/scodec)) and elsewhere, and the ability to, for instance, index a type class by a byte or short literal would be diff --git a/content/alternative-bind-variables.md b/content/alternative-bind-variables.md index 58cc4d4d..55e243db 100644 --- a/content/alternative-bind-variables.md +++ b/content/alternative-bind-variables.md @@ -49,7 +49,7 @@ Typically, the commands are tokenized and parsed. After a parsing stage we may e enum Word case Get, North, Go, Pick, Up case Item(name: String) - + case class Command(words: List[Word]) ``` @@ -64,7 +64,7 @@ matching on a single stable identifier, `North` and the code would look like thi ~~~ scala import Command.* - + def loop(cmd: Command): Unit = cmd match case Command(North :: Nil) => // Code for going north @@ -107,7 +107,7 @@ def loop(cmd: Cmd): Unit = case Command(Get :: Item(name)) => pickUp(name) ~~~ -Or any number of different encodings. However, all of them are less intuitive and less obvious than the code we tried to write. +Or any number of different encodings. However, all of them are less intuitive and less obvious than the code we tried to write. ## Commentary @@ -147,7 +147,7 @@ type, like so: enum Foo: case Bar(x: Int) case Baz(y: Int) - + def fun = this match case Bar(z) | Baz(z) => ... // z: Int ~~~ @@ -161,11 +161,11 @@ Removing the restriction would also allow recursive alternative patterns: enum Foo: case Bar(x: Int) case Baz(x: Int) - + enum Qux: case Quux(y: Int) case Corge(x: Foo) - + def fun = this match case Quux(z) | Corge(Bar(z) | Baz(z)) => ... // z: Int ~~~ @@ -177,8 +177,8 @@ We also expect to be able to use an explicit binding using an `@` like this: enum Foo: case Bar() case Baz(bar: Bar) - - def fun = this match + + def fun = this match case Baz(x) | x @ Bar() => ... // x: Foo.Bar ~~~ @@ -191,7 +191,7 @@ inferred within within each branch. 
enum Foo: case Bar(x: Int) case Baz(y: String) - + def fun = this match case Bar(x) | Baz(x) => // x: Int | String ~~~ @@ -203,26 +203,26 @@ the following case to match all instances of `Bar`, regardless of the type of `A enum Foo[A]: case Bar(a: A) case Baz(i: Int) extends Foo[Int] - + def fun = this match - case Baz(x) | Bar(x) => // x: Int | A + case Baz(x) | Bar(x) => // x: Int | A ~~~ ### Given bind variables -It is possible to introduce bindings to the contextual scope within a pattern match branch. +It is possible to introduce bindings to the contextual scope within a pattern match branch. Since most bindings will be anonymous but be referred to within the branches, we expect the _types_ present in the contextual scope for each branch to be the same rather than the _names_. ~~~ scala case class Context() - + def run(using ctx: Context): Unit = ??? - + enum Foo: case Bar(ctx: Context) case Baz(i: Int, ctx: Context) - + def fun = this match case Bar(given Context) | Baz(_, given Context) => run // `Context` appears in both branches ~~~ @@ -233,7 +233,7 @@ This begs the question of what to do in the case of an explicit `@` binding wher enum Foo: case Bar(s: String) case Baz(i: Int) - + def fun = this match case Bar(x @ given String) | Baz(x @ given Int) => ??? ~~~ @@ -254,13 +254,13 @@ However, since untagged unions are part of Scala 3 and the fact that both are re #### Type ascriptions in alternative branches -Another suggestion is that an _explicit_ type ascription by a user ought to be defined for all branches. For example, in the currently proposed rules, the following code would infer the return type to be `Int | A` even though the user has written the statement `id: Int`. +Another suggestion is that an _explicit_ type ascription by a user ought to be defined for all branches. For example, in the currently proposed rules, the following code would infer the return type to be `Int | A` even though the user has written the statement `id: Int`. ~~~scala enum Foo[A]: case Bar[A](a: A) case Baz[A](a: A) - + def test = this match case Bar(id: Int) | Baz(id) => id ~~~ @@ -295,7 +295,7 @@ If `p_i` is a quoted pattern binding a variable or type variable, the alternativ Each $`p_n`$ must introduce the same set of bindings, i.e. for each $`n`$, $`\Gamma_n`$ must have the same **named** members $`\Gamma_{n+1}`$ and the set of $`{T_0, ... T_n}`$ must be the same. -If $`X_{n,i}`$, is the type of the binding $`x_i`$ within an alternative $`p_n`$, then the consequent type, $`X_i`$, of the +If $`X_{n,i}`$, is the type of the binding $`x_i`$ within an alternative $`p_n`$, then the consequent type, $`X_i`$, of the variable $`x_i`$ within the pattern scope, $`\Gamma`$ is the least upper-bound of all the types $`X_{n, i}`$ associated with the variable, $`x_i`$ within each branch. diff --git a/content/better-fors.md b/content/better-fors.md index ed614f71..3fd1ebbd 100644 --- a/content/better-fors.md +++ b/content/better-fors.md @@ -54,7 +54,7 @@ There are some clear pain points related to Scala'3 `for`-comprehensions and tho This complicates the code, even in this simple example. 2. The simplicity of desugared code - + The second pain point is that the desugared code of `for`-comprehensions can often be surprisingly complicated. e.g. @@ -92,7 +92,7 @@ There are some clear pain points related to Scala'3 `for`-comprehensions and tho This SIP suggests the following changes to `for` comprehensions: 1. Allow `for` comprehensions to start with pure aliases - + e.g. 
```scala for @@ -103,7 +103,7 @@ This SIP suggests the following changes to `for` comprehensions: ``` 2. Simpler conditional desugaring of pure aliases. i.e. whenever a series of pure aliases is not immediately followed by an `if`, use a simpler way of desugaring. - e.g. + e.g. ```scala for a <- doSth(arg) @@ -250,7 +250,7 @@ A new desugaring rules will be introduced for simple desugaring. For any N: for (P <- G; P_1 = E_1; ... P_N = E_N; ...) ==> - G.flatMap (P => for (P_1 = E_1; ... P_N = E_N; ...)) + G.flatMap (P => for (P_1 = E_1; ... P_N = E_N; ...)) And: diff --git a/content/byname-implicits.md b/content/byname-implicits.md index 4e47f976..24b701ee 100644 --- a/content/byname-implicits.md +++ b/content/byname-implicits.md @@ -167,7 +167,7 @@ object Semigroup { } } ``` - + then we can manually write instances for, for example, tuples of types which have `Semigroup` instances, @@ -387,7 +387,7 @@ val showListInt: Show[List[Int]] = showUnit ) ) -``` +``` where at least one argument position between the val definition and the recursive occurrence of `showListInt` is byname. @@ -499,16 +499,16 @@ any _Tj_, where _i_ < _j_. The essence of the algorithm described in the Scala Language Specification is as follows, > Call the sequence of open implicit types _O_. This is initially empty. -> -> To resolve an implicit of type _T_ given stack of open implicits _O_, -> +> +> To resolve an implicit of type _T_ given stack of open implicits _O_, +> > + Identify the definition _d_ which satisfies _T_. -> +> > + If the core type of _T_ dominates any element of _O_ then we have observed divergence and we're > done. -> +> > + If _d_ has no implicit arguments then the result is the value yielded by _d_. -> +> > + Otherwise for each implicit argument _a_ of _d_, resolve _a_ against _O+T_, and the result is the > value yielded by _d_ applied to its resolved arguments. @@ -550,15 +550,15 @@ divergence check across the set of relevant implicit definitions. This gives us the following, -> To resolve an implicit of type _T_ given stack of open implicits _O_, -> +> To resolve an implicit of type _T_ given stack of open implicits _O_, +> > + Identify the definition _d_ which satisfies _T_. -> +> > + If the core type of _T_ dominates the type _U_ of some element __ of _O_ then we have > observed divergence and we're done. -> +> > + If _d_ has no implicit arguments then the result is the value yielded by _d_. -> +> > + Otherwise for each implicit argument _a_ of _d_, resolve _a_ against _O+_, and the result is > the value yielded by _d_ applied to its resolved arguments. @@ -646,8 +646,8 @@ larger than _U_ despite using only elements that are present in _U_. This gives us the following, -> To resolve an implicit of type _T_ given stack of open implicits _O_, -> +> To resolve an implicit of type _T_ given stack of open implicits _O_, +> > + Identify the definition _d_ which satisfies _T_. > > + if there is an element _e_ of _O_ of the form __ such that at least one element between _e_ @@ -658,7 +658,7 @@ This gives us the following, > observed divergence and we're done. > > + If _d_ has no implicit arguments then the result is the value yielded by _d_. -> +> > + Otherwise for each implicit argument _a_ of _d_, resolve _a_ against _O+_, and the result is > the value yielded by _d_ applied to its resolved arguments. 
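To tie the algorithm back to code, here is a minimal sketch of the recursive resolution that byname implicits make safe, using Scala 2.13's byname implicit syntax (`Show` and `Rec` are illustrative names, not code from the proposal):

```scala
trait Show[T] { def show(t: T): String }

// A recursive data type: Show[Rec] needs Show[Option[Rec]], which in turn
// needs Show[Rec] again.
case class Rec(head: Int, tail: Option[Rec])

object Rec {
  implicit val showInt: Show[Int] = i => i.toString

  implicit def showOption[T](implicit st: => Show[T]): Show[Option[T]] =
    o => o.fold("None")(t => s"Some(${st.show(t)})")

  // The byname argument (=> Show[Option[Rec]]) marks the cycle as safe: the
  // compiler ties the recursive knot lazily instead of reporting divergence.
  implicit def showRec(implicit si: Show[Int],
                       so: => Show[Option[Rec]]): Show[Rec] =
    r => s"Rec(${si.show(r.head)}, ${so.show(r.tail)})"
}

// implicitly[Show[Rec]].show(Rec(1, Some(Rec(2, None))))
// res: "Rec(1, Some(Rec(2, None)))"
```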
diff --git a/content/clause-interleaving.md b/content/clause-interleaving.md index 69619914..23e1f8f3 100644 --- a/content/clause-interleaving.md +++ b/content/clause-interleaving.md @@ -71,7 +71,7 @@ This definition provides the expected source API at call site, but it has two is Another workaround is to return a polymorphic function, for example: ~~~scala -def getOrElse(k:Key): [V >: k.Value] => (default: V) => V = +def getOrElse(k:Key): [V >: k.Value] => (default: V) => V = [V] => (default: V) => ??? ~~~ While again, this provides the expected API at call site, it also has issues: diff --git a/content/drop-stdlib-forwards-bin-compat.md b/content/drop-stdlib-forwards-bin-compat.md index fc17c0df..771804f6 100644 --- a/content/drop-stdlib-forwards-bin-compat.md +++ b/content/drop-stdlib-forwards-bin-compat.md @@ -210,7 +210,7 @@ repositories { dependencies { implementation 'org.scala-lang:scala-library:2.13.8' implementation 'com.softwaremill.sttp.client3:core_2.13:3.8.3' - implementation 'com.softwaremill.sttp.shared:ws_2.13:1.2.7' + implementation 'com.softwaremill.sttp.shared:ws_2.13:1.2.7' } $> gradle dependencies --configuration runtimeClasspath diff --git a/content/interpolation-quote-escape.md b/content/interpolation-quote-escape.md index 4639fc61..4602cbaa 100644 --- a/content/interpolation-quote-escape.md +++ b/content/interpolation-quote-escape.md @@ -38,7 +38,7 @@ escape a `"` character to represent a literal `"` withing a string. ## Motivating Example That the `"` character can't be easily escaped in interpolations has been an -open issue since at least 2012[^1], and how to deal with this issue is a +open issue since at least 2012[^1], and how to deal with this issue is a somewhat common SO question[^2][^3] {% highlight Scala %} diff --git a/content/polymorphic-eta-expansion.md b/content/polymorphic-eta-expansion.md index bd64ea84..4883a546 100644 --- a/content/polymorphic-eta-expansion.md +++ b/content/polymorphic-eta-expansion.md @@ -21,7 +21,7 @@ permalink: /sips/:title.html - For a first-time reader, a high-level overview of what they should expect to see in the proposal. - For returning readers, a quick reminder of what the proposal is about. --> -We propose to extend eta-expansion to polymorphic methods. +We propose to extend eta-expansion to polymorphic methods. This means automatically transforming polymorphic methods into corresponding polymorphic functions when required, for example: ~~~ scala @@ -44,10 +44,10 @@ This section should clearly express the scope of the proposal. It should make it Regular eta-expansion is so ubiquitous that most users are not aware of it, for them it is intuitive and obvious that methods can be passed where functions are expected. -When manipulating polymorphic methods, we wager that most users find it confusing not to be able to do the same. +When manipulating polymorphic methods, we wager that most users find it confusing not to be able to do the same. This is the main motivation of this proposal. -It however remains to be demonstrated that such cases appear often enough for time and maintenance to be devoted to fixing it. +It however remains to be demonstrated that such cases appear often enough for time and maintenance to be devoted to fixing it. To this end, the remainder of this section will show a manufactured example with tuples, as well as real-world examples taken from the [Shapeless-3](https://index.scala-lang.org/typelevel/shapeless-3) and [kittens](https://index.scala-lang.org/typelevel/kittens) libraries. 
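For readers new to the feature, here is a minimal Scala 3 sketch of the gap in question (`pair` is an illustrative method, not code from the proposal):

~~~ scala
def pair[A](a: A): (A, A) = (a, a)

// Monomorphic eta-expansion already works: `pair` expands at a fixed type.
val mono: Int => (Int, Int) = pair

// Passing `pair` where a *polymorphic* function is expected currently
// requires writing the polymorphic function literal out by hand:
val poly: [A] => A => (A, A) = [A] => (a: A) => pair(a)

// With this proposal, the method name alone would suffice:
// val poly2: [A] => A => (A, A) = pair
~~~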
@@ -89,7 +89,7 @@ There is however the following case, where a function is very large: case (acc, Some(t)) => Some((t, acc._1)) } } -~~~ +~~~ By factoring out the function, it is possible to make the code more readable: @@ -113,7 +113,7 @@ By factoring out the function, it is possible to make the code more readable: case (acc, Some(t)) => Some((t, acc._1)) } } -~~~ +~~~ It is natural at this point to want to transform the function into a method, as the syntax for the latter is more familiar, and more readable: @@ -139,7 +139,7 @@ It is natural at this point to want to transform the function into a method, as } ~~~ -However, this does not compile. +However, this does not compile. Only monomorphic eta-expansion is applied, leading to the same issue as with our previous `Tuple.map` example. #### Kittens ([source](https://github.com/typelevel/kittens/blob/e10a03455ac3dd52096a1edf0fe6d4196a8e2cad/core/src/main/scala-3/cats/derived/DerivedTraverse.scala#L44-L48)) @@ -251,7 +251,7 @@ For example, if the syntax of the language is changed, this section should list Before we go on, it is important to clarify what we mean by "polymorphic method", we do not mean, as one would expect, "a method taking at least one type parameter clause", but rather "a (potentially partially applied) method whose next clause is a type clause", here is an example to illustrate: -~~~ scala +~~~ scala extension (x: Int) def poly[T](x: T): T = x // signature: (Int)[T](T): T @@ -279,15 +279,15 @@ Note: Polymorphic functions always take term parameters (but `k` can equal zero 1. Copies of `T_i`s are created, and replaced in `U_i`s, `L_i`s, `A_i`s and `R`, noted respectively `T'_i`, `U'_i`, `L'_i`, `A'_i` and `R'`. 2. Is the expected type a polymorphic context function ? -* 1. If yes then `m` is replaced by the following: +* 1. If yes then `m` is replaced by the following: ~~~ scala -[T'_1 <: U'_1 >: L'_1, ... , T'_n <: U'_n >: L'_n] +[T'_1 <: U'_1 >: L'_1, ... , T'_n <: U'_n >: L'_n] => (a_1: A'_1 ..., a_k: A'_k) ?=> m[T'_1, ..., T'_n] ~~~ -* 2. If no then `m` is replaced by the following: +* 2. If no then `m` is replaced by the following: ~~~ scala -[T'_1 <: U'_1 >: L'_1, ... , T'_n <: U'_n >: L'_n] +[T'_1 <: U'_1 >: L'_1, ... , T'_n <: U'_n >: L'_n] => (a_1: A'_1 ..., a_k: A'_k) => m[T'_1, ..., T'_n](a_1, ..., a_k) ~~~ @@ -321,7 +321,7 @@ extension [A](x: A) def foo[B](y: B) = (x, y) val voo: [T] => T => [U] => U => (T, U) = foo -// foo expands to: +// foo expands to: // [T'] => (t: T') => ( foo[T'](t) with expected type [U] => U => (T', U) ) // [T'] => (t: T') => [U'] => (u: U') => foo[T'](t)[U'](u) ~~~ @@ -384,7 +384,7 @@ Not included in this proposal are: * Polymorphic SAM conversion * Polymorphic functions from wildcard: `foo[_](_)` -While all of the above could be argued to be valuable, we deem they are out of the scope of this proposal. +While all of the above could be argued to be valuable, we deem they are out of the scope of this proposal. We encourage the creation of follow-up proposals to motivate their inclusion. 
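For illustration, a worked instance of case 2.2 of the specification above, with a hypothetical method `wrap` (the comments trace the expansion the rules produce):

~~~ scala
def wrap[T <: AnyRef](t: T): List[T] = List(t)

// The expected type [S <: AnyRef] => S => List[S] is a polymorphic
// (non-context) function type, so case 2.2 applies: a fresh copy T' of T is
// created and `wrap` is replaced by [T' <: AnyRef] => (t: T') => wrap[T'](t),
// which is exactly what we must write by hand today:
val f: [S <: AnyRef] => S => List[S] = [T <: AnyRef] => (t: T) => wrap[T](t)
~~~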
diff --git a/content/priority-based-infix-type-precedence.md b/content/priority-based-infix-type-precedence.md index 71d32c1c..55c8af65 100644 --- a/content/priority-based-infix-type-precedence.md +++ b/content/priority-based-infix-type-precedence.md @@ -121,9 +121,9 @@ A PR for this SIP is available at: [https://github.com/scala/scala/pull/6147](ht ### Interactions with other language features -#### Star `*` infix type interaction with repeated parameters -The [repeated argument symbol `*`](https://www.scala-lang.org/files/archive/spec/2.12/04-basic-declarations-and-definitions.html#repeated-parameters) may create confusion with the infix type `*`. -Please note that this feature interaction already exists within the current specification. +#### Star `*` infix type interaction with repeated parameters +The [repeated argument symbol `*`](https://www.scala-lang.org/files/archive/spec/2.12/04-basic-declarations-and-definitions.html#repeated-parameters) may create confusion with the infix type `*`. +Please note that this feature interaction already exists within the current specification. ```scala trait +[N1, N2] @@ -141,7 +141,7 @@ However, it is very unlikely that such interaction would occur. ## Backward Compatibility -Changing infix type associativity and precedence affects code that uses type operations and conforms to the current specification. +Changing infix type associativity and precedence affects code that uses type operations and conforms to the current specification. Note: changing the infix precedence didn't fail any scalac test. diff --git a/content/scala-cli.md b/content/scala-cli.md index f9791040..9bd17ccf 100644 --- a/content/scala-cli.md +++ b/content/scala-cli.md @@ -16,7 +16,7 @@ title: SIP-46 - Scala CLI as default Scala command ## Summary -We propose to replace current script that is installed as `scala` with Scala CLI - a batteries included tool to interact with Scala. Scala CLI brings all the features that the commands above provide and expand them with incremental compilation, dependency management, packaging and much more. +We propose to replace current script that is installed as `scala` with Scala CLI - a batteries included tool to interact with Scala. Scala CLI brings all the features that the commands above provide and expand them with incremental compilation, dependency management, packaging and much more. Even though Scala CLI could replace `scaladoc` and `scalac` commands as well for now, we do not propose to replace them. @@ -25,30 +25,30 @@ Even though Scala CLI could replace `scaladoc` and `scalac` commands as well for The current default `scala` script is quite limited since it can only start repl or run pre-compile Scala code. -The current script are lacking basic features such as support for resolving dependencies, incremental compilation or support for outputs other than JVM. This forces any user that wants to do anything more than just basic things to learn and use SBT, Mill or an other build tool and that adds to the complexity of learning Scala. +The current script are lacking basic features such as support for resolving dependencies, incremental compilation or support for outputs other than JVM. This forces any user that wants to do anything more than just basic things to learn and use SBT, Mill or an other build tool and that adds to the complexity of learning Scala. -We observe that the current state of tooling in Scala is limiting creativity, with quite a high cost to create e.g. 
an application or a script with some dependencies that target Node.js. Many Scala developers are not choosing Scala for their personal projects, scripts, or small applications, and we believe that the complexity of setting up a build tool is one of the reasons.

With this proposal, our main goal is to turn Scala into a language with "batteries included" that also respects the community-first aspect of our ecosystem.

### Why did we decide to work on Scala CLI rather than improve existing tools like sbt or Mill?

Firstly, Scala CLI is in no way an actual replacement for SBT or Mill - nor was it ever meant to be. We do not call it a build tool, even though it does share some similarities with build tools. It doesn't aim at supporting multi-module
projects, nor at being extended via a task system. The main advantages of SBT and Mill (multi-module support and a plugin ecosystem) can often be disadvantages in the use cases targeted by Scala CLI and the `scala` command, as they affect performance: configuration needs to be compiled, plugins resolved, etc.

Mill and SBT use Turing-complete configuration for builds, so the complexity of build scripts is in theory unlimited. Scala CLI is configuration-only, which puts a hard cap on how complex Scala CLI builds can be.

The `scala` command should be first and foremost a command-line tool. Requirements for a certain project structure or for the presence of configuration files limit the usability of SBT and Mill in certain command-line use cases.

One of the main requirements for the new `scala` command was speed, flexibility and a focus on command-line use cases. Initially, we were considering improving SBT or Mill, as well as building Scala CLI on top of one of them.
We quickly realized that getting Mill or SBT to reply within milliseconds (for cases where no hard work like compilation is required) would be pretty much out of reach. Mill's and SBT's codebases are too big to compile to a native image using GraalVM, not to mention the problems with dynamic loading and reflection. Adding flexibility when it comes to input sources (e.g. support for Gists) and making a tool that can accept most of its configuration through simple command-line parameters would involve writing a lot of glue code. That is why we decided to build the tool from scratch, based on existing components like coursier, Bloop or scalafmt.

## Proposed solution

We propose to gradually replace the current `scala`, `scalac` and `scaladoc` commands with a single `scala` command that under the hood will be `scala-cli`. We could also add wrapper scripts for `scalac` and `scaladoc` that mimic their functionality using `scala-cli` under the hood.

The complete set of `scala-cli` features can be found in [its documentation](https://scala-cli.virtuslab.org/docs/overview).

Scala CLI brings many features like testing, packaging, exporting to sbt / Mill, and upcoming support for publishing micro-libraries. Initially, we propose to limit the set of features available in the `scala` command by default. Scala CLI is a relatively new project and we should battle-test some of its features before we commit to supporting them as part of the official `scala` command.

Scala CLI offers [multiple native ways to be installed](https://scala-cli.virtuslab.org/install#advanced-installation), so most users should find a suitable method. We propose that these packages become the default `scala` package in most repositories, often replacing existing `scala` packages; however, how the new `scala` command would be installed is not intended to be a part of this SIP.

@@ -58,7 +58,7 @@ Let us show a few examples where adopting Scala CLI as `scala` command would be

**Using REPL with a 3rd-party dependency**

Currently, to start a Scala REPL with a dependency on the class path, users need to resolve this dependency with all its transitive dependencies (coursier can help here) and pass those to the `scala` command using the `--cp` option. Alternatively, one can create an sbt project including a single dependency and use the `sbt console` task. Ammonite gives a better experience with its magic imports.
With Scala CLI, starting a REPL with a given dependency is as simple as running:

**Bug reproduction**

Currently, when reporting a bug in the compiler (or any other Scala-related) rep

```scala
//> using platform "native"
//> using "com.lihaoyi::os-lib:0.7.8"
//> using options "-Xfatal-warnings"

def foo = println("")
```

@@ -132,7 +132,7 @@ Last section of this proposal is the list of options that each sub-command MUST

Scala CLI can also be configured with ["using directives"](https://scala-cli.virtuslab.org/docs/guides/introduction/using-directives) - a comment-based configuration syntax that should be placed at the top of Scala files. This allows for self-contained examples within one file, since most of the configuration can be provided either from the command line or via using directives (the command line has precedence). This is a game changer for use cases like scripting, bug reproductions, or academic use.

We have described the motivation, syntax and implementation basis in the [dedicated pre-SIP](https://contributors.scala-lang.org/t/pre-sip-using-directives/5700). Currently, we recommend writing using directives as comments, so making them part of the language specification is not necessary at this stage. Moreover, the new `scala` command could ignore using directives in the initial version; however, we strongly suggest including comment-based using directives from the start.

The last section of this proposal contains a summary of the Using Directives syntax as well as a list of directives that MUST and SHOULD be supported.

@@ -158,7 +158,7 @@ The release cadence: should the new `scala` command follow the current release c

## Alternatives

Scala CLI has many alternatives. The most obvious ones are sbt, Mill, or other build tools. However, these are more complicated than Scala CLI and, more importantly, they are not designed as command-line-first tools. Ammonite is another alternative; however, it covers only part of the Scala CLI feature set (REPL and scripting) and lacks many of Scala CLI's features (incremental compilation, Scala version selection, support for Scala.js and Scala Native, just to name a few).

## Related work

@@ -196,7 +196,7 @@ Scala Runner MUST support following options from Scala Compiler directly:
 - `-X`
 - `-Y`

SHOULD be treated as Scala compiler options and be propagated to the Scala compiler. This applies to all commands that use the compiler directly or indirectly.
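Before the command reference below, an illustration of how using directives and command-line configuration combine in a self-contained script. The file name and dependency version are made up for this example; the directive syntax mirrors the bug-reproduction example above:

```scala
// hello.scala — runnable with `scala-cli run hello.scala`, no build tool needed
//> using scala "3.2.2"
//> using "com.lihaoyi::os-lib:0.9.1"

@main def hello(): Unit =
  println(s"Running from: ${os.pwd}")
```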
# MUST have commands @@ -219,7 +219,7 @@ Compile Scala code - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -273,7 +273,7 @@ Generate Scaladoc documentation - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -328,7 +328,7 @@ Fire-up a Scala REPL - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -390,7 +390,7 @@ scala-cli MyApp.scala -- first-arg second-arg - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -470,7 +470,7 @@ println("Hello, world) - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -528,7 +528,7 @@ Format Scala code - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -581,7 +581,7 @@ Compile and test Scala code - `--js-version`: The Scala.js version - `--js-mode`: The Scala.js mode, either `dev` or `release` - `--js-module-kind`: The Scala.js module kind: commonjs/common, esmodule/es, nomodule/none -- `--js-check-ir`: +- `--js-check-ir`: - `--js-emit-source-maps`: Emit source maps - `--js-source-maps-path`: Set the destination path of source maps - `--js-dom`: Enable jsdom @@ -618,7 +618,7 @@ Compile and test Scala code # Using Directives -As a part of this SIP we propose to introduce Using Directives, a special comments containing configuration. Withing Scala CLI and by extension `scala` command, the command line arguments takes precedence over using directives. +As a part of this SIP we propose to introduce Using Directives, a special comments containing configuration. Withing Scala CLI and by extension `scala` command, the command line arguments takes precedence over using directives. Using directives can be place on only top of the file (above imports, package definition etx.) 
and can be preceded only by plain comments (e.g. to comment out a using directive)

diff --git a/content/unroll-default-arguments.md b/content/unroll-default-arguments.md
index eb646d3c..6d90188b 100644
--- a/content/unroll-default-arguments.md
+++ b/content/unroll-default-arguments.md
@@ -17,7 +17,7 @@ title: SIP-61 - Unroll Default Arguments for Binary Compatibility

## Summary

This SIP proposes an `@unroll` annotation that lets you add additional parameters
-to method `def`s, `class` constructors, or `case class`es, without breaking binary
+to method `def`s, `class` constructors, or `case class`es, without breaking binary
compatibility. `@unroll` works by generating "unrolled" or "telescoping" forwarders:

```scala
def foo(s: String, n: Int = 1, b: Boolean = true, l: Long = 0) = ...
def foo(s: String, n: Int, b: Boolean) = foo(s, n, b, 0)
def foo(s: String, n: Int) = foo(s, n, true, 0)
```

In contrast to most existing or proposed alternatives that require you to contort your
code to become binary compatible (see [Major Alternatives](#major-alternatives)),
-`@unroll` allows you to write Scala with vanilla `def`s/`class`es/`case class`es, add
+`@unroll` allows you to write Scala with vanilla `def`s/`class`es/`case class`es, add
a single annotation, and your code will maintain binary compatibility as new default
-parameters and fields are added over time.
+parameters and fields are added over time.

-`@unroll`'s only constraints are that:
+`@unroll`'s only constraints are that:

-1. New parameters need to have a default value
+1. New parameters need to have a default value
2. New parameters can only be added on the right
3. The `@unroll`ed methods must be abstract or final

These are existing industry-wide standards when dealing with data and schema evolution
(e.g. [Schema evolution in Avro, Protocol Buffers and Thrift — Martin Kleppmann's blog](https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html)),
-and are also the way the new parameters interact with _source compatibility_ in
+and are also the way the new parameters interact with _source compatibility_ in
the Scala language. Thus these constraints should be immediately familiar to any
experienced programmer, and would be easy to follow without confusion.

Prior discussion can be found [here](https://contributors.scala-lang.org/t/can-we-make-adding-a-parameter-with-a-default-value-binary-compatible/6132).

## Motivation

Maintaining binary compatibility of Scala libraries as they evolve over time is
difficult. Although tools like https://github.com/lightbend/mima help _surface_
-issues, actually _resolving_ those issues is a different challenge.
+issues, actually _resolving_ those issues is a different challenge.

-Some kinds of library changes are fundamentally impossible to make compatible,
-e.g. removing methods or classes. But there is one big class of binary compatibility
+Some kinds of library changes are fundamentally impossible to make compatible,
+e.g. removing methods or classes. But there is one big class of binary compatibility
issues that are "spurious": adding default parameters to methods, `class` constructors,
or `case class`es.

Adding a default parameter is source-compatible, but not binary compatible: a user
-downstream of a library that adds a default parameter does not need to make any
+downstream of a library that adds a default parameter does not need to make any
changes to their code, but _does_ need to re-compile it. This is "spurious" because
there is no _fundamental_ incompatibility here: semantically, a new default parameter
is meant to be optional!
Old code invoking that method without a new default parameter is exactly the user intent,
and works just fine if the downstream code is re-compiled.

-Other languages, such as Python, have the same default parameter language feature but face
-no such compatibility issues with their use. Even Scala codebases compiled from source
+Other languages, such as Python, have the same default parameter language feature but face
+no such compatibility issues with their use. Even Scala codebases compiled from source
do not suffer these restrictions: adding a default parameter to the right side of a parameter
list is for all intents and purposes backwards compatible in a mono-repo setup. The fact that
such an addition is binary incompatible is purely an implementation restriction of Scala's
binary artifact format and distribution strategy.

**Binary compatibility is generally more important than source compatibility**. When
-you hit a source compatibility issue, you can always change the source code you are
+you hit a source compatibility issue, you can always change the source code you are
compiling, whether manually or via your build tool. In contrast, when you hit binary
compatibility issues, they can come in the form of diamond dependencies that would require
_re-compiling all of your transitive dependencies_, a task that is far more difficult
and often impractical.

-There are many approaches to resolving these "spurious" binary compatibility issues,
-but most of them involve either tremendous amounts of boilerplate writing
-binary-compatibility forwarders, giving up on core language features like Case Classes
-or Default Parameters, or both. Consider the following code snippet
+There are many approaches to resolving these "spurious" binary compatibility issues,
+but most of them involve either tremendous amounts of boilerplate writing
+binary-compatibility forwarders, giving up on core language features like Case Classes
+or Default Parameters, or both. Consider the following code snippet
([link](https://github.com/com-lihaoyi/mainargs/blob/1d04a6bd19aaca401d11fe26da31615a8bc9213c/mainargs/src/Parser.scala))
from the [com-lihaoyi/mainargs](https://github.com/com-lihaoyi/mainargs) library, which
duplicates the parameters of `def constructEither` no less than five times in

@@ -159,13 +159,13 @@ parameters are added to `def constructEither`:

Apart from being extremely verbose and full of boilerplate, like any boilerplate this
is also extremely error-prone. Bugs like [com-lihaoyi/mainargs#106](https://github.com/com-lihaoyi/mainargs/issues/106)
slip through when a mistake is made in that boilerplate. These bugs are impossible to catch
-using a normal test suite, as they only appear in the presence of version skew. The above code
-snippet actually _does_ have such a bug that the test suite _did not_ catch. See if you can
-spot it!
+using a normal test suite, as they only appear in the presence of version skew. The above code
+snippet actually _does_ have such a bug that the test suite _did not_ catch. See if you can
+spot it!

Sébastien Doeraene's talk [Designing Libraries for Source and Binary Compatibility](https://www.youtube.com/watch?v=2wkEX6MCxJs)
explores some of the challenges and discusses the workarounds.
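To preview the contrast in miniature (a sketch with made-up names, not the actual mainargs signatures), compare the manual forwarder approach with the proposed annotation:

```scala
// Manual approach: the old signature is kept as a hand-written forwarder
// and must be updated, correctly, on every evolution of the method.
def greet(name: String, punctuation: String = "!"): String = name + punctuation
def greet(name: String): String = greet(name, "!")

// Proposed approach: write only the newest signature; @unroll generates
// the forwarder above automatically.
// def greet(name: String, @unroll punctuation: String = "!"): String = ...
```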
## Requirements

### Backwards Compatibility

Given:

* The behavior should be binary compatible and semantically indistinguishable from using a
version of **Downstream** compiled against the _newer_ version of **Upstream**

**Note:** we do not aim for _Forwards_ compatibility. Using an _older_
version of **Upstream** with a _newer_ version of **Downstream** compiled against a
_newer_ version of **Upstream** is not a use case we want to support. The vast majority
of OSS software does not promise forwards compatibility, including software such as
the JVM, so we should just follow suit.

### All Overrides Are Equivalent

All versions of an `@unroll`ed method `def foo` should have the same semantics when called
with the same parameters. We must be careful to ensure:

1. All our different method overrides point at the same underlying implementation
2. Abstract methods are properly implemented, and no method would fail with an
`AbstractMethodError` when called
3. We properly forward the necessary argument and default parameter values when
calling the respective implementation.

@@ -257,7 +257,7 @@ object Unrolled{
 }
 ```

-This is a source-compatible change, but not binary-compatible: JVM bytecode compiled against an
+This is a source-compatible change, but not binary-compatible: JVM bytecode compiled against an
earlier version of the library would be expecting to call `def foo(String, Int)`, but will fail
because the signature is now `def foo(String, Int, Boolean)` or `def foo(String, Int, Boolean, Long)`.
On the JVM this will result in a `NoSuchMethodError` at runtime, a common experience for anyone

@@ -265,7 +265,7 @@ who upgrading the versions of their dependencies. Similar concerns are present w
Scala-Native, albeit the failure happens at link-time rather than run-time.

`@unroll` is an annotation that can be applied as follows, to the first "additional" default
-parameter that was added in each published version of the library (in this case,
+parameter that was added in each published version of the library (in this case,
`b: Boolean = true` and `l: Long = 0`)

@@ -292,11 +292,11 @@ object Unrolled{
 ```

As a result, old callers who expect `def foo(String, Int, Boolean)` or `def foo(String, Int, Boolean, Long)`
-can continue to work, even as new parameters are added to `def foo`. The only restriction is that
+can continue to work, even as new parameters are added to `def foo`. The only restriction is that
new parameters can only be added on the right, and they must be provided with a default value.

If multiple default parameters are added at once (e.g. `b` and `l` below) you can also
-choose to only `@unroll` the first default parameter of each batch, to avoid generating
+choose to only `@unroll` the first default parameter of each batch, to avoid generating
unnecessary forwarders:

```scala
object Unrolled{
  def foo(s: String, n: Int = 1, @unroll b: Boolean = true, l: Long = 0) = s + n + b + l
}
```

@@ -312,8 +312,8 @@ parameter list can be unrolled (though it does not need to be the first one). e.
```scala object Unrolled{ - def foo(s: String, - n: Int = 1, + def foo(s: String, + n: Int = 1, @unroll b: Boolean = true, @unroll l: Long = 0) (implicit blah: Blah) = s + n + b + l @@ -325,8 +325,8 @@ As does this ```scala object Unrolled{ def foo(blah: Blah) - (s: String, - n: Int = 1, + (s: String, + n: Int = 1, @unroll b: Boolean = true, @unroll l: Long = 0) = s + n + b + l } @@ -336,7 +336,7 @@ object Unrolled{ ### Unrolling `class`es -Class constructors and secondary constructors are treated by `@unroll` just like any +Class constructors and secondary constructors are treated by `@unroll` just like any other method: ```scala @@ -405,10 +405,10 @@ Unrolls to: case class Unrolled(s: String, n: Int = 1, @unroll b: Boolean = true, @unroll l: Long = 0L){ def this(s: String, n: Int) = this(s, n, true, 0L) def this(s: String, n: Int, b: Boolean) = this(s, n, b, 0L) - + def copy(s: String, n: Int) = copy(s, n, this.b, this.l) def copy(s: String, n: Int, b: Boolean) = copy(s, n, b, this.l) - + def foo = s + n + b } object Unrolled{ @@ -419,26 +419,26 @@ object Unrolled{ Notes: -1. `@unroll`ed `case class`es are fully binary and backwards compatible in Scala 3, but not in Scala 2 +1. `@unroll`ed `case class`es are fully binary and backwards compatible in Scala 3, but not in Scala 2 2. `.unapply` does not need to be duplicated in Scala 3.x, as its signature `def unapply(x: Unrolled): Unrolled` does not change when new `case class` fields are added. 3. Even in Scala 2.x, where `def unapply(x: Unrolled): Option[TupleN]` is not - binary compatible, pattern matching on `case class`es is already binary compatible - to addition of new fields due to + binary compatible, pattern matching on `case class`es is already binary compatible + to addition of new fields due to [Option-less Pattern Matching](https://docs.scala-lang.org/scala3/reference/changed-features/pattern-matching.html). - Thus, only calls to `.tupled` or `.curried` on the `case class` companion `object`, or direct calls + Thus, only calls to `.tupled` or `.curried` on the `case class` companion `object`, or direct calls to `.unapply` on an unrolled `case class` in Scala 2.x (shown below) - will cause a crash if additional fields were added: + will cause a crash if additional fields were added: ```scala def foo(t: (String, Int)) = println(t) Unrolled.unapply(unrolled).map(foo) ``` -In Scala 3, `@unroll`ing a `case class` also needs to generate a `fromProduct` +In Scala 3, `@unroll`ing a `case class` also needs to generate a `fromProduct` implementation in the companion object, as shown below: ```scala @@ -480,7 +480,7 @@ This is done in two different ways: ## Limitations -### Only the one parameter list of multi-parameter list methods can be `@unroll`ed. +### Only the one parameter list of multi-parameter list methods can be `@unroll`ed. Unrolling multiple parameter lists would generate a number of forwarder methods exponential with regard to the number of parameter lists unrolled, @@ -499,11 +499,11 @@ compatibility on the JVM works. ### `@unroll`ed case classes are only fully binary compatible in Scala 3 -They are _almost_ binary compatible in Scala 2. Direct calls to `unapply` are binary -incompatible, but most common pattern matching of `case class`es goes through a different +They are _almost_ binary compatible in Scala 2. Direct calls to `unapply` are binary +incompatible, but most common pattern matching of `case class`es goes through a different code path that _is_ binary compatible. 
There are also the `AbstractFunctionN` traits, from which the companion object inherits `.curried` and `.tupled` members. Luckily, `unapply` -was made binary compatible in Scala 3, and `AbstractFunctionN`, `.curried`, and `.tupled` +was made binary compatible in Scala 3, and `AbstractFunctionN`, `.curried`, and `.tupled` were removed ### While `@unroll`ed `case class`es are *not* fully _source_ compatible @@ -539,15 +539,15 @@ default parameters over time. In such extreme scenarios, some kind of builder pa `object` methods and constructors are naturally final, but `class` or `trait` methods that are `@unroll`ed need to be explicitly marked `final`. -It has proved difficult to implement the semantics of `@unroll` in the presence of downstream -overrides, `super`, etc. where the downstream overrides can be compiled against by different +It has proved difficult to implement the semantics of `@unroll` in the presence of downstream +overrides, `super`, etc. where the downstream overrides can be compiled against by different versions of the upstream code. If we can come up with some implementation that works, we can lift this restriction later, but for now I have not managed to do so and so this restriction stays. ### Challenges of Non-Final Methods and Overriding -To elaborate a bit on the issues with non-final methods and overriding, consider the following +To elaborate a bit on the issues with non-final methods and overriding, consider the following case with four classes, `Upstream`, `Downstream`, `Main1` and `Main2`, each of which is compiled against different versions of each other (hence the varying number of parameters for `foo`): @@ -581,14 +581,14 @@ object Main2 { // compiled against Upstream V1 The challenge here is: how do we make sure that `Main1` and `Main2`, who call -`new Downstream().foo`, correctly pick up the version of `def foo` that is -provided by `Downstream`? +`new Downstream().foo`, correctly pick up the version of `def foo` that is +provided by `Downstream`? With the current implementation, the `override def foo` inside `Downstream` would only override one of `Upstream`'s synthetic forwarders, but would not override the actual primary implementation. As a result, we would see `Main1` calling the implementation -of `foo` from `Upstream`, while `Main2` calls the implementation of `foo` from -`Downstream`. So even though both `Main1` and `Main2` have the same +of `foo` from `Upstream`, while `Main2` calls the implementation of `foo` from +`Downstream`. So even though both `Main1` and `Main2` have the same `Upstream` and `Downstream` code on the classpath, they end up calling different implementations based on what they were compiled against. @@ -605,7 +605,7 @@ happen according to what version combinations are supported by our definition of concern due to the requirement that [All Overrides Are Equivalent](#all-overrides-are-equivalent). It may be possible to loosen this restriction to also allow abstract methods that -are implemented only once by a final method. See the section about +are implemented only once by a final method. See the section about [Abstract Methods](#abstract-methods) for details. ## Major Alternatives @@ -626,11 +626,11 @@ takes: The first major difference between `@unroll` and the above alternatives is that these alternatives all introduce something new: some kind of _not-a-case-class_ `class` that is to be used -when binary compatibility is desired. This _not-a-case-class_ has different syntax from +when binary compatibility is desired. 
This _not-a-case-class_ has different syntax from
`case class`es, different semantics, different methods, and so on. In contrast,
`@unroll` does not introduce any new language-level or library-level constructs.
-The `@unroll` annotation is purely a compiler-backend concern for maintaining binary
+The `@unroll` annotation is purely a compiler-backend concern for maintaining binary
compatibility. At a language level, `@unroll` allows you to keep using normal method `def`s,
`class`es and `case class`es with exactly the same syntax and semantics you have been using
all along.

@@ -644,10 +644,10 @@ designing their data types, is inferior to simply using `case class`es all the t

The alternatives linked above all build a Java-esque
"[inner platform](https://en.wikipedia.org/wiki/Inner-platform_effect)"
-on top of the Scala language, with its own conventions like `.withFoo` methods.
+on top of the Scala language, with its own conventions like `.withFoo` methods.

In contrast, `@unroll` makes use of the existing Scala language's default parameters
-to achieve the same effect.
+to achieve the same effect.

If we think Scala is nicer to write than Java due to its language features, then
`@unroll`'s approach of leveraging those language features is nicer

@@ -662,7 +662,7 @@ things that do not affect typechecking, and `@unroll` fits the bill perfectly.

### Evolving Any Class vs. Evolving Pre-determined Classes

The alternatives given require that the developer decide _up front_ whether their
-data type needs to be evolved while maintaining binary compatibility.
+data type needs to be evolved while maintaining binary compatibility.

In contrast, `@unroll` allows you to evolve any existing `class` or `case class`.

@@ -680,7 +680,7 @@ Binary compatility is not just a problem for `case class`es adding new fields: n
`class` constructors, instance method `def`s, static method `def`s, etc. have default
parameters added all the time as well.

-In contrast, `@unroll` allows the evolution of `def`s and normal `class`es, in addition
+In contrast, `@unroll` allows the evolution of `def`s and normal `class`es, in addition
to `case class`es, all using the same approach:

1. `@unroll`ing `case class`es is about _schema evolution_

@@ -690,7 +690,7 @@ All three cases above have analogous best practices in the broader software
engineering world: whether you are adding an optional column to a database table, adding
an optional flag to a command-line tool, or extending an existing protocol with optional
-fields that may need handling by both clients and servers implementing that protocol.
+fields that may need handling by both clients and servers implementing that protocol.

`@unroll` solves all three problems at once - schema evolution, API evolution, and protocol
evolution. It does so with the same Scala-level syntax and semantics, with the same requirements

@@ -699,7 +699,7 @@ software engineering community.

### Abstract Methods

Apart from `final` methods, `@unroll` also supports purely abstract methods.
Consider the following example with a trait `Unrolled` and an implementation `UnrolledObj`: ```scala @@ -729,7 +729,7 @@ object UnrolledObj extends Unrolled{ // version 3 } ``` -Note that both the abstract methods from `trait Unrolled` and the concrete methods +Note that both the abstract methods from `trait Unrolled` and the concrete methods from `object UnrolledObj` generate forwarders when `@unroll`ed, but the forwarders are generated _in opposite directions_! Unrolled concrete methods forward from longer parameter lists to shorter parameter lists, while unrolled abstract methods forward @@ -753,7 +753,7 @@ UnrolledObj.foo(String, Int, Boolean) UnrolledObj.foo(String, Int, Boolean, Long) ``` -Because such downstream code cannot know which version of `Unrolled` that `UnrolledObj` +Because such downstream code cannot know which version of `Unrolled` that `UnrolledObj` was compiled against, we need to ensure all such calls find their way to the correct implementation of `def foo`, which may be at any of the above signatures. This "double forwarding" strategy ensures that regardless of _which_ version of `.foo` gets called, @@ -761,20 +761,20 @@ it ends up eventually forwarding to the actual implementation of `foo`, with the correct combination of passed arguments and default arguments ```scala -UnrolledObj.foo(String, Int) // forwards to UnrolledObj.foo(String, Int, Boolean) -UnrolledObj.foo(String, Int, Boolean) // actual implementation +UnrolledObj.foo(String, Int) // forwards to UnrolledObj.foo(String, Int, Boolean) +UnrolledObj.foo(String, Int, Boolean) // actual implementation UnrolledObj.foo(String, Int, Boolean, Long) // forwards to UnrolledObj.foo(String, Int, Boolean) ``` -As is the case for `@unroll`ed methods on `trait`s and `class`es, `@unroll`ed +As is the case for `@unroll`ed methods on `trait`s and `class`es, `@unroll`ed implementations of an abtract method must be final. #### Are Reverse Forwarders Really Necessary? -This "double forwarding" strategy is not strictly necessary to support +This "double forwarding" strategy is not strictly necessary to support [Backwards Compatibility](#backwards-compatibility): the "reverse" forwarders generated for abstract methods are only necessary when a downstream callsite -of `UnrolledObj.foo` is compiled against a newer version of the original +of `UnrolledObj.foo` is compiled against a newer version of the original `trait Unrolled` than the `object UnrolledObj` was, as shown below: ```scala @@ -802,14 +802,14 @@ If we did not have the reverse forwarder from `foo(String, Int, Boolean, Long)` It also will get caught by MiMa as a `ReversedMissingMethodProblem`. This configuration of version is not allowed given our definition of backwards compatibility: -that definition assumes that `Unrolled` must be of a greater or equal version than `UnrolledObj`, +that definition assumes that `Unrolled` must be of a greater or equal version than `UnrolledObj`, which itself must be of a greater or equal version than the final call to `UnrolledObj.foo`. However, -the reverse forwarders are needed to fulfill our requirement +the reverse forwarders are needed to fulfill our requirement [All Overrides Are Equivalent](#all-overrides-are-equivalent): looking at `trait Unrolled // version 3` and `object UnrolledObj // version 2` in isolation, we find that without the reverse forwarders the signature `foo(String, Int, Boolean, Long)` is defined but not implemented. 
Such an un-implemented abstract method is something
-we want to avoid, even if our artifact version constraints mean it should technically
+we want to avoid, even if our artifact version constraints mean it should technically
never get called.

## Minor Alternatives:

### `@unrollAll`

Currently, `@unroll` generates a forwarder only for the annotated default parameter;
if you want to generate multiple forwarders, you need to `@unroll` each one. In the
-vast majority of scenarios, we want to unroll every default parameter we add, and in
+vast majority of scenarios, we want to unroll every default parameter we add, and in
many cases default parameters are added one at a time. In this case, an `@unrollAll`
annotation may be useful, a shorthand for applying `@unroll` to the annotated default
parameter and every parameter to the right of it:

```scala
def foo(s: Object, n: Int = 1, b: Boolean = true) = s.toString + n + b + l
def foo(s: String, n: Int = 1, b: Boolean = true) = foo(s, n, b)
```

-This would follow the precedent of how Java's and Scala's covariant method return
-type overrides are implemented: when a class overrides a method with a new
-implementation with a narrower return type, a forwarder method is generated to
+This would follow the precedent of how Java's and Scala's covariant method return
+type overrides are implemented: when a class overrides a method with a new
+implementation with a narrower return type, a forwarder method is generated to
allow anyone calling the original signature to be forwarded to the narrower signature.

This is not currently implemented in `@unroll`, but would be a straightforward addition.

The first option results in shorter stack traces, while the second option results in
roughly half as much generated bytecode in the method bodies (though it's still `O(n^2)`).

In order to allow `@unroll`ing of [Abstract Methods](#abstract-methods), we had to go with
-the second option. This is because when an abstract method is overridden, it is not necessarily
+the second option. This is because when an abstract method is overridden, it is not necessarily
true that the longest override contains the implementation. Thus we need to forward
between the different `def foo` overrides one at a time until the override containing
the implementation is found.
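For concreteness, here is a sketch of that chained forwarding strategy on the running `foo` example (illustrative code, not the compiler's literal output):

```scala
object Unrolled{
  // The real implementation lives at the longest signature in this sketch.
  def foo(s: String, n: Int, b: Boolean, l: Long): String = s + n + b + l

  // Each generated forwarder calls the next-longer overload, filling in one
  // batch of defaults, until the override containing the implementation is hit.
  def foo(s: String, n: Int, b: Boolean): String = foo(s, n, b, 0L)
  def foo(s: String, n: Int): String = foo(s, n, true)
}
```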