
Google feedback on TS 4.1-beta #41369

Closed · brad4d opened this issue Nov 2, 2020 · 12 comments
Labels: Discussion (Issues which may not have code impact)

Comments


brad4d commented Nov 2, 2020

This GitHub issue contains feedback on the TS 4.1-beta release from the team
that is responsible for keeping Google's internal software working with the
latest version of TypeScript.

Executive summary

  • We do not expect to have significant difficulty in upgrading Google to TS
    4.1.

  • We found one change to the emitted JS that broke our tsickle tool's output
    in some cases. We will fix tsickle to match the new behavior.

  • Some changes to our TypeScript code are required to make it compile with TS
    4.1.

    • Most of the new errors reported by TS 4.1-beta are clearly related to
      announced changes.

    • The remaining new TS error messages are few and appear to report
      legitimate source code errors that were previously missed by TS 4.0.

    • Detail sections below explain the changes to our code we expect to make
      to unblock the upgrade.

Impact summary

| Change description | Announced | Libraries affected |
| --- | --- | --- |
| CommonJS exports initialization emit change | no | 0.71% |
| resolve() parameter required | yes | 0.11% |
| miscellaneous TS errors | no | 0.02% |
| abstract async disallowed | yes | 0.01% |
| any/unknown propagated in Falsy positions | yes | 0.01% |

The Announced column indicates whether we were able to connect the observed
change with a section in the
TS4.1-beta announcement.

The following sections give more detailed explanations of the changes listed
above.

CommonJS exports initialization emit change

Our tsickle tool depends on CommonJS mode output from tsc, which it further
converts into the goog.module() format used by closure-compiler.

In TS 3.9, tsc started emitting a statement that initializes exported symbols to
void 0 before the statements that give those symbols their intended values.
This appears to be for reasons having to do with the CommonJS spec. (See
#38552 (comment))

exports.p1 = exports.p2 = ... = exports.pN = void 0;

During our upgrade to TS 3.9, we updated tsickle to recognize and drop this
statement, because it violates closure-compiler's input expectations.

In TS 4.1 the emit changes so that the initialization is broken into one or more
statements, each containing no more than 50 variable assignments.

exports.p1 = exports.p2 = ... = exports.p50 = void 0;
exports.p51 = exports.p52 = ... = exports.pN = void 0;

This resulted in compile-time errors from closure-compiler, because tsickle
only recognized and removed the first statement.

We will update tsickle to recognize and drop all of these statements. We have
confirmed with a draft change that doing so fixes all cases.
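
For illustration only (this is not tsickle's actual code), here is a minimal sketch of how such statements could be recognized with the TypeScript compiler API so that all of them, not just the first, can be dropped:

```ts
import * as ts from "typescript";

// Sketch: returns true when `stmt` looks like one of the
// `exports.p1 = exports.p2 = ... = exports.pN = void 0;` initializer
// statements that tsc emits for CommonJS output.
function isExportsVoidInitializer(stmt: ts.Statement): boolean {
  if (!ts.isExpressionStatement(stmt)) return false;
  let expr: ts.Expression = stmt.expression;
  let sawExportsAssignment = false;
  // Walk the chain of `exports.x = <rest>` assignments.
  while (
    ts.isBinaryExpression(expr) &&
    expr.operatorToken.kind === ts.SyntaxKind.EqualsToken &&
    ts.isPropertyAccessExpression(expr.left) &&
    ts.isIdentifier(expr.left.expression) &&
    expr.left.expression.text === "exports"
  ) {
    sawExportsAssignment = true;
    expr = expr.right;
  }
  // The chain must terminate in `void 0`.
  return sawExportsAssignment && ts.isVoidExpression(expr);
}
```

A transform could then filter every statement matching a check like this out of the module body, rather than only the first one.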

resolve() parameter required

This change was an
announced part of TS 4.1-beta.
We believe fixing the type errors that arise from this change will improve the
quality of our code.

Our team will likely unblock the TS upgrade by inserting a cast to any for
each of these errors along with a TODO comment containing the error message
and a request to the team owning the code to investigate and make a more correct
fix when they can.
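
For illustration, a minimal sketch of the error and the interim pattern described above (the variable names and TODO text are hypothetical):

```ts
// Before TS 4.1 this compiled; with 4.1, `resolve()` must be passed a value
// unless the promise's type parameter includes `void`:
//
//   const delayedCount = new Promise<number>((resolve) => {
//     setTimeout(() => resolve(), 100);
//     //               ~~~~~~~~~ error TS2554: Expected 1 arguments, but got 0.
//   });

// The interim unblocking pattern described above:
const delayedCount = new Promise<number>((resolve) => {
  // TODO(owning-team): TS2554 "Expected 1 arguments, but got 0." Please
  // investigate whether this should resolve an actual value or use Promise<void>.
  setTimeout(() => (resolve as any)(), 100);
});
```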

miscellaneous TS errors

There are several different error messages in this category, but all appear to
be cases where TS 4.1-beta correctly identifies coding mistakes that were
previously missed.

Exactly how we resolve these error messages will be determined later, but we
will likely resolve most of them by inserting a cast to any for the error
along with a TODO comment containing the error message and a request to the
team owning the code to investigate and make a more correct fix when they can.

abstract async disallowed

This change was an
announced part of TS 4.1-beta.
This seems like a reasonable change to the language to us.

We expect to fix these errors by dropping the async and changing the current
return type from T to Promise<T>.
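
A minimal sketch of the kind of change involved (class and method names are hypothetical):

```ts
abstract class DataSource {
  // Rejected by TS 4.1: the 'abstract' and 'async' modifiers cannot be combined.
  //   abstract async fetchAll(): Promise<string[]>;

  // After the fix: drop `async` and declare the Promise return type directly.
  abstract fetchAll(): Promise<string[]>;
}

// Concrete subclasses can still implement the method as `async`.
class HttpDataSource extends DataSource {
  async fetchAll(): Promise<string[]> {
    return [];
  }
}
```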

any/unknown propagated in Falsy positions

This change was an
announced part of TS 4.1-beta.
We believe fixing the type errors that arise from this change improves the
quality of our code.

Our team will likely fix these errors by inserting !! before the non-boolean
expression that causes the error.
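
For illustration, a minimal sketch of the kind of error and fix involved (the names below are hypothetical):

```ts
declare const response: unknown;                  // hypothetical value typed `unknown`
declare function isComplete(r: unknown): boolean; // hypothetical helper

// Under TS 4.0 the type of `response && isComplete(response)` was `boolean`;
// TS 4.1 propagates `unknown` through the falsy position, so assigning the
// expression to a boolean now errors. Inserting `!!` before the left operand
// coerces it to a boolean and resolves the error:
const done: boolean = !!response && isComplete(response);
```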

DanielRosenwasser added the Discussion label (Issues which may not have code impact) on Nov 2, 2020
DanielRosenwasser (Member) commented:

Thank you for writing this up! 😀

RyanCavanaugh (Member) commented:

+1, we really appreciate this level of feedback. Glad to hear it was a mostly positive experience!

brad4d (Author) commented Nov 10, 2020

I've tested TS4.1-rc with nearly identical results to what I got with the beta. I went out to an extra decimal place because otherwise the least frequent issue would have rounded to 0.00%.

| Change description | Announced | Libraries affected |
| --- | --- | --- |
| resolve() parameter required | yes | 0.105% |
| miscellaneous TS errors | no | 0.019% |
| abstract async disallowed | yes | 0.013% |
| any/unknown propagated in Falsy positions | yes | 0.004% |

I also separately tried using the --noUncheckedIndexedAccess flag. This caused an additional 14.987% of our libraries to fail to build. At present I don't believe that catching the programming errors this option would find is worth the cost of enabling it.
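
For context, a minimal sketch of the kind of new error this flag surfaces (the snippet is hypothetical):

```ts
// When compiled with --noUncheckedIndexedAccess, array and index-signature
// element accesses gain `| undefined`, so previously-clean code starts to error:
const names: string[] = ["ada", "grace"];

// const first: string = names[0];
//       ^ error: Type 'string | undefined' is not assignable to type 'string'.

// Typical fixes are a default value or an explicit check:
const firstOrDefault: string = names[0] ?? "";
```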

brad4d (Author) commented Nov 10, 2020

The table below gives more accurate numbers acquired through a slightly different method.
(The impact reported for --noUncheckedIndexedAccess above (14.987%) was calculated with this method, so that number is unchanged.)

| Change description | Announced | Libraries affected |
| --- | --- | --- |
| resolve() parameter required | yes | 0.290% |
| miscellaneous TS errors | no | 0.051% |
| abstract async disallowed | yes | 0.027% |
| any/unknown propagated in Falsy positions | yes | 0.008% |

brieb commented Nov 10, 2020

Thanks for these write-ups! They have been a helpful reference for us, since we also upgrade a large monorepo at Airbnb.

Out of curiosity, in addition to looking at the new errors/issues introduced by the upgrade, do you also do any performance measurements? For example, we've noticed slowdowns from the 3.7 to 3.8 and 3.9 to 4.0 upgrades. Have you noticed that as well, or have things been roughly the same given your build system setup? Also, I'm curious whether you've developed an automated way of detecting and isolating performance regressions across versions at Google.

Thanks for the tips!

mprobst (Contributor) commented Nov 10, 2020 via email

brieb commented Nov 16, 2020

I meant time to compilation completion (which, as you mentioned, does have an impact on editor performance). In CI, we build 8M lines of TypeScript. We cache our TS output directory between CI builds so that we can build incrementally from it on subsequent builds. We invalidate our cache whenever node_modules changes (#38648). So, when we do a TS upgrade we do a cold build. I was looking at these cold build timings to get a sense of the raw performance of the upgraded version, and we also want to keep these full rebuilds under a 15 min budget for developers who make changes to package.json.

We've seen these cold build times trending upwards as part of the last few upgrades. In these tests, we use the same snapshot of the code, vary the TS version, and run a few trials to get an average.

We did some timings when upgrading from 3.7 to 3.9 and found:
TS3.7: ~9-12 m
TS3.8: ~16-19 m
TS3.9: ~15-18 m

Since then, we've added/migrated more TS code, parallelized our build, and sharded it across five machines. The maximum time of one of the shards represents how long a developer needs to wait until the overall build is finished. On average, we found the longest shard took 1m 14s longer:
TS3.9: ~17m 11s
TS4.0: ~18m 25s

Here is a replica of our repo setup: https://github.com/brieb/ts-monorepo

Q: Have you noticed a change in the full rebuild times across upgrades?

Q: Out of curiosity, how long do your 50th and 90th percentile builds take? How many lines of TS are there?

Q: Do you use a local build cache on developer machines so that you can make use of the cache locally as well?

Side note: We are in the process of switching to Bazel for our TypeScript builds, which should hopefully make things much faster. And perhaps there's a way we could encode our package dependencies explicitly so that we don't have to blow the entire cache away when something in package.json / node_modules changes. The better cache utilization should help us easily clear our 15m budget.

> We have some command line tooling that spins up a compiler and type checks & auto-completes some code through the language service API. We use that to reproduce performance changes, but it's not particularly automated

Q: Have you thought about doing CI measurements on this at all? Or are the compilation times a reasonable enough proxy, as editor performance is faster when compilation is faster?

> the debugging process roughly follows the movie script of The Men Who Stare at Goats

😂

mprobst (Contributor) commented Nov 16, 2020

I think our setup is unfortunately not really comparable. I hope what I write below might still be useful, or at least interesting, to you.

We're using Bazel (or its Google-internal cousin). All builds are incremental for us; we don't have full rebuilds. We compile compilation units of (only) hundreds of lines at a time, with all their dependencies coming in as (cached) .d.ts files. As I understand it, this is a bit like TS's support for multi-project builds, but on a different, custom toolchain. So we also don't really have something like a total line count. Measuring is also a bit tricky because some compilations run on a compiler cluster and some run on workstations (for incremental/user-interactive builds).

For cluster-run builds of individual targets whose compilation ended up contributing to the critical path of a build, P50 is <1 second and P99 is ~15 seconds, and we haven't seen them move substantially across recent upgrades. But again, those numbers are in no way comparable to what the typical open-source toolchain would see.

Re local build cache, developer workstations pull compiled files from the compilation cluster (with precise hashing/cache invalidation, so they work across builds). It's essentially the same as caching the .d.ts files locally.

Re CI measurements: we don't distinguish between CI and regular builds, so we just measure builds. The only minor difference is that we track incremental interactive builds separately, as those tend to run on local machines instead of the cluster (depending on the size of the build, capacity, etc.).

brieb commented Nov 17, 2020

@DanielRosenwasser Do you have any thoughts on what might have caused the TS3.9 -> TS4.0 slowdown we observed here: #41369 (comment)? Any advice on changes we can make to our setup to avoid the perf regressions introduced in 4.0, so we can proceed with the upgrade? We haven't merged the upgrade yet, because we wanted to avoid slowing down our CI jobs.

Also, do you have any advice for catching this more proactively, in a way that would be beneficial to the TS team? Perhaps we could set up an automated build process that pulls down the latest TS snapshots to pinpoint when slowdowns happen, or come up with a representative example monorepo to add to your profiling suite?

EDIT: Also, I realized that my comments are beyond the original purpose of this thread, so I'll file a separate issue for this. Filed here: #41570

DanielRosenwasser (Member) commented:

No worries! I'll continue the convo on the other thread though.

DanielRosenwasser (Member) commented Nov 17, 2020

> Also, do you have any advice for catching this more proactively, in a way that would be beneficial to the TS team? Perhaps we could set up an automated build process that pulls down the latest TS snapshots to pinpoint when slowdowns happen, or come up with a representative example monorepo to add to your profiling suite?

I think this would be extremely valuable as long as it's not a burden on you. If we could pinpoint regressions with that much granularity, it would probably go a long way for everyone. Let us know if there's anything we can do to help there!

brad4d (Author) commented Feb 23, 2021

FTR, Google's upgrade to TS 4.1 is complete.
We didn't discover any noteworthy breakages beyond those already described here.

brad4d closed this as completed on Feb 23, 2021