Initial port and addition of README.md #1

Merged
merged 1 commit into from Jan 14, 2015

Conversation

stephentoub
Member

This is a port of the readme from CodePlex. Some of the links have been updated to point to locations on GitHub, but many of the links still point back to CodePlex as comparable GitHub locations are currently absent.

@jaredpar
Member

👍 Looks great! Thanks for the help.

jaredpar added a commit that referenced this pull request Jan 14, 2015
Initial port and addition of README.md
@jaredpar jaredpar merged commit cef4a7f into dotnet:master Jan 14, 2015
@stephentoub stephentoub deleted the new_readme branch January 14, 2015 00:08
@theoy theoy added this to the Unknown milestone Jan 16, 2015
tmat added a commit that referenced this pull request Feb 11, 2015
Build against alpha/experimental mdreader-vnext package
mattscheffer added a commit that referenced this pull request Mar 10, 2015
updating personal fork to latest commit
heejaechang referenced this pull request in heejaechang/roslyn May 5, 2015
…anges to delay solution crawler

There are two things the incremental processor takes care of:

#1 is making sure we delay processing any work until there has been enough idle time (e.g., no typing) in the host.
#2 is managing cancellation and pending work.

We used to do #1 and #2 only for Roslyn files, and that is usually fine since most of the time a solution contains only Roslyn files.

But for a mixed solution (e.g., Roslyn files + HTML + JS + CSS), #2 still makes sense but #1 doesn't: we want to pause any work while something is going on in other project types as well. We need to make sure we play nice with our neighbors too.

Now we don't care where changes are coming from; if there is any change in the host, we pause ourselves for a while.
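
To illustrate the "pause on any host change" idea (this is only a minimal sketch, not the actual solution-crawler implementation; the type and member names here are invented for the example), the queue below backs off whenever the host reports any activity, regardless of which project type the change came from:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical illustration only: not a Roslyn type, and not thread-safe beyond the sketch.
sealed class IdleGatedWorkQueue
{
    private readonly TimeSpan _idleDelay;
    private DateTime _lastActivityUtc = DateTime.UtcNow;

    public IdleGatedWorkQueue(TimeSpan idleDelay) => _idleDelay = idleDelay;

    // Called for *any* change in the host (Roslyn or non-Roslyn projects alike),
    // so background work backs off while the user is doing something.
    public void NotifyHostActivity() => _lastActivityUtc = DateTime.UtcNow;

    public async Task ProcessWhenIdleAsync(Func<CancellationToken, Task> work, CancellationToken cancellationToken)
    {
        // Wait until the host has been quiet for the configured delay.
        while (true)
        {
            var sinceActivity = DateTime.UtcNow - _lastActivityUtc;
            if (sinceActivity >= _idleDelay)
                break;

            await Task.Delay(_idleDelay - sinceActivity, cancellationToken).ConfigureAwait(false);
        }

        await work(cancellationToken).ConfigureAwait(false);
    }
}
```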
heejaechang referenced this pull request in heejaechang/roslyn Jun 2, 2015
This performance improvement is particularly for DevDiv bug #1089540.

It makes the file in the bug format in several seconds instead of several minutes on my machine.

There were several issues, each fixed by:

#1, using concurrency when gathering operations.
#2, not spending more time splitting work into chunks than the actual formatting would take.
#3, not blindly treating the beginning of the file as the inseparable start point for certain formatting options.

...

But these don't actually address the most impactful root cause of this perf issue, which is the performance of the GetPreviousToken/GetNextToken APIs in the compiler (dotnet#3244).

The formatter internally uses GetDescendantTokens to get all tokens at once and cache them, which takes less than one second for the entire file (2 MB) in the bug, and then works from that cache internally.

But a certain part of the formatter (the rule providers) can't use that internal cache, so it has to use GetPreviousToken/GetNextToken to move around tokens, which in this particular bug takes more than 40 seconds on my machine, and that is not even for the entire file (less than 1/12 of the tokens).

I opened a bug with the compiler team, hopefully so that we can get better performance out of those APIs.

In this PR, I mitigated the issue either by making more things run concurrently or by changing logic that requires those APIs.
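
As a rough illustration of the caching approach mentioned above, the sketch below enumerates all tokens once with DescendantTokens() and answers previous/next-token queries from that cache instead of calling GetPreviousToken()/GetNextToken() repeatedly. TokenCache is a hypothetical helper written for this example, not part of the formatter:

```csharp
using System.Collections.Generic;
using Microsoft.CodeAnalysis;

// Hypothetical helper: builds the token list once, then neighbor lookups are O(1).
sealed class TokenCache
{
    private readonly List<SyntaxToken> _tokens;
    private readonly Dictionary<SyntaxToken, int> _indexOf;

    public TokenCache(SyntaxNode root)
    {
        _tokens = new List<SyntaxToken>(root.DescendantTokens());
        _indexOf = new Dictionary<SyntaxToken, int>();
        for (int i = 0; i < _tokens.Count; i++)
        {
            _indexOf[_tokens[i]] = i;
        }
    }

    // Returns default(SyntaxToken) when there is no neighbor, similar to the compiler APIs.
    public SyntaxToken GetNext(SyntaxToken token)
        => _indexOf.TryGetValue(token, out var i) && i + 1 < _tokens.Count ? _tokens[i + 1] : default;

    public SyntaxToken GetPrevious(SyntaxToken token)
        => _indexOf.TryGetValue(token, out var i) && i > 0 ? _tokens[i - 1] : default;
}
```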
heejaechang added a commit that referenced this pull request Jun 4, 2015
make formatting performance better

heejaechang referenced this pull request in heejaechang/roslyn Jan 22, 2016
The experience actually seems quite a bit better than before.

Anyway, the two main changes are:

1. The active file analysis delay was shortened from 800ms to 400ms.
2. Notification to the editor for removed tags now happens after 50ms, but for added tags it is now 1.5 seconds.

...

More detailed explanations below.

For #1: the delay change applies only to one file (the file that has focus); the analysis delay for all other files is the same as before (1.5 seconds), so I believe the perf impact from this should be fairly small. It is still better than RTM, which was 200ms. Also, after RTM we already made LB behavior unaffected by this delay, so this change shouldn't affect LB behavior.

For #2: the tagger actually has many small delays in it.
The two main ones are

1) the delay to produce tags
2) the delay to notify the editor about changed tags.

1) exists to reduce repeated work when generating tags. For the diagnostic tagger this actually isn't needed since, unlike any other tagger, the diagnostic tagger uses an external service (the diagnostic service) which already does all of this (basically the #1 delay logically does what 1) is trying to do).

Now the 1) delay is set to NearImmediate (50ms).

2) exists to keep us from hitting the editor with too many notifications; basically this delay makes sure we only ping the editor once in a while (it used to be 50ms) and aggregates events in between.

Now 2) is split into two different delays: one for adding new tags and the other for removing old tags.

Adding new tags is now set to 1.5 seconds and removing old tags is set to 50ms.
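
A minimal sketch of that split, assuming a simplified model of the tagger: removals are flushed to the editor on the short delay while additions are batched on the longer one. TagChangeNotifier, its delays, and the string-based "tags" are invented for this illustration and are not the actual Roslyn tagger infrastructure:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical illustration of split add/remove notification delays.
sealed class TagChangeNotifier
{
    private static readonly TimeSpan RemovedTagDelay = TimeSpan.FromMilliseconds(50);
    private static readonly TimeSpan AddedTagDelay = TimeSpan.FromSeconds(1.5);

    private readonly List<string> _pendingAdds = new List<string>();
    private readonly List<string> _pendingRemoves = new List<string>();
    private readonly Action<IReadOnlyList<string>, IReadOnlyList<string>> _notifyEditor;

    public TagChangeNotifier(Action<IReadOnlyList<string>, IReadOnlyList<string>> notifyEditor)
        => _notifyEditor = notifyEditor;

    public void OnTagsChanged(IEnumerable<string> added, IEnumerable<string> removed)
    {
        lock (_pendingAdds)
        {
            _pendingAdds.AddRange(added);
            _pendingRemoves.AddRange(removed);
        }

        // Stale tags disappear quickly; new tags are aggregated for longer
        // so the editor is pinged less often about additions.
        _ = FlushAfterAsync(RemovedTagDelay, flushAdds: false);
        _ = FlushAfterAsync(AddedTagDelay, flushAdds: true);
    }

    private async Task FlushAfterAsync(TimeSpan delay, bool flushAdds)
    {
        await Task.Delay(delay).ConfigureAwait(false);

        var adds = new List<string>();
        var removes = new List<string>();
        lock (_pendingAdds)
        {
            if (flushAdds) { adds.AddRange(_pendingAdds); _pendingAdds.Clear(); }
            else { removes.AddRange(_pendingRemoves); _pendingRemoves.Clear(); }
        }

        if (adds.Count > 0 || removes.Count > 0)
            _notifyEditor(adds, removes);
    }
}
```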
heejaechang added a commit that referenced this pull request Apr 14, 2016
OOP #1 - new analyzer engine that uses CompilerAnalyzer model
agocke added a commit that referenced this pull request Apr 22, 2016
Revert "Add support for calling local functions with dynamic args (#1
dibarbet pushed a commit that referenced this pull request Jul 11, 2022
Cosifne pushed a commit that referenced this pull request Mar 30, 2024
davidwengier pushed a commit that referenced this pull request Oct 18, 2024
…tionAPI

Access services on Solution rather than Workspace
checkedsums added a commit to checkedsums/roslynjourneys that referenced this pull request Jan 4, 2025
Temporary Progress Towards namespace features.