
.NET 7 application hangs during GC #80073

Closed
KenKozman-GE opened this issue Dec 30, 2022 · 95 comments

@KenKozman-GE

Description

We host our application in Azure App Service in the host dependent mode.

Each instance of our application can end up getting stuck in a busy loop where it eats 100% of the CPU and no other HTTP calls are able to make it into the app. This only started happening when we upgraded to .NET 7.

It looks like this is due to one thread being stuck performing a GC, so all the other managed threads are paused.

Our application has health checks that run against it. One of those checks runs roughly once per second; it queries a pgsql database and pulls down records (tens to thousands). This is where we have seen the hang occur, although the same code can be called by other parts of the system; the health check is likely just the usual trigger because it runs so much more frequently.

Here are the unmanaged portions of the GC allocation call stacks from two separate dumps taken during the high-CPU periods (this is me doing dump debugging in Visual Studio 2022):

[screenshots: unmanaged GC allocation call stacks from the two dumps]

I believe it is running the Workstation GC based on the WKS:: namespace (and because the other threads are frozen waiting on the GC, and because the GC is being performed directly on the user thread). I believe this is happening because the App Service plan instance shows up as having 1 processor. We specify Server GC in our configuration, but I believe the runtime falls back to the workstation GC when only one processor is detected.
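
(For reference, a Server GC request normally looks like one of the following; these are the standard documented .NET settings rather than a copy of our exact config, and, as noted above, on a single-processor instance the runtime appears to fall back to workstation GC regardless.)

In the project file:

    <PropertyGroup>
      <ServerGarbageCollection>true</ServerGarbageCollection>
    </PropertyGroup>

or in runtimeconfig.json:

    "configProperties": {
      "System.GC.Server": true
    }

or as an environment variable: DOTNET_gcServer=1 (the older COMPlus_gcServer=1 also works).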

In one case I generated three dumps 5 minutes apart. In all three cases the managed code had not moved and was in the exact same place. The second unmanaged stack above is the top of the stack in those cases (although the topmost function changes in each, which is why I thought allocate_in_condemned_generations was the culprit). Here is the managed portion of the stack from those dumps:

[screenshot: managed portion of the call stack]

Here are some outputs from the CrashHangAnalysis MHT which AppService also kindly created (this is from one of the three dumps mentioned):

[screenshot: CrashHangAnalysis thread summary]

As you can see in the thread times, almost all of the CPU time was consumed by Thread 48. In each successive 5-minute dump it has accumulated 5 more minutes of CPU time:

[screenshot: per-thread CPU times across the successive dumps]

Here is the full call stack for Thread 48 reported by the CrashHangAnalysis:

[screenshot: full call stack for Thread 48]

There are also multiple CLR runtimes loaded in the process. I believe that App Service itself loads .NET Framework 4.8; we are using .NET 7.0.0:

[screenshot: loaded CLR runtimes]

I attempted to find this issue reported already but my searching turned up nothing :( Apologies if I missed it.

Reproduction Steps

At this point I don't have a way to reproduce it reliably. One of our systems seems to be falling into this particular hole once a day or so. The SQL query on that system is a bit larger than on other systems, but not massively so.

Expected behavior

The GC should complete its work and return control to our application.

Actual behavior

It appears the GC gets stuck in WKS::gc_heap::allocate_in_condemned_generations. I see a retry label in there, perhaps it is stuck in an infinite loop?

Regression?

I believe so. We never saw this on .NET 6 (several sub-versions) and .NET 5. We ran similar code with earlier versions of .NET Core and never saw any issues either.

Known Workarounds

Perhaps downgrading to .NET 6?
It is uncertain whether this would happen with .NET 7.0.1; I have not gotten to test that yet.
It is unclear if this is a workstation GC issue only; my guess is yes.

Configuration

.NET 7.0.0 (in the dumps this shows up as CLR version v7.0.22).
Azure App Service Platform version: 99.0.7.620
Windows Server 2016 - 14393
Number of Processors - 1
Architecture: x64

Other information

It seems like some sort of infinite loop issue in the gc_heap::allocate_in_condemned_generations function. But that is me using my "Jump to Conclusions" mat.

@ghost ghost added the "untriaged" label (New issue has not been triaged by the area owner) Dec 30, 2022
@ghost

ghost commented Dec 30, 2022

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.

Author: KenKozman-GE
Assignees: -
Labels: area-GC-coreclr

Milestone: -

@cshung
Member

cshung commented Dec 30, 2022

@KenKozman-GE Can you share a dump or something useful so that we can investigate?

@KenKozman-GE
Author

Yeah sorry...

Here are the crash hang analyses:
CrashHangAnalyses.zip

Let me see if I can push the dumps here or not (they are full memory dumps, so quite large).

@KenKozman-GE
Author

Ah yep, 25MB limit. Let me try putting them somewhere that I can share, one second.

@KenKozman-GE
Author

KenKozman-GE commented Dec 30, 2022

Okay here are a couple of full memory dumps (mentioned above) as well as the crash hang analyses.
Crash Hang Analysis
Dump #1
Dump #2

@mangod9 mangod9 removed the "untriaged" label (New issue has not been triaged by the area owner) Dec 30, 2022
@mangod9 mangod9 added this to the 8.0.0 milestone Dec 30, 2022
@KenKozman-GE
Author

One other workaround here would be to switch to an App Service Plan SKU which would show up as having multiple processors and thus use the Server GC (which would hopefully not exhibit the same issue).

Right now we are using P1V2 for some instances which have a single vCPU.

@KenKozman-GE
Author

KenKozman-GE commented Dec 30, 2022

One further update: we used to use S2 SKUs and switched to P1V2 in the middle of 2022. So this still seems like a regression from .NET 6 to .NET 7 for us, for the Workstation GC. But we would have been using the Server GC at the start of 2022 when we were still on S2 SKUs. We were on S2 SKUs during the .NET 5 timeframe as well.

(S2 SKUs have 2 vCPUs, P1V2 SKUs have only 1 vCPU)

@cshung
Member

cshung commented Dec 30, 2022

I took a quick look at the dump and hypothesize that it may be an infinite loop because we forgot about the possibility of pinning.

Roughly speaking, the allocate_in_condemned_generations method does something along these lines:

allocate_in_condemned_generations(...size...)
    retry:
    if we hit a pin or run out of space
        if we hit a pin
            deal with it
            goto retry
        if we run out of space
            deal with it
        goto retry
    ...

The key idea is that the "deal with it" steps should resolve the situation so that we stop hitting a pin or running out of space; as such, the loop is meant to run at most three iterations (one for a pin, one for space, and the last to succeed).

There are no synchronization primitives involved in this loop, so this is pretty much a plain infinite loop rather than a deadlock.

There are no more pins in the pin queue (i.e. gc_heap::mark_stack_bos == gc_heap::mark_stack_tos; cf. gc_heap::pinned_plug_que_empty_p), therefore the problem has something to do with space.

Looking at the implementation of gc_heap::size_fit_p, I understand why we get stuck running out of space.

We have a just-fit allocation context:

0:046> ?? gen->allocation_context.alloc_ptr
unsigned char * 0x000001d2`72c00020

0:046> ?? gen->allocation_context.alloc_limit
unsigned char * 0x000001d2`73000000

0:046> ?? size
unsigned int64 0x3fffe0

Note that 0x000001d272c00020 + 0x3fffe0 == 0x000001d273000000, so the allocation context fits the allocation exactly; it was probably prepared that way by the "deal with it" code for the out-of-space case, which is why the fit is so exact.

But then old_loc != nullptr, so the code in size_fit_p insists on a bit more memory (i.e. + Align(min_object_size)), and that check will fail.

We are probably stuck in that situation, and that is why the loop isn't breaking. What baffles me is that old_loc is already considered in the "deal with it" section for running out of space by being passed as an argument to grow_heap_segment; I am not sure why we aren't processing that correctly.
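
(To make the failing check concrete, here is a small standalone C++ sketch of the arithmetic described above; fits_with_padding and its constants are made up for illustration, and this is not the actual size_fit_p from gc.cpp.)

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Illustrative values only.
    const size_t min_object_size = 24;                       // typical minimum object size on x64
    size_t Align(size_t s) { return (s + 7) & ~size_t(7); }

    // Hypothetical helper: can an allocation of `size` bytes fit in
    // [alloc_ptr, alloc_limit) when a non-null old_loc demands an extra
    // Align(min_object_size) of padding?
    bool fits_with_padding(const uint8_t* alloc_ptr, const uint8_t* alloc_limit,
                           size_t size, bool old_loc_non_null)
    {
        size_t required = size + (old_loc_non_null ? Align(min_object_size) : 0);
        return (alloc_ptr + required) <= alloc_limit;
    }

    int main()
    {
        const uint8_t* alloc_ptr   = (const uint8_t*)0x000001d272c00020;
        const uint8_t* alloc_limit = (const uint8_t*)0x000001d273000000;
        size_t size = 0x3fffe0;                              // alloc_limit - alloc_ptr exactly

        // Fits exactly with no padding, but fails once the extra min object
        // size is demanded because old_loc != nullptr, so the retry loop
        // cannot make progress.
        printf("no padding: %d, with padding: %d\n",
               (int)fits_with_padding(alloc_ptr, alloc_limit, size, false),  // prints 1
               (int)fits_with_padding(alloc_ptr, alloc_limit, size, true));  // prints 0
        return 0;
    }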

@Maoni0
Member

Maoni0 commented Jan 3, 2023

it may be an infinite loop because we forgot about the possibility of pinning.

I don't know how to interpret this. allocate_in_condemned_generations has to handle pinning; otherwise every .net app would fail.

from the debug info given above, it doesn't look like it has to do with pinning - it says we have a huge plug where every single object in that region survived, and we are asking for an additional min object size when fitting in this case. can you check to see if it's asking for an additional min_obj_size because of USE_PADDING_FRONT or USE_PADDING_TAIL? what puzzles me is that it shouldn't be due to USE_PADDING_FRONT: the plug is so huge that it should belong to gen2, which would make this a gen2 GC, and if it's a gen2 GC we should not be passing USE_PADDING_FRONT. and we should not be passing USE_PADDING_TAIL because generation_allocation_limit (gen) should be the same as heap_segment_plan_allocated (seg).

@cshung
Member

cshung commented Jan 3, 2023

can you check to see if it's asking for an additional min_obj_size because of USE_PADDING_FRONT or USE_PADDING_TAIL?

It is because of USE_PADDING_FRONT.

0:046> dv
       ...
       pad_in_front = 0n1
       ...

what puzzled me is it seems like it shouldn't be due to USE_PADDING_FRONT because the plug is so huge which tells me this plug should belong to gen2 so this should be a gen2 GC.

It is a gen1 GC

0:046> ?? wks::gc_heap::settings
class WKS::gc_mechanisms
   +0x000 gc_index         : 0x6dc
   +0x008 condemned_generation : 0n1
   ...

This is on thread 46 of RD501AC5AE7597_w3wp_5784_638079619256436212.dmp

@Maoni0
Member

Maoni0 commented Jan 3, 2023

I looked at the 2 dumps and the really odd thing with both is that the first region in gen0 has no free objects whatsoever, while all the other gen0 regions look normal, ie with free objects. this explains why we have such a huge plug in a gen1 GC, but this is not what should happen - we are supposed to have free objects as padding between alloc contexts so we don't form potentially huge plugs like this.

@KenKozman-GE do you happen to have a dump where it's running normally, ie, not getting this hang? that would be helpful.

@KenKozman-GE
Author

I can make one, let me go do that, one second.

@Maoni0
Member

Maoni0 commented Jan 3, 2023

thank you!

@KenKozman-GE
Author

Apologies, the download and re-upload process here is somewhat onerous (IT security goons making sure I am not downloading or uploading various secrets and/or malware, no doubt). It should be done in maybe an hour.

@KenKozman-GE
Author

Random possible workaround (if it is indeed a GC regression, we shall see): we could use the clrgc.dll as Maoni calls out here.

I did not test that before filing the issue here, apologies for that.

@KenKozman-GE
Author

Okay here is an example dump of "normal processing": dump

@Maoni0
Member

Maoni0 commented Jan 4, 2023

yep, this one looks normal. if I look at the first region in gen0, I see 513 free objects (a region is 4mb and each alloc context is about 8k). other gen0 regions look fine too.

so something is causing us to not allocate these free objects in that 1st gen0 region in the bad case. via a brief code review I don't see how that can happen. I'll take another look tomorrow but I wanted to ask - would it be possible to use a private version of clrgc.dll to help with debugging this if it comes to that? you could of course try using the shipped version of clrgc.dll by setting COMPlus_GCName=clrgc.dll (which would revert back to the .net 6.0 behavior), but we'd like to figure out why the 7.0 GC is failing for you. we could share a private version of clrgc.dll that includes some instrumentation to help, which you could use the same way. would that be feasible? thanks!
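
(For anyone following along, on App Service that config is just an application setting / environment variable on the app, e.g.:

    COMPlus_GCName=clrgc.dll

the DOTNET_GCName spelling should also work on .NET 6+, and as I understand it the dll is resolved next to coreclr.dll in the shared framework folder, which is where the shipped clrgc.dll lives.)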

@KenKozman-GE
Author

Oh yeah we could do that I think.

I only mention the clrgc.dll bit above to help me remember and in case others see this type of issue in the future.

Anyway, just let me know and we will try to help however we can.

@Maoni0
Member

Maoni0 commented Jan 6, 2023

just an update - I'm doing some testing on my side for the private build and also trying to see if I can repro this. will let you know when it's ready for you to pick up.

@KenKozman-GE
Author

Sounds good. We have still seen it happen once every day or two on at least one tenant, so hopefully we can get a useful dump pretty quickly.

@Maoni0
Member

Maoni0 commented Jan 6, 2023

if I make it so that it incurs an access violation when it detects this situation, would your system capture a dump? or would it be easier if I also make it hang? I usually do an AV in this situation but wanted to check if that's convenient for you.

@KenKozman-GE
Author

I think an AV will be easiest. The hosting we use (Azure App Service) will capture a dump and restart it in that case.

This seems nicer than just making it hang given less potential downtime for the one instance.

@Maoni0
Member

Maoni0 commented Jan 7, 2023

cool! I have the bits ready at https://github.com/Maoni0/clrgc/tree/main/issues/80073. there's a readme that explains how to use the files (in the "How to test" section). I also included an explanation of the changes included in the private builds, plus src/symbols in case you want to look at/use them.

@KenKozman-GE
Author

Thanks @Maoni0! I likely won't be able to install it and test it until next Tuesday at the earliest (vacation, etc.). But will try to get to it as soon as I am able.

@Maoni0
Member

Maoni0 commented Jan 7, 2023

that's totally fine, thanks!

@KenKozman-GE
Author

Okay, sorry @Maoni0, I am just getting back to this. I've never tried to install/use a clrgc.dll. I'm realizing now I might not be able to do this because we deploy as a framework-dependent DLL and rely on the runtime that Azure App Service provides.

I say this as I see no coreclr.dll anywhere I can make changes. So I think it is just in the system area. Sounds like the clrgc.dll has to be in the same dir as coreclr.dll?

Also, I've noticed that some optimization work we deployed Monday seems to have stopped the hangs, likely due to just much less GC pressure.

I can add back the inefficient code on a QA system and try to test there, but I'm also not sure exactly how to get our bits to use the bits you made.

I could try to publish a self-contained piece for this, but I have never done that.
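
(For reference, a minimal self-contained publish looks roughly like this; win-x64 is an assumption based on the x64 Windows plan described above:

    dotnet publish -c Release -r win-x64 --self-contained true

the publish output then carries its own copy of coreclr.dll, so a private clrgc.dll could presumably be dropped alongside it and enabled with COMPlus_GCName=clrgc.dll.)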

@KenKozman-GE
Author

We are all patched with this now (and I removed the COMPlus_gcConcurrent setting as well):

[screenshot: applied configuration settings]

I was wondering the same thing as @Simon-Gregory-LG as to what to expect now :)

@KenKozman-GE
Author

Okay @Maoni0 got a crash in gc_heap::verify_regions which is hopefully one of the instrumented bits that you are looking for.

I uploaded the dump to my Google drive and gave you access.

[screenshot: crash call stack in gc_heap::verify_regions]

(Side note: and I thought I had some large source files!)

@Maoni0
Member

Maoni0 commented Feb 7, 2023

oops, sorry, I should have been clear - you are not expected to see an AV if the fix indeed addresses the problem. but if the fix didn't work and it's due to the same bug, it would AV, which puts us closer to the failure so it's easier to see what happened.

I just downloaded Ken's dump - this is again in verify_regions, complaining that there are no regions in gen0 (same symptom as the 1st issue I discovered that's specific only to Workstation GC, but not the same cause). I'll look more later this afternoon.

@Maoni0
Member

Maoni0 commented Feb 8, 2023

@KenKozman-GE your app is a good stress test for the GC 🙀 I think you are hitting an issue that we fixed in 8.0 - I believe the reason it has 0 regions in gen0 is that we failed to get a new region for gen0. we are doing a sweeping gen1 GC in this case and there's plenty of space to get a new region for gen0, but due to a bug we mistakenly didn't find enough space to commit the bookkeeping data for that region, so we failed to get a new region -

0:047> dt clrgc!WKS::gc_heap::current_total_committed_bookkeeping 
0x1822000
0:047> dt clrgc!WKS::gc_heap::heap_hard_limit_for_bookkeeping
0x1824a2d
0:047> ? 0x1824a2d-0x1822000
Evaluate expression: 10797 = 00000000`00002a2d

that's not enough to commit the bookkeeping for a new region. let me make a new build with the fix for that. sorry about all the trouble!

@KenKozman-GE
Author

"Inadvertent stress test" sounds like a decent band name!

Just let me know when there is a fixed clrgc DLL and we can install it.

@Maoni0
Member

Maoni0 commented Feb 8, 2023

"Inadvertent stress test" sounds like a decent band name!

I agree :)

I've ported 2 fixes from 8.0 into the latest clrgc.dll, #77480 and #80640 (it's 2 fixes because the 1st fix got rid of one of the 2 bookkeeping fields I mentioned above, and that's relevant to the 2nd fix).
please get the latest clrgc.dll from https://github.com/Maoni0/clrgc/tree/main/issues/80073/demotion_fix/v2. the file version should be as follows (I increased the minor version number by 1):

FileVersion 7,0,323,56101 @Commit: ac25991336fe96ed6892de68b28adc4756ab94a4

thanks again for your patience!

@Simon-Gregory-LG

"Inadvertent stress test" sounds like a decent band name!

Just let me know when there is a fixed clrgc DLL and we can install it.

@KenKozman-GE I also agree. I think there should be three albums: 'gen0', 'gen1' & 'gen2' then leave a gap to do a best of + unreleased materials called 'Unmanaged Heap'.

On a side note, mine is still running fine with the first patch. It's approaching 24 hours, which is still within the window in which I've seen the issue occur, so I might give it another 24 hours before I switch to try the new dll.

@KenKozman-GE
Author

Okay, we are cooking with gas over here (we have the latest clrgc.dll installed). Will wait and see what happens:

[screenshot: updated clrgc.dll deployed]

@rbouallou

@Maoni0 Would it be possible to have a patched libclrgc.so for linux-x64?

@Maoni0
Member

Maoni0 commented Feb 10, 2023

@rbouallou, absolutely. I just put libclrgc.so along with src and symbols at https://github.com/Maoni0/clrgc/tree/main/issues/80073/demotion_fix/v2/linux. please let me know how this works out for you.

@Simon-Gregory-LG

Just an update from me that the first patch (7,0,323,56100) has been running solidly for 3 days now. No issues so far (see below).

I will now install the newer (7,0,323,56101) to test over the weekend, but it's looking very good.

@Maoni0, thanks so much for the responsiveness and extremely helpful interactions on this issue so far, it's been excellent!


Patch test summary:

Health Check Metrics

  • Red - .NET 7.0 version deployed
  • Green - clrgc.dll patch 7,0,323,56100 deployed

Note: Typically the app would become unhealthy within 36 hours

[screenshot: health check metrics]

Crash Monitoring Summary

[screenshot: crash monitoring summary]

@Maoni0
Member

Maoni0 commented Feb 10, 2023

@Simon-Gregory-LG that's great to hear! and thank you so much for verifying :)

@Simon-Gregory-LG

@Maoni0 no problem!

Lots going on today, so I actually forgot to post the confirmation; here it is, all up and running, ready to enjoy the weekend:

[screenshot: deployment confirmation]

@KenKozman-GE
Author

Wait... they give you guys weekends? I got to talk to my agent.

I just looked at our guinea pig instance over here and it is still rumbling along with no crashes or infinite loops since the latest install. But I don't think our reproduction cycle is as reliable as @Simon-Gregory-LG's. Maybe it will hit something this weekend.

@Simon-Gregory-LG

So it's still going strong with 7,0,323,56101 and no unresponsiveness or exceptions (see below), whereas our instances without the patch have gone down several times in the same period.

We restarted it once to deploy the second patch, but we're again at 3+ days of stability with the new patch, so this looks very promising - amazing job @Maoni0 :)!

I take it that this hotfix is going to take a while before it makes it into the main .NET 7.0 runtime and gets deployed to all the App Services? In the interim, should we consider this patch suitable for production deployment, or should we consider rolling back to .NET 6.0 for the time being?


Health Check Metrics

Red - .NET 7.0 version deployed
Green - clrgc.dll patch 7,0,323,56100 deployed
Dark Green - clrgc.dll patch 7,0,323,56101 deployed

[screenshot: health check metrics]

@KenKozman-GE
Author

@Simon-Gregory-LG: do you guys do self-contained deployments? If so I think the version of the framework is "baked in", so when/if this fix gets into .NET 7.0 we could just include that.

If not, we have seen that there is an App Service extension that tends to be published (updated?) around when the latest version drops (e.g. 7.0.2 came out on Jan 10, 2023). Adding that extension has let us get the latest and greatest, as the App Service-included runtime seems to lag by a month or two (for testing and validation and whatnot, I assume).

@Maoni0
Member

Maoni0 commented Feb 13, 2023

we will backport all fixes to 7.0 (the fix for the infinite loop in allocate_in_condemned_generations that both Ken and Simon hit, and the other 2 that showed up in verify_regions and only affected Ken).

meanwhile, @Simon-Gregory-LG, the general recommendation is that you shouldn't be running private builds. however, since this is a build you need a config setting to invoke, which means you can control exactly which processes use it, and I did include all the necessary debug info (src+symbols) should something happen, I think you could use this till the 7.0 servicing release comes out, as long as you are comfortable with it.

@rbouallou

rbouallou commented Feb 22, 2023

@rbouallou, absolutely. I just put libclrgc.so along with src and symbols at https://github.com/Maoni0/clrgc/tree/main/issues/80073/demotion_fix/v2/linux. please let me know how this works out for you.

We've been running with the patched libclrgc.so for just over a week now and it's still running with no issues - the process would hang every ~2 days with the default runtime.

@Maoni0 You mentioned fixes will be backported to 7.0. Do you have any idea when the release will come out?
Many thanks for the support.

@manandre
Contributor

we will backport all fixes to 7.0 (the fix causing the infinite loop in allocate_in_condemned_generations that both Ken and Simon hit, and the other 2 showed up in verify_regions that only affected Ken).

@Maoni0 The next release for version 7.0 is planned for March 14th. Any chance these fixes will be part of it?

@Maoni0
Member

Maoni0 commented Feb 28, 2023

@manandre the reason I haven't checked in yet is that I've been running more stress tests and hit multiple problems with that - I hit some 8.0 problems (not in the GC) when I tried to run stress on 8.0, so I ran it on 7.0 and did hit another issue in the vicinity, so I'd like to make a fix for that as well. should have it this week.

@Simon-Gregory-LG

Hi @Maoni0, just wanted to quickly check if this fix has made its way into .NET 7 yet and if so, which version?

I see this is referenced above in the PRs that were merged in April, but they look to be tagged with 7.0.7. So am I right in thinking that it's probably not out quite yet, or did the fixes make it into the 7.0.5 release that's been publicly available since April? (https://versionsof.net/core/7.0/)

@mangod9
Member

mangod9 commented Jun 22, 2023

This fix should be available in 7.0.7 released last week: https://devblogs.microsoft.com/dotnet/june-2023-updates/

@Simon-Gregory-LG

@mangod9 oh fantastic, hopefully that'll make its way onto my Azure App Service soon. I'll keep an eye out for that.

Thanks for the quick reply! :)

@mangod9
Member

mangod9 commented Aug 2, 2023

Closing since the fix has been made

@mangod9 mangod9 closed this as completed Aug 2, 2023
@ghost ghost locked as resolved and limited conversation to collaborators Sep 2, 2023