
Memory leak when disposing actor system with non default ActorRefProvider #2640

Closed
Ralf1108 opened this issue May 3, 2017 · 16 comments

@Ralf1108
Contributor

Ralf1108 commented May 3, 2017

using Akka 1.2.0

If a local actor system is created and disposed repeatedly, everything is fine.
If the same is done with a cluster actor system, there appears to be a memory leak after disposal.

See the following tests:

  • IfLocalActorSystemIsCreatedAndDisposedManyTimes_ThenThereShouldBeNoMemoryLeak
    Output:
    Got ActorIdentity: 42
    After first run - MemoryUsage: 1mb
    Iteration: 2 - MemoryUsage: 1mb
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 4 - MemoryUsage: 1mb
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 6 - MemoryUsage: 1mb
    ...
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 98 - MemoryUsage: 1mb
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 100 - MemoryUsage: 1mb
    Got ActorIdentity: 42

  • IfClusterActorSystemIsCreatedAndDisposedManyTimes_ThenThereShouldBeNoMemoryLeak
    Output:
    Got ActorIdentity: 42
    After first run - MemoryUsage: 35mb
    Iteration: 2 - MemoryUsage: 35mb
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 4 - MemoryUsage: 102mb
    Got ActorIdentity: 42
    Got ActorIdentity: 42
    Iteration: 6 - MemoryUsage: 169mb
    System.InvalidOperationException : There seems to be a memory leak!

using System;
using Akka.Actor;
using Akka.Cluster.Tools.Client;
using Akka.Configuration;
using NUnit.Framework;

namespace StressTests
{
    [TestFixture]
    public class AkkaTests
    {
        private const string ClusterServerConfig = @"
akka {   
    actor {
        provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
    }
    remote {
        helios.tcp {
            hostname = ""127.0.0.1""
            port = 3000
        }
    }
    cluster {
        seed-nodes = [""akka.tcp://[email protected]:3000""]
    }  
}
";

        private const string ClusterClientConfig = @"
akka {  
    actor {
        provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""   
    }
    remote {
        helios.tcp {
            hostname = ""127.0.0.1""
            port = 3001
        }   
    }
    cluster {
        client {
            initial-contacts = [""akka.tcp://[email protected]:3000/system/receptionist""]
        }    
    }
}
";

        [Test]
        public void IfLocalActorSystemIsCreatedAndDisposedManyTimes_ThenThereShouldBeNoMemoryLeak()
        {
            TestForMemoryLeak(RunLocalSystem);
        }

        [Test]
        public void IfClusterActorSystemIsCreatedAndDisposedManyTimes_ThenThereShouldBeNoMemoryLeak()
        {
            TestForMemoryLeak(RunClusterSystem);
        }

        private static void RunLocalSystem()
        {
            var system = ActorSystem.Create("Local");
            var actor = system.ActorOf<TestActor>();
            var result = actor.Ask<ActorIdentity>(new Identify(42)).Result;
            TestContext.Progress.WriteLine("Got ActorIdentity: " + result.MessageId);

            system.Terminate().Wait();
            system.Dispose();
        }

        private void RunClusterSystem()
        {
            var serverAkkaConfig = ConfigurationFactory.ParseString(ClusterServerConfig);
            var serverSystem = ActorSystem.Create("ClusterServer", serverAkkaConfig);
            var serverActor = serverSystem.ActorOf<TestActor>("TestActor");
            var receptionist = ClusterClientReceptionist.Get(serverSystem);
            receptionist.RegisterService(serverActor);

            var clientAkkaConfig = ConfigurationFactory.ParseString(ClusterClientConfig);
            var clientSystem = ActorSystem.Create("ClusterClient", clientAkkaConfig);

            var defaultConfig = ClusterClientReceptionist.DefaultConfig();
            clientSystem.Settings.InjectTopLevelFallback(defaultConfig);
            var clusterClientSettings = ClusterClientSettings.Create(clientSystem);
            var clientActor = clientSystem.ActorOf(ClusterClient.Props(clusterClientSettings));

            var result = clientActor.Ask<ActorIdentity>(new ClusterClient.Send("/user/TestActor", new Identify(42))).Result;
            TestContext.Progress.WriteLine("Got ActorIdentity: " + result.MessageId);

            clientSystem.Terminate().Wait();
            serverSystem.Terminate().Wait();

            clientSystem.Dispose();
            serverSystem.Dispose();
        }

        private static void TestForMemoryLeak(Action action)
        {
            const int IterationCount = 100;
            long memoryAfterFirstRun = 0;
            for (var i = 1; i <= IterationCount; i++)
            {
                if (i % 2 == 0)
                {
                    var currentMemory = GC.GetTotalMemory(true) / 1024 / 1024;
                    TestContext.Progress.WriteLine($"Iteration: {i} - MemoryUsage: {currentMemory}mb");

                    if (currentMemory > memoryAfterFirstRun + 100)
                        throw new InvalidOperationException("There seems to be a memory leak!");
                }

                action();

                if (i == 1)
                {
                    memoryAfterFirstRun = GC.GetTotalMemory(true) / 1024 / 1024;
                    TestContext.Progress.WriteLine($"After first run - MemoryUsage: {memoryAfterFirstRun}mb");
                }
            }
        }

        private class TestActor : ReceiveActor
        {
        }
    }
}
@Szer
Contributor

Szer commented May 3, 2017

It leaks even with the local system, just very slowly. You need this config (the more strings in the config, the faster the leak):

        private const string LocalConfig = @"
akka {
    stdout-loglevel: DEBUG
    loglevel: DEBUG
    log-config-on-start: on
    actor {
        debug {
            autoreceive: on
            lifecycle: on
            unhandled: on
            router-misconfiguration: on
        }
    }
    loggers = [""Akka.Event.StandardOutLogger, Akka""]
}    
";

Then parse and inject this into the local system, lower the memory-leak sensitivity from 100mb to 2mb, and increase IterationCount to 2000 :)
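
For reference, wiring that config into the local repro would look roughly like this (a minimal sketch; LocalConfig is the constant above, and the rest mirrors RunLocalSystem from the original test):

        // Parse the debug-heavy config and feed it to the local system on every iteration.
        var config = ConfigurationFactory.ParseString(LocalConfig);
        var system = ActorSystem.Create("Local", config);
        var actor = system.ActorOf<TestActor>();
        var result = actor.Ask<ActorIdentity>(new Identify(42)).Result;
        system.Terminate().Wait();
        system.Dispose();
        // Then lower the leak threshold from 100mb to ~2mb and raise IterationCount to 2000.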

@Ralf1108
Contributor Author

Ralf1108 commented May 3, 2017

It looks like the cluster system leaks much more data, as the wasted memory grows much faster.

@Ralf1108
Contributor Author

Ralf1108 commented May 5, 2017

I reduced the reproduction steps to just creating and disposing the actor system.
It seems that the memory leak depends on the configured ActorRefProvider.

The output is now, for each configured ActorRefProvider:

  • default ActorRefProvider (succeeds)
    After first run - MemoryUsage: 26743224
    Iteration: 10 - MemoryUsage: 25999656
    Iteration: 20 - MemoryUsage: 25973352
    Iteration: 30 - MemoryUsage: 25970312
    Iteration: 40 - MemoryUsage: 25964536
    Iteration: 50 - MemoryUsage: 25970648
    Iteration: 60 - MemoryUsage: 25964976
    Iteration: 70 - MemoryUsage: 25935432
    Iteration: 80 - MemoryUsage: 27264896
    Iteration: 90 - MemoryUsage: 25931552
    Iteration: 100 - MemoryUsage: 25930216

  • RemoteActorRefProvider (fails)
    After first run - MemoryUsage: 13921112
    Iteration: 10 - MemoryUsage: 16964400
    Iteration: 20 - MemoryUsage: 20098392
    Iteration: 30 - MemoryUsage: 23003384
    Iteration: 40 - MemoryUsage: 25996168

  • ClusterActorRefProvider (fails)
    After first run - MemoryUsage: 2688896
    Iteration: 10 - MemoryUsage: 6340264
    Iteration: 20 - MemoryUsage: 9969008
    Iteration: 30 - MemoryUsage: 13592672

using System;
using Akka.Actor;
using Akka.Configuration;
using Xunit;
using Xunit.Abstractions;

namespace Akka.Cluster.Tools.Tests.ClusterClient
{
    public class AkkaTests
    {
        private readonly ITestOutputHelper _output;

        public AkkaTests(ITestOutputHelper output)
        {
            _output = output;
        }

        [Fact]
        public void IfActorSystemWithDefaultActorRefProviderIsCreatedAndDisposed_ThenThereShouldBeNoMemoryLeak()
        {
            TestForMemoryLeak(() => CreateAndDisposeActorSystem(null));
        }

        [Fact]
        public void IfActorSystemWithRemoteActorRefProviderIsCreatedAndDisposed_ThenThereShouldBeNoMemoryLeak()
        {
            const string ConfigStringRemote = @"
akka {
    actor {
        provider = ""Akka.Remote.RemoteActorRefProvider, Akka.Remote""
    }
}";

            TestForMemoryLeak(() => CreateAndDisposeActorSystem(ConfigStringRemote));
        }

        [Fact]
        public void IfActorSystemWithClusterActorRefProviderIsCreatedAndDisposed_ThenThereShouldBeNoMemoryLeak()
        {
            const string ConfigStringCluster = @"
akka {
    actor {
        provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
    }
}";

            TestForMemoryLeak(() => CreateAndDisposeActorSystem(ConfigStringCluster));
        }

        private void CreateAndDisposeActorSystem(string configString)
        {
            ActorSystem system;

            if (configString == null)
                system = ActorSystem.Create("Local");
            else
            {
                var config = ConfigurationFactory.ParseString(configString);
                system = ActorSystem.Create("Local", config);
            }

            // ensure that the actor system did some work
            var actor = system.ActorOf<TestActor>();
            var result = actor.Ask<ActorIdentity>(new Identify(42)).Result;

            system.Terminate().Wait();
            system.Dispose();
        }

        private void TestForMemoryLeak(Action action)
        {
            const int iterationCount = 100;
            const long memoryThreshold = 10 * 1024 * 1024;

            action();
            var memoryAfterFirstRun = GC.GetTotalMemory(true);
            Log($"After first run - MemoryUsage: {memoryAfterFirstRun}");

            for (var i = 1; i <= iterationCount; i++)
            {
                action();

                if (i % 10 == 0)
                {
                    var currentMemory = GC.GetTotalMemory(true);
                    Log($"Iteration: {i} - MemoryUsage: {currentMemory}");

                    if (currentMemory > memoryAfterFirstRun + memoryThreshold)
                        throw new InvalidOperationException("There seems to be a memory leak!");
                }
            }
        }

        private void Log(string text)
        {
            _output.WriteLine(text);
        }

        private class TestActor : ReceiveActor
        {
        }
    }
}

Ralf1108 changed the title from "Memory leak when disposing cluster actor system" to "Memory leak when disposing actor system with non default ActorRefProvider" on May 5, 2017
@Ralf1108
Contributor Author

Ralf1108 commented May 5, 2017

After some debugging of the Terminate() method:
It seems that RemoteActorRefProvider and ClusterActorRefProvider internally force instantiation of the ForkJoinExecutor. But if you put a breakpoint in its Shutdown() method, it is never hit. As a result the _dedicatedThreadPool is never disposed correctly, which in turn never disposes the ThreadPoolWorkQueue correctly.
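
For context, the dispose chain being described is roughly the following (a paraphrased sketch of the executor's shutdown path, not the verbatim Akka.NET source):

    // Paraphrased sketch: ForkJoinExecutor's Shutdown() is what releases the pool.
    public override void Shutdown()
    {
        // In the failing scenario this method is never invoked, so the pool
        // (and its internal ThreadPoolWorkQueue) is never released.
        _dedicatedThreadPool.Dispose();
    }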

@to11mtm
Member

to11mtm commented May 7, 2017

Question/Statement.

While it sounds like there could be some leaking occurring, I would think you would want to force collection in your tests, since the dispose pattern on its own may not guarantee that all memory is freed. Things should dispose correctly, but there's a difference (to me, anyway) between a soft leak that is reclaimed once a full GC runs and a hard leak that never gets cleaned up.

What does it look like if a GC.Collect() is thrown in?

@Ralf1108
Contributor Author

Ralf1108 commented May 8, 2017

According to MSDN, the first parameter of GC.GetTotalMemory(true) forces a full collection:
"Retrieves the number of bytes currently thought to be allocated. A parameter indicates whether this method can wait a short interval before returning, to allow the system to collect garbage and finalize objects."

I also reran the tests with old-school memory cleanup like

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

but the numbers remained the same.

@Ralf1108
Contributor Author

Ralf1108 commented Aug 4, 2017

Retested with Akka 1.2.3; the memory leak still exists.

@Aaronontheweb
Member

Pretty sure this issue and the problems we were having on #3668 are related. Going to repro it and look into it.

@Aaronontheweb Aaronontheweb added this to the 1.3.12 milestone Mar 7, 2019
@Aaronontheweb Aaronontheweb self-assigned this Mar 7, 2019
@Aaronontheweb
Member

Took @Ralf1108's reproduction code and turned it into this so I could run DotMemory profiling on it.

Looks like a leak in the HOCON tokenizer: https://github.com/Aaronontheweb/Akka.NET264BugRepro

[DotMemory screenshot: allocations retained by the HOCON tokenizer]

@Aaronontheweb
Member

So I've conclusively found the issue; it's still an issue in Akka.NET v1.3.11; and my research shows that @Ralf1108's original theory on its origins is correct - all of the ForkJoinDispatcher instances in Akka.Persistence, Akka.Remote, and Akka.Cluster are not being shut down correctly.

The root cause is this function call:

private void ScheduleShutdownAction()
{
    try
    {
        Configurator.Prerequisites.Scheduler.Advanced.ScheduleOnce(ShutdownTimeout, () =>
        {
            try
            {
                _shutdownAction.Run();
            }
            catch (Exception ex)
            {
                ReportFailure(ex);
            }
        });
    }
    catch (SchedulerException) // Scheduler has been shut down already. Need to just cleanup synchronously then.
    {
        Shutdown();
    }
}

By default, the ShutdownTimeout is set to 1 second via the akka.actor.default-dispatcher.shutdown-timeout property in HOCON. So here's the issue: the Scheduler is often shut down before that 1 second elapses, and thus the Dispose method on the DedicatedThreadPool is never called, because all outstanding scheduled items are discarded during shutdown. I was able to verify this by step-through debugging some of the Akka.Remote samples attached to the Akka.sln.

Workaround and Evidence

If I change akka.actor.default-dispatcher.shutdown-timeout to 0s, which means the scheduler will invoke the dispatcher's shutdown routine immediately, you'll notice that my memory graph for Aaronontheweb/Akka.NET264BugRepro#3 looks totally stable (using Akka.Persistence instead of Akka.Remote, since both use the ForkJoinExecutor.)

[screenshot: memory graph holding steady at around 25mb with shutdown-timeout = 0s]

Memory holds pretty steady at around 25mb. It eventually climbs to 30mb after starting and stopping 1000 ActorSystem instances. I think this is because there are still cases where the HashedWheelTimer gets shut down before it has a chance to run the shutdown routine, albeit orders of magnitude fewer than before.

If I turn this setting back to its default, however...

[screenshot: memory graph climbing to 41mb with the default shutdown-timeout]

Climbs up to 41mb and then fails early, since it exceeded its 10mb max allowance for memory creep.

So, as a workaround for this issue you could do what I did here and just set the following in your HOCON:

akka.actor.default-dispatcher.shutdown-timeout = 0s

That should help.
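
Applied in user code, the workaround is a one-line config tweak; a minimal sketch, reusing the ClusterServerConfig constant from the original repro in this thread:

    // Prepend the workaround setting; WithFallback keeps the rest of the config unchanged.
    var workaround = ConfigurationFactory.ParseString(
        "akka.actor.default-dispatcher.shutdown-timeout = 0s");
    var config = workaround.WithFallback(ConfigurationFactory.ParseString(ClusterServerConfig));
    var system = ActorSystem.Create("ClusterServer", config);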

Permanent Fix

I'm going to work on a reproduction spec for this issue so we can regression-test it, but what I think I'm going to recommend doing is simply shutting down all dispatcher executors synchronously - that way there's nothing left behind and no dependency on the order in which the scheduler vs. the dispatcher gets shut down.

I don't entirely know what the side-effects of doing this will be, but I suspect not much: the dispatcher can't be shut down until 100% of the actors registered on it have stopped, which occurs during ActorSystem termination.
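
A rough illustration of that idea (a hypothetical helper, not the actual patch that later landed): run the executor's shutdown action synchronously when the dispatcher terminates instead of handing it to the scheduler, so it cannot be lost when the scheduler stops first.

    // Illustration only: bypasses the scheduler entirely.
    private void ShutdownExecutorSynchronously()
    {
        try
        {
            _shutdownAction.Run(); // disposes the executor's resources immediately
        }
        catch (Exception ex)
        {
            ReportFailure(ex);
        }
    }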

@Aaronontheweb
Member

I also think, based on the data from DotMemory, there might be some memory issues with CoordinatedShutdown and closures closing over the local ActorSystem, but I'm not 100% certain. Going to look into it next after I get the dispatcher situation sorted, and I'll likely open a new issue for that altogether.

Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this issue Mar 11, 2019
Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this issue Mar 11, 2019
Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this issue Mar 11, 2019
@Aaronontheweb
Member

Closed via #3734

@EJantzerGitHub

I updated a local copy of https://github.com/Aaronontheweb/Akka.NET264BugRepro to 1.3.12, bumped the memory sensitivity up to 100 Mb and it still throws at approximately 300 iterations

@Aaronontheweb
Member

@EJantzerGitHub that'd be because of #3735. It was blowing up at ~30 before. Pretty sure the issue is related to some closures inside CoordinatedShutdown.

@EJantzerGitHub

Thanks Aaron. I will be watching that bug then with great interest

@Aaronontheweb
Member

@EJantzerGitHub no problem! If you'd like to help send in a pull request for it, definitely recommend taking a look at that reproduction program using a profiler like DotMemory. That's how I track this sort of stuff down usually.

They have a pretty useful tutorial on the subject too: https://www.jetbrains.com/help/dotmemory/How_to_Find_a_Memory_Leak.html

Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this issue Mar 20, 2019
needed to give the system more messages to process so we guarantee hitting all four dispatcher threads when running the test suite in parallel.
Aaronontheweb added a commit that referenced this issue May 16, 2019
… .NET 4.5.2 (#3668)

* migrated to 'dotnet test'

* added missing variable for detecing TeamCity

* added more targets for end-to-end testing and building

* fixed issue with dotnet test lockup for Akka.Cluster.Tests

* upgraded all core projects to standards

* fixed all major build issues thus far

* upgraded all contrib/cluster projects

* completed standardizing all projects

* fixed issue with Akka.DI.Core

* upgrade Linux to .NET Core 2.0 SDK

* further fixes to build.sh

* changed search location for MNTR assemblies

* upgraded MNTR to .NET 4.6.1

* fixed build.sh dotnet-install command

* fixed .NET Core test execution

* fixed issue with Akka.Remote.Tests.MultiNode outputting to wrong spot

* added channel to build.sh

* changed to wget

* fixed dotnet installer url

* skip API approvals on .NET Core

* fixed issue with MNTR NuGet packaging

* disabled FsCheck

* attempted to address Akka.Persistence memory leak

* migrated to 'dotnet test'

* added missing variable for detecing TeamCity

* added more targets for end-to-end testing and building

* fixed issue with dotnet test lockup for Akka.Cluster.Tests

* rebased on dev

* fixed all major build issues thus far

* upgraded all contrib/cluster projects

* completed standardizing all projects

* fixed issue with Akka.DI.Core

* upgrade Linux to .NET Core 2.0 SDK

* further fixes to build.sh

* changed search location for MNTR assemblies

* upgraded MNTR to .NET 4.6.1

* fixed build.sh dotnet-install command

* fixed .NET Core test execution

* fixed issue with Akka.Remote.Tests.MultiNode outputting to wrong spot

* added channel to build.sh

* changed to wget

* fixed dotnet installer url

* skip API approvals on .NET Core

* fixed issue with MNTR NuGet packaging

* disabled FsCheck

* attempted to address Akka.Persistence memory leak

* fixed issue with Akka.Streams tests

* standardized FluentAssertions version

* fixed compilation of TCK

* upgraded to .NET Core 2.1 SDK

* removed restore stage - no longer needed

* bumpe tests to .NET Core 2.1

* Revert "bumpe tests to .NET Core 2.1"

This reverts commit f76e09f.

* workaround dotnet/msbuild#2275 until .NET Core 2.1 migration

* Revert "upgraded to .NET Core 2.1 SDK"

This reverts commit b000b76.

* improved test error result handling

* Revert "Revert "upgraded to .NET Core 2.1 SDK""

This reverts commit 1b1a836.

* Revert "Revert "bumpe tests to .NET Core 2.1""

This reverts commit 175d6ca.

* moving onto .NET Standard 2.0

* standardized most test projects

* fixed common.props references

* fixed .NET Core 2.1 build systems

* fixed issue with packing MNTR

* fixed issue with single test failure stopping build

* fixed failure handling

* fixed issues with Akka.Streams specs

* fixed scan for incremental tests

* working on FsCheck standardization issues

* removed more net implicit standard junk

* cleaning up implicit package versions; bumped to JSON.NET 12.0.1

* fixed port bindings for Akka.Cluster.Tools and Akka.Cluster.Sharding so suites could theoretically run in parallel

* fixed more ports

* fixed compilation errors

* rolled back to Newtonsoft.Json 9.0.1

* disabled parallelization in Akka.Streams.Tests

* added xunit.runner.json

* Disabled xunit.runner.json for Akka.Streams.Tests

* added more debug logging to scriptedtest

* issue appears to be the 1ms deadline not being long enough on .NET Core - stream isn't even wired up yet

* fixed race condition with Bug2640Spec for #2640

needed to give the system more messages to process so we guarantee hitting all four dispatcher threads when running the test suite in parallel.

* updated API approvals

* fixed issue with Bug2640Spec again

No longer looking for an exact thread count since the CPU may not schedule it that. Instead, just ensure that all of the threads hit on the dispatcher shut down when the dispatcher is terminated.

* same fix as previous
madmonkey pushed a commit to madmonkey/akka.net that referenced this issue Jul 12, 2019
* close akkadotnet#2640 - fixed shutdown routine of HashedWheelTimerScheduler
madmonkey pushed a commit to madmonkey/akka.net that referenced this issue Jul 12, 2019
… .NET 4.5.2 (akkadotnet#3668)
Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this issue Jul 30, 2019
… .NET 4.5.2 (akkadotnet#3668)