
[BUG] org.opensearch.search.backpressure.SearchBackpressureIT.testSearchShardTaskCancellationWithHighCpu is flaky #7972

Closed
dblock opened this issue Jun 8, 2023 · 10 comments · Fixed by #8063
Labels
bug (Something isn't working) · flaky-test (Random test failure that succeeds on second run)

Comments

@dblock
Member

dblock commented Jun 8, 2023

Describe the bug

org.opensearch.search.backpressure.SearchBackpressureIT.testSearchShardTaskCancellationWithHighCpu is flaky

https://build.ci.opensearch.org/job/gradle-check/17145/

#7969 (comment)

REPRODUCE WITH: ./gradlew ':server:internalClusterTest' --tests "org.opensearch.search.backpressure.SearchBackpressureIT.testSearchShardTaskCancellationWithHighCpu" -Dtests.seed=8B21150333870C73 -Dtests.security.manager=true -Dtests.jvm.argline="-XX:TieredStopAtLevel=1 -XX:ReservedCodeCacheSize=64m" -Dtests.locale=tr -Dtests.timezone=Africa/Kinshasa -Druntime.java=20

org.opensearch.search.backpressure.SearchBackpressureIT > testSearchShardTaskCancellationWithHighCpu FAILED
    java.lang.AssertionError: 
    Expected: a string containing "cpu usage exceeded"
         but: was null
        at __randomizedtesting.SeedInfo.seed([8B21150333870C73:BBB02F2B23057755]:0)
        at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
        at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
        at org.opensearch.search.backpressure.SearchBackpressureIT.testSearchShardTaskCancellationWithHighCpu(SearchBackpressureIT.java:196)
@dblock added the bug, untriaged, and flaky-test labels on Jun 8, 2023
@stephen-crawford
Contributor

This should be resolved by #7978

@BhumikaSaini-Amazon
Contributor

Flaky test failures for testSearchShardTaskCancellationWithHighCpu:

@reta
Collaborator

reta commented Jun 9, 2023

Sadly, not fixed: #7988

    1 org.opensearch.search.backpressure.SearchBackpressureIT.testSearchTaskCancellationWithHighCpu
    1 org.opensearch.indices.replication.RemoteStoreReplicationSourceTests.testGetCheckpointMetadata

@stephen-crawford
Contributor

Tragic... Gonna take a further look.

@stephen-crawford
Contributor

@PritLadani and @ketanv3, I hope all is well.

I am tagging you because you were two of the main authors of the SearchBackpressure changes. I am reaching out because I believe the issue we are running into is a race condition. After my initial change to the threshold time did not work, I did some more digging and concluded that the most likely cause is an ill-timed thread swap.

Here is an example of some manual debugging I did to try to identify the issue:

[2023-06-12T13:06:55,561][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] before test
[2023-06-12T13:06:55,561][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] [SearchBackpressureIT#testSearchTaskCancellationWithHighCpu]: setting up test
[2023-06-12T13:06:55,568][INFO ][o.o.p.PluginsService     ] [node_s2] PluginService:onIndexModule index:[byThyomkTYei3-y4BeJ9Gw/tlJ-mcwoQJiKnOOu94hRxQ]
[2023-06-12T13:06:55,569][INFO ][o.o.c.m.MetadataIndexTemplateService] [node_s2] adding template [random_index_template] for index patterns [*]
[2023-06-12T13:06:55,588][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] [SearchBackpressureIT#testSearchTaskCancellationWithHighCpu]: all set up test
[2023-06-12T13:06:55,604][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.num_successive_breaches] from [3] to [1]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.cpu_threshold] from [0.9] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.num_successive_breaches] from [3] to [1]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.num_successive_breaches] from [3] to [1]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.heap_threshold] from [0.7] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.cpu_threshold] from [0.9] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.cpu_threshold] from [0.9] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.heap_threshold] from [0.7] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.heap_threshold] from [0.7] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,605][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,606][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.num_successive_breaches] from [3] to [1]
[2023-06-12T13:06:55,606][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.cpu_threshold] from [0.9] to [0.0]
[2023-06-12T13:06:55,606][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.heap_threshold] from [0.7] to [0.0]
[2023-06-12T13:06:55,607][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,607][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.05] to [0.0]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [30000] to [50]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [30000] to [50]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.mode] from [monitor_only] to [enforced]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [30000] to [50]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.mode] from [monitor_only] to [enforced]
[2023-06-12T13:06:55,627][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.mode] from [monitor_only] to [enforced]
[2023-06-12T13:06:55,628][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [30000] to [50]
[2023-06-12T13:06:55,629][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.mode] from [monitor_only] to [enforced]
[2023-06-12T13:06:56,506][WARN ][o.o.s.b.SearchBackpressureService] [node_s2] [enforced mode] cancelling task [119] due to high resource consumption [cpu usage exceeded [861.2ms >= 50ms]]
Setting reason to: cpu usage exceeded [861.2ms >= 50ms]
Task is cancelled with reason: null
Caught exception is: cpu usage exceeded [861.2ms >= 50ms]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.num_successive_breaches] from [1] to [3]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.num_successive_breaches] from [1] to [3]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.num_successive_breaches] from [1] to [3]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.cpu_threshold] from [0.0] to [0.9]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.cpu_threshold] from [0.0] to [0.9]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.node_duress.heap_threshold] from [0.0] to [0.7]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.cpu_threshold] from [0.0] to [0.9]
[2023-06-12T13:06:56,527][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.node_duress.heap_threshold] from [0.0] to [0.7]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.node_duress.heap_threshold] from [0.0] to [0.7]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [50] to [30000]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [50] to [30000]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [50] to [30000]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s3] updating [search_backpressure.mode] from [enforced] to [monitor_only]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s0] updating [search_backpressure.mode] from [enforced] to [monitor_only]
[2023-06-12T13:06:56,528][INFO ][o.o.c.s.ClusterSettings  ] [node_s1] updating [search_backpressure.mode] from [enforced] to [monitor_only]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.num_successive_breaches] from [1] to [3]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.cpu_threshold] from [0.0] to [0.9]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.node_duress.heap_threshold] from [0.0] to [0.7]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_task.cpu_time_millis_threshold] from [50] to [30000]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.search_shard_task.total_heap_percent_threshold] from [0.0] to [0.05]
[2023-06-12T13:06:56,529][INFO ][o.o.c.s.ClusterSettings  ] [node_s2] updating [search_backpressure.mode] from [enforced] to [monitor_only]
[2023-06-12T13:06:56,530][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] [SearchBackpressureIT#testSearchTaskCancellationWithHighCpu]: cleaning up after test
[2023-06-12T13:06:56,543][INFO ][o.o.c.m.MetadataIndexTemplateService] [node_s2] removing template [random_index_template]
[2023-06-12T13:06:56,563][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] [SearchBackpressureIT#testSearchTaskCancellationWithHighCpu]: cleaned up after test
[2023-06-12T13:06:56,563][INFO ][o.o.s.b.SearchBackpressureIT] [testSearchTaskCancellationWithHighCpu] after test

Notice that the output goes from having a cancellation reason, to not having one, and back again:

Setting reason to: cpu usage exceeded [861.2ms >= 50ms]
Task is cancelled with reason: null
Caught exception is: cpu usage exceeded [861.2ms >= 50ms]

If these operations were thread-safe, their ordering should not allow this.

We can compare this to the log from a different run:

Setting reason to: cpu usage exceeded [802.5ms >= 50ms]
Task is cancelled with reason: cpu usage exceeded [802.5ms >= 50ms]
Caught exception is: cpu usage exceeded [802.5ms >= 50ms]

If you have any insight into the thread-safety mechanisms you added, it would be appreciated as we try to diagnose this issue.

@PritLadani
Contributor

Hi @scrawfor99, can you please share the files and methods where you added the logs below?

Setting reason to: cpu usage exceeded [861.2ms >= 50ms]
Task is cancelled with reason: null
Caught exception is: cpu usage exceeded [861.2ms >= 50ms]

@stephen-crawford
Contributor

Hi @PritLadani, those logging messages were added at CancellableTask.java L84, SearchBackpressureIT.java L410, and SearchBackpressureIT.java L177 & L195.

Hopefully this helps. I am still having trouble diagnosing the issue, so any input from you would be appreciated :)

@PritLadani
Contributor

This seems to be due to multiple threads accessing the volatile variable CancellableTask.reason at different times while expecting the same value.
Consider a thread that has set the AtomicBoolean cancelled to true but has not yet reached line 87. Another thread, from SearchBackpressureIT#407, reads CancellableTask.cancelled, concludes that the task is cancelled, and tries to get the cancellation reason, but CancellableTask.reason has not been assigned a value yet.
I think putting this code block in a synchronized block should fix the issue. Can you please try this? @scrawfor99
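
A minimal sketch of the interleaving described above (illustrative class and field names only, not the actual CancellableTask source; the real fields and line numbers differ):

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative only. The cancelling thread flips the flag first and assigns the reason
// afterwards, so a reader that checks the flag in between observes a cancelled task with a
// null reason -- which is exactly what the failing assertion reports.
class RacyCancellation {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);
    private volatile String reason;

    void cancel(String cancellationReason) {
        cancelled.compareAndSet(false, true);
        // <-- a test thread scheduled here sees cancelled == true but reason == null
        reason = cancellationReason;
    }

    boolean isCancelled() {
        return cancelled.get();
    }

    String getReasonCancelled() {
        return reason;
    }

    // One possible fix along the lines suggested above: guard the flag-plus-reason update
    // (and the corresponding read) with the same lock so they become visible together.
    synchronized void cancelAtomically(String cancellationReason) {
        if (cancelled.compareAndSet(false, true)) {
            reason = cancellationReason;
        }
    }

    synchronized String reasonIfCancelled() {
        return cancelled.get() ? reason : null;
    }
}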

@ketanv3
Contributor

ketanv3 commented Jun 12, 2023

Thanks @scrawfor99 for working on a previous fix. Reducing the CPU time threshold from 1000 ms to 50 ms is not the correct fix, as this code executes a busy-wait loop to simulate CPU cycles: it keeps running the loop until the threshold is breached and an exception is thrown. We should revert it to 1000 ms.

As @PritLadani rightly pointed out, the race condition is due to the integration test thread reading the cancellation reason (here) before the server has updated it (here). We need to fix this by making the updates to the fields cancelled, cancellationStartTime, cancellationStartTimeNanos, and reason atomic. Instead of adding a synchronized block, I would suggest wrapping these fields in an object and setting it via compare-and-swap (see usages of SetOnce).
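
A rough sketch of the object-wrapping approach (hypothetical names; the actual fix may rely on the SetOnce utility rather than a bare AtomicReference, but the idea is the same): all cancellation state is published as a single immutable object via compare-and-swap, so a reader either sees no cancellation at all or the complete state including the reason.

import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch, not the actual CancellableTask change.
class CancellableTaskSketch {

    // Immutable snapshot of everything that must become visible together.
    static final class CancellationInfo {
        final String reason;
        final long startTimeMillis;
        final long startTimeNanos;

        CancellationInfo(String reason, long startTimeMillis, long startTimeNanos) {
            this.reason = reason;
            this.startTimeMillis = startTimeMillis;
            this.startTimeNanos = startTimeNanos;
        }
    }

    // null means "not cancelled"; a non-null value is set exactly once.
    private final AtomicReference<CancellationInfo> cancellation = new AtomicReference<>();

    void cancel(String reason) {
        CancellationInfo info = new CancellationInfo(reason, System.currentTimeMillis(), System.nanoTime());
        // Only the first caller wins; later attempts leave the original state intact.
        cancellation.compareAndSet(null, info);
    }

    boolean isCancelled() {
        return cancellation.get() != null;
    }

    String getReasonCancelled() {
        CancellationInfo info = cancellation.get();
        return info == null ? null : info.reason; // never "cancelled but reason still null"
    }
}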

@stephen-crawford
Contributor

Hi @ketanv3 and @PritLadani, thank you for following up so quickly. I will try what you suggested. I initially attempted to decrease the threshold as a quick local fix, since I noticed the cancellation would not be triggered unless the threshold was reached; when the test then passed, I assumed that had been the issue.

That did not work, though, so I did a proper RCA and found the threading issue. I will go ahead and try to correct it as you suggested.

reta pushed a commit that referenced this issue Jun 22, 2023
* fix thread issue
* fix thread issue
* Fix thresholds
* Swap to object based
* Spotless
* Swap to preserve nulls
* Spotless
* Resolve npe
* remove final declerations
* spotless
* add annotations
* push to rerun tests
* Fix idea
* Fix idea

Signed-off-by: Stephen Crawford <[email protected]>
opensearch-trigger-bot bot pushed a commit that referenced this issue Jun 22, 2023
reta pushed a commit that referenced this issue Jun 22, 2023
gaiksaya pushed a commit to gaiksaya/OpenSearch that referenced this issue Jun 26, 2023
imRishN pushed a commit to imRishN/OpenSearch that referenced this issue Jun 27, 2023
shiv0408 pushed a commit to Gaurav614/OpenSearch that referenced this issue Apr 25, 2024