Extend SendAsync/ReceiveAsync cancellation tests #44161

Merged — antonfirsov merged 3 commits into dotnet:master on Nov 3, 2020

Conversation

antonfirsov
Member

  • Rename CanceledDuringOperation_Throws to ReceiveAsync_CanceledDuringOperation_Throws and extend it to IPv6 sockets
  • Introduce a cancellation test for Socket.SendAsync. Unless I'm missing something, this method had no functional cancellation coverage.

@geoffkizer @stephentoub @tmds PTAL. I want to establish best practices for testing cancellation for the work in #43935.

@ghost

ghost commented Nov 2, 2020

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.

[Theory]
[InlineData(false)]
[InlineData(true)]
public async Task SendAsync_CanceledDuringOperation_Throws(bool ipv6)
Contributor

This test seems highly timing-dependent.

I think what you're doing here is: you're flooding the kernel send buffer with so much data (64K per send, 1000 sends) that you're assuming at least some of the send tasks won't complete before you call CancelAfter. Am I understanding this correctly?

I don't see any better way to do this really, but I worry we may see intermittent failures here. Thoughts?

At the very least this should be [OuterLoop], I think.
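
To check my understanding, the pattern I'm reading is roughly the following. This is just a sketch, not the exact diff; it assumes the usual xUnit test context and the helper and parameters quoted in this review (CreateConnectedSocketPair, 64K buffer, 1000 sends):

(Socket client, Socket server) = SocketTestExtensions.CreateConnectedSocketPair(ipv6);
byte[] buffer = new byte[64 * 1024];
using var cts = new CancellationTokenSource();

// Queue far more data than the kernel send buffer can hold, so that some
// SendAsync calls are still pending when the token gets canceled.
var sends = new List<Task>();
for (int i = 0; i < 1000; i++)
{
    sends.Add(client.SendAsync(buffer, SocketFlags.None, cts.Token).AsTask());
}

cts.CancelAfter(100);

// At least one of the pending sends should observe the cancellation.
await Assert.ThrowsAnyAsync<OperationCanceledException>(() => Task.WhenAll(sends));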

Contributor

Also a comment explaining what's intended would be useful.

Contributor

Actually, since this is TCP (I was thinking UDP for some reason), I think what you're doing should be fine.

That said, it should be [OuterLoop], and a comment would be nice.

Member Author

you're flooding the kernel send buffer with so much data (64K per send, 1000 sends) that you're assuming at least some of the send tasks won't complete before you call CancelAfter

Yes, correct. I also tried changing the send buffer size in the hope that this would fragment and slow down the execution of the sends, but on Windows it didn't change anything. Maybe I have a bad understanding of how the send buffer works.

but I worry we may see intermittent failures here

I think this can only happen if an environment is so fast that all the sends complete before the cancellation fires. I haven't seen that with the current parameters. Or are you rather worried about the side effects this may have on other tests running in parallel?
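
For reference, the send-buffer experiment was roughly this (simplified; the value is arbitrary):

// Tried shrinking the kernel send buffer so that the 64K sends fragment and
// stay pending longer; on Windows this made no observable difference.
client.SendBufferSize = 4 * 1024;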

const int NumOfSends = 1000;

(Socket client, Socket server) = SocketTestExtensions.CreateConnectedSocketPair(ipv6);
byte[] buffer = new byte[8192 * 8];
Contributor

Nit: the buffer size is easier to read if you write it as 64 * 1024.

@antonfirsov
Member Author

@geoffkizer can you have another look? I don't think the test is timing-critical in its current form (see #44161 (comment)). What reason should I then put into the [OuterLoop("reason")] description?
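
For example, would something along these lines work? The wording is just a suggestion:

[OuterLoop("Queues ~64 MB of sends (1000 x 64K) to keep operations pending; too heavy for the default test runs")]
[Theory]
[InlineData(false)]
[InlineData(true)]
public async Task SendAsync_CanceledDuringOperation_Throws(bool ipv6)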


@antonfirsov
Member Author

Test failure is #44214.

@antonfirsov antonfirsov merged commit 5dcbc84 into dotnet:master Nov 3, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Dec 6, 2020
@karelz karelz added this to the 6.0.0 milestone Jan 26, 2021