Remove in6addr cmp #17312
Conversation
```diff
@@ -588,23 +588,6 @@ static void __attribute__((unused)) sockunion_print(const union sockunion *su)
 	}
 }
 
-int in6addr_cmp(const struct in6_addr *addr1, const struct in6_addr *addr2)
```
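For context, the removed function compared the two addresses byte by byte and returned -1/0/1. A minimal sketch of that pattern next to its memcmp-based equivalent (illustrative only, not the exact FRR code; callers rely on the sign of the result, which both forms preserve):

```c
#include <string.h>
#include <netinet/in.h>

/* Sketch of the byte-by-byte pattern being removed (illustrative only). */
static int in6addr_cmp_bytewise(const struct in6_addr *a, const struct in6_addr *b)
{
	for (size_t i = 0; i < sizeof(struct in6_addr); i++) {
		if (a->s6_addr[i] > b->s6_addr[i])
			return 1;
		if (a->s6_addr[i] < b->s6_addr[i])
			return -1;
	}
	return 0;
}

/* The replacement: a single memcmp over the whole structure. Its return
 * value has the same sign as the byte-wise version for any pair of inputs. */
static int in6addr_cmp_fast(const struct in6_addr *a, const struct in6_addr *b)
{
	return memcmp(a, b, sizeof(struct in6_addr));
}
```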
couldn't we just change the implementation of this so that it's not so ... awkward?
it seems nice to have the v6 address-specific signature, for instance.
except of course we have IPV6_ADDR_CMP and a couple other places where we just use memcmp for v6 addresses in sockunion.c. I've just converted them over to use that version. All in all looks like a bunch of places could be fixed up.
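For reference, IPV6_ADDR_CMP is the memcmp-style macro being referred to here. Roughly its shape (the authoritative definition lives in FRR's headers; this sketch is only illustrative):

```c
#include <string.h>
#include <netinet/in.h>

/* Rough shape of the IPV6_ADDR_CMP macro referred to above; see FRR's
 * headers for the authoritative definition. */
#define IPV6_ADDR_CMP(D, S) memcmp((D), (S), sizeof(struct in6_addr))

/* Typical use: equality/ordering check on two struct in6_addr values. */
static int in6addr_equal(const struct in6_addr *a, const struct in6_addr *b)
{
	return IPV6_ADDR_CMP(a, b) == 0;
}
```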
sure- it's not the use of memcmp() at the bottom level that I was asking about, it's about having a v6-specific signature that seems ... useful and helpful. Just a fan of apis that help folks use our data types in a natural sort of way.
I don't see why IPV6_ADDR_CMP is not sufficient. The custom function was incredibly slow; when I removed it in a 256 peer x 100k routes test, convergence dropped from ~3:30 to ~2:30.
I understand that the byte-by-byte implementation was not good - fine, let's do it better.
The macro is just a macro, so there's no type-checking of its arguments - I like the idea of a strongly-typed utility (that has a good implementation). But it's ok - if you and Donatas are ok with the macro, I'm not lying on the tracks.
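If the strongly-typed helper were wanted later, one hypothetical way to get both the type checking and the fast implementation (not something this PR adds) is a static inline wrapper that keeps the v6-specific signature but compiles down to the same memcmp:

```c
#include <string.h>
#include <netinet/in.h>

/* Hypothetical typed wrapper: argument types are checked by the compiler,
 * while the body is the same single memcmp used by IPV6_ADDR_CMP. */
static inline int in6addr_cmp_typed(const struct in6_addr *a,
				    const struct in6_addr *b)
{
	return memcmp(a, b, sizeof(*a));
}
```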
memcmp returns the same ordering as in6addr_cmp but does it significantly faster. Let this be a lesson against reimplementing something the C library already provides. Signed-off-by: Donald Sharp <[email protected]>
This function should just be memcmp. Signed-off-by: Donald Sharp <[email protected]>
Force-pushed from b44822b to 4cfe5aa
LGTM
Signed-off-by: Donald Sharp <[email protected]>
Force-pushed from 4cfe5aa to ff9781e
Referenced by a SONiC build PR (…net#21199):

Why I did it: Added patches from FRR to support scaling BGP neighbors to 256/514. Below are the patches:

| Patch | FRR pull request |
| ------ | --------- |
| 0069-lib-nexthop-code-should-use-uint16_t-for-nexthop-cou.patch | FRRouting/frr#16967 |
| 0070-Allow-16-bit-size-for-nexthops.patch | FRRouting/frr#17023 |
| 0071-zebra-Only-notify-dplane-work-pthread-when-needed.patch | FRRouting/frr#17062 |
| 0072-Fix-up-improper-handling-of-nexthops-for-nexthop-tra.patch | FRRouting/frr#17076 |
| 0073-remove-in6addr-cmp.patch | FRRouting/frr#17312 |
| 0074-bgp-best-port-reordering.patch | FRRouting/frr#15572 |
| 0075-bgp-mp-info-changes.patch | FRRouting/frr#16961 |
| 0076-Optimizations-and-problem-fixing-for-large-scale-ecmp-from-bgp.patch | FRRouting/frr#17229 |
As a note, switching from in6addr_cmp to memcmp significantly reduced CPU convergence time from ~3 minutes 30 seconds to ~2 minutes 30 seconds with 100k routes x 256 ECMP in bgp.