
[portsorch] fix wrong orchagent behaviour when LAG member gets disabled #1166

Merged: lguohan merged 1 commit into sonic-net:master on Jan 28, 2020

Conversation

stepanblyschak
Contributor

by teamd

Originally, portsorch was designed to remove a LAG member from its LAG when
the member got disabled by teamd. This could lead to potential issues,
including flooding to that port and loops, since after removal the member
becomes a plain switch port.
Now portsorch makes use of SAI_LAG_MEMBER_ATTR_INGRESS_DISABLE and SAI_LAG_MEMBER_ATTR_EGRESS_DISABLE
to control collection/distribution through that LAG member port.
With this new flow, teammgrd and teamsyncd are candidates for refactoring, especially the teamsyncd
warm-reboot logic, since we no longer need to compare old and new LAGs and LAG members.
Teamsyncd's job is simply to signal a "status" field update without waiting for the reconciliation timer to expire.
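
For illustration, a minimal sketch of driving these two SAI attributes instead of removing the member (assumptions, not the PR's literal code: `sai_lag_api` is the usual orchagent-style global LAG API pointer, and the helper name is hypothetical):

```cpp
#include <sai.h>

extern sai_lag_api_t *sai_lag_api;  // bound via sai_api_query() at startup

// Enable/disable collection (ingress) and distribution (egress)
// on an existing LAG member instead of removing it from the LAG.
static bool setLagMemberEnabled(sai_object_id_t lagMemberId, bool enabled)
{
    sai_attribute_t attr;

    // Collector state: whether traffic received on the member is accepted.
    attr.id = SAI_LAG_MEMBER_ATTR_INGRESS_DISABLE;
    attr.value.booldata = !enabled;
    if (sai_lag_api->set_lag_member_attribute(lagMemberId, &attr) != SAI_STATUS_SUCCESS)
    {
        return false;
    }

    // Distributor state: whether the member participates in transmit hashing.
    attr.id = SAI_LAG_MEMBER_ATTR_EGRESS_DISABLE;
    attr.value.booldata = !enabled;
    return sai_lag_api->set_lag_member_attribute(lagMemberId, &attr) == SAI_STATUS_SUCCESS;
}
```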

e.g., one port channel went down on the peer:

```
admin@arc-switch1025:~$ show interfaces portchannel
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev         Protocol     Ports
-----  ---------------  -----------  --------------
 0001  PortChannel0001  LACP(A)(Up)  Ethernet112(S)
 0002  PortChannel0002  LACP(A)(Up)  Ethernet116(S)
 0003  PortChannel0003  LACP(A)(Up)  Ethernet120(S)
 0004  PortChannel0004  LACP(A)(Dw)  Ethernet124(D)
admin@arc-switch1025:~$ docker exec -it syncd sx_api_lag_dump.py
LAG Members Table
===============================================================================================================
| SWID| LAG Logical Port| LAG Oper State| PVID| Member Port ID| Port Label| Collector| Distributor| Oper State|
===============================================================================================================
|    0        0x10000000              UP     1|        0x11900|         29|    Enable|      Enable|         UP|
===============================================================================================================
|    0        0x10000100              UP     1|        0x11b00|         30|    Enable|      Enable|         UP|
===============================================================================================================
|    0        0x10000200              UP     1|        0x11d00|         31|    Enable|      Enable|         UP|
===============================================================================================================
|    0        0x10000300            DOWN     1|        0x11f00|         32|   Disable|     Disable|         UP|
===============================================================================================================
```

Signed-off-by: Stepan Blyschak <[email protected]>

What I did

Why I did it

How I verified it

Details if related

@stepanblyschak force-pushed the lag_members_collec_distrib branch from 2dd9cd8 to 320eb73 on January 13, 2020 13:28
@stepanblyschak force-pushed the lag_members_collec_distrib branch from 320eb73 to 8a51225 on January 13, 2020 17:14
@stepanblyschak marked this pull request as ready for review on January 13, 2020 17:16
@yxieca left a comment
Contributor

The change generally looks good to me. Just one suggestion:

I suggest changing addLagMember to set the initial member state to disabled. That way, if we decide in the future to create the LAG and its members according to configuration, without waiting for a kernel LAG state change, we will be safe. A sketch of this idea follows below.

This suggestion is not meant to block check-in; rather, it is intended to start a discussion. Once we have a collective decision, we can move ahead and merge the change.
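
A hedged sketch of that suggestion (illustrative only, not the merged code; `gSwitchId` is the usual orchagent switch object id, and the function name is hypothetical): the disable attributes are passed at creation time, so a new member starts disabled instead of inheriting the SAI default (enabled).

```cpp
#include <sai.h>

extern sai_lag_api_t *sai_lag_api;   // bound via sai_api_query() at startup
extern sai_object_id_t gSwitchId;    // orchagent's switch object id

// Create a LAG member with collection/distribution disabled from the start.
sai_status_t createDisabledLagMember(sai_object_id_t lagId,
                                     sai_object_id_t portId,
                                     sai_object_id_t &lagMemberId)
{
    sai_attribute_t attrs[4];

    attrs[0].id = SAI_LAG_MEMBER_ATTR_LAG_ID;
    attrs[0].value.oid = lagId;

    attrs[1].id = SAI_LAG_MEMBER_ATTR_PORT_ID;
    attrs[1].value.oid = portId;

    // Start the member disabled rather than relying on the SAI default.
    attrs[2].id = SAI_LAG_MEMBER_ATTR_INGRESS_DISABLE;
    attrs[2].value.booldata = true;

    attrs[3].id = SAI_LAG_MEMBER_ATTR_EGRESS_DISABLE;
    attrs[3].value.booldata = true;

    return sai_lag_api->create_lag_member(&lagMemberId, gSwitchId, 4, attrs);
}
```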

@stepanblyschak
Contributor Author

@yxieca My concern is that when we introduce defaults for the collection/distribution attributes inside addLagMember, we will set those defaults during warm start too, which on Mellanox means we will set them in SAI/SDK/HW. If the default is disabled, we will disrupt traffic until the member gets enabled again. Avoiding setting the default status on warm start would require special warm-reboot handling logic in orchagent, and orchagent would probably need to check the pre-reboot LAG member status in APPL DB during bake.

If you remember, there was the same problem with the port admin status attribute: orchagent set it to down on port creation. Back then, we discussed it internally and decided that setting defaults is not orchagent's responsibility; it is the manager's responsibility. So we changed portmgrd and orchagent to set the admin status that config DB wants, or, if there is no admin status in config DB for a port, to set a default one.

If your concern is that we will add a LAG member with an undefined collection/distribution status until the kernel state changes, we will not. There are no LAG members until teamsyncd produces '"PortChannelX:EthernetY" : {"status": "disable"}' to APPL_DB; on such a message, orchagent will first create the LAG member and then set collection/distribution to the disabled state.
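
To make that ordering concrete, a self-contained sketch of the producer side (table and field names follow the comment above and swss-common conventions; the exact value strings teamsyncd writes are an assumption here):

```cpp
#include <string>
#include <vector>
#include "dbconnector.h"
#include "producerstatetable.h"

int main()
{
    // teamsyncd-style producer: publish the LAG member status to APPL_DB.
    swss::DBConnector applDb("APPL_DB", 0);
    swss::ProducerStateTable lagMemberTable(&applDb, "LAG_MEMBER_TABLE");

    // Key "PortChannelX:EthernetY", field "status", as quoted above.
    std::vector<swss::FieldValueTuple> fvs = {{"status", "disabled"}};
    lagMemberTable.set("PortChannel0004:Ethernet124", fvs);

    // On consuming this entry, orchagent first creates the LAG member
    // (if it does not exist yet) and only then applies the disabled
    // collection/distribution state, so no window with SAI defaults occurs.
    return 0;
}
```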

In general, I think that with this approach teammgrd and teamsyncd can be considered for refactoring, especially the reconciliation logic inside teamsyncd. That, however, is not an intention of this PR, nor is changing the default LAG member collection/distribution status from the SAI default (which is enabled) to an orchagent default (disabled).

@yxieca
Contributor

yxieca commented Jan 14, 2020

@stepanblyschak The warm reboot case is a good catch! I understand that the member creation and state setting are back to back. I was just thinking about which of the following two cases, in normal operation, would be more harmful:

  • having a member enabled briefly when it should be disabled.
  • having a member disabled briefly when it should be enabled.

I agree that this discussion is beyond the scope of this change.

@yxieca
Contributor

yxieca commented Jan 14, 2020

retest this please

@yxieca
Contributor

yxieca commented Jan 14, 2020

@stepanblyschak can you fix the test? Thanks!

@stepanblyschak
Contributor Author

retest this please

@stepanblyschak
Contributor Author

@yxieca

Looks like the PR checker does not run tests against the changes in the PR:

Analysis of the /usr/bin/orchagent binary in the built VS docker image:

```
[stepanb@r-build-sonic03] tmp
➜ wget https://sonic-jenkins.westus2.cloudapp.azure.com/job/vs/job/sonic-swss-build-pr/1311/artifact/buildimage/target/docker-sonic-vs.gz
--2020-01-16 15:51:45--  https://sonic-jenkins.westus2.cloudapp.azure.com/job/vs/job/sonic-swss-build-pr/1311/artifact/buildimage/target/docker-sonic-vs.gz
Resolving sonic-jenkins.westus2.cloudapp.azure.com (sonic-jenkins.westus2.cloudapp.azure.com)... 52.250.106.22
Connecting to sonic-jenkins.westus2.cloudapp.azure.com (sonic-jenkins.westus2.cloudapp.azure.com)|52.250.106.22|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 202624421 (193M) [application/x-gzip]
Saving to: ‘docker-sonic-vs.gz’

100%[================================================================================================================================================>] 202,624,421 4.67MB/s   in 64s

2020-01-16 15:52:50 (3.01 MB/s) - ‘docker-sonic-vs.gz’ saved [202624421/202624421]

[stepanb@r-build-sonic03] tmp
➜ docker load < docker-sonic-vs.gz
ebb9ae013834: Loading layer [==================================================>]  105.6MB/105.6MB
af40c19d7332: Loading layer [==================================================>]  84.47MB/84.47MB
8bb26b79b70f: Loading layer [==================================================>]  100.7MB/100.7MB
7a05913a0ef1: Loading layer [==================================================>]  141.6MB/141.6MB
516502bbeb79: Loading layer [==================================================>]  41.98MB/41.98MB
94124dabf1e5: Loading layer [==================================================>]  1.042MB/1.042MB
91399db88a65: Loading layer [==================================================>]  559.6kB/559.6kB
dcbec2dfb730: Loading layer [==================================================>]  972.8kB/972.8kB
2e22e0abd121: Loading layer [==================================================>]  561.7kB/561.7kB
e802eb064b53: Loading layer [==================================================>]  1.233MB/1.233MB
f9cedf79b180: Loading layer [==================================================>]  1.316MB/1.316MB
Loaded image: docker-sonic-vs:sonic-swss-build-pr.1311
[stepanb@r-build-sonic03] tmp
➜ docker run -d --name vs docker-sonic-vs:sonic-swss-build-pr.1311
7ed05c25475b3d2516c7188e4de4ec38752f48f61f3606f749b0295eee878b5a
[stepanb@r-build-sonic03] tmp
➜ docker exec -it vs bash
root@7ed05c25475b:/# apt-get update && apt-get install binutils
Hit:1 http://packages.microsoft.com/repos/sonic-dev jessie InRelease
Ign:2 http://debian-archive.trafficmanager.net/debian stretch InRelease
Hit:3 http://debian-archive.trafficmanager.net/debian-security stretch/updates InRelease
Hit:4 http://debian-archive.trafficmanager.net/debian stretch-backports InRelease
Hit:5 http://debian-archive.trafficmanager.net/debian stretch Release
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  binutils-doc
The following NEW packages will be installed:
  binutils
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 3770 kB of archives.
After this operation, 23.9 MB of additional disk space will be used.
Get:1 http://debian-archive.trafficmanager.net/debian stretch/main amd64 binutils amd64 2.28-5 [3770 kB]
Fetched 3770 kB in 1s (2354 kB/s)
Selecting previously unselected package binutils.
(Reading database ... 15619 files and directories currently installed.)
Preparing to unpack .../binutils_2.28-5_amd64.deb ...
Unpacking binutils (2.28-5) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Setting up binutils (2.28-5) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
root@7ed05c25475b:/# strings /usr/bin/orchagent | grep setCollec
root@7ed05c25475b:/#
root@7ed05c25475b:/#
root@7ed05c25475b:/# strings /usr/bin/orchagent | grep setDistri
root@7ed05c25475b:/#
root@7ed05c25475b:/#
root@7ed05c25475b:/# md5sum /usr/bin/orchagent
f33acbda19e1a12635abe447099f6800  /usr/bin/orchagent
root@7ed05c25475b:/#
```

Analysis of the orchagent binary in the built swss Debian package:

```
[stepanb@r-build-sonic03] tmp
➜ wget https://sonic-jenkins.westus2.cloudapp.azure.com/job/vs/job/sonic-swss-build-pr/1311/artifact/swss_1.0.0_amd64.deb
--2020-01-16 16:00:45--  https://sonic-jenkins.westus2.cloudapp.azure.com/job/vs/job/sonic-swss-build-pr/1311/artifact/swss_1.0.0_amd64.deb
Resolving sonic-jenkins.westus2.cloudapp.azure.com (sonic-jenkins.westus2.cloudapp.azure.com)... 52.250.106.22
Connecting to sonic-jenkins.westus2.cloudapp.azure.com (sonic-jenkins.westus2.cloudapp.azure.com)|52.250.106.22|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 939184 (917K) [application/x-debian-package]
Saving to: ‘swss_1.0.0_amd64.deb’

100%[================================================================================================================================================>] 939,184      739KB/s   in 1.2s

2020-01-16 16:00:48 (739 KB/s) - ‘swss_1.0.0_amd64.deb’ saved [939184/939184]

[stepanb@r-build-sonic03] tmp
➜ mkdir stepanb_tmp
[stepanb@r-build-sonic03] tmp
➜ dpkg -x swss_1.0.0_amd64.deb stepanb_tmp/
[stepanb@r-build-sonic03] tmp
➜ strings stepanb_tmp/usr/bin/orchagent | grep setCollec
setCollectionOnLagMember
[stepanb@r-build-sonic03] tmp
➜ strings stepanb_tmp/usr/bin/orchagent | grep setDistri
setDistributionOnLagMember
[stepanb@r-build-sonic03] tmp
➜ md5sum stepanb_tmp/usr/bin/orchagent
3907b3d46b800a554f7368aad68df72c  stepanb_tmp/usr/bin/orchagent
```

This is why the updated test_portchannel.py fails: it simply runs against the old orchagent.

@yxieca
Contributor

yxieca commented Jan 24, 2020

retest this please

3 similar comments
@daall
Contributor

daall commented Jan 25, 2020

retest this please

@lguohan
Contributor

lguohan commented Jan 26, 2020

retest this please

@daall
Contributor

daall commented Jan 27, 2020

retest this please

@prsunny changed the title from "[portsorch] fix worng orchagent behaviour when LAG member gets disabled" to "[portsorch] fix wrong orchagent behaviour when LAG member gets disabled" on Jan 28, 2020
@prsunny requested a review from judyjoseph on January 28, 2020 01:05
@lguohan merged commit a35afac into sonic-net:master on Jan 28, 2020
lguohan pushed a commit that referenced this pull request Jan 28, 2020
prsunny pushed a commit that referenced this pull request Jan 28, 2020
lguohan pushed a commit that referenced this pull request Jan 30, 2020
abdosi added a commit that referenced this pull request Feb 4, 2020
@abdosi
Contributor

abdosi commented Feb 4, 2020

@lguohan @stepanblyschak @yxieca

Reverted this for 201911 since we are seeing an issue on other platforms. Will cherry-pick again once that gets resolved.

abdosi pushed a commit that referenced this pull request Feb 24, 2020
EdenGri pushed a commit to EdenGri/sonic-swss that referenced this pull request Feb 28, 2022
oleksandrivantsiv pushed a commit to oleksandrivantsiv/sonic-swss that referenced this pull request Mar 1, 2023
What I did:

Moved the SAI header to v1.8.1.
```
7cd3a7ed84db3fc9cec13496a5339b6fe1888bb7 (HEAD, tag: v1.8.1, origin/v1.8) Update SAI version to V1.8.1 (sonic-net#1218)
5913e4cdd0c9c7ae859baa2e18086327b39a94da Fix error when compiling Broadcom SAI with v1.8.0 (sonic-net#1216)
5a98bc3c7e86c01f3cf702054f9af7c7c5ca6daf (HEAD, tag: v1.8.0, origin/master, origin/HEAD, master) Update version to 1.8.0 (sonic-net#1207)
b3244ceceb45184ffe37da55bb9a98ef126050ce saineighbor.h: Updated SAI_NEIGHBOR_ENTRY_ATTR_ENCAP_INDEX and deprecated SAI_NEIGHBOR_ENTRY_ATTR_ENCAP_IMPOSE_INDEX (sonic-net#1202)
8731ca6e09c7ba99b0b009e5821d80598e216756 Add source/dest/double NAPT entry available attributes (sonic-net#1194)
f053d899feb9517f2db43ee462589a30572b5ed1 Add switch attributes for hash offset configuration. (sonic-net#1195)
13e5cd6940f9a0da1878d00f08e5941e09f16e7f PRBS RX State Data Type (sonic-net#1179)
9755845a06525a3c17f03e7b936a70783e8ef068 Packet header based VRF classification (sonic-net#1185)
2369ecb59fff1a5cae948d41eea06bf8b71330b2 SAI versioning (sonic-net#1183)
744279839c176e68b19734657975e3f5ec6f1a32 Replaced SAI_SWITCH_ATTR_MACSEC_OBJECT_ID with SAI_SWITCH_ATTR_MACSEC_OBJECT_LIST (sonic-net#1199)
584c724864fe565357e82d097ddcc7363bddefac [CI] Set up CI&PR with Azure Pipelines (sonic-net#1200)
08192237963174cc60edae9b4812a39c43b291fd Add attribute to query available packet DMA pool size (sonic-net#1198)
f092ef1e3ce695fc3f9552721025695312b961a2 Add IPv6 flow label hash attribute. (sonic-net#1192)
cbc9562bb7a8f2c3a79702b99be55f3b3afa6957 Override VRF (sonic-net#1186)
1eb35afdb2146baf40e6c2b8f2f8bfe99075eaee Add SAI_SWITCH_ATTR_SWITCH_HARDWARE_INFO format for GB MDIO sysfs access (sonic-net#1171)
b2d4c9a57c7f00b2632c35ca5eb3dd6480f7916a Switch scoped tunnel attributes (sonic-net#1173)
96adc95bf8316e1905143d9ecd21f32a43e80d7f Enhancements for MPLS support (sonic-net#1181)
3dcf1f2028da4060b345ad78e8a0c87d225bf5d0 Support for ACL extensions in metadata (sonic-net#1178)
24076be95b871e8f82ecaeb908cad951dc68896c [meta] Add support for allow empty list tag (sonic-net#1190)
a2b3344cdde0bf5a4f8e98e1c676a658c0c615b0 spell check fixes (sonic-net#1189)
bf4271bab6e8884bd96050bcba44e3134adaaec3 Do not call sai_metadata_sai get APIs before checking if they are allocated (sonic-net#1182)
5d5784dc3dbfc713c92ae7d2c472680b837bb002 [macsec]: Separate XPN configuration attribute from read-only attribute (sonic-net#1169)
6d5a9bf5ad17cb82621cabbe2449524320930606 [macsec]: add SAI_MACSEC_ATTR_SUPPORTED_CIPHER_SUITE_LIST (sonic-net#1172)
e72c8f3a0cc543cb228554be82c97a63db917740 [meta] Print each tool version in Makefile (sonic-net#1177)
8f19677da88c7494d563ef7c5acb0529ecbd0b6e [meta] Add check for START, END and RANBE_BASE enums (sonic-net#1175)
24ad7906f145930b2e25682b6248909289d39e72 [meta] Create sai_switch_pointers_t struct (sonic-net#1174)
4f5f84df3fcd0e146707df41d3e2837c48f7c760 Tunnel loopback packet action as resource (sonic-net#1163)
8a0e82c57aa0e22e696158735516904e7dc14052 [meta] Add create only oid attribute check on switch object (sonic-net#1170)
14cf50772e478551920963ecf11f4fd019a0c106 Remove obsolete stub folder (sonic-net#1168)
f14f406340e4f5f1b1d674f6fdd5fd861a54c877 [meta] Use safer calloc for integer overflow check (sonic-net#1166)
```

This PR also includes the changes from sonic-net#815.

SAI commit b2d4c9a57c7f00b2632c35ca5eb3dd6480f7916a, "Switch scoped tunnel attributes (sonic-net#1173)", needed changes in sai_redis_switch.cpp and sai_vs_switch.cpp to compile.

How I verified it:

Verified that libsairedis*.deb, syncd*.deb, and swss*.deb build fine.

Co-authored-by: Ann Pokora <[email protected]>