
Increased memory footprint #841

Closed · zolug opened this issue Mar 16, 2023 · 5 comments
Labels: performance (The problem related to system effectivity), question (Further information is requested)

zolug (Contributor) commented Mar 16, 2023

When running in AF_PACKET mode, there appears to be a memory increase in the NSM-supplied VPP process on NSM 1.8.

The root cause seems to be the amount of memory assigned to each af_packet socket: previously it was ~20 MB per socket, while with 1.8 it appears to be ~135 MB per af_packet socket.

When multiple interfaces are listed in the device-selector-file for the vpp-forwarder, VPP seems to create an af_packet socket for each of them, which can significantly increase the memory footprint (RssFile) and risks OOM.
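
As a worked example (the interface count here is hypothetical; the per-socket sizes are from the smaps dumps below): with 8 interfaces in the device-selector-file, that would be 8 × ~132 MiB ≈ 1 GiB of socket mappings on 1.8, versus 8 × 20 MiB = 160 MiB before.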

Is the increase due to the VPP version update? Are there any known NSM issues that the increased af_packet memory is meant to address?

Might be similar/related to: #332

NSM 1.8.0:

# smaps
7f6f5d22f000-7f6f6562f000 rw-s 00000000 00:07 20735                      socket:[20735]
Size:             135168 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:              135168 kB
Pss:              135168 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:    129516 kB
Private_Dirty:      5652 kB
Referenced:       135168 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd wr sh mr mw me ms mm 

# lsof
vpp_main   6800                     root  mem       REG                0,7                20735 socket:[20735]
vpp_main   6800                     root   13u     pack              20735       0t0        ALL type=SOCK_RAW

Before:

7f0655b5f000-7f0656f5f000 rw-s 00000000 00:08 134146836                  socket:[134146836]
Size:              20480 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:               20480 kB
Pss:               20480 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:     16384 kB
Private_Dirty:      4096 kB
Referenced:        20480 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
ProtectionKey:         0
VmFlags: rd wr sh mr mw me ms sd mm 
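
For context, here is a minimal C sketch (illustrative values only, not taken from VPP's actual configuration) of how a Linux AF_PACKET ring is sized: the kernel allocates tp_block_size × tp_block_nr bytes per ring, and the resulting shared mapping is exactly the kind of socket:[inode] region reported in the smaps dumps above.

```c
/* af_packet_ring.c - minimal sketch of AF_PACKET ring sizing.
 * Illustrative values only, not VPP's actual configuration.
 * Needs CAP_NET_RAW to run. */
#include <stdio.h>
#include <arpa/inet.h>       /* htons */
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_ether.h>  /* ETH_P_ALL */
#include <linux/if_packet.h> /* struct tpacket_req, PACKET_RX_RING */

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* Ring geometry: per-ring memory = tp_block_size * tp_block_nr. */
    struct tpacket_req req = {
        .tp_block_size = 1 << 20, /* 1 MiB per block       */
        .tp_block_nr   = 128,     /* 128 blocks -> 128 MiB */
        .tp_frame_size = 1 << 11, /* 2 KiB per frame       */
    };
    req.tp_frame_nr = (req.tp_block_size / req.tp_frame_size) * req.tp_block_nr;

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof req) < 0) {
        perror("PACKET_RX_RING"); return 1;
    }

    /* One shared mapping backs the ring; this is what shows up in
     * /proc/<pid>/smaps as an "rw-s" region against socket:[inode]. */
    size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
    void *ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    printf("RX ring mapped: %zu kB\n", len / 1024);
    return 0;
}
```

Read this way, one possible interpretation of the dumps is that the "before" mapping (20480 kB) corresponds to roughly a 16 MiB RX ring plus a 4 MiB TX ring, while the 1.8 mapping (135168 kB, i.e. 132 MiB) reflects a much larger ring geometry per socket. That would point at the frame/block sizing rather than a leak.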

zolug added the question and performance labels on Mar 16, 2023
LionelJouin moved this to 📋 To Do in Meridio on Mar 17, 2023
denis-tingaikin moved this to In Progress in Release v1.9.0 on Mar 22, 2023
glazychev-art (Contributor) commented

@zolug
Could you recheck the issue on the main branch? We've merged a couple of frame-size fixes.

zolug (Contributor, Author) commented Mar 24, 2023

> @zolug Could you recheck the issue on the main branch? We've merged a couple of frame-size fixes.

@glazychev-art
Thanks. Yesterday I checked #847, and the memory usage went back to the legacy values. I will try to check the veth-pair-related changes as well ASAP.

denis-tingaikin moved this from In Progress to Under review in Release v1.9.0 on Mar 24, 2023
zolug (Contributor, Author) commented Mar 27, 2023

Works as intended.

glazychev-art (Contributor) commented

@zolug
Thanks!
Can we close the issue?

zolug (Contributor, Author) commented Mar 28, 2023

> @zolug Thanks! Can we close the issue?

Yes, thanks for the quick fix.

github-project-automation bot moved this from Under review to Done in Release v1.9.0 on Mar 28, 2023
LionelJouin moved this from 📋 To Do to ✅ Done in Meridio on Apr 2, 2023
nsmbot pushed a commit that referenced this issue Aug 8, 2024
…k-vpp@main

PR link: networkservicemesh/sdk-vpp#841

Commit: 5b27c2f
Author: Network Service Mesh Bot
Date: 2024-08-08 05:10:25 -0500
Message:
  - Update go.mod and go.sum to latest version from networkservicemesh/sdk-kernel@main (#841)
PR link: networkservicemesh/sdk-kernel#671
Commit: c7b682d
Author: Network Service Mesh Bot
Date: 2024-08-08 05:06:17 -0500
Message:
    - Update go.mod and go.sum to latest version from networkservicemesh/sdk@main (#671)
PR link: networkservicemesh/sdk#1650
Commit: 3016313
Author: Nikita Skrynnik
Date: 2024-08-08 21:03:55 +1100
Message:
        - Add a timeout for Closes in begin.Server (#1650)
* fix corner cases of the begin chain element
* disable Test_RestartDuringRefresh
* add fresh context
* add extended context
* add refreshed close context everywhere in begin
* fix some unit tests
* unskip some tests
* fix golang linter issues
* debug
* cleanup
* fix race condition
* add unit tests
* fix go linter issues
* fix race condition
* apply review comments
---------
Signed-off-by: denis-tingaikin <[email protected]>
Signed-off-by: NikitaSkrynnik <[email protected]>
Signed-off-by: NSMBot <[email protected]>