CVE-2007-4559 Patch #1
Open · TrellixVulnTeam wants to merge 2 commits into gnoliyil:main from TrellixVulnTeam:main
Conversation
github-actions bot force-pushed the main branch 10 times, most recently from 3570e6e to d167e14 on October 11, 2022 17:36
github-actions bot pushed a commit that referenced this pull request on Oct 11, 2022:
Dart FIDL lists were made unmodifiable in https://fuchsia-review.googlesource.com/c/fuchsia/+/665177/, which seems to be making this logic crash at runtime. Fixes this error from some logs I saw in an unrelated bug, fxb/101341:

```
[00020.831] [30826][60673][flutter_aot_product_runner.cm] ERROR: [flutter/runtime/dart_vm_initializer.cc:41] Unhandled Exception: Unsupported operation: Cannot remove from an unmodifiable list
#0 UnmodifiableListMixin.removeAt (dart:_internal/list.dart:164)
#1 new SettingsStateImpl.<anonymous closure>.<anonymous closure> (package:ermine/src/states/settings_state_impl.dart:325)
#2 Function._apply (dart:core-patch/function_patch.dart:11)
#3 Function.apply (dart:core-patch/function_patch.dart:34)
#4 Action.call (package:mobx/src/core/action.dart:53)
#5 runInAction (package:mobx/src/api/action.dart:11)
#6 new SettingsStateImpl.<anonymous closure> (package:ermine/src/states/settings_state_impl.dart:313)
#7 ChannelService.start (package:ermine/src/services/settings/channel_service.dart:39) <asynchronous suspension>
#8 Future.wait.<anonymous closure> (dart:async/future.dart:522) <asynchronous suspension>
#9 SettingsStateImpl.start (package:ermine/src/states/settings_state_impl.dart:387) <asynchronous suspension>
```

Change-Id: I5adffdec4e64beafc2cc97da5e23197e4ec2e298
Reviewed-on: https://fuchsia-review.googlesource.com/c/experiences/+/684597
Reviewed-by: Sanjay Chouksey <[email protected]>
Reviewed-by: Charles Whitten <[email protected]>
Commit-Queue: Alexander Biggs <[email protected]>
github-actions bot force-pushed the main branch 17 times, most recently from dc3f685 to bf40e67 on October 12, 2022 09:31
github-actions bot force-pushed the main branch 15 times, most recently from 4bd2534 to bc81409 on December 8, 2022 01:29
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
Change the lock used by the implementation of the VM's PageQueues from a CriticalMutex to a Spinlock. This lock currently protects most of the state of the PageQueues object. In general, there should be one of these objects per NUMA node in the system, but in practice there is only one, since we do not currently run on many (any?) systems with multiple NUMA nodes.

Instrumentation has shown that, generally speaking, the amount of work done inside of this lock is quite low: typically on the order of 300 nsec or so in a nested virtualized environment (something of a worst-case scenario). When the lock is not heavily contended, this is fine. It takes only a handful of nanoseconds (20 or so) to drop the lock when the time comes, provided that the lock has not become so contended that threads have started to block on it.

But when threads do start to pile up behind this lock, things get really bad. Now, when the lock is released, there is always one or more threads in the wait queue, one of which needs to be woken. In a _heavily_ instrumented kernel (again, nested VM), the act of dropping the lock and waking/scheduling the next thread to run takes on average something like 10 usec, or about 33 times as long as the work actually done inside the lock. Provided that the work done inside the lock is well bounded, it would be better to use a spinlock here instead of a blocking mutex.

There seem to be two reasons that this lock is currently a blocking mutex:

1) In the past, the scanner threads would hold this lock for relatively long periods of time, where it would not be appropriate to use a spinlock.
2) While holding this lock, various threads occasionally need to signal a kernel Event as the queue sizes change, in order to trigger LRU scanning and aging behaviors. One cannot signal an event while holding a spinlock, since signaling is an action which might result in immediate preemption if the signaled thread ends up displacing the signaling thread in the scheduler's run queue.

Issue #1 does not seem to be much of an issue anymore. The scanner threads now attempt to batch their work in ways which bound the worst-case amount of time they might spend in the lock fetching a batch of work to scan.

Issue #2 was addressed using a bit of static-analysis trickery. After converting the lock from a critical mutex to a spinlock, the locations where the events were being directly signaled were replaced with code which simply records that signaling the events is now "pending". A special zero-length static-analysis capability (the PendingSignalToken) was added. This object behaves like a lock from a static-analysis perspective, but under the hood it does nothing at all. It is also required to be acquired before the object's main spinlock. Next, annotations were added to the new "PendSignal" method which demand that the PendingSignalToken be held in order to pend a signal. Finally, an RAII-style guard was introduced which "acquires" and "releases" the token, and always flushes out any pending signals when the guard goes out of scope. The Acquire/Release methods of the token itself were made private, and the Guard was made a friend of the Token.

So, with all of this in place:

1) No events will ever be signaled from inside of the (now) spinlock.
2) Instead, they will be pended, but this requires holding the token.
3) The token cannot simply be acquired by random code. It may only be acquired by instantiating the guard.
4) A guard can only be instantiated before acquiring the spinlock (which is also only ever acquired using a guard object). This means that the token guard will always go out of scope after the lock guard.
5) The guard makes sure to always flush the pending signals, so it is now extremely difficult to make a mistake and forget to flush them.
6) The static annotations make sure that all code paths which _might_ result in a pended signal will flush the pending signals at a point after the spinlock has been dropped.

The result is a significant performance improvement in scenarios where the PageQueues lock had previously been heavily contended.

Change-Id: I4c2b26007a58f1771cf8c4b130e18c47893c790f
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/762186
Reviewed-by: Corey Tabaka <[email protected]>
Reviewed-by: Adrian Danis <[email protected]>
Reviewed-by: Rasha Eqbal <[email protected]>
Commit-Queue: John Grossman <[email protected]>
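The pend-then-flush pattern described above can be sketched outside the kernel as well. The following is a minimal Python analogue (all names here are mine, not the kernel's; a `threading.Lock` stands in for the spinlock and a context manager stands in for the RAII guard): signals are only recorded while the lock is held, and are delivered after it is released.

```python
import threading
from contextlib import contextmanager


class PageQueues:
    """Toy sketch of the pend-then-flush signaling pattern.

    Wakeups are never performed inside the critical section; they are
    recorded as "pending" under the lock and flushed by the guard after
    the lock has been dropped.
    """

    def __init__(self):
        self._lock = threading.Lock()  # stands in for the spinlock
        self._events = {"lru": threading.Event(), "aging": threading.Event()}
        self._pending = set()          # signals recorded under the lock

    @contextmanager
    def signal_guard(self):
        # Entered *before* the lock and exited *after* it, so the flush
        # always happens once the critical section is over.
        try:
            yield
        finally:
            for name in sorted(self._pending):
                self._events[name].set()  # safe: lock no longer held
            self._pending.clear()

    def mark_accessed(self):
        with self.signal_guard():
            with self._lock:
                # ... queue bookkeeping would go here ...
                self._pending.add("lru")  # pend the signal, don't fire it
```

Python cannot enforce the acquisition order statically the way the kernel's lock annotations do, but the nesting of the two `with` blocks mirrors the guard-outside-lock discipline.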
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
This relands commit d9f1226af09993c50626196abc1585c65e783359. PS #1 is the same as that commit, but later PSs add fixes for the same flake described in https://fxrev.dev/781465

Moves sending of scan results to the location sensor into its own future, such that it doesn't block sending scan results to any requesters and is instead serviced in parallel.

Bug: 73821
Bug: 76310
Bug: 116134
Test: fx test wlancfg-tests
Multiply: wlancfg-tests
Change-Id: Icddc612be3b835a40baff6742b15f593c1a0251e
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/781466
Commit-Queue: Auto-Submit <[email protected]>
Reviewed-by: Kevin Sakuma <[email protected]>
Fuchsia-Auto-Submit: Marc Khouri <[email protected]>
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
This reverts commit e2f75411a5716c4f0bbe42ab6c2452c19209d8a6.

Reason for revert: breaks compilation, raced with another change

Original change's description:
> [wlan][policy] Location sensor can't block scan results
>
> This relands commit d9f1226af09993c50626196abc1585c65e783359. PS #1
> is the same as that commit, but later PSs add fixes for the same
> flake described in https://fxrev.dev/781465
>
> Moves sending of scan results to the location sensor into its own
> future, such that it doesn't block sending scan results to any
> requesters and is instead serviced in parallel.
>
> Bug: 73821
> Bug: 76310
> Bug: 116134
> Test: fx test wlancfg-tests
> Multiply: wlancfg-tests
> Change-Id: Icddc612be3b835a40baff6742b15f593c1a0251e
> Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/781466
> Commit-Queue: Auto-Submit <[email protected]>
> Reviewed-by: Kevin Sakuma <[email protected]>
> Fuchsia-Auto-Submit: Marc Khouri <[email protected]>

Bug: 73821
Bug: 76310
Bug: 116134
Change-Id: Id557c222f8ca731fec9af7e67be5df117d0c77e6
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/781246
Reviewed-by: RubberStamper 🤖 <[email protected]>
Commit-Queue: Marc Khouri <[email protected]>
gnoliyil force-pushed the main branch 2 times, most recently from f5a427c to 306a6c9 on January 27, 2024 07:45
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
This reverts commit 9592d6baf89af9cd63675e1027d3e0c4b0755a51.

Reason for revert: Broke `fx test`:

```
$ fx test netstack-gotests
Logging all output to: /usr/local/google/home/ghanan/Projects/fuchsia/out/core.qemu-x64/fxtest-2023-06-10T16:55:39.191757.log
Use the `--logpath` argument to specify a log location or `--no-log` to disable
Output will be printed to the console for tests taking more than 2 seconds.
To change the timeout threshold, specify the `-s/--slow` flag.
To show all output, specify the `-o/--output` flag.
Found 660 total tests in //out/core.qemu-x64/tests.json
Will run 1 test

Unhandled exception:
type '_Map<String, dynamic>' is not a subtype of type 'Iterable<dynamic>'
#0 new PackageManifestList.fromJson (package:fxtest/package_manifest_list.dart:29:50)
#1 TestBundle.readPackageManifestsList (package:fxtest/test_bundle.dart:172:29) <asynchronous suspension>
#2 TestBundle.calculateMinimalBuildTargets (package:fxtest/test_bundle.dart:99:9) <asynchronous suspension>
#3 FuchsiaTestCommand.runTestSuite (package:fxtest/cmd.dart:186:31) <asynchronous suspension>
```

Original change's description:
> [fxtest] Update to new package manifest list format
>
> We switched the package manifest list format from a line-oriented file
> to a json-based one, but we missed updating fxtest.
>
> Fixed: 128829
> Change-Id: I167204cfeff896fd4045dedd30c519fb7b18d0a6
> Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/869378
> Fuchsia-Auto-Submit: Erick Tryzelaar <[email protected]>
> Reviewed-by: Ankur Mittal <[email protected]>
> Commit-Queue: Auto-Submit <[email protected]>

Change-Id: I1daa7c34087f26b96499f6c241283954f3f58d6b
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/869617
Commit-Queue: Ghanan Gowripalan <[email protected]>
Reviewed-by: RubberStamper 🤖 <[email protected]>
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
1. Automated the installation, uninstallation, and formatting steps of the coding guidelines.
2. Added build-system to pyproject.toml to enable local pip install of HoneyDew (which is used by the scripts in #1).
   a. Also removed mypy.toml and moved its content to pyproject.toml.
3. Updated the formatting section in the coding guidelines:
   a. Added instructions for running isort on the CLI to sort imports (instead of relying on vscode).
   b. Added instructions for running autoflake on the CLI to remove unused imports (instead of relying on vscode).
4. Accordingly updated the recommended vscode settings and plugins.
5. Ran these updated coding guidelines and fixed the code base.

Change-Id: If3af9e87619e42b5e07ad1ac9e56a934d986efed
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/868943
Reviewed-by: Kevin Cho <[email protected]>
Commit-Queue: Pavan Kumar Juturu <[email protected]>
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
Optimization #1: All 3 implementations of TransactionHandler delegate to LockManager for implementing transaction_lock, read_lock, and write_lock. These methods are async, which requires heap-allocating the futures. Synchronously exposing the LockManager removes a heap allocation.

Optimization #2: Add a small-vec optimization to the LockKey container. The implementation of LockManager::lock was taking a slice of keys, converting the slice to a vec, then allocating another vec to move the keys into. Most callers only take a single lock. Adding a small-vec optimization that can store a single lock inline removes 2 heap allocations when only taking a single lock.

Change-Id: I4233994d4cac795887c68720f20d0dbd76b0d836
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/914276
Reviewed-by: James Sullivan <[email protected]>
Commit-Queue: Chris Drouillard <[email protected]>
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
The hypothesis for why these tests flake is the following:

- A local component (for example, `mock_avrcp_client`) connects to some capabilities.
- The local component returns (exits).
- component_manager shuts down the local component.
- Before the component's namespace vfs has a chance to process the Open requests from #1, component_manager drops the vfs.
- Because the Open requests were dropped, we never run routing tasks for them.
- Because we never run routing tasks, the component on the other end is never started.
- The test stalls waiting for events that will never come.

To fix, change these tasks to never exit. We should eventually fix component_manager to guarantee processing of these requests, but that is a more complicated fix (b/303919602).

Multiply: bt-avrcp-smoke-test
Multiply: bt-a2dp-smoke-test
Multiply: bt-hfp-audio-gateway-smoke-test
Multiply: bt-avrcp-target-smoke-test
Multiply: bt-init-smoke-test
Multiply: bt-gap-smoke-test
Fixed: 133235
Change-Id: I47e8995430a15648372539ad6d26f2d68fdce9fd
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/928705
Reviewed-by: Marie Janssen <[email protected]>
Commit-Queue: Gary Bressler <[email protected]>
Reviewed-by: Ani Ramakrishnan <[email protected]>
gnoliyil pushed a commit that referenced this pull request on Jan 27, 2024:
…cpy_test (#75532)

Internal builds of the unittests with msan flagged mempcpy_test:

```
==6862==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x55e34d7d734a in length llvm-project/libc/src/__support/CPP/string_view.h:41:11
#1 0x55e34d7d734a in string_view llvm-project/libc/src/__support/CPP/string_view.h:71:24
#2 0x55e34d7d734a in __llvm_libc_9999_0_0_git::testing::Test::testStrEq(char const*, char const*, char const*, char const*, __llvm_libc_9999_0_0_git::testing::internal::Location) llvm-project/libc/test/UnitTest/LibcTest.cpp:284:13
#3 0x55e34d7d4e09 in LlvmLibcMempcpyTest_Simple::Run() llvm-project/libc/test/src/string/mempcpy_test.cpp:20:3
#4 0x55e34d7d6dff in __llvm_libc_9999_0_0_git::testing::Test::runTests(char const*) llvm-project/libc/test/UnitTest/LibcTest.cpp:133:8
#5 0x55e34d7d86e0 in main llvm-project/libc/test/UnitTest/LibcTestMain.cpp:21:10

SUMMARY: MemorySanitizer: use-of-uninitialized-value llvm-project/libc/src/__support/CPP/string_view.h:41:11 in length
```

What's going on here is that mempcpy_test.cpp's Simple test is using ASSERT_STREQ with a partially initialized char array. ASSERT_STREQ calls Test::testStrEq, which constructs a cpp::string_view. That constructor calls the private method cpp::string_view::length. When built with msan, the loop is transformed into multi-byte access, which then fails upon access.

I took a look at libc++'s __constexpr_strlen, which just calls __builtin_strlen(). Replacing the implementation of cpp::string_view::length with a call to __builtin_strlen() may still result in out-of-bounds access when the test is built with msan. It's not safe to use ASSERT_STREQ with a partially initialized array. Initialize the whole array so that the test passes.

GitOrigin-RevId: 3d32ed130a948acc4254b311256f7aebdcb69891
Original-Revision: 842a16013b1965fc619ba2cf7840f60cc6fccd32
Roller-URL: https://ci.chromium.org/b/8761691058572502529
CQ-Do-Not-Cancel-Tryjobs: true
Change-Id: If758f9f7af115dd12022db9e0d2dd409954b9685
Reviewed-on: https://fuchsia-review.googlesource.com/c/fuchsia/+/961057
Patching CVE-2007-4559
Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog post.

If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.