Test example #1 (Closed)
Conversation
yakuizhao commented on Mar 6, 2018:
lgtm
lijinxia pushed a commit that referenced this pull request on May 11, 2018:
makefile: install the demo scripts
This was referenced Sep 7, 2018
This was referenced Nov 9, 2018
jsun26intel added a commit to jsun26intel/acrn-hypervisor that referenced this pull request on Oct 22, 2021:
In commit 7cc9c8f the pre-launched VM was set to an ACPI HW-Reduced platform, so IRQs for PCI MSI devices are allocated starting from 0. The Intel igb/igc driver might get IRQ 8 when requesting an IRQ, which conflicts with the IRQ of the RTC device:

[ 14.264954] genirq: Flags mismatch irq 8. 00000000 (enp0s8-TxRx-3) vs. 00000000 (rtc0)
[ 14.265411] ------------[ cut here ]------------
[ 14.265508] kernel BUG at drivers/pci/msi.c:376!
[ 14.265610] invalid opcode: 0000 [#1] PREEMPT SMP
[ 14.265710] CPU: 0 PID: 296 Comm: connmand Not tainted 5.10.52-acrn-sos -dirty #72
[ 14.265863] RIP: 0010:free_msi_irqs+0x182/0x1b0

This patch specifies some legacy PnP devices such as the UART and RTC in the ACPI DSDT table of the pre-launched VM, so that the IRQs from IOAPIC pins 4/3/8 are preserved before MSI devices request IRQs.

Tracked-On: #6704
Signed-off-by: Victor Sun <[email protected]>
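Declaring the legacy devices with fixed IRQ resources in the DSDT is what lets the guest OS reserve those IOAPIC pins before MSI allocation runs. A minimal sketch in ACPI Source Language of what such declarations could look like — the device names and I/O ranges here are illustrative assumptions, not taken from the actual patch:

```
// Hypothetical DSDT fragment: legacy RTC and UART PnP devices
// with fixed IRQ resources, so the guest reserves pins 8 and 4.
Device (RTC0)
{
    Name (_HID, EisaId ("PNP0B00"))   // AT real-time clock
    Name (_CRS, ResourceTemplate ()
    {
        IO (Decode16, 0x0070, 0x0070, 0x01, 0x08)
        IRQNoFlags () {8}             // claims IOAPIC pin 8
    })
}
Device (UAR1)
{
    Name (_HID, EisaId ("PNP0501"))   // 16550-compatible UART
    Name (_CRS, ResourceTemplate ()
    {
        IO (Decode16, 0x03F8, 0x03F8, 0x01, 0x08)
        IRQNoFlags () {4}             // claims IOAPIC pin 4
    })
}
```

With these resources published, the kernel's interrupt allocator treats IRQs 4 and 8 as taken, so an MSI device can no longer be handed IRQ 8 and collide with rtc0.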
wenlingz pushed a commit that referenced this pull request on Oct 25, 2021:
This was referenced Oct 25, 2021
This was referenced Nov 2, 2021
yonghuah added a commit to yonghuah/acrn-hypervisor that referenced this pull request on Feb 21, 2022:
'mevent_lmutex' is initialized as the default mutex type, and attempting to recursively lock this kind of mutex results in undefined behaviour. A recursive lock on 'mevent_lmutex' can be hit in the mevent thread when the user triggers a system reset from the user VM; in that case the user VM reboot hangs. The backtrace for this issue:

#1  in mevent_qlock () at core/mevent.c:93
#2  in mevent_delete_even at core/mevent.c:357          ===> recursive LOCK
#3  in mevent_delete_close at core/mevent.c:387
#4  in acrn_timer_deinit at core/timer.c:106
#5  in virtio_reset_dev at hw/pci/virtio/virtio.c:171
#6  in virtio_console_reset at hw/pci/virtio/virtio_console.c:196
#7  in virtio_console_destroy at hw/pci/virtio/virtio_console.c:1015
#8  in virtio_console_teardown_backend at hw/pci/virtio/virtio_console.c:1042
#9  in mevent_drain_del_list () at core/mevent.c:348    ===> 1st LOCK
#10 in mevent_dispatch () at core/mevent.c:472
#11 in main at core/main.c:1110

So the root cause is that the mevent mutex is locked recursively by the mevent thread itself (frame #9 takes the first lock and frame #2 locks it again), which is not allowed for a mutex with the default attribute. This patch changes the mutex type of 'mevent_lmutex' from default to PTHREAD_MUTEX_RECURSIVE, because recursive locking must be allowed: users of mevent may call mevent functions (where the mutex lock may be required) in teardown callbacks.

Tracked-On: #7133
Signed-off-by: Yonghua Huang <[email protected]>
Acked-by: Yu Wang <[email protected]>
acrnsi-robot pushed a commit that referenced this pull request on Feb 21, 2022:
No description provided.