| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: xilinx: xdma: Fix regmap max_register
The max_register field is assigned the size of the register memory
region instead of the offset of the last register.
The result is that reading from the regmap via debugfs can cause
a segmentation fault:
tail /sys/kernel/debug/regmap/xdma.1.auto/registers
Unable to handle kernel paging request at virtual address ffff800082f70000
Mem abort info:
ESR = 0x0000000096000007
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x07: level 3 translation fault
[...]
Call trace:
regmap_mmio_read32le+0x10/0x30
_regmap_bus_reg_read+0x74/0xc0
_regmap_read+0x68/0x198
regmap_read+0x54/0x88
regmap_read_debugfs+0x140/0x380
regmap_map_read_file+0x30/0x48
full_proxy_read+0x68/0xc8
vfs_read+0xcc/0x310
ksys_read+0x7c/0x120
__arm64_sys_read+0x24/0x40
invoke_syscall.constprop.0+0x64/0x108
do_el0_svc+0xb0/0xd8
el0_svc+0x38/0x130
el0t_64_sync_handler+0x120/0x138
el0t_64_sync+0x194/0x198
Code: aa1e03e9 d503201f f9400000 8b214000 (b9400000)
---[ end trace 0000000000000000 ]---
note: tail[1217] exited with irqs disabled
note: tail[1217] exited with preempt_count 1
Segmentation fault |
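A minimal sketch of the fixed pattern (the XDMA_REG_SPACE_LEN name and value are assumptions for illustration, not taken from the patch): for a region of 32-bit registers, max_register must be the offset of the last register, i.e. the region size minus the register width.
```c
#include <linux/regmap.h>

#define XDMA_REG_SPACE_LEN	0x10000		/* assumed region size */

static const struct regmap_config xdma_regmap_config = {
	.reg_bits	= 32,
	.val_bits	= 32,
	.reg_stride	= 4,
	/* Offset of the last register, not the size of the region. */
	.max_register	= XDMA_REG_SPACE_LEN - 4,
};
```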
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix deadlock in wait_current_trans() due to ignored transaction type
When wait_current_trans() is called during start_transaction(), it
currently waits for a blocked transaction without considering whether
the given transaction type actually needs to wait for that particular
transaction state. The btrfs_blocked_trans_types[] array already defines
which transaction types should wait for which transaction states, but
this check was missing in wait_current_trans().
This can lead to a deadlock scenario involving two transactions and
pending ordered extents:
1. Transaction A is in TRANS_STATE_COMMIT_DOING state
2. A worker processing an ordered extent calls start_transaction()
with TRANS_JOIN
3. join_transaction() returns -EBUSY because Transaction A is in
TRANS_STATE_COMMIT_DOING
4. Transaction A moves to TRANS_STATE_UNBLOCKED and completes
5. A new Transaction B is created (TRANS_STATE_RUNNING)
6. The ordered extent from step 2 is added to Transaction B's
pending ordered extents
7. Transaction B immediately starts commit by another task and
enters TRANS_STATE_COMMIT_START
8. The worker finally reaches wait_current_trans(), sees Transaction B
in TRANS_STATE_COMMIT_START (a blocked state), and waits
unconditionally
9. However, TRANS_JOIN should NOT wait for TRANS_STATE_COMMIT_START
according to btrfs_blocked_trans_types[]
10. Transaction B is waiting for pending ordered extents to complete
11. Deadlock: Transaction B waits for ordered extent, ordered extent
waits for Transaction B
This can be illustrated by the following call stacks:
CPU0                                    CPU1
btrfs_finish_ordered_io()
  start_transaction(TRANS_JOIN)
    join_transaction()
      # -EBUSY (Transaction A is
      #  TRANS_STATE_COMMIT_DOING)
                                        # Transaction A completes
                                        # Transaction B created
                                        # ordered extent added to
                                        #  Transaction B's pending list
                                        btrfs_commit_transaction()
                                          # Transaction B enters
                                          #  TRANS_STATE_COMMIT_START
                                          # waiting for pending ordered
                                          #  extents
    wait_current_trans()
      # waits for Transaction B
      #  (should not wait!)
Task bstore_kv_sync in btrfs_commit_transaction waiting for ordered
extents:
__schedule+0x2e7/0x8a0
schedule+0x64/0xe0
btrfs_commit_transaction+0xbf7/0xda0 [btrfs]
btrfs_sync_file+0x342/0x4d0 [btrfs]
__x64_sys_fdatasync+0x4b/0x80
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Task kworker in wait_current_trans waiting for transaction commit:
Workqueue: btrfs-syno_nocow btrfs_work_helper [btrfs]
__schedule+0x2e7/0x8a0
schedule+0x64/0xe0
wait_current_trans+0xb0/0x110 [btrfs]
start_transaction+0x346/0x5b0 [btrfs]
btrfs_finish_ordered_io.isra.0+0x49b/0x9c0 [btrfs]
btrfs_work_helper+0xe8/0x350 [btrfs]
process_one_work+0x1d3/0x3c0
worker_thread+0x4d/0x3e0
kthread+0x12d/0x150
ret_from_fork+0x1f/0x30
Fix this by passing the transaction type to wait_current_trans() and
checking btrfs_blocked_trans_types[cur_trans->state] against the given
type before deciding to wait. This ensures that transaction types which
are allowed to join during certain blocked states will not unnecessarily
wait and cause deadlocks. |
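A hedged sketch of the check the fix implies (names follow the commit message; the actual patch may differ): wait only when btrfs_blocked_trans_types[] says the given transaction type must wait for the current transaction state.
```c
/*
 * Illustrative only: should a joiner of the given type wait for the
 * currently running transaction in its current state?
 */
static bool trans_type_must_wait(const struct btrfs_transaction *cur_trans,
				 unsigned int type)
{
	return (btrfs_blocked_trans_types[cur_trans->state] & type) != 0;
}
```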
| In the Linux kernel, the following vulnerability has been resolved:
phy: qcom-qusb2: Fix NULL pointer dereference on early suspend
Enabling runtime PM before attaching the QPHY instance as driver data
can lead to a NULL pointer dereference in runtime PM callbacks that
expect valid driver data. There is a small window where the suspend
callback may run after PM runtime enabling and before runtime forbid.
This causes a sporadic crash during boot:
```
Unable to handle kernel NULL pointer dereference at virtual address 00000000000000a1
[...]
CPU: 0 UID: 0 PID: 11 Comm: kworker/0:1 Not tainted 6.16.7+ #116 PREEMPT
Workqueue: pm pm_runtime_work
pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : qusb2_phy_runtime_suspend+0x14/0x1e0 [phy_qcom_qusb2]
lr : pm_generic_runtime_suspend+0x2c/0x44
[...]
```
Attach the QPHY instance as driver data before enabling runtime PM to
prevent NULL pointer dereference in runtime PM callbacks.
Reorder pm_runtime_enable() and pm_runtime_forbid() to prevent a
short window where an unnecessary runtime suspend can occur.
Use the devres-managed version to ensure PM runtime is symmetrically
disabled during driver removal for proper cleanup. |
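A hedged sketch of the resulting probe ordering (the helper is hypothetical; the forbid-before-enable ordering is inferred from the description above):
```c
static int qusb2_phy_setup_pm(struct device *dev, struct qusb2_phy *qphy)
{
	/* Runtime PM callbacks may rely on driver data from here on. */
	dev_set_drvdata(dev, qphy);

	/* Forbid before enabling, so no window opens in which an
	 * unnecessary runtime suspend could run. */
	pm_runtime_forbid(dev);

	/* devres-managed: runtime PM is symmetrically disabled on unbind. */
	return devm_pm_runtime_enable(dev);
}
```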
| In the Linux kernel, the following vulnerability has been resolved:
ALSA: ac97: fix a double free in snd_ac97_controller_register()
If ac97_add_adapter() fails, put_device() is the correct way to drop
the device reference; calling kfree() on top of it results in a double
free once the release callback runs.
Add kfree() in the idr_alloc() failure path and in
ac97_adapter_release() to do the cleanup there instead.
Found by code review. |
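A hedged sketch of the general pattern behind this fix (my_adapter is a hypothetical type, not the driver's actual code): once device_initialize() has run, the memory is owned by the refcount and must be freed only from the release callback.
```c
struct my_adapter {				/* hypothetical */
	struct device dev;
};

static void my_adapter_release(struct device *dev)
{
	struct my_adapter *adap = container_of(dev, struct my_adapter, dev);

	kfree(adap);				/* the single point of freeing */
}

static int my_adapter_register(struct my_adapter *adap)
{
	int ret;

	adap->dev.release = my_adapter_release;
	device_initialize(&adap->dev);

	ret = device_add(&adap->dev);
	if (ret)
		put_device(&adap->dev);		/* frees via release; no kfree() */
	return ret;
}
```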
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: at_hdmac: fix device leak on of_dma_xlate()
Make sure to drop the reference taken when looking up the DMA platform
device during of_dma_xlate() when releasing channel resources.
Note that commit 3832b78b3ec2 ("dmaengine: at_hdmac: add missing
put_device() call in at_dma_xlate()") fixed the leak in a couple of
error paths but the reference is still leaking on successful allocation. |
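This entry and several dmaengine fixes below share one pattern: a reference taken with of_find_device_by_node() (or a similar lookup) must eventually be dropped with put_device(). A hedged sketch with hypothetical names:
```c
/* Hypothetical lookup during of_dma_xlate(): takes a device reference. */
static struct platform_device *my_dma_get_pdev(struct of_phandle_args *spec)
{
	return of_find_device_by_node(spec->np);
}

/* Must be called from error paths and when channel resources are
 * released, or the reference (and the device) leaks. */
static void my_dma_put_pdev(struct platform_device *pdev)
{
	put_device(&pdev->dev);
}
```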
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: bcm-sba-raid: fix device leak on probe
Make sure to drop the reference taken when looking up the mailbox device
during probe on probe failures and on driver unbind. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: dw: dmamux: fix OF node leak on route allocation failure
Make sure to drop the reference taken to the DMA master OF node also on
late route allocation failures. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: lpc18xx-dmamux: fix device leak on route allocation
Make sure to drop the reference taken when looking up the DMA mux
platform device during route allocation.
Note that holding a reference to a device does not prevent its driver
data from going away so there is no point in keeping the reference. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: sh: rz-dmac: fix device leak on probe failure
Make sure to drop the reference taken when looking up the ICU device
during probe also on probe failures (e.g. probe deferral). |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: stm32: dmamux: fix device leak on route allocation
Make sure to drop the reference taken when looking up the DMA mux
platform device during route allocation.
Note that holding a reference to a device does not prevent its driver
data from going away so there is no point in keeping the reference. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: ti: dma-crossbar: fix device leak on am335x route allocation
Make sure to drop the reference taken when looking up the crossbar
platform device during am335x route allocation. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix NULL dereference on root when tracing inode eviction
When evicting an inode, the first thing we do is set up tracing for it,
which implies fetching the root's id. But in btrfs_evict_inode() the
root might be NULL, as implied by the check that immediately follows.
Hence, we should either set ->root_objectid to 0 in case the root is
NULL, or move the tracing setup after checking that the root is not
NULL. Setting the root id to 0 at least gives us the possibility to
trace this call even when the root is NULL, so that's the solution
taken here. |
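A hedged sketch of the NULL-tolerant assignment this settles on (illustrative helper; the real patch edits the tracepoint's assignment directly):
```c
/* Report 0 as the root id when the inode's root is already gone. */
static u64 evict_trace_root_id(const struct btrfs_root *root)
{
	return root ? btrfs_root_id(root) : 0;
}
```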
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: always detect conflicting inodes when logging inode refs
After exchanging two inodes (either with the rename exchange operation
or with regular renames in multiple non-atomic steps) where at least one
of them is a directory, we can end up with a log tree that contains only
one of the inodes. After a power failure this can result in an attempt
to delete the other inode when it should not be deleted, because it was
not deleted before the power failure. In some cases that delete attempt
fails when the target inode is a directory that contains a subvolume,
since the log replay code is not prepared to deal with directory entries
that point to root items (it only handles entries that point to inode
items). The following steps show how this can happen:
1) We have directories "dir1" (inode A) and "dir2" (inode B) under the
same parent directory;
2) We have a file (inode C) under directory "dir1" (inode A);
3) We have a subvolume inside directory "dir2" (inode B);
4) All these inodes were persisted in a past transaction and we are
currently at transaction N;
5) We rename the file (inode C), so at btrfs_log_new_name() we update
inode C's last_unlink_trans to N;
6) We get a rename exchange for "dir1" (inode A) and "dir2" (inode B),
so after the exchange "dir1" is inode B and "dir2" is inode A.
During the rename exchange we call btrfs_log_new_name() for inodes
A and B, but because they are directories, we don't update their
last_unlink_trans to N;
7) An fsync against the file (inode C) is done, and because its inode
has a last_unlink_trans with a value of N we log its parent directory
(inode A) (through btrfs_log_all_parents(), called from
btrfs_log_inode_parent()).
8) So we end up with inode B not logged, which now has the old name
of inode A. At copy_inode_items_to_log(), when logging inode A, we
did not check if we had any conflicting inode to log because inode
A has a generation lower than the current transaction (created in
a past transaction);
9) After a power failure, when replaying the log tree, since we find that
inode A has a new name that conflicts with the name of inode B in the
fs tree, we attempt to delete inode B... this is wrong since that
directory was never deleted before the power failure, and because there
is a subvolume inside that directory, attempting to delete it will fail
since replay_dir_deletes() and btrfs_unlink_inode() are not prepared
to deal with dir items that point to roots instead of inodes.
When that happens the mount fails and we get a stack trace like the
following:
[87.2314] BTRFS info (device dm-0): start tree-log replay
[87.2318] BTRFS critical (device dm-0): failed to delete reference to subvol, root 5 inode 256 parent 259
[87.2332] ------------[ cut here ]------------
[87.2338] BTRFS: Transaction aborted (error -2)
[87.2346] WARNING: CPU: 1 PID: 638968 at fs/btrfs/inode.c:4345 __btrfs_unlink_inode+0x416/0x440 [btrfs]
[87.2368] Modules linked in: btrfs loop dm_thin_pool (...)
[87.2470] CPU: 1 UID: 0 PID: 638968 Comm: mount Tainted: G W 6.18.0-rc7-btrfs-next-218+ #2 PREEMPT(full)
[87.2489] Tainted: [W]=WARN
[87.2494] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
[87.2514] RIP: 0010:__btrfs_unlink_inode+0x416/0x440 [btrfs]
[87.2538] Code: c0 89 04 24 (...)
[87.2568] RSP: 0018:ffffc0e741f4b9b8 EFLAGS: 00010286
[87.2574] RAX: 0000000000000000 RBX: ffff9d3ec8a6cf60 RCX: 0000000000000000
[87.2582] RDX: 0000000000000002 RSI: ffffffff84ab45a1 RDI: 00000000ffffffff
[87.2591] RBP: ffff9d3ec8a6ef20 R08: 0000000000000000 R09: ffffc0e741f4b840
[87.2599] R10: ffff9d45dc1fffa8 R11: 0000000000000003 R12: ffff9d3ee26d77e0
[87.2608] R13: ffffc0e741f4ba98 R14: ffff9d4458040800 R15: ffff9d44b6b7ca10
[87.2618] FS: 00007f7b9603a840(0000) GS:ffff9d4658982000(0000) knlGS:0000000000000000
[87.
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
can: j1939: make j1939_session_activate() fail if device is no longer registered
syzbot is still reporting
unregister_netdevice: waiting for vcan0 to become free. Usage count = 2
even after commit 93a27b5891b8 ("can: j1939: add missing calls in
NETDEV_UNREGISTER notification handler") was added. A debug printk() patch
found that j1939_session_activate() can succeed even after
j1939_cancel_active_session() from j1939_netdev_notify(NETDEV_UNREGISTER)
has completed.
Since j1939_cancel_active_session() is processed with the session list lock
held, checking ndev->reg_state in j1939_session_activate() with the session
list lock held can reliably close the race window. |
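A hedged sketch of the described check (illustrative; field names follow the j1939 code referenced by the message):
```c
/* Must run with the session list lock held, so it cannot race with
 * j1939_cancel_active_session(). */
static int j1939_session_check_ndev(struct j1939_priv *priv)
{
	lockdep_assert_held(&priv->active_session_list_lock);

	if (priv->ndev->reg_state != NETREG_REGISTERED)
		return -ENODEV;		/* NETDEV_UNREGISTER already ran */
	return 0;
}
```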
| In the Linux kernel, the following vulnerability has been resolved:
rust_binder: remove spin_lock() in rust_shrink_free_page()
When forward-porting Rust Binder to 6.18, I neglected to take commit
fb56fdf8b9a2 ("mm/list_lru: split the lock to per-cgroup scope") into
account, and apparently I did not end up running the shrinker callback
when I sanity tested the driver before submission. This leads to crashes
like the following:
============================================
WARNING: possible recursive locking detected
6.18.0-mainline-maybe-dirty #1 Tainted: G IO
--------------------------------------------
kswapd0/68 is trying to acquire lock:
ffff956000fa18b0 (&l->lock){+.+.}-{2:2}, at: lock_list_lru_of_memcg+0x128/0x230
but task is already holding lock:
ffff956000fa18b0 (&l->lock){+.+.}-{2:2}, at: rust_helper_spin_lock+0xd/0x20
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&l->lock);
lock(&l->lock);
*** DEADLOCK ***
May be due to missing lock nesting notation
3 locks held by kswapd0/68:
#0: ffffffff90d2e260 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x597/0x1160
#1: ffff956000fa18b0 (&l->lock){+.+.}-{2:2}, at: rust_helper_spin_lock+0xd/0x20
#2: ffffffff90cf3680 (rcu_read_lock){....}-{1:2}, at: lock_list_lru_of_memcg+0x2d/0x230
To fix this, remove the spin_lock() call from rust_shrink_free_page(). |
| In the Linux kernel, the following vulnerability has been resolved:
counter: interrupt-cnt: Drop IRQF_NO_THREAD flag
An IRQ handler can either be marked IRQF_NO_THREAD or acquire a
spinlock_t, but not both, as CONFIG_PROVE_RAW_LOCK_NESTING warns:
=============================
[ BUG: Invalid wait context ]
6.18.0-rc1+git... #1
-----------------------------
some-user-space-process/1251 is trying to lock:
(&counter->events_list_lock){....}-{3:3}, at: counter_push_event [counter]
other info that might help us debug this:
context-{2:2}
no locks held by some-user-space-process/....
stack backtrace:
CPU: 0 UID: 0 PID: 1251 Comm: some-user-space-process 6.18.0-rc1+git... #1 PREEMPT
Call trace:
show_stack (C)
dump_stack_lvl
dump_stack
__lock_acquire
lock_acquire
_raw_spin_lock_irqsave
counter_push_event [counter]
interrupt_cnt_isr [interrupt_cnt]
__handle_irq_event_percpu
handle_irq_event
handle_simple_irq
handle_irq_desc
generic_handle_domain_irq
gpio_irq_handler
handle_irq_desc
generic_handle_domain_irq
gic_handle_irq
call_on_irq_stack
do_interrupt_handler
el0_interrupt
__el0_irq_handler_common
el0t_64_irq_handler
el0t_64_irq
... as Sebastian correctly pointed out. Remove IRQF_NO_THREAD instead of
switching to raw_spinlock_t, because the latter would limit all potential
nested locks to raw_spinlock_t only. |
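A hedged sketch of the resulting IRQ request (the wrapper and trigger flag are illustrative; the point is only the absence of IRQF_NO_THREAD, which lets the handler be force-threaded so taking a spinlock_t is valid under PREEMPT_RT):
```c
static irqreturn_t interrupt_cnt_isr(int irq, void *dev_id);

static int interrupt_cnt_request_irq(struct device *dev, int irq, void *priv)
{
	/* No IRQF_NO_THREAD: the handler may run in a threaded context,
	 * where taking a spinlock_t (a sleeping lock on PREEMPT_RT) is
	 * permitted. */
	return devm_request_irq(dev, irq, interrupt_cnt_isr,
				IRQF_TRIGGER_RISING, dev_name(dev), priv);
}
```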
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: idxd: fix device leaks on compat bind and unbind
Make sure to drop the reference taken when looking up the idxd device as
part of the compat bind and unbind sysfs interface. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: tegra-adma: Fix use-after-free
A use-after-free bug exists in the Tegra ADMA driver when audio streams
are terminated, particularly during XRUN conditions. The issue occurs
when the DMA buffer is freed by tegra_adma_terminate_all() before the
vchan completion tasklet finishes accessing it.
The race condition follows this sequence:
1. DMA transfer completes, triggering an interrupt that schedules the
completion tasklet (tasklet has not executed yet)
2. Audio playback stops, calling tegra_adma_terminate_all() which
frees the DMA buffer memory via kfree()
3. The scheduled tasklet finally executes, calling vchan_complete()
which attempts to access the already-freed memory
Since tasklets can execute at any time after being scheduled, there is
no guarantee that the buffer will remain valid when vchan_complete()
runs.
Fix this by properly synchronizing the virtual channel completion:
- Call vchan_terminate_vdesc() in tegra_adma_stop() to mark the
descriptors as terminated instead of freeing them.
- Add a tegra_adma_synchronize() callback (sketched after the crash logs
below) that calls vchan_synchronize(), which kills any pending tasklets
and frees any terminated descriptors.
Crash logs:
[ 337.427523] BUG: KASAN: use-after-free in vchan_complete+0x124/0x3b0
[ 337.427544] Read of size 8 at addr ffff000132055428 by task swapper/0/0
[ 337.427562] Call trace:
[ 337.427564] dump_backtrace+0x0/0x320
[ 337.427571] show_stack+0x20/0x30
[ 337.427575] dump_stack_lvl+0x68/0x84
[ 337.427584] print_address_description.constprop.0+0x74/0x2b8
[ 337.427590] kasan_report+0x1f4/0x210
[ 337.427598] __asan_load8+0xa0/0xd0
[ 337.427603] vchan_complete+0x124/0x3b0
[ 337.427609] tasklet_action_common.constprop.0+0x190/0x1d0
[ 337.427617] tasklet_action+0x30/0x40
[ 337.427623] __do_softirq+0x1a0/0x5c4
[ 337.427628] irq_exit+0x110/0x140
[ 337.427633] handle_domain_irq+0xa4/0xe0
[ 337.427640] gic_handle_irq+0x64/0x160
[ 337.427644] call_on_irq_stack+0x20/0x4c
[ 337.427649] do_interrupt_handler+0x7c/0x90
[ 337.427654] el1_interrupt+0x30/0x80
[ 337.427659] el1h_64_irq_handler+0x18/0x30
[ 337.427663] el1h_64_irq+0x7c/0x80
[ 337.427667] cpuidle_enter_state+0xe4/0x540
[ 337.427674] cpuidle_enter+0x54/0x80
[ 337.427679] do_idle+0x2e0/0x380
[ 337.427685] cpu_startup_entry+0x2c/0x70
[ 337.427690] rest_init+0x114/0x130
[ 337.427695] arch_call_rest_init+0x18/0x24
[ 337.427702] start_kernel+0x380/0x3b4
[ 337.427706] __primary_switched+0xc0/0xc8 |
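A hedged sketch of the synchronize callback described above (helper and member names assumed from the driver's usual layout):
```c
static void tegra_adma_synchronize(struct dma_chan *dc)
{
	struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);

	/* Kill any pending vchan completion tasklet and free the
	 * descriptors that terminate_all marked as terminated. */
	vchan_synchronize(&tdc->vc);
}

/* Registered via: dma_dev.device_synchronize = tegra_adma_synchronize; */
```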
| In the Linux kernel, the following vulnerability has been resolved:
dm-verity: disable recursive forward error correction
There are two problems with the recursive correction:
1. It may cause denial-of-service. In fec_read_bufs, there is a loop that
has 253 iterations. For each iteration, we may call verity_hash_for_block
recursively. There is a limit of 4 nested recursions - that means that
there may be at most 253^4 (4 billion) iterations. Red Hat QE team
actually created an image that pushes dm-verity to this limit - and this
image just makes the udev-worker process get stuck in the 'D' state.
2. It doesn't work. In fec_read_bufs we store data into the variable
"fio->bufs", but fio->bufs is shared between recursive invocations; if
"verity_hash_for_block" invoked correction recursively, it would
overwrite the partially filled fio->bufs. |
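A hedged sketch of how disabling the recursion bounds the work (hypothetical guard; the actual patch may structure this differently): refuse to start another correction pass while one is in flight, so the 253-iteration loop can no longer nest.
```c
/* Hypothetical guard at the entry of the correction path. */
static int fec_check_recursion(struct dm_verity_fec_io *fio)
{
	if (fio->level > 0)
		return -EIO;	/* already correcting: refuse to recurse */
	fio->level++;
	return 0;
}
```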
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nf_tables: avoid chain re-validation if possible
Hamza Mahfooz reports cpu soft lock-ups in
nft_chain_validate():
watchdog: BUG: soft lockup - CPU#1 stuck for 27s! [iptables-nft-re:37547]
[..]
RIP: 0010:nft_chain_validate+0xcb/0x110 [nf_tables]
[..]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_immediate_validate+0x36/0x50 [nf_tables]
nft_chain_validate+0xc9/0x110 [nf_tables]
nft_table_validate+0x6b/0xb0 [nf_tables]
nf_tables_validate+0x8b/0xa0 [nf_tables]
nf_tables_commit+0x1df/0x1eb0 [nf_tables]
[..]
Currently nf_tables will traverse the entire table (chain graph), starting
from the entry points (base chains), exploring all possible paths
(chain jumps). But there are cases where we could avoid revalidation.
Consider:
1 input -> j2 -> j3
2 input -> j2 -> j3
3 input -> j1 -> j2 -> j3
Then the second rule does not need to revalidate j2, and, by extension j3,
because this was already checked during validation of the first rule.
We need to revalidate it only for rule 3.
Revalidation is still needed there because chain loop detection also
ensures we do not exceed the jump stack: even though we know that j2 is
cycle free, its last jump might now exceed the allowed stack size. We
also need to update all reachable chains with the new largest observed
call depth.
Care has to be taken to revalidate even if the chain depth won't be an
issue: chain validation also ensures that expressions are not called from
invalid base chains. For example, the masquerade expression can only be
called from NAT postrouting base chains.
Therefore we also need to keep record of the base chain context (type,
hooknum) and revalidate if the chain becomes reachable from a different
hook location. |
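A hedged sketch of the memoization idea (entirely illustrative, not the actual patch): skip revalidation when a chain was already validated from the same base-chain context and at a call depth no smaller than before.
```c
struct chain_validate_state {		/* hypothetical bookkeeping */
	u8	hooknum;		/* base chain hook of the last walk */
	u8	type;			/* base chain type (filter/nat/...) */
	u8	depth;			/* largest call depth seen so far */
	bool	validated;
};

static bool chain_needs_revalidation(const struct chain_validate_state *s,
				     u8 hooknum, u8 type, u8 depth)
{
	if (!s->validated)
		return true;
	if (s->hooknum != hooknum || s->type != type)
		return true;		/* reachable from a new hook location */
	return depth > s->depth;	/* deeper path: recheck the stack bound */
}
```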