| YARD is a Ruby Documentation tool. Prior to version 0.9.42, a path traversal vulnerability was discovered in YARD when using yard server to serve documentation. Under certain conditions, this bug allowed unsanitized HTTP requests to access arbitrary files on the machine hosting the yard server. This issue has been patched in version 0.9.42. |
| SEPPmail Secure Email Gateway before version 15.0.2.1 allows unauthenticated remote code execution in the new GINA UI because an endpoint passes attacker-controlled input from a parameter to Perl's eval. |
| SEPPmail Secure Email Gateway before version 15.0.4 contains an unauthenticated path traversal vulnerability in the identifier parameter of /api.app/attachment/preview that allows remote attackers to read arbitrary local files and trigger deletion of files in the targeted directory with the privileges of the api.app process. |
| SEPPmail Secure Email Gateway before version 15.0.4 exposes server environment variables through an unauthenticated endpoint in the new GINA UI, allowing remote attackers to obtain sensitive system information. |
| In the Linux kernel, the following vulnerability has been resolved:
soc/tegra: pmc: Fix unsafe generic_handle_irq() call
Currently, when resuming from system suspend on Tegra platforms,
the following warning is observed:
WARNING: CPU: 0 PID: 14459 at kernel/irq/irqdesc.c:666
Call trace:
handle_irq_desc+0x20/0x58 (P)
tegra186_pmc_wake_syscore_resume+0xe4/0x15c
syscore_resume+0x3c/0xb8
suspend_devices_and_enter+0x510/0x540
pm_suspend+0x16c/0x1d8
The warning occurs because generic_handle_irq() is being called from
a non-interrupt context, which is considered unsafe.
Fix this warning by deferring generic_handle_irq() call to an IRQ work
which gets executed in hard IRQ context where generic_handle_irq()
can be called safely.
When PREEMPT_RT kernels are used, regular IRQ work (initialized with
init_irq_work) is deferred to run in per-CPU kthreads in preemptible
context rather than hard IRQ context. Hence, use the IRQ_WORK_INIT_HARD
variant so that with PREEMPT_RT kernels, the IRQ work is processed in
hardirq context instead of being deferred to a thread, since hardirq
context is required for calling generic_handle_irq().
On non-PREEMPT_RT kernels, both init_irq_work() and IRQ_WORK_INIT_HARD()
execute in IRQ context, so this change has no functional impact for
standard kernel configurations.
[treding@nvidia.com: miscellaneous cleanups] |
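A minimal sketch of the deferral pattern described above, assuming a hypothetical per-wake structure and helper names; this is not the actual Tegra PMC driver code:

```c
#include <linux/irq_work.h>
#include <linux/irqdesc.h>

/* Hypothetical structure holding the wake IRQ to replay on resume. */
struct pmc_wake_replay {
	struct irq_work work;
	unsigned int irq;
};

static void pmc_wake_irq_work(struct irq_work *work)
{
	struct pmc_wake_replay *replay =
		container_of(work, struct pmc_wake_replay, work);

	/* Runs in hard IRQ context, where generic_handle_irq() is safe. */
	generic_handle_irq(replay->irq);
}

/* IRQ_WORK_INIT_HARD keeps the callback in hardirq context even on
 * PREEMPT_RT, where plain init_irq_work() would defer it to a kthread. */
static struct pmc_wake_replay replay = {
	.work = IRQ_WORK_INIT_HARD(pmc_wake_irq_work),
};

static void pmc_wake_syscore_resume(unsigned int wake_irq)
{
	/* Defer instead of calling generic_handle_irq() directly here. */
	replay.irq = wake_irq;
	irq_work_queue(&replay.work);
}
```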
| In the Linux kernel, the following vulnerability has been resolved:
media: verisilicon: Avoid G2 bus error while decoding H.264 and HEVC
For the i.MX8MQ platform, there is a hardware limitation: the g1 VPU and
g2 VPU cannot decode simultaneously; otherwise, the bus error below occurs,
corrupted pictures are produced, and the system may even hang.
[ 110.527986] hantro-vpu 38310000.video-codec: frame decode timed out.
[ 110.583517] hantro-vpu 38310000.video-codec: bus error detected.
Therefore, it is necessary to ensure that g1 and g2 operate alternately.
This allows for successful multi-instance decoding of H.264 and HEVC.
To achieve this, g1 and g2 share the same v4l2_m2m_dev, and then the
v4l2_m2m_dev can handle the scheduling. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: don't BUG() on unexpected delayed ref type in run_one_delayed_ref()
There is no need to BUG(); we can just return an error and log an error
message. |
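A minimal sketch of the pattern described above; the function name, message text, and the -EUCLEAN error code are assumptions, not the exact btrfs patch:

```c
#include <linux/errno.h>
#include <linux/printk.h>

/* Instead of BUG() on an unexpected delayed ref type, log the problem
 * and return an error so the caller can fail gracefully. */
static int run_one_delayed_ref_sketch(int ref_type)
{
	switch (ref_type) {
	case 0: /* ... handle the known ref types ... */
		return 0;
	default:
		pr_err("btrfs: unexpected delayed ref type %d\n", ref_type);
		return -EUCLEAN;	/* was: BUG(); */
	}
}
```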
| In the Linux kernel, the following vulnerability has been resolved:
iio: accel: adxl380: Avoid reading more entries than present in FIFO
The interrupt handler reads FIFO entries in batches of N samples, where N
is the number of scan elements that have been enabled. However, the sensor
fills the FIFO one sample at a time, even when more than one channel is
enabled. Therefore, the number of entries reported by the FIFO status
registers may not be a multiple of N; when it is not, the number of
entries read from the FIFO may exceed the number of entries actually
present.
To fix the above issue, round down the number of FIFO entries read from the
status registers so that it is always a multiple of N. |
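A minimal sketch of the rounding described above; the function and parameter names are illustrative:

```c
/* Round the FIFO fill level down to a multiple of the number of enabled
 * scan elements, so a batch read never pulls more samples than the FIFO
 * actually holds. Example: 7 entries with 2 enabled channels -> read 6. */
static unsigned int fifo_entries_to_read(unsigned int fifo_entries,
					 unsigned int scan_count)
{
	return fifo_entries - (fifo_entries % scan_count);
}
```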
| In the Linux kernel, the following vulnerability has been resolved:
mm/page_alloc: clear page->private in free_pages_prepare()
Several subsystems (slub, shmem, ttm, etc.) use page->private but don't
clear it before freeing pages. When these pages are later allocated as
high-order pages and split via split_page(), tail pages retain stale
page->private values.
This causes a use-after-free in the swap subsystem. The swap code uses
page->private to track swap count continuations, assuming freshly
allocated pages have page->private == 0. When stale values are present,
swap_count_continued() incorrectly assumes the continuation list is valid
and iterates over uninitialized page->lru containing LIST_POISON values,
causing a crash:
KASAN: maybe wild-memory-access in range [0xdead000000000100-0xdead000000000107]
RIP: 0010:__do_sys_swapoff+0x1151/0x1860
Fix this by clearing page->private in free_pages_prepare(), ensuring all
freed pages have clean state regardless of previous use. |
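A minimal sketch of the idea, assuming the clearing happens in the common free path; the helper name is hypothetical, not the exact free_pages_prepare() change:

```c
#include <linux/mm.h>

/* Clear the private field when a page is freed so that stale
 * swap-continuation pointers cannot survive into a tail page of a
 * future split_page() allocation. */
static void clear_private_on_free(struct page *page)
{
	set_page_private(page, 0);
}
```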
| In the Linux kernel, the following vulnerability has been resolved:
drm/v3d: Set DMA segment size to avoid debug warnings
When using V3D rendering with CONFIG_DMA_API_DEBUG enabled, the
kernel occasionally reports a segment size mismatch. This is because
'max_seg_size' is not set, so the kernel defaults to 64K. Setting
'max_seg_size' to the maximum prevents 'debug_dma_map_sg()' from
complaining about the over-mapping of the V3D segment length (a minimal
sketch follows the trace below).
DMA-API: v3d 1002000000.v3d: mapping sg segment longer than device
claims to support [len=8290304] [max=65536]
WARNING: CPU: 0 PID: 493 at kernel/dma/debug.c:1179 debug_dma_map_sg+0x330/0x388
CPU: 0 UID: 0 PID: 493 Comm: Xorg Not tainted 6.12.53-yocto-standard #1
Hardware name: Raspberry Pi 5 Model B Rev 1.0 (DT)
pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : debug_dma_map_sg+0x330/0x388
lr : debug_dma_map_sg+0x330/0x388
sp : ffff8000829a3ac0
x29: ffff8000829a3ac0 x28: 0000000000000001 x27: ffff8000813fe000
x26: ffffc1ffc0000000 x25: ffff00010fdeb760 x24: 0000000000000000
x23: ffff8000816a9bf0 x22: 0000000000000001 x21: 0000000000000002
x20: 0000000000000002 x19: ffff00010185e810 x18: ffffffffffffffff
x17: 69766564206e6168 x16: 74207265676e6f6c x15: 20746e656d676573
x14: 20677320676e6970 x13: 5d34303334393134 x12: 0000000000000000
x11: 00000000000000c0 x10: 00000000000009c0 x9 : ffff8000800e0b7c
x8 : ffff00010a315ca0 x7 : ffff8000816a5110 x6 : 0000000000000001
x5 : 000000000000002b x4 : 0000000000000002 x3 : 0000000000000008
x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff00010a315280
Call trace:
debug_dma_map_sg+0x330/0x388
__dma_map_sg_attrs+0xc0/0x278
dma_map_sgtable+0x30/0x58
drm_gem_shmem_get_pages_sgt+0xb4/0x140
v3d_bo_create_finish+0x28/0x130 [v3d]
v3d_create_bo_ioctl+0x54/0x180 [v3d]
drm_ioctl_kernel+0xc8/0x140
drm_ioctl+0x2d4/0x4d8 |
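A minimal sketch of the fix described above; the exact value and call site are assumptions, with UINT_MAX shown as "the maximum":

```c
#include <linux/dma-mapping.h>
#include <linux/limits.h>

/* Advertise a maximum DMA segment size so debug_dma_map_sg() no longer
 * warns about large V3D scatterlist segments. */
static int v3d_set_dma_seg_size(struct device *dev)
{
	return dma_set_max_seg_size(dev, UINT_MAX);
}
```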
| In the Linux kernel, the following vulnerability has been resolved:
media: chips-media: wave5: Fix PM runtime usage count underflow
Replace pm_runtime_put_sync() with pm_runtime_dont_use_autosuspend() in
the remove path to properly pair with pm_runtime_use_autosuspend() from
probe. This allows pm_runtime_disable() to handle reference count cleanup
correctly regardless of current suspend state.
The driver calls pm_runtime_put_sync() unconditionally in remove, but the
device may already be suspended due to autosuspend configured in probe.
When autosuspend has already suspended the device, the usage count is 0,
and pm_runtime_put_sync() decrements it to -1.
This causes the following warning on module unload:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 963 at kernel/kthread.c:1430
kthread_destroy_worker+0x84/0x98
...
vdec 30210000.video-codec: Runtime PM usage count underflow! |
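A minimal sketch of the remove-path pairing described above; the function name is hypothetical and only the runtime-PM calls are shown:

```c
#include <linux/pm_runtime.h>

/* Pair probe's pm_runtime_use_autosuspend() and let pm_runtime_disable()
 * settle the usage count, instead of an unconditional put that can push
 * the count to -1 when the device is already autosuspended. */
static void wave5_pm_remove(struct device *dev)
{
	pm_runtime_dont_use_autosuspend(dev);	/* was: pm_runtime_put_sync(dev) */
	pm_runtime_disable(dev);
}
```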
| In the Linux kernel, the following vulnerability has been resolved:
drm/panel: Fix a possible null-pointer dereference in jdi_panel_dsi_remove()
In jdi_panel_dsi_remove(), jdi is explicitly checked, indicating that it
may be NULL:
if (!jdi)
mipi_dsi_detach(dsi);
However, when jdi is NULL, the function does not return and continues by
calling jdi_panel_disable():
err = jdi_panel_disable(&jdi->base);
Inside jdi_panel_disable(), jdi is dereferenced unconditionally, which can
lead to a NULL-pointer dereference:
struct jdi_panel *jdi = to_panel_jdi(panel);
backlight_disable(jdi->backlight);
To prevent such a potential NULL-pointer dereference, return early from
jdi_panel_dsi_remove() when jdi is NULL. |
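A minimal sketch of the early return; struct jdi_panel and jdi_panel_disable() are the driver's own symbols referenced above and are not redefined here, the rest of the teardown is omitted, and the remove callback's exact signature may differ by kernel version:

```c
#include <drm/drm_mipi_dsi.h>

static void jdi_panel_dsi_remove(struct mipi_dsi_device *dsi)
{
	struct jdi_panel *jdi = mipi_dsi_get_drvdata(dsi);

	if (!jdi) {
		mipi_dsi_detach(dsi);
		return;		/* previously fell through to jdi_panel_disable() */
	}

	jdi_panel_disable(&jdi->base);
	/* ... detach DSI, unregister the panel, etc. ... */
}
```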
| In the Linux kernel, the following vulnerability has been resolved:
octeontx2-af: Workaround SQM/PSE stalls by disabling sticky
NIX SQ manager sticky mode is known to cause stalls when multiple SQs
share an SMQ and transmit concurrently. Additionally, PSE may deadlock
on transitions between sticky and non-sticky transmissions. There is
also a credit drop issue observed when certain condition clocks are
gated.
Work around these hardware errata by:
- Disabling SQM sticky operation:
- Clear TM6 (bit 15)
- Clear TM11 (bit 14)
- Disabling sticky → non-sticky transition path that can deadlock PSE:
- Clear TM5 (bit 23)
- Preventing credit drops by keeping the control-flow clock enabled:
- Set TM9 (bit 21)
These changes are applied via NIX_AF_SQM_DBG_CTL_STATUS. With this
configuration the SQM/PSE maintain forward progress under load without
credit loss, at the cost of disabling sticky optimizations. |
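A minimal sketch of the read-modify-write described above; rvu_read64()/rvu_write64() are assumed to be the AF driver's register accessors, and the bit positions come from the list above:

```c
/* Assumes the octeontx2 AF driver's rvu.h / rvu_reg.h are in scope. */
static void nix_apply_sqm_errata(struct rvu *rvu, int blkaddr)
{
	u64 val = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);

	val &= ~BIT_ULL(15);	/* clear TM6: disable SQM sticky operation */
	val &= ~BIT_ULL(14);	/* clear TM11 */
	val &= ~BIT_ULL(23);	/* clear TM5: sticky -> non-sticky path */
	val |= BIT_ULL(21);	/* set TM9: keep control-flow clock enabled */

	rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, val);
}
```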
| In the Linux kernel, the following vulnerability has been resolved:
drm: renesas: rz-du: mipi_dsi: fix kernel panic when rebooting for some panels
Since commit 56de5e305d4b ("clk: renesas: r9a07g044: Add MSTOP for RZ/G2L")
we may get the following kernel panic, for some panels, when rebooting:
systemd-shutdown[1]: Rebooting.
Call trace:
...
do_serror+0x28/0x68
el1h_64_error_handler+0x34/0x50
el1h_64_error+0x6c/0x70
rzg2l_mipi_dsi_host_transfer+0x114/0x458 (P)
mipi_dsi_device_transfer+0x44/0x58
mipi_dsi_dcs_set_display_off_multi+0x9c/0xc4
ili9881c_unprepare+0x38/0x88
drm_panel_unprepare+0xbc/0x108
This happens for panels that need to send MIPI-DSI commands in their
unprepare() callback. Since the MIPI-DSI interface is stopped at that
point, rzg2l_mipi_dsi_host_transfer() triggers the kernel panic.
Fix by moving rzg2l_mipi_dsi_stop() to new callback function
rzg2l_mipi_dsi_atomic_post_disable().
With this change we now have the correct power-down/stop sequence:
systemd-shutdown[1]: Rebooting.
rzg2l-mipi-dsi 10850000.dsi: rzg2l_mipi_dsi_atomic_disable(): entry
ili9881c-dsi 10850000.dsi.0: ili9881c_unprepare(): entry
rzg2l-mipi-dsi 10850000.dsi: rzg2l_mipi_dsi_atomic_post_disable(): entry
reboot: Restarting system |
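A minimal sketch of the new callback; rzg2l_mipi_dsi_stop() is the helper named in the description, while the container_of layout and the callback signature are assumptions that vary by kernel version:

```c
#include <drm/drm_bridge.h>

static void rzg2l_mipi_dsi_atomic_post_disable(struct drm_bridge *bridge,
					       struct drm_bridge_state *old_state)
{
	/* Assumes the driver struct embeds its drm_bridge as 'bridge'. */
	struct rzg2l_mipi_dsi *dsi =
		container_of(bridge, struct rzg2l_mipi_dsi, bridge);

	/* By post-disable time the panel's unprepare() has already sent its
	 * DCS commands, so it is now safe to stop the DSI link. */
	rzg2l_mipi_dsi_stop(dsi);
}
```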
| In the Linux kernel, the following vulnerability has been resolved:
media: chips-media: wave5: Fix kthread worker destruction in polling mode
Fix the cleanup order in polling mode (irq < 0) to prevent kernel warnings
during module removal. Cancel the hrtimer before destroying the kthread
worker to ensure work queues are empty.
In polling mode, the driver uses hrtimer to periodically trigger
wave5_vpu_timer_callback() which queues work via kthread_queue_work().
The kthread_destroy_worker() function validates that both work queues
are empty with WARN_ON(!list_empty(&worker->work_list)) and
WARN_ON(!list_empty(&worker->delayed_work_list)).
The original code called kthread_destroy_worker() before hrtimer_cancel(),
creating a race condition where the timer could fire during worker
destruction and queue new work, triggering the WARN_ON.
This causes the following warning on every module unload in polling mode:
------------[ cut here ]------------
WARNING: CPU: 2 PID: 1034 at kernel/kthread.c:1430
kthread_destroy_worker+0x84/0x98
Modules linked in: wave5(-) rpmsg_ctrl rpmsg_char ...
Call trace:
kthread_destroy_worker+0x84/0x98
wave5_vpu_remove+0xc8/0xe0 [wave5]
platform_remove+0x30/0x58
...
---[ end trace 0000000000000000 ]--- |
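A minimal sketch of the corrected teardown order; the context structure is hypothetical and only the ordering matters:

```c
#include <linux/hrtimer.h>
#include <linux/kthread.h>

struct vpu_poll_ctx {
	struct hrtimer timer;		/* fires the polling callback */
	struct kthread_worker *worker;	/* receives the queued work */
};

static void vpu_poll_teardown(struct vpu_poll_ctx *ctx)
{
	hrtimer_cancel(&ctx->timer);		/* first: stop new work submissions */
	kthread_destroy_worker(ctx->worker);	/* then: work lists are guaranteed empty */
}
```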
| In the Linux kernel, the following vulnerability has been resolved:
net: nfc: nci: Fix parameter validation for packet data
Since commit 9c328f54741b ("net: nfc: nci: Add parameter validation for
packet data"), communication with NCI NFC chips no longer works.
The mentioned commit tries to fix access to uninitialized data, but
fails to account for the fact that in some cases the data packet is of
variable length and therefore cannot be compared against the maximum
packet length given by the sizeof() of the packet struct. |
| In the Linux kernel, the following vulnerability has been resolved:
media: uvcvideo: Return queued buffers on start_streaming() failure
Return buffers if streaming fails to start due to uvc_pm_get() error.
This bug may be responsible for a warning I got running
while :; do yavta -c3 /dev/video0; done
on an xHCI controller which failed under this workload.
I had no luck reproducing this warning again to confirm.
xhci_hcd 0000:09:00.0: HC died; cleaning up
usb 13-2: USB disconnect, device number 2
WARNING: CPU: 2 PID: 29386 at drivers/media/common/videobuf2/videobuf2-core.c:1803 vb2_start_streaming+0xac/0x120 |
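A minimal sketch of the generic videobuf2 rule referenced above: on start_streaming() failure, every already-queued buffer must be handed back with VB2_BUF_STATE_QUEUED. The list and field usage here are a generic pattern, not uvcvideo's internal queue:

```c
#include <linux/list.h>
#include <media/videobuf2-core.h>

/* Hand every queued buffer back to videobuf2 in the QUEUED state. */
static void return_all_queued_buffers(struct list_head *queued)
{
	struct vb2_buffer *vb, *tmp;

	list_for_each_entry_safe(vb, tmp, queued, queued_entry) {
		list_del(&vb->queued_entry);
		vb2_buffer_done(vb, VB2_BUF_STATE_QUEUED);
	}
}
```

In the failure path of start_streaming() (for example when uvc_pm_get() errors out), this kind of cleanup runs before the error is returned to videobuf2.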
| In the Linux kernel, the following vulnerability has been resolved:
kexec: derive purgatory entry from symbol
kexec_load_purgatory() derives image->start by locating e_entry inside an
SHF_EXECINSTR section. If the purgatory object contains multiple
executable sections with overlapping sh_addr, the entrypoint check can
match more than once and trigger a WARN.
Derive the entry section from the purgatory_start symbol when present and
compute image->start from its final placement. Keep the existing e_entry
fallback for purgatories that do not expose the symbol.
WARNING: kernel/kexec_file.c:1009 at kexec_load_purgatory+0x395/0x3c0, CPU#10: kexec/1784
Call Trace:
<TASK>
bzImage64_load+0x133/0xa00
__do_sys_kexec_file_load+0x2b3/0x5c0
do_syscall_64+0x81/0x610
entry_SYSCALL_64_after_hwframe+0x76/0x7e
[me@linux.beauty: move helper to avoid forward declaration, per Baoquan] |
| In the Linux kernel, the following vulnerability has been resolved:
mm/hugetlb: restore failed global reservations to subpool
Commit a833a693a490 ("mm: hugetlb: fix incorrect fallback for subpool")
fixed an underflow error for hstate->resv_huge_pages caused by incorrectly
attributing globally requested pages to the subpool's reservation.
Unfortunately, this fix also introduced the opposite problem, which would
leave spool->used_hpages elevated if the globally requested pages could
not be acquired. This is because while a subpool's reserve pages only
accounts for what is requested and allocated from the subpool, its "used"
counter keeps track of what is consumed in total, both from the subpool
and globally. Thus, we need to adjust spool->used_hpages in the other
direction, and make sure that globally requested pages are uncharged from
the subpool's used counter.
Each failed allocation attempt increments the used_hpages counter by how
many pages were requested from the global pool. Ultimately, this renders
the subpool unusable, as used_hpages approaches the max limit.
The issue can be reproduced as follows:
1. Allocate 4 hugetlb pages
2. Create a hugetlb mount with max=4, min=2
3. Consume 2 pages globally
4. Request 3 pages from the subpool (2 from subpool + 1 from global)
4.1 hugepage_subpool_get_pages(spool, 3) succeeds.
used_hpages += 3
4.2 hugetlb_acct_memory(h, 1) fails: no global pages left
used_hpages -= 2
5. Subpool now has used_hpages = 1, despite not being able to
successfully allocate any hugepages. It believes it can now only
allocate 3 more hugepages, not 4.
With each failed allocation attempt incrementing the used counter, the
subpool eventually reaches a point where its used counter equals its
max counter. At that point, any future allocations that try to
allocate hugeTLB pages from the subpool will fail, despite the subpool
not having any of its hugeTLB pages consumed by any user.
Once this happens, there is no way to make the subpool usable again,
since there is no way to decrement the used counter as no process is
really consuming the hugeTLB pages.
The underflow issue that the original commit fixes still remains fixed
as well.
Without this fix, used_hpages would keep on leaking if
hugetlb_acct_memory() fails. |
| In the Linux kernel, the following vulnerability has been resolved:
mm/slab: do not access current->mems_allowed_seq if !allow_spin
Lockdep complains when get_from_any_partial() is called in an NMI
context, because current->mems_allowed_seq is seqcount_spinlock_t and
not NMI-safe:
================================
WARNING: inconsistent lock state
6.19.0-rc5-kfree-rcu+ #315 Tainted: G N
--------------------------------
inconsistent {INITIAL USE} -> {IN-NMI} usage.
kunit_try_catch/9989 [HC1[1]:SC0[0]:HE0:SE1] takes:
ffff889085799820 (&____s->seqcount#3){.-.-}-{0:0}, at: ___slab_alloc+0x58f/0xc00
{INITIAL USE} state was registered at:
lock_acquire+0x185/0x320
kernel_init_freeable+0x391/0x1150
kernel_init+0x1f/0x220
ret_from_fork+0x736/0x8f0
ret_from_fork_asm+0x1a/0x30
irq event stamp: 56
hardirqs last enabled at (55): [<ffffffff850a68d7>] _raw_spin_unlock_irq+0x27/0x70
hardirqs last disabled at (56): [<ffffffff850858ca>] __schedule+0x2a8a/0x6630
softirqs last enabled at (0): [<ffffffff81536711>] copy_process+0x1dc1/0x6a10
softirqs last disabled at (0): [<0000000000000000>] 0x0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&____s->seqcount#3);
<Interrupt>
lock(&____s->seqcount#3);
*** DEADLOCK ***
According to Documentation/locking/seqlock.rst, seqcount_t is not
NMI-safe, and seqcount_latch_t should be used when the read path can
interrupt the write-side critical section. In this case, do not access
current->mems_allowed_seq and avoid the retry. |
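A minimal sketch of the guard described above; the allow_spin flag and surrounding loop follow the description, and the allocation step itself is elided:

```c
#include <linux/cpuset.h>

/* Only sample the cpuset mems_allowed seqcount when spinning is allowed;
 * the underlying seqcount_spinlock_t read is not NMI-safe. */
static void *alloc_from_any_node(bool allow_spin)
{
	unsigned int cpuset_mems_cookie;
	void *obj;

	do {
		cpuset_mems_cookie = allow_spin ? read_mems_allowed_begin() : 0;
		obj = NULL;	/* ... try each allowed node here ... */
	} while (!obj && allow_spin &&
		 read_mems_allowed_retry(cpuset_mems_cookie));

	return obj;
}
```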