| In the Linux kernel, the following vulnerability has been resolved:
ublk: fix NULL pointer dereference in ublk_ctrl_set_size()
ublk_ctrl_set_size() unconditionally dereferences ub->ub_disk via
set_capacity_and_notify() without checking if it is NULL.
ub->ub_disk is NULL before UBLK_CMD_START_DEV completes (it is only
assigned in ublk_ctrl_start_dev()) and after UBLK_CMD_STOP_DEV runs
(ublk_detach_disk() sets it to NULL). Since the UBLK_CMD_UPDATE_SIZE
handler performs no state validation, a user can trigger a NULL pointer
dereference by sending UPDATE_SIZE to a device that has been added but
not yet started, or one that has been stopped.
Fix this by checking ub->ub_disk under ub->mutex before dereferencing
it, and returning -ENODEV if the disk is not available. |
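A minimal kernel-style sketch of the described check, reusing the names from the entry above (ub->mutex, ub->ub_disk); this illustrates the pattern rather than the verbatim upstream patch, and new_size is a stand-in for the requested capacity.

```c
/* Sketch: hold ub->mutex so the disk cannot be attached or detached
 * underneath us, and bail out when no gendisk is present. */
mutex_lock(&ub->mutex);
if (!ub->ub_disk) {
	/* device added but not yet started, or already stopped */
	mutex_unlock(&ub->mutex);
	return -ENODEV;
}
set_capacity_and_notify(ub->ub_disk, new_size);
mutex_unlock(&ub->mutex);
return 0;
```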
| In the Linux kernel, the following vulnerability has been resolved:
xfs: fix undersized l_iclog_roundoff values
If the superblock doesn't list a log stripe unit, we set the incore log
roundoff value to 512. This leads to corrupt logs and unmountable
filesystems in generic/617 on a disk with 4k physical sectors...
XFS (sda1): Mounting V5 Filesystem ff3121ca-26e6-4b77-b742-aaff9a449e1c
XFS (sda1): Torn write (CRC failure) detected at log block 0x318e. Truncating head block from 0x3197.
XFS (sda1): failed to locate log tail
XFS (sda1): log mount/recovery failed: error -74
XFS (sda1): log mount failed
XFS (sda1): Mounting V5 Filesystem ff3121ca-26e6-4b77-b742-aaff9a449e1c
XFS (sda1): Ending clean mount
...on the current xfsprogs for-next which has a broken mkfs. xfs_info
shows this...
meta-data=/dev/sda1 isize=512 agcount=4, agsize=644992 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=1 metadir=1
data = bsize=4096 blocks=2579968, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=4096 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= rgcount=0 rgsize=268435456 extents
= zoned=0 start=0 reserved=0
...observe that the log section has sectsz=4096 sunit=0, which means
that the roundoff factor is 512, not 4096 as you'd expect. We should
fix mkfs not to generate broken filesystems, but anyone can fuzz the
ondisk superblock so we should be more cautious. I think the inadequate
logic predates commit a6a65fef5ef8d0, but that's clearly going to
require a different backport. |
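A hedged illustration of the roundoff policy the report implies, not the upstream diff: prefer the log stripe unit when one is set, otherwise fall back to the log sector size, and never go below one 512-byte basic block.

```c
/* Illustrative only: expected incore log roundoff in bytes. */
static unsigned int log_roundoff(unsigned int sunit_bytes,
				 unsigned int log_sectsize)
{
	if (sunit_bytes > 1)
		return sunit_bytes;	/* stripe unit wins when set */
	/* no stripe unit: round to the log sector size, floor 512 */
	return log_sectsize > 512 ? log_sectsize : 512;
}
```

With the broken mkfs geometry above (log sectsz=4096, sunit=0), this yields 4096 rather than 512.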
| i18next-http-middleware is a middleware to be used with Node.js web frameworks like express or Fastify and also for Deno. Versions prior to 3.9.3 allow an unauthenticated HTTP client to pollute Object.prototype in the Node.js process hosting the middleware, via two unvalidated entry points that reach internal object-key writes: getResourcesHandler and missingKeyHandler. This can break authorisation checks (e.g., if (user.isAdmin) evaluating to true for any user), cause type-confusion denial of service, and, depending on downstream code, be chained into remote code execution. |
| In th30d4y/IP, an IP Reputation Checker application, versions from 1.0.1 to before 2.0.1 contain a DOM-based Cross-Site Scripting (XSS) vulnerability: unsanitized user input was rendered directly in the browser, allowing attackers to execute arbitrary JavaScript. This issue has been patched in version 2.0.1. |
| In the Linux kernel, the following vulnerability has been resolved:
ovpn: tcp - fix packet extraction from stream
When processing TCP stream data in ovpn_tcp_recv, we receive large
cloned skbs from __strp_rcv that may contain multiple coalesced packets.
The current implementation has two bugs:
1. Header offset overflow: Using pskb_pull with large offsets on
coalesced skbs causes skb->data - skb->head to exceed the u16 storage
of skb->network_header. This causes skb_reset_network_header to fail
on the inner decapsulated packet, resulting in packet drops.
2. Unaligned protocol headers: Extracting packets from arbitrary
positions within the coalesced TCP stream provides no alignment
guarantees for the packet data, causing performance penalties on
architectures without efficient unaligned access. Additionally,
openvpn's 2-byte length prefix on TCP packets causes the subsequent
4-byte opcode and packet ID fields to be inherently misaligned.
Fix both issues by allocating a new skb for each openvpn packet and
using skb_copy_bits to extract only the packet content into the new
buffer, skipping the 2-byte length prefix. Also, check the length before
invoking the function that performs the allocation to avoid creating an
invalid skb.
If the packet has to be forwarded to userspace, the 2-byte prefix can
be pushed to the head safely, without misalignment.
As a side effect, this approach also avoids the expensive linearization
that pskb_pull triggers on cloned skbs with page fragments. In testing,
this resulted in TCP throughput improvements of up to 74%. |
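A small standalone C program modelling the two hazards (this is not the driver code): the u16 header-offset truncation and the misalignment caused by the 2-byte length prefix.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Bug 1: the network-header offset is stored in a u16. Pulling
	 * deep into a large coalesced buffer overflows that storage. */
	uint32_t data_offset = 70000;		/* skb->data - skb->head */
	uint16_t network_header = (uint16_t)data_offset;
	printf("offset %u stored as %u (truncated)\n",
	       data_offset, network_header);

	/* Bug 2: a 2-byte length prefix leaves the following 4-byte
	 * fields at offset 2 -- misaligned for a direct 4-byte load on
	 * strict-alignment architectures. Copying the payload out into
	 * a fresh buffer (as skb_copy_bits does) restores alignment. */
	uint8_t stream[6] = { 0x00, 0x04, 0xaa, 0xbb, 0xcc, 0xdd };
	uint32_t opcode;
	memcpy(&opcode, stream + 2, sizeof(opcode));	/* safe extraction */
	printf("opcode word copied out: 0x%08x\n", opcode);
	return 0;
}
```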
| In the Linux kernel, the following vulnerability has been resolved:
mm/hugetlb: restore failed global reservations to subpool
Commit a833a693a490 ("mm: hugetlb: fix incorrect fallback for subpool")
fixed an underflow error for hstate->resv_huge_pages caused by incorrectly
attributing globally requested pages to the subpool's reservation.
Unfortunately, this fix also introduced the opposite problem, which would
leave spool->used_hpages elevated if the globally requested pages could
not be acquired. This is because while a subpool's reserve pages only
accounts for what is requested and allocated from the subpool, its "used"
counter keeps track of what is consumed in total, both from the subpool
and globally. Thus, we need to adjust spool->used_hpages in the other
direction, and make sure that globally requested pages are uncharged from
the subpool's used counter.
Each failed allocation attempt increments the used_hpages counter by how
many pages were requested from the global pool. Ultimately, this renders
the subpool unusable, as used_hpages approaches the max limit.
The issue can be reproduced as follows:
1. Allocate 4 hugetlb pages
2. Create a hugetlb mount with max=4, min=2
3. Consume 2 pages globally
4. Request 3 pages from the subpool (2 from subpool + 1 from global)
4.1 hugepage_subpool_get_pages(spool, 3) succeeds.
used_hpages += 3
4.2 hugetlb_acct_memory(h, 1) fails: no global pages left
used_hpages -= 2
5. Subpool now has used_hpages = 1, despite not being able to
successfully allocate any hugepages. It believes it can now only
allocate 3 more hugepages, not 4.
With each failed allocation attempt incrementing the used counter, the
subpool eventually reaches a point where its used counter equals its
max counter. At that point, any future allocations that try to
allocate hugeTLB pages from the subpool will fail, despite the subpool
not having any of its hugeTLB pages consumed by any user.
Once this happens, there is no way to make the subpool usable again,
since there is no way to decrement the used counter as no process is
really consuming the hugeTLB pages.
The underflow issue that the original commit fixes still remains fixed
as well.
Without this fix, used_hpages would keep on leaking if
hugetlb_acct_memory() fails. |
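A toy C model of the arithmetic in the repro above (not kernel code), contrasting the old rollback with the fixed one:

```c
#include <stdio.h>

int main(void)
{
	long used_hpages = 0;		/* spool->used_hpages */
	long requested = 3;		/* step 4 */
	long from_reserve = 2, from_global = 1;

	used_hpages += requested;	/* 4.1: get_pages() succeeds */
	/* 4.2: hugetlb_acct_memory() fails -- no global pages left */
	used_hpages -= from_reserve;	/* old rollback: reserve only */
	printf("buggy rollback: used_hpages = %ld (leaked)\n", used_hpages);
	used_hpages -= from_global;	/* fix: uncharge global part too */
	printf("fixed rollback: used_hpages = %ld\n", used_hpages);
	return 0;
}
```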
| In the Linux kernel, the following vulnerability has been resolved:
kexec: derive purgatory entry from symbol
kexec_load_purgatory() derives image->start by locating e_entry inside an
SHF_EXECINSTR section. If the purgatory object contains multiple
executable sections with overlapping sh_addr, the entrypoint check can
match more than once and trigger a WARN.
Derive the entry section from the purgatory_start symbol when present and
compute image->start from its final placement. Keep the existing e_entry
fallback for purgatories that do not expose the symbol.
WARNING: kernel/kexec_file.c:1009 at kexec_load_purgatory+0x395/0x3c0, CPU#10: kexec/1784
Call Trace:
<TASK>
bzImage64_load+0x133/0xa00
__do_sys_kexec_file_load+0x2b3/0x5c0
do_syscall_64+0x81/0x610
entry_SYSCALL_64_after_hwframe+0x76/0x7e
[me@linux.beauty: move helper to avoid forward declaration, per Baoquan] |
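A hedged sketch of the address math involved; the struct and names below are illustrative stand-ins, not the kexec internals:

```c
/* Translate the link-time value of purgatory_start into the section's
 * final placement chosen by kexec. */
struct purgatory_section {
	unsigned long sh_addr;		/* link-time address of the section */
	unsigned long load_addr;	/* where kexec actually placed it */
};

static unsigned long entry_from_symbol(const struct purgatory_section *sec,
				       unsigned long st_value)
{
	/* st_value is the symbol's link-time address inside the section */
	return sec->load_addr + (st_value - sec->sh_addr);
}
```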
| In the Linux kernel, the following vulnerability has been resolved:
iio: accel: adxl380: Avoid reading more entries than present in FIFO
The interrupt handler reads FIFO entries in batches of N samples, where N
is the number of scan elements that have been enabled. However, the sensor
fills the FIFO one sample at a time, even when more than one channel is
enabled. Therefore, the number of entries reported by the FIFO status
registers may not be a multiple of N; if it is not, the number of
entries read from the FIFO may exceed the number actually present.
To fix the above issue, round down the number of FIFO entries read from the
status registers so that it is always a multiple of N. |
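A one-line illustration of the rounding (not the driver code), where n_enabled plays the role of N:

```c
/* Round the reported FIFO fill level down to a whole number of
 * scan-element sets before reading. */
static unsigned int entries_to_read(unsigned int fifo_entries,
				    unsigned int n_enabled)
{
	return fifo_entries - (fifo_entries % n_enabled);
}
/* e.g. 7 reported entries with 3 enabled channels -> read 6 and leave
 * the partial sample set for the next interrupt. */
```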
| In the Linux kernel, the following vulnerability has been resolved:
KVM: arm64: Eagerly init vgic dist/redist on vgic creation
If vgic_allocate_private_irqs_locked() fails for any odd reason,
we exit kvm_vgic_create() early, leaving dist->rd_regions uninitialised.
kvm_vgic_dist_destroy() then comes along and walks into the weeds
trying to free the RDs. Got to love this stuff.
Solve it by moving all the static initialisation early, and make
sure that if we fail halfway, we're in a reasonable shape to
perform the rest of the teardown. While at it, reset the vgic model
on failure, just in case... |
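A generic C illustration of the pattern (not the KVM code): initialise every field the destructor touches before the first possible failure point, so teardown of a half-built object stays safe.

```c
#include <errno.h>
#include <stdlib.h>

struct dist_state {
	void *rd_regions;	/* teardown walks/frees this */
};

static int create(struct dist_state *d, int alloc_fails)
{
	d->rd_regions = NULL;	/* static init first: destroy() is now safe */
	if (alloc_fails)	/* e.g. private-IRQ allocation failing */
		return -ENOMEM;
	d->rd_regions = malloc(64);
	return d->rd_regions ? 0 : -ENOMEM;
}

static void destroy(struct dist_state *d)
{
	free(d->rd_regions);	/* NULL-safe even after an early failure */
	d->rd_regions = NULL;
}

int main(void)
{
	struct dist_state d;
	if (create(&d, 1) != 0)	/* fail halfway through creation */
		destroy(&d);	/* no walk into uninitialised memory */
	return 0;
}
```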
| Inefficient Algorithmic Complexity vulnerability in absinthe-graphql absinthe allows unauthenticated denial of service via quadratic fragment-name uniqueness validation.
'Elixir.Absinthe.Phase.Document.Validation.UniqueFragmentNames':run/2 iterates over all fragments and for each one calls duplicate?/2, which evaluates Enum.count(fragments, &(&1.name == name)) — a full linear scan of the fragment list. The result is O(N²) comparisons per document, where N is the number of fragment definitions supplied by the caller.
Because input.fragments is built directly from the GraphQL query body, N is fully attacker-controlled. A minimum-size fragment definition is roughly 16 bytes, so a ~1 MB document carries ~60,000 fragments and forces ~3.6 × 10⁹ comparisons inside this single validation phase. No authentication, schema knowledge, or special configuration is required.
This issue affects absinthe: from 1.2.0 before 1.10.2. |
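A C rendering of the quadratic scan (the vulnerable code is Elixir), with a comparison counter to make the O(N²) cost visible:

```c
#include <stdio.h>
#include <string.h>

static long quadratic_dup_scan(const char **names, int n)
{
	long comparisons = 0;
	for (int i = 0; i < n; i++) {
		int count = 0;
		for (int j = 0; j < n; j++) {	/* full rescan per fragment */
			comparisons++;
			if (strcmp(names[j], names[i]) == 0)
				count++;
		}
		if (count > 1)
			printf("duplicate fragment name: %s\n", names[i]);
	}
	return comparisons;
}

int main(void)
{
	const char *names[] = { "a", "b", "a", "c" };
	printf("comparisons: %ld\n", quadratic_dup_scan(names, 4)); /* 16 = N^2 */
	/* At N = 60,000 fragments that is ~3.6e9 comparisons; a single
	 * pass that records seen names in a hash set would be O(N). */
	return 0;
}
```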
| Allocation of Resources Without Limits or Throttling vulnerability in absinthe-graphql absinthe allows unauthenticated denial of service via atom table exhaustion when parsing attacker-controlled GraphQL SDL.
Multiple Blueprint.Draft.convert/2 implementations in Absinthe's SDL language modules call String.to_atom/1 on attacker-controlled names from parsed GraphQL SDL documents, including directive names, field names, type names, and argument names. Because atoms are never garbage-collected and the BEAM atom table has a fixed limit (default 1,048,576), each unique name permanently consumes one slot. An attacker can exhaust the atom table by submitting SDL documents containing enough unique names, causing the Erlang VM to abort with system_limit and taking down the entire node.
Any application that passes attacker-controlled GraphQL SDL through Absinthe's parser is exposed — for example, a schema-upload endpoint, a federation gateway that ingests remote SDL, or any developer tool that runs the parser over user-supplied documents.
This issue affects absinthe: from 1.5.0 before 1.10.2. |
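A C model of the failure mode (the real mechanism is the BEAM atom table, not this code): a fixed-capacity intern table whose entries are never freed, so each unique attacker-supplied name permanently consumes a slot. TABLE_LIMIT stands in for the BEAM default of 1,048,576.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_LIMIT 8	/* tiny, to make the demo terminate quickly */

static char *table[TABLE_LIMIT];
static size_t table_len;

static size_t intern(const char *name)
{
	for (size_t i = 0; i < table_len; i++)
		if (strcmp(table[i], name) == 0)
			return i;	/* existing name: no growth */
	if (table_len == TABLE_LIMIT) {
		fputs("system_limit: atom table full, VM aborts\n", stderr);
		exit(1);
	}
	table[table_len] = strdup(name);	/* never freed */
	return table_len++;
}

int main(void)
{
	char name[32];
	for (int i = 0; ; i++) {	/* unique names from attacker SDL */
		snprintf(name, sizeof name, "field_%d", i);
		intern(name);
	}
}
```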
| i18next-locize-backend is a simple i18next backend for locize.com which can be used in Node.js, in the browser and for Deno. Prior to version 9.0.2, i18next-locize-backend interpolates lng, ns, projectId, and version directly into the configured loadPath / privatePath / addPath / updatePath / getLanguagesPath URL templates with no path-component validation and no encoding. When an application exposes any of these values to user-controlled input (?lng= / ?ns= query parameters via i18next-browser-languagedetector, cookies, request headers, or a URL-derived projectId), a crafted value can change the structure of the outgoing request URL. Affected call sites in lib/index.js (pre-patch): the interpolate() helper is used at the five URL-build sites — _readAny/read (line 415 for private, 426 for public), getLanguages (lines 271 and 296), and writePage (lines 616 and 622) for the missing-key and update POST paths. The helper interpolate in lib/utils.js substitutes raw values with no encoding. This issue has been patched in version 9.0.2. |
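An illustration in C (the affected library is JavaScript) of why raw path interpolation is dangerous; PROJECT and VERSION below are placeholder values:

```c
#include <stdio.h>

int main(void)
{
	/* attacker-controlled, e.g. via a ?lng= query parameter */
	const char *lng = "../../other-project/latest/en";
	char url[256];

	/* raw substitution into the URL template, no encoding */
	snprintf(url, sizeof url,
		 "https://api.locize.app/PROJECT/VERSION/%s/common", lng);
	printf("%s\n", url);
	/* After normalization the request no longer targets
	 * PROJECT/VERSION. Percent-encoding each substituted path
	 * component keeps the value a single segment ("..%2F.."). */
	return 0;
}
```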
| openvpn-auth-oauth2 is a plugin/management interface client for OpenVPN server to handle OIDC-based single sign-on (SSO) auth flows. From version 1.26.3 to before version 1.27.3, when openvpn-auth-oauth2 is deployed in the experimental plugin mode (shared library loaded by OpenVPN via the plugin directive), clients that do not support WebAuth/SSO (e.g., the openvpn CLI on Linux) are incorrectly admitted to the VPN despite being denied by the authentication logic. The default management-interface mode is not affected because it does not use the OpenVPN plugin return-code mechanism. This issue has been patched in version 1.27.3. |
| In the Linux kernel, the following vulnerability has been resolved:
media: rockchip: rga: Fix possible ERR_PTR dereference in rga_buf_init()
rga_get_frame() can return ERR_PTR(-EINVAL) when buffer type is
unsupported or invalid. rga_buf_init() does not check the return value
and unconditionally dereferences the pointer when accessing f->size.
Add proper ERR_PTR checking and return the error to prevent
dereferencing an invalid pointer. |
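A self-contained userspace replica of the kernel's ERR_PTR/IS_ERR idiom that the fix applies; get_frame() below is a stand-in for rga_get_frame():

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* errors live in the top 4095 values of the address space */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

struct frame { unsigned long size; };

static struct frame *get_frame(int valid_buf_type)
{
	static struct frame f = { .size = 4096 };
	return valid_buf_type ? &f : ERR_PTR(-EINVAL);
}

int main(void)
{
	struct frame *f = get_frame(0);
	if (IS_ERR(f)) {	/* the check that was missing */
		printf("error %ld, not dereferencing\n", PTR_ERR(f));
		return 1;
	}
	printf("size %lu\n", f->size);
	return 0;
}
```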
| In the Linux kernel, the following vulnerability has been resolved:
md raid: fix hang when stopping arrays with metadata through dm-raid
When using device-mapper's dm-raid target, stopping a RAID array can cause
the system to hang under specific conditions.
This occurs when:
- A dm-raid managed device tree is suspended from top to bottom
(the top-level RAID device is suspended first, followed by its
underlying metadata and data devices)
- The top-level RAID device is then removed
Removing the top-level device triggers a hang in the following sequence:
the dm-raid destructor calls md_stop(), which tries to flush the
write-intent bitmap by writing to the metadata sub-devices. However, these
devices are already suspended, making them unable to complete the write-intent
operations and causing an indefinite block.
Fix:
- Prevent bitmap flushing when md_stop() is called from the dm-raid
destructor context, and avoid a quiescing/unquiescing cycle which could
also cause I/O
- Still allow write-intent bitmap flushing when called from dm-raid
suspend context
This ensures that RAID array teardown can complete successfully even when the
underlying devices are in a suspended state.
This second patch uses md_is_rdwr() to distinguish between suspend and
destructor paths as elaborated on above. |
| In the Linux kernel, the following vulnerability has been resolved:
soc/tegra: pmc: Fix unsafe generic_handle_irq() call
Currently, when resuming from system suspend on Tegra platforms,
the following warning is observed:
WARNING: CPU: 0 PID: 14459 at kernel/irq/irqdesc.c:666
Call trace:
handle_irq_desc+0x20/0x58 (P)
tegra186_pmc_wake_syscore_resume+0xe4/0x15c
syscore_resume+0x3c/0xb8
suspend_devices_and_enter+0x510/0x540
pm_suspend+0x16c/0x1d8
The warning occurs because generic_handle_irq() is being called from
a non-interrupt context, which is considered unsafe.
Fix this warning by deferring generic_handle_irq() call to an IRQ work
which gets executed in hard IRQ context where generic_handle_irq()
can be called safely.
When PREEMPT_RT kernels are used, regular IRQ work (initialized with
init_irq_work) is deferred to run in per-CPU kthreads in preemptible
context rather than hard IRQ context. Hence, use the IRQ_WORK_INIT_HARD
variant so that with PREEMPT_RT kernels, the IRQ work is processed in
hardirq context instead of being deferred to a thread which is required
for calling generic_handle_irq().
On non-PREEMPT_RT kernels, both init_irq_work() and IRQ_WORK_INIT_HARD()
execute in IRQ context, so this change has no functional impact for
standard kernel configurations.
[treding@nvidia.com: miscellaneous cleanups] |
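A hedged kernel-style sketch of the deferral; the driver-state names are illustrative and this is not the exact patch:

```c
#include <linux/irq.h>
#include <linux/irq_work.h>

static unsigned int pmc_pending_wake_irq;	/* hypothetical driver state */

static void pmc_wake_irq_work(struct irq_work *work)
{
	/* runs in hard IRQ context, even on PREEMPT_RT */
	generic_handle_irq(pmc_pending_wake_irq);
}

static struct irq_work pmc_wake_work = IRQ_WORK_INIT_HARD(pmc_wake_irq_work);

static void pmc_syscore_resume_dispatch(unsigned int irq)
{
	pmc_pending_wake_irq = irq;
	irq_work_queue(&pmc_wake_work);	/* defer out of syscore context */
}
```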
| In the Linux kernel, the following vulnerability has been resolved:
media: i2c: ov5647: Initialize subdev before controls
In ov5647_init_controls() we call v4l2_get_subdevdata(), but the subdev
client data is only set by v4l2_i2c_subdev_init(), which currently runs
later in probe, after the controls are initialized. This can result in a
segfault if the error condition is hit and we try to access i2c_client,
so fix the order. |
| Dolibarr is an enterprise resource planning (ERP) and customer relationship management (CRM) software package. Versions 22.0.2 and earlier contain an authenticated remote code execution vulnerability in the user extrafields functionality. User-controlled input from the "computed value" field is passed to PHP's `eval()` function without adequate sanitization, allowing authenticated administrators to execute arbitrary PHP code on the server. As of time of publication, no patched versions are available. |
| Password Pusher is an open source application to communicate sensitive information over the web. Prior to versions 1.69.3 and 2.4.2, a security issue in OSS PasswordPusher allowed unauthenticated creation of file-type pushes through a generic JSON API create path under certain configurations. This could bypass the intended authentication boundary for file push creation. This issue has been patched in versions 1.69.3 and 2.4.2. |
| CoreDNS is a DNS server that chains plugins. In versions prior to 1.14.3, the DNS-over-QUIC (DoQ) server can be driven into unbounded goroutine and memory growth by a remote client that opens many QUIC streams and sends only 1 byte per stream. When the worker pool is full, CoreDNS still spawns a goroutine per accepted stream to wait for a worker token. Additionally, active workers block indefinitely in io.ReadFull() with no per-stream read deadline, allowing an attacker to pin all workers by sending a single byte so the read blocks waiting for the second byte of the DoQ length prefix. This enables an unauthenticated remote attacker to cause memory exhaustion and OOM-kill. This issue has been fixed in version 1.14.3. No known workarounds exist. |