| The Ecwid by Lightspeed Ecommerce Shopping Cart plugin for WordPress is vulnerable to Privilege Escalation in all versions up to, and including, 7.0.7. This is due to a missing capability check in the 'save_custom_user_profile_fields' function. This makes it possible for authenticated attackers, with minimal permissions such as a subscriber, to supply the 'ec_store_admin_access' parameter during a profile update and gain store manager access to the site. |
| A vulnerability was found in EFM iptime A6004MX 14.18.2. Affected is the function commit_vpncli_file_upload of the file /cgi/timepro.cgi. The manipulation results in an unrestricted file upload. The attack may be performed remotely. The exploit has been made public and could be used. The vendor was contacted early about this disclosure but did not respond in any way. |
| A flaw has been found in GeekAI up to 4.2.4. The affected element is the function Download of the file api/handler/net_handler.go. Manipulation of the argument url causes server-side request forgery. Remote exploitation is possible. The exploit has been published and may be used. The project was informed of the problem early through an issue report but has not responded yet. |
| The RegistrationMagic WordPress plugin before 6.0.7.2 does not have proper capability checks, allowing subscribers and above to create forms on the site. |
| URL Redirection to Untrusted Site ('Open Redirect') vulnerability in TR7 Cyber Defense Inc. Web Application Firewall allows Phishing. This issue affects Web Application Firewall: from 4.30 through 16022026.
NOTE: The vendor was contacted early about this disclosure but did not respond in any way. |
| OpenS100 (the reference implementation S-100 viewer) prior to commit 753cf29 contains a remote code execution vulnerability via an unrestricted Lua interpreter. The Portrayal Engine initializes Lua using luaL_openlibs() without sandboxing or capability restrictions, exposing standard libraries such as 'os' and 'io' to untrusted portrayal catalogues. An attacker can provide a malicious S-100 portrayal catalogue containing Lua scripts that execute arbitrary commands with the privileges of the OpenS100 process when a user imports the catalogue and loads a chart. |
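As an illustration of the restriction the entry above describes as missing, here is a minimal sketch of initializing Lua with only side-effect-free standard libraries instead of luaL_openlibs(). This is not OpenS100 code; the function name is hypothetical and the set of libraries kept is an assumption.

    /* Sketch: create a Lua state without 'os', 'io' or 'package', so scripts
     * from an untrusted portrayal catalogue cannot run commands or touch
     * files. Assumes Lua 5.3/5.4 headers. Not the OpenS100 fix. */
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    static lua_State *new_sandboxed_state(void)
    {
        lua_State *L = luaL_newstate();
        if (!L)
            return NULL;

        /* Load only side-effect-free libraries. */
        luaL_requiref(L, "_G", luaopen_base, 1);
        luaL_requiref(L, LUA_TABLIBNAME, luaopen_table, 1);
        luaL_requiref(L, LUA_STRLIBNAME, luaopen_string, 1);
        luaL_requiref(L, LUA_MATHLIBNAME, luaopen_math, 1);
        lua_settop(L, 0);

        /* The base library still exposes loaders that read files or compile
         * arbitrary chunks; drop them as well. */
        lua_pushnil(L); lua_setglobal(L, "dofile");
        lua_pushnil(L); lua_setglobal(L, "loadfile");
        lua_pushnil(L); lua_setglobal(L, "load");
        return L;
    }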
| In the Linux kernel, the following vulnerability has been resolved:
net: cpsw: Execute ndo_set_rx_mode callback in a work queue
Commit 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for
IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.") removed the RTNL lock for
IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP operations. However, this
change triggered the following call trace on my BeagleBone Black board:
WARNING: net/8021q/vlan_core.c:236 at vlan_for_each+0x120/0x124, CPU#0: rpcbind/481
RTNL: assertion failed at net/8021q/vlan_core.c (236)
Modules linked in:
CPU: 0 UID: 997 PID: 481 Comm: rpcbind Not tainted 6.19.0-rc7-next-20260130-yocto-standard+ #35 PREEMPT
Hardware name: Generic AM33XX (Flattened Device Tree)
Call trace:
unwind_backtrace from show_stack+0x28/0x2c
show_stack from dump_stack_lvl+0x30/0x38
dump_stack_lvl from __warn+0xb8/0x11c
__warn from warn_slowpath_fmt+0x130/0x194
warn_slowpath_fmt from vlan_for_each+0x120/0x124
vlan_for_each from cpsw_add_mc_addr+0x54/0x98
cpsw_add_mc_addr from __hw_addr_ref_sync_dev+0xc4/0xec
__hw_addr_ref_sync_dev from __dev_mc_add+0x78/0x88
__dev_mc_add from igmp6_group_added+0x84/0xec
igmp6_group_added from __ipv6_dev_mc_inc+0x1fc/0x2f0
__ipv6_dev_mc_inc from __ipv6_sock_mc_join+0x124/0x1b4
__ipv6_sock_mc_join from do_ipv6_setsockopt+0x84c/0x1168
do_ipv6_setsockopt from ipv6_setsockopt+0x88/0xc8
ipv6_setsockopt from do_sock_setsockopt+0xe8/0x19c
do_sock_setsockopt from __sys_setsockopt+0x84/0xac
__sys_setsockopt from ret_fast_syscall+0x0/0x54
This trace occurs because vlan_for_each() is called within
cpsw_ndo_set_rx_mode(), which expects the RTNL lock to be held.
Since modifying vlan_for_each() to operate without the RTNL lock is not
straightforward, and because ndo_set_rx_mode() is invoked both with and
without the RTNL lock across different code paths, simply adding
rtnl_lock() in cpsw_ndo_set_rx_mode() is not a viable solution.
To resolve this issue, we opt to execute the actual processing within
a work queue, following the approach used by the icssg-prueth driver.
Please note: To reproduce this issue, I manually reverted the changes to
am335x-bone-common.dtsi from commit c477358e66a3 ("ARM: dts: am335x-bone:
switch to new cpsw switch drv") in order to revert to the legacy cpsw
driver. |
| In the Linux kernel, the following vulnerability has been resolved:
mm, shmem: prevent infinite loop on truncate race
When truncating a large swap entry, shmem_free_swap() returns 0 when the
entry's index doesn't match the given index due to lookup alignment. The
failure fallback path checks if the entry crosses the end border and
aborts when it happens, so truncate won't erase an unexpected entry or
range. But one scenario was ignored.
When `index` points to the middle of a large swap entry, and the large
swap entry doesn't go across the end border, find_get_entries() will
return that large swap entry as the first item in the batch with
`indices[0]` equal to `index`. The entry's base index will be smaller
than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the
"base < index" check. The code will then call shmem_confirm_swap(), get
the order, check if it crosses the END boundary (which it doesn't), and
retry with the same index.
The next iteration will find the same entry again at the same index with
the same indices, leading to an infinite loop.
Fix this by retrying with a round-down index, and abort if the index is
smaller than the truncate range. |
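A small userspace model of the retry logic in the entry above, assuming a large entry spans (1 << order) naturally aligned indices; all names and values are illustrative, not the kernel code.

    /* Round the index down to the entry's base before retrying, and abort
     * when that base falls before the start of the truncate range. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long start = 5;   /* first index of the truncate range */
        unsigned long index = 6;   /* points into the middle of the entry */
        unsigned int order = 2;    /* entry covers 4 indices, base index 4 */

        unsigned long base = index & ~((1UL << order) - 1);  /* round down */
        if (base < start)
            printf("abort: base %lu precedes truncate start %lu\n", base, start);
        else
            printf("retry with index %lu\n", base);
        return 0;
    }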
| In the Linux kernel, the following vulnerability has been resolved:
HID: i2c-hid: fix potential buffer overflow in i2c_hid_get_report()
`i2c_hid_xfer` is used to read `recv_len + sizeof(__le16)` bytes of data
into `ihid->rawbuf`.
The former can come from userspace via the hidraw driver and is only
bounded by HID_MAX_BUFFER_SIZE (16384) by default (unless we also set the
`max_buffer_size` field of `struct hid_ll_driver`, which we do not).
The latter has size determined at runtime by the maximum size of
different report types you could receive on any particular device and
can be a much smaller value.
Fix this by truncating `recv_len` to `ihid->bufsize - sizeof(__le16)`.
The impact is low since access to hidraw devices requires root. |
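A minimal userspace sketch of the clamp described in the entry above; the types and names are stand-ins, not the i2c-hid driver code.

    /* The requested report length may be far larger than the preallocated
     * buffer; clamp it so the payload plus the 2-byte length prefix always
     * fit. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static size_t clamp_recv_len(size_t recv_len, size_t bufsize)
    {
        if (recv_len > bufsize - sizeof(uint16_t))
            recv_len = bufsize - sizeof(uint16_t);
        return recv_len;
    }

    int main(void)
    {
        uint8_t rawbuf[64];                         /* stands in for ihid->rawbuf */
        size_t len = clamp_recv_len(16384, sizeof(rawbuf));
        memset(rawbuf, 0, len + sizeof(uint16_t));  /* stand-in for the transfer */
        printf("clamped to %zu bytes\n", len);
        return 0;
    }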
| In the Linux kernel, the following vulnerability has been resolved:
nvmet-tcp: fixup hang in nvmet_tcp_listen_data_ready()
When the socket is closed while in TCP_LISTEN a callback is run to
flush all outstanding packets, which in turn calls
nvmet_tcp_listen_data_ready() with the sk_callback_lock held.
So we need to check if we are in TCP_LISTEN before attempting
to take the sk_callback_lock to avoid a deadlock. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: sync read disk super and set block size
When the user performs a btrfs mount, the block device is not set
correctly. The user sets the block size of the block device to 0x4000
by executing the BLKBSZSET command.
Since the block size change also changes the mapping->flags value, this
further affects the result of the mapping_min_folio_order() calculation.
Let's analyze the following two scenarios:
Scenario 1: Without executing the BLKBSZSET command, the block size is
0x1000, and mapping_min_folio_order() returns 0;
Scenario 2: After executing the BLKBSZSET command, the block size is
0x4000, and mapping_min_folio_order() returns 2.
do_read_cache_folio() allocates a folio before the BLKBSZSET command
is executed. This results in the allocated folio having an order value
of 0. Later, after BLKBSZSET is executed, the block size increases to
0x4000, and the mapping_min_folio_order() calculation result becomes 2.
This leads to two undesirable consequences:
1. filemap_add_folio() triggers a VM_BUG_ON_FOLIO(folio_order(folio) <
mapping_min_folio_order(mapping)) assertion.
2. The syzbot report [1] shows a null pointer dereference in
create_empty_buffers() due to a buffer head allocation failure.
Synchronization should be established based on the inode between the
BLKBSZSET command and read cache page to prevent inconsistencies in
block size or mapping flags before and after folio allocation.
[1]
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
RIP: 0010:create_empty_buffers+0x4d/0x480 fs/buffer.c:1694
Call Trace:
folio_create_buffers+0x109/0x150 fs/buffer.c:1802
block_read_full_folio+0x14c/0x850 fs/buffer.c:2403
filemap_read_folio+0xc8/0x2a0 mm/filemap.c:2496
do_read_cache_folio+0x266/0x5c0 mm/filemap.c:4096
do_read_cache_page mm/filemap.c:4162 [inline]
read_cache_page_gfp+0x29/0x120 mm/filemap.c:4195
btrfs_read_disk_super+0x192/0x500 fs/btrfs/volumes.c:1367 |
| In the Linux kernel, the following vulnerability has been resolved:
cgroup/dmem: fix NULL pointer dereference when setting max
An issue was triggered:
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: Oops: 0000 [#1] SMP NOPTI
CPU: 15 UID: 0 PID: 658 Comm: bash Tainted: 6.19.0-rc6-next-2026012
Tainted: [O]=OOT_MODULE
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
RIP: 0010:strcmp+0x10/0x30
RSP: 0018:ffffc900017f7dc0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff888107cd4358
RDX: 0000000019f73907 RSI: ffffffff82cc381a RDI: 0000000000000000
RBP: ffff8881016bef0d R08: 000000006c0e7145 R09: 0000000056c0e714
R10: 0000000000000001 R11: ffff888107cd4358 R12: 0007ffffffffffff
R13: ffff888101399200 R14: ffff888100fcb360 R15: 0007ffffffffffff
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000105c79000 CR4: 00000000000006f0
Call Trace:
<TASK>
dmemcg_limit_write.constprop.0+0x16d/0x390
? __pfx_set_resource_max+0x10/0x10
kernfs_fop_write_iter+0x14e/0x200
vfs_write+0x367/0x510
ksys_write+0x66/0xe0
do_syscall_64+0x6b/0x390
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f42697e1887
It was triggered by setting max without a limit value, with a command like:
"echo test/region0 > dmem.max". To fix this issue, add a check of whether
options is valid after parsing the region_name. |
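A userspace sketch of the missing check described above, using a hypothetical parser rather than the dmem cgroup code.

    /* Split "region_name value" and reject the line when no value follows,
     * instead of handing a NULL pointer to strcmp(). */
    #include <stdio.h>
    #include <string.h>

    static int parse_limit(char *line)
    {
        char *sep = strchr(line, ' ');
        if (!sep || !sep[1])      /* the missing check: no value was given */
            return -1;
        *sep = '\0';
        const char *region = line;
        const char *value = sep + 1;

        if (!strcmp(value, "max"))
            printf("%s: no limit\n", region);
        else
            printf("%s: limit %s\n", region, value);
        return 0;
    }

    int main(void)
    {
        char bad[] = "test/region0";     /* models "echo test/region0 > dmem.max" */
        char good[] = "test/region0 max";
        printf("bad -> %d\n", parse_limit(bad));
        printf("good -> %d\n", parse_limit(good));
        return 0;
    }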
| In the Linux kernel, the following vulnerability has been resolved:
binder: fix UAF in binder_netlink_report()
Oneway transactions sent to frozen targets via binder_proc_transaction()
return a BR_TRANSACTION_PENDING_FROZEN error but they are still treated
as successful since the target is expected to thaw at some point. It is
then not safe to access 't' after BR_TRANSACTION_PENDING_FROZEN errors
as the transaction could have been consumed by the now thawed target.
This is the case for binder_netlink_report(), which dereferences 't'
after a pending frozen error, as pointed out by the following KASAN
report:
==================================================================
BUG: KASAN: slab-use-after-free in binder_netlink_report.isra.0+0x694/0x6c8
Read of size 8 at addr ffff00000f98ba38 by task binder-util/522
CPU: 4 UID: 0 PID: 522 Comm: binder-util Not tainted 6.19.0-rc6-00015-gc03e9c42ae8f #1 PREEMPT
Hardware name: linux,dummy-virt (DT)
Call trace:
binder_netlink_report.isra.0+0x694/0x6c8
binder_transaction+0x66e4/0x79b8
binder_thread_write+0xab4/0x4440
binder_ioctl+0x1fd4/0x2940
[...]
Allocated by task 522:
__kmalloc_cache_noprof+0x17c/0x50c
binder_transaction+0x584/0x79b8
binder_thread_write+0xab4/0x4440
binder_ioctl+0x1fd4/0x2940
[...]
Freed by task 488:
kfree+0x1d0/0x420
binder_free_transaction+0x150/0x234
binder_thread_read+0x2d08/0x3ce4
binder_ioctl+0x488/0x2940
[...]
==================================================================
Instead, make a transaction copy so the data can be safely accessed by
binder_netlink_report() after a pending frozen error. While here, add a
comment about not using t->buffer in binder_netlink_report(). |
| In the Linux kernel, the following vulnerability has been resolved:
wifi: iwlwifi: mld: cancel mlo_scan_start_wk
mlo_scan_start_wk is not canceled on disconnection. In fact, it is not
canceled anywhere except in the restart cleanup, where we don't really
have to.
This can cause an init-after-queue issue: if, for example, the work was
queued and then drv_change_interface got executed.
This can also cause use-after-free: if the work is executed after the
vif is freed. |
| In the Linux kernel, the following vulnerability has been resolved:
hwmon: (acpi_power_meter) Fix deadlocks related to acpi_power_meter_notify()
The acpi_power_meter driver's .notify() callback function,
acpi_power_meter_notify(), calls hwmon_device_unregister() under a lock
that is also acquired by callbacks in sysfs attributes of the device
being unregistered which is prone to deadlocks between sysfs access and
device removal.
Address this by moving the hwmon device removal in
acpi_power_meter_notify() outside the lock in question, but notice
that doing it alone is not sufficient because two concurrent
METER_NOTIFY_CONFIG notifications may be attempting to remove the
same device at the same time. To prevent that from happening, add a
new lock serializing the execution of the switch () statement in
acpi_power_meter_notify(). For simplicity, it is a static mutex
which should not be a problem from the performance perspective.
The new lock also allows the hwmon_device_register_with_info()
in acpi_power_meter_notify() to be called outside the inner lock
because it prevents the other notifications handled by that function
from manipulating the "resource" object while the hwmon device based
on it is being registered. The sending of ACPI netlink messages from
acpi_power_meter_notify() is serialized by the new lock too which
generally helps to ensure that the order of handling firmware
notifications is the same as the order of sending netlink messages
related to them.
In addition, notice that hwmon_device_register_with_info() may fail
in which case resource->hwmon_dev will become an error pointer,
so add checks to acpi_power_meter_notify() and acpi_power_meter_remove()
to avoid attempting to unregister the hwmon device pointed to by it in
that case. |
| In the Linux kernel, the following vulnerability has been resolved:
net: usb: r8152: fix resume reset deadlock
rtl8152 can trigger a device reset during resume, which
can potentially result in a deadlock:
**** DPM device timeout after 10 seconds; 15 seconds until panic ****
Call Trace:
<TASK>
schedule+0x483/0x1370
schedule_preempt_disabled+0x15/0x30
__mutex_lock_common+0x1fd/0x470
__rtl8152_set_mac_address+0x80/0x1f0
dev_set_mac_address+0x7f/0x150
rtl8152_post_reset+0x72/0x150
usb_reset_device+0x1d0/0x220
rtl8152_resume+0x99/0xc0
usb_resume_interface+0x3e/0xc0
usb_resume_both+0x104/0x150
usb_resume+0x22/0x110
The problem is that rtl8152 resume calls reset under
tp->control mutex while reset basically re-enters rtl8152
and attempts to acquire the same tp->control lock once
again.
Reset INACCESSIBLE device outside of tp->control mutex
scope to avoid recursive mutex_lock() deadlock. |
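A userspace analogue of the reordering described above; the pthread mutex stands in for tp->control and all names are illustrative, not the r8152 driver.

    /* The reset path re-enters code that takes the same (non-recursive)
     * mutex, so it must run only after the lock has been dropped. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t control = PTHREAD_MUTEX_INITIALIZER;

    static void device_reset(void)
    {
        pthread_mutex_lock(&control);   /* would deadlock if the caller held it */
        /* ... reprogram MAC address, rings, ... */
        pthread_mutex_unlock(&control);
    }

    static void resume(bool inaccessible)
    {
        pthread_mutex_lock(&control);
        /* ... restore state that genuinely needs the lock ... */
        pthread_mutex_unlock(&control);

        /* Reset the inaccessible device only after dropping the lock. */
        if (inaccessible)
            device_reset();
    }

    int main(void)
    {
        resume(true);
        printf("resumed without recursive locking\n");
        return 0;
    }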
| In the Linux kernel, the following vulnerability has been resolved:
ceph: fix NULL pointer dereference in ceph_mds_auth_match()
The CephFS kernel client has a regression starting from 6.18-rc1.
We have an issue in ceph_mds_auth_match() if fs_name == NULL:
const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
...
if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
/* fsname mismatch, try next one */
return 0;
}
Patrick Donnelly suggested that: In summary, we should definitely start
decoding `fs_name` from the MDSMap and do strict authorizations checks
against it. Note that the `-o mds_namespace=foo` should only be used for
selecting the file system to mount and nothing else. It's possible
no mds_namespace is specified but the kernel will mount the only
file system that exists which may have name "foo".
This patch reworks ceph_mdsmap_decode() and namespace_equals() with
the goal of supporting the suggested concept. Now struct ceph_mdsmap
contains an m_fs_name field that receives a copy of the FS name extracted
by ceph_extract_encoded_string(). For the case of "old" CephFS file
systems, the name "cephfs" is used.
[ idryomov: replace redundant %*pE with %s in ceph_mdsmap_decode(),
get rid of a series of strlen() calls in ceph_namespace_match(),
drop changes to namespace_equals() body to avoid treating empty
mds_namespace as equal, drop changes to ceph_mdsc_handle_fsmap()
as namespace_equals() isn't an equivalent substitution there ] |
| In the Linux kernel, the following vulnerability has been resolved:
ALSA: aloop: Fix racy access at PCM trigger
The PCM trigger callback of aloop driver tries to check the PCM state
and stop the stream of the tied substream in the corresponding cable.
Since both check and stop operations are performed outside the cable
lock, this may result in UAF when a program attempts to trigger
frequently while opening/closing the tied stream, as spotted by
fuzzers.
For addressing the UAF, this patch changes two things:
- It covers most of the code in loopback_check_format() with the
cable->lock spinlock, and adds the proper NULL checks. This already
avoids some racy accesses.
- In addition, now we try to check the state of the capture PCM stream
that may be stopped in this function, which was the major pain point
leading to UAF. |
| In the Linux kernel, the following vulnerability has been resolved:
scsi: target: iscsi: Fix use-after-free in iscsit_dec_session_usage_count()
In iscsit_dec_session_usage_count(), the function calls complete() while
holding the sess->session_usage_lock. Similar to the connection usage count
logic, the waiter signaled by complete() (e.g., in the session release
path) may wake up and free the iscsit_session structure immediately.
This creates a race condition where the current thread may attempt to
execute spin_unlock_bh() on a session structure that has already been
deallocated, resulting in a KASAN slab-use-after-free.
To resolve this, release the session_usage_lock before calling complete()
to ensure all dereferences of the sess pointer are finished before the
waiter is allowed to proceed with deallocation. |
| In the Linux kernel, the following vulnerability has been resolved:
rust_binder: correctly handle FDA objects of length zero
Fix a bug where an empty FDA (fd array) object with 0 fds would cause an
out-of-bounds error. The previous implementation used `skip == 0` to
mean "this is a pointer fixup", but 0 is also the correct skip length
for an empty FDA. If the FDA is at the end of the buffer, then this
results in an attempt to write 8 bytes out of bounds. This is caught and
results in an EINVAL error being returned to userspace.
The pattern of using `skip == 0` as a special value originates from the
C-implementation of Binder. As part of fixing this bug, this pattern is
replaced with a Rust enum.
I considered the alternate option of not pushing a fixup when the length
is zero, but I think it's cleaner to just get rid of the zero-is-special
stuff.
The root cause of this bug was diagnosed by Gemini CLI on first try. I
used the following prompt:
> There appears to be a bug in @drivers/android/binder/thread.rs where
> the Fixups oob bug is triggered with 316 304 316 324. This implies
> that we somehow ended up with a fixup where buffer A has a pointer to
> buffer B, but the pointer is located at an index in buffer A that is
> out of bounds. Please investigate the code to find the bug. You may
> compare with @drivers/android/binder.c that implements this correctly. |