path: root/sys/kern
2023-07-08  clock_gettime(2): Fix CLOCK_PROCESS/THREAD_CPUTIME_ID.  (riastradh)
Use same calculation as getrusage, not some ad-hoc arithmetic of
internal scheduler parameters that are periodically rewound.

PR kern/57512

XXX pullup-8
XXX pullup-9
XXX pullup-10
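Roughly, the fixed path computes the time the way getrusage(2) does --
a minimal sketch, assuming the standard calcru(9) interface (the helper
name here is illustrative):

    /*
     * Sketch: derive CLOCK_PROCESS_CPUTIME_ID via calcru(9), the same
     * calculation getrusage(2) uses, instead of raw scheduler state.
     */
    static void
    proc_cputime(struct proc *p, struct timespec *ts)
    {
            struct timeval ut, st;

            mutex_enter(p->p_lock);
            calcru(p, &ut, &st, NULL, NULL); /* user, sys; skip intr, real */
            mutex_exit(p->p_lock);

            timeradd(&ut, &st, &ut);         /* cputime = user + sys */
            TIMEVAL_TO_TIMESPEC(&ut, ts);
    }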
2023-07-08  curcpu_stable(9): New function for asserting curcpu() is stable.  (riastradh)
2023-07-08  kern_resource.c: Fix brace placement.  (riastradh)
No functional change intended.
2023-07-07  Revert unintentional changes to kern_lock.c in previous commit.  (riastradh)
2023-07-07  heartbeat(9): Test whether curcpu is stable, not kpreempt_disabled.  (riastradh)
kpreempt_disabled worked for my testing because I tested on aarch64,
which doesn't have kpreemption.

XXX Should move curcpu_stable() to somewhere that other things can
use it.
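A minimal sketch of such a predicate, assuming the usual NetBSD
primitives (the committed version may check a different set of
conditions):

    /*
     * Sketch: curcpu() is stable if the calling LWP cannot migrate --
     * bound to its CPU, in interrupt context, or running with kernel
     * preemption disabled.
     */
    bool
    curcpu_stable(void)
    {
            struct lwp *l = curlwp;

            return (l->l_pflag & (LP_BOUND | LP_INTR)) != 0 ||
                cpu_intr_p() || kpreempt_disabled();
    }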
2023-07-07  xcall(9): If !mp_online, raise spl or set LP_BOUND to call func.  (riastradh)
High-priority xcalls may reasonably assume that the spl is raised to
splsoftserial, so make sure to do that in xc_broadcast.

Low-priority xcalls may reasonably enter paths that assume the lwp is
bound to a CPU, so let's make it assertable even if it doesn't have
any other consequences when !mp_online.

XXX pullup-8
XXX pullup-9
XXX pullup-10
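A sketch of the !mp_online special case (control flow illustrative,
not the committed diff):

    /* Sketch: single-CPU path in xc_broadcast/xc_unicast. */
    if (!mp_online) {
            if (flags & XC_HIGHPRI) {
                    int s = splsoftserial(); /* callee may assert spl */
                    (*func)(arg1, arg2);
                    splx(s);
            } else {
                    const int obound = curlwp->l_pflag & LP_BOUND;

                    curlwp->l_pflag |= LP_BOUND; /* callee may assert bound */
                    (*func)(arg1, arg2);
                    if (!obound)
                            curlwp->l_pflag &= ~LP_BOUND;
            }
            return 0;
    }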
2023-07-07  heartbeat(9): New mechanism to check progress of kernel.  (riastradh)
This uses hard interrupts to check progress of low-priority soft
interrupts, and one CPU to check progress of another CPU.  If no
progress has been made after a configurable number of seconds
(kern.heartbeat.max_period, default 15), then the system panics --
preferably on the CPU that is stuck so we get a stack trace in dmesg
of where it was stuck, but if the stuckness was detected by another
CPU and the stuck CPU doesn't acknowledge the request to panic within
one second, the detecting CPU panics instead.

This doesn't supplant hardware watchdog timers.  It is possible for
hard interrupts to be stuck on all CPUs for some reason too; in that
case heartbeat(9) has no opportunity to complete.

Downside: heartbeat(9) relies on hardclock to run at a reasonably
consistent rate, which might cause trouble for the glorious tickless
future.  However, it could be adapted to take a parameter for an
approximate number of units that have elapsed since the last call on
the current CPU, rather than treating that as a constant 1.

XXX kernel revbump -- changes struct cpu_info layout
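Schematically, the check looks like the following sketch (the
ci_hb_* fields and max_period variable are hypothetical, not the real
struct cpu_info layout):

    /*
     * Schematic heartbeat check from hardclock: a low-priority soft
     * interrupt increments ci_hb_count; if it stops advancing for
     * kern.heartbeat.max_period seconds' worth of ticks, panic.
     */
    void
    heartbeat(struct cpu_info *ci)
    {
            unsigned count = atomic_load_relaxed(&ci->ci_hb_count);

            if (count != ci->ci_hb_last) {            /* progress */
                    ci->ci_hb_last = count;
                    ci->ci_hb_ticks = 0;
            } else if (++ci->ci_hb_ticks > max_period * hz) {
                    panic("%s: soft interrupts stuck", cpu_name(ci));
            }
            /*
             * ...plus a similar check of the next CPU's counter, with
             * a one-second grace period for it to panic itself first.
             */
    }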
2023-07-07  crashme(9): New crash methods with raised ipl or kpreempt disabled.  (riastradh)
2023-06-30  entropy(9): Reintroduce netbsd<=9 time-delta estimator for unblocking.  (riastradh)
The system will (in a subsequent change) by default block for this
condition before almost all of userland is running (including
/etc/rc.d/sshd key generation).  That way, a never-blocking
getentropy(3) API will never return any data without at least
best-effort entropy like netbsd<=9 did to applications, except in
single-user mode (where you have to be careful about everything
anyway) or in the few processes that run before a seed can even be
loaded (where blocking indefinitely, e.g. when generating a stack
protector cookie in libc, could pose a severe availability problem
that can't be configured away, but where the security impact is low).

However, (in another subsequent change) we will continue to use
_only_ HWRNG driver estimates and seed estimates, and _not_ the
time-delta estimator, for _warning_ about security in motd, daily
security report, etc.  And if HWRNG/seed provides enough entropy
before the time-delta estimator does, that will unblock /dev/random
too.

The result is:

- Machines with HWRNG or seed won't warn about entropy and will
  essentially never block -- even on first boot without a seed, it
  will take only as long as the fastest HWRNG to unblock.

- Machines with neither HWRNG nor seed:
  . will warn about entropy, giving feedback about security; and
  . will avoid returning anything more predictable than netbsd<=9;
    but
  . won't block (much) longer than netbsd<=9 would (and won't block
    again after blocking once, except with kern.entropy.depletion=1
    for testing).

  (The threshold for unblocking is now somewhat higher than before:
  512 samples that pass the time-delta estimator, rather than 80 as
  it used to be.)

And, of course, adding a seed (or HWRNG) will prevent both warnings
and blocking.

The mechanism is:

1. /dev/random will block until _either_ (a) enough bits of entropy
   (256) from reliable sources have been added to the pool, _or_
   (b) enough samples have been added from any sources (512), passing
   the old time-delta entropy estimator, that the possible security
   benefit doesn't justify holding up availability any longer (`best
   effort'), except on systems with higher security requirements like
   securelevel=2, which can disable non-HWRNG, non-seed sources with
   rndctl_flags in rc.conf(5).

2. dmesg will report `entropy: ready' when 1(a) is satisfied, but if
   1(b) is satisfied first, it will report `entropy: best effort', so
   the concise log messages will reflect the timing and whether in
   any period of time any of the system might be relying on best
   effort entropy.

3. The sysctl knob kern.entropy.needed (and the ioctl RNDGETPOOLSTAT
   variable rndpoolstat_t::added) still reflects the number of bits
   of entropy from reliable sources, so we can still use this to
   suggest regenerating ssh keys.

   This matters on platforms that can only be reached, after flashing
   an installation image, by sshing in over a (private) network, like
   small network appliances or remote virtual machines without
   (interactive) serial consoles.  If we blocked indefinitely at boot
   when generating ssh keys, such platforms would be unusable.  This
   way, platforms are usable, but operators can still be advised at
   login time to regenerate keys as soon as they can actually load
   entropy onto the system, e.g. with rndctl(8) on a seed file copied
   from a local machine over the (private) network.

4. On machines without HWRNG, using a seed file still suppresses
   warnings for users who need more confident security.  But it is no
   longer necessary for availability.

This is a compromise between availability and security:

- The security mechanism of blocking indefinitely on machines without
  HWRNG hurts availability too much, as painful experience over the
  multiple years since I made the mistake of introducing it has
  shown.  (Sorry!)

- The other main alternative, not having a blocking path at all (as I
  pushed for, and as OpenBSD has done for a long time) could
  potentially reduce security vs netbsd<=9, and would run against the
  expectations set by many popular operating systems, to the severe
  detriment of public perception of NetBSD security.

Even though we can't _confidently_ assess enough entropy from, e.g.,
sampling interrupt timings, this is the traditional behaviour that
most operating systems provide -- and the result here is a net
nondecrease in security over netbsd<=9, because all paths from the
entropy pool to userland now have at least as high a standard before
returning data as they did in netbsd<=9.

PR kern/55641
PR pkg/55847
PR kern/57185
https://mail-index.netbsd.org/current-users/2020/09/02/msg039470.html
https://mail-index.netbsd.org/current-users/2020/11/21/msg039931.html
https://mail-index.netbsd.org/current-users/2020/12/05/msg040019.html

XXX pullup-10
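Condensed, the unblocking test in mechanism step 1 becomes a
disjunction like the following sketch (field names approximate, not
necessarily the real kern_entropy.c state):

    /*
     * Sketch: unblock /dev/random when either enough entropy bits
     * from reliable sources have arrived, or enough best-effort
     * samples have passed the time-delta estimator.
     */
    static bool
    entropy_unblocked(void)
    {
            return E->bitsneeded == 0 ||    /* 1(a): 256 bits */
                E->samplesneeded == 0;      /* 1(b): 512 samples */
    }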
2023-06-27  callout(9): Delete the unused member cc_cancel from struct callout_cpu  (pho)
I see no reason why it should be there, and believe it's a leftover
from some old code.
2023-06-27  callout(9): Tidy up the condition for "callout is running on another LWP"  (pho)
No functional changes.
2023-06-27  callout(9): Fix panic() in callout_destroy() (kern/57226)  (pho)
The culprit was callout_halt().  "(c->c_flags & CALLOUT_FIRED) != 0"
wasn't the correct way to check if a callout is running.  It failed
to wait for a running callout to finish in the following scenario:

1. cpu0 initializes a callout and schedules it.
2. cpu0 invokes callout_softlock() and fires the callout, setting the
   flag CALLOUT_FIRED.
3. The callout invokes callout_schedule() to re-schedule itself.
4. callout_schedule_locked() clears the flag CALLOUT_FIRED, and
   releases the lock.
5. Before the lock is re-acquired by callout_softlock(), cpu1 decides
   to destroy the callout.  It first invokes callout_halt() to make
   sure the callout finishes running.
6. But since CALLOUT_FIRED has been cleared, callout_halt() thinks
   it's not running and therefore returns without invoking
   callout_wait().
7. cpu1 proceeds to invoke callout_destroy() while it's still running
   on cpu0.  callout_destroy() detects that and panics.
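The fix therefore judges "running" by the per-CPU record of the
callout currently being invoked; approximately (structure fields here
only approximate the callout internals):

    /*
     * Sketch: a callout is running on another LWP iff it is the one
     * its callout CPU is currently invoking and we are not that LWP.
     * (CALLOUT_FIRED can't be used: step 4 above clears it.)
     */
    static bool
    callout_running_somewhere_else(callout_impl_t *c,
        struct callout_cpu *cc)
    {
            return cc->cc_active == c && cc->cc_lwp != curlwp;
    }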
2023-06-23  tsleep: Comment out kernel lock assertion for now.  (riastradh)
Breaks tpm(4) which breaks boot on a lot of systems. tpm(4) shouldn't be using tsleep; it doesn't appear to even have an interrupt handler for wakeups, so it could get by with kpause. If it ever did sprout an interrupt handler it should use condvar(9) anyway. But for now I don't have time to fix it tonight.
2023-06-23  tsleep(9): Assert kernel lock held.  (riastradh)
This is never safe to use without the kernel lock. It should only appear in legacy subsystems that still run with the kernel lock.
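The assertion amounts to one line at the top of tsleep (sketch, using
the standard macro):

    /* Sketch: tsleep(9) is only safe under the big kernel lock. */
    KASSERT(KERNEL_LOCKED_P());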
2023-06-15  Regen.  (hannken)
2023-06-15  VOP_IOCTL() is a wrapper around spec_ioctl() aka Xdev_ioctl() and  (hannken)
protected with spec_io_enter()/spec_io_exit() so there is no need to
force specific vnode locking.  Set locking requirement to '= = ='
(unchanged, locked or unlocked).

PR kern/57450 (unplugging hung USB disk triggers panic via
_vstate_assert)
2023-05-24  entropy(9): Avoid race between rnd_add_data and ioctl(RNDCTL).  (riastradh)
XXX pullup-10
2023-05-24  entropy(9): On flags change, cancel any scheduled consolidation.  (riastradh)
We've been instructed to lose confidence in existing entropy sources,
so let's make sure to re-gather enough entropy before the next
consolidation can happen, in case some of what would be counted in
consolidation is from those entropy sources.

XXX pullup-10
2023-05-23  autoconf(9): Omit config_detach kernel lock assertion too for now.  (riastradh)
Like in config_attach_pseudo, this assertion almost certainly
indicates real bugs, but let's try to get the tests back up and
running again before addressing those.
2023-05-23  autoconf(9): Omit config_attach_pseudo kernel lock assertion for now.  (riastradh)
Breaks too many things that I didn't test in the branch (cgd, fss, &c.); let's address all forty-odd cases before turning it on.
2023-05-22  autoconf(9): New functions for referenced attach/detach.  (riastradh)
New functions:

- config_found_acquire(dev, aux, print, cfargs)
- config_attach_acquire(parent, cf, aux, print, cfargs)
- config_attach_pseudo_acquire(cf, aux)
- config_detach_release(dev, flags)
- device_acquire(dev)

The config_*_acquire functions are like the non-acquire versions, but
they return a referenced device_t, which is guaranteed to be safe to
use until released.  The device's detach function may run while it is
referenced, but the device_t will not be freed and the parent's
.ca_childdetached routine will not be called.

=> config_attach_pseudo_acquire additionally lets you pass an aux
   argument to the device's .ca_attach routine, unlike
   config_attach_pseudo which always passes NULL.

=> Eventually, config_found, config_attach, and config_attach_pseudo
   should be made to return void, because use of the device_t they
   return is unsafe without the kernel lock and difficult to use
   safely even with the kernel lock or in a UP system.  For now, they
   require the caller to hold the kernel lock, while
   config_*_acquire do not.

config_detach_release is like device_release and then config_detach,
but avoids the race inherent with that sequence.

=> Eventually, config_detach should be eliminated, because getting at
   the device_t it needs is unsafe without the kernel lock and
   difficult to use safely even with the kernel lock or in a UP
   system.  For now, it requires the caller to hold the kernel lock,
   while config_detach_release does not.

device_acquire acquires a reference to a device.  It never fails and
can be used in thread context (but not interrupt context, hard or
soft).  Caller is responsible for ensuring that the device_t cannot
be freed; in other words, the device_t must be made unavailable to
any device_acquire callers before the .ca_detach function returns.
Typically device_acquire will be used in a read section (mutex,
rwlock, pserialize, &c.) in a data structure lookup, with
corresponding logic in the .ca_detach function to remove the device
from the data structure and wait for all read sections to complete,
as in the sketch below.

Proposed on tech-kern:
https://mail-index.netbsd.org/tech-kern/2023/05/10/msg028889.html
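For example, a lookup path might look like this sketch (the foo
driver structure and helper are hypothetical):

    /*
     * Hypothetical lookup: find a device in a pserialize(9)-protected
     * structure, take a reference inside the read section, and return
     * it; the caller eventually drops it with device_release().
     */
    device_t
    foo_lookup_acquire(int unit)
    {
            struct foo_softc *sc;
            device_t dev = NULL;
            int s;

            s = pserialize_read_enter();
            sc = foo_lookup(unit);          /* hypothetical helper */
            if (sc != NULL) {
                    dev = sc->sc_dev;
                    device_acquire(dev);    /* never fails, thread ctx */
            }
            pserialize_read_exit(s);

            return dev;
    }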
2023-05-22  tty(9): Make ttwrite update uio with only how much it has consumed.  (riastradh)
As is, it leaves uio in an inconsistent state.  Good enough for the
write(2) return value to be correct for a userland caller to restart
write(2) where it left off, but not good enough for a loop in the
kernel to reuse the same uio.

Reported-by: syzbot+e0f56178d0add0d8be20@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=6290eb02b8fe73361dc15c7bc44e1208601e6af8
Reported-by: syzbot+7caa189e8fccd926357e@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=c0a3b77b4831dfa81fc855857bde81755d246bd3
Reported-by: syzbot+4a1eff91eb4e7c1970b6@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=10523a633a4ad9749f57dc7cf03f9447d518c5b8
Reported-by: syzbot+1d3c280f59099dc82e17@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=8e02ebb0da76a8e286461f33502117a1d30275c6
Reported-by: syzbot+080d51214d0634472b12@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=1f617747db8087e5554d3df1b79a545dee26a650
Reported-by: syzbot+dd50b448e49e5020131a@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=f71c8cef4110b7eeac6eca67b6a4d1f4a8b3e96f
Reported-by: syzbot+26b675ecf0cc9dfd8586@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=57b1901f5b3e090a964d08dd0d729f9909f203be
Reported-by: syzbot+87f0df2c9056313a5c4b@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=67994a3da32d075144e25d1ac314be1d9694ae6e
Reported-by: syzbot+e5bc98e18aa42f0cb25d@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=6374bd286532423c63f2b331748280729134224c
Reported-by: syzbot+7e587f4c5aaaf80e84b3@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=976210ed438d48ac275d77d7ebf4a086e43b5fcb
2023-05-22  uiomove(9): Add uiopeek/uioskip operations.  (riastradh)
This allows a caller to grab some data, consume part of it, and
atomically update the uio with only the amount it consumed.  This
way, the caller can use a buffer of a size that doesn't depend on how
much it will actually consume, which it may not know in advance --
e.g., because it depends on how much an underlying hardware tty
device will accept before it decides it has had too much.

Proposed on tech-kern:
https://mail-index.netbsd.org/tech-kern/2023/05/09/msg028883.html

(Opinions were divided between `uioadvance' and `uioskip'.  I stuck
with `uioskip' because that was less work for me.)
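A sketch of the intended pattern, assuming signatures that mirror
uiomove(9) (the hw_feed device call is hypothetical):

    /* Sketch: peek, feed the hardware, consume only what it took. */
    unsigned char buf[64];
    size_t n, accepted;
    int error;

    n = MIN(sizeof(buf), uio->uio_resid);
    error = uiopeek(buf, n, uio);      /* copy without consuming */
    if (error)
            return error;
    accepted = hw_feed(sc, buf, n);    /* hypothetical tty device call */
    uioskip(accepted, uio);            /* atomically consume that much */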
2023-05-14  kern/sys_descrip.c: Nix trailing whitespace.  (riastradh)
2023-05-09  ioctl(DIOCRMWEDGES): Delete only idle wedges.  (riastradh)
Don't forcibly delete busy wedges.

Reported-by: syzbot+e46f31fe56e04f567d88@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=8a00fd7f2e7459748d7a274098180a4708ff0f61

Fixes accidental destruction of the busy wedge that the root file
system is mounted on, triggered by syzbot's ioctl(DIOCRMWEDGES).
2023-05-01  mutex(9): Write comments in terms of ordering semantics.  (riastradh)
Phrasing things in terms of implementation details like `acquiring
and locking cache lines' suggests a particular cache coherency
protocol, paints an incomplete picture for more involved protocols,
and doesn't really help to prove theorems the way ordering relations
do.

No functional change intended.
2023-05-01  mutex(9): Omit needless membar_consumer.  (riastradh)
In practical terms, this is not necessary because MUTEX_SET_WAITERS
already issues MUTEX_MEMBAR_ENTER, which on all architectures is a
sequential consistency barrier, i.e., read/write-before-read/write,
subsuming membar_consumer.

In theoretical terms, MUTEX_MEMBAR_ENTER might imply only
write-before-read/write, so one might imagine that the
read-before-read ordering of membar_consumer _could_ be necessary.
However, the memory operations that are significant here are:

1. load owner := mtx->mtx_owner
2. store mtx->mtx_owner := owner | MUTEX_BIT_WAITERS
3. load owner->l_cpu->ci_curlwp to test if equal to owner

(1) is program-before (2) and at the same memory location,
mtx->mtx_owner, so (1) happens-before (2).  And (2) is separated in
program order by MUTEX_MEMBAR_ENTER from (3), so (2) happens-before
(3).  So even if the membar_consumer were intended to guarantee that
(1) happens-before (3), it's not necessary, because we can already
prove it from MUTEX_MEMBAR_ENTER.

But actually, we don't really need (1) happens-before (3), exactly;
what we really need is (2) happens-before (3), since this is a little
manifestation of Dekker's algorithm between cpu_switchto and
mutex_exit, where each CPU sets one flag and must ensure it is
visible to the other CPUs before testing the other flag -- one flag
here is the MUTEX_BIT_WAITERS bit, and the other `flag' here is the
condition owner->l_cpu->ci_curlwp == owner; the corresponding logic,
in cpu_switchto, is:

1'. store owner->l_cpu->ci_curlwp := owner
2'. load mtx->mtx_owner to test if MUTEX_BIT_WAITERS set
2023-05-01  Default PROC_MACHINE_ARCH to machine_arch and use this for magic  (mlelstv)
symlinks to resolve "@machine_arch".  This keeps behaviour of magic
symlinks and 'uname -p' output the same.

Fixes PR 57320.
2023-04-30  kern/vfs_subr.c: Revert previous build fixes, no longer needed.  (riastradh)
SDT_PROBE* will now DTRT here.
2023-04-29  Fix builds (hopefully) when DTRACE hooks are not included.  (kre)
2023-04-29  vfs: Sprinkle dtrace probes into syncer.  (riastradh)
2023-04-29  vfs(9): Move SDT_PROVIDER_DEFINE(vfs) from vfs_cache.c to vfs_init.c.  (riastradh)
Not a namecache-specific thing.
2023-04-29  kern/vfs_init.c: Sort includes.  No functional change intended.  (riastradh)
2023-04-29  kern/vfs_subr.c: Sort includes.  No functional change intended.  (riastradh)
2023-04-29  kern/vfs_syscalls.c: Nix trailing whitespace.  (riastradh)
No functional change intended.
2023-04-29  White space fix.  (isaki)
2023-04-28  Pass local symbol relocations in both passes and provide the kobj_reloc  (skrll)
implementation visibility of these relocations.

Currently all implementations resolve local symbol relocations in the
first pass and simply skip them in the second.  The RISC-V
implementation will make use of this visibility.
2023-04-22  fcntl(2), flock(2): Assert FHASLOCK is clear if no fo_advlock.  (riastradh)
2023-04-22  fcntl(2), flock(2): Unify error branches.  (riastradh)
Let's make this a bit less error-prone by having everything converge in the same place instead of multiple returns in different contexts.
2023-04-22  fcntl(2), flock(2): Fix missing fd_putfile in error branch.  (riastradh)
Oops!
2023-04-22  file(9): New fo_posix_fadvise operation.  (riastradh)
XXX kernel revbump -- changes struct fileops API and ABI
2023-04-22  file(9): New fo_fpathconf operation.  (riastradh)
XXX kernel revbump -- struct fileops API and ABI change
2023-04-22  file(9): New fo_advlock operation.  (riastradh)
This moves the vnode-specific logic from sys_descrip.c into
vfs_vnode.c, like we did for fo_seek.

XXX kernel revbump -- struct fileops API and ABI change
2023-04-22  disk(9): Fix missing unlock in error branch in previous change.  (riastradh)
Reported-by: syzbot+870665adaf8911c0d94d@syzkaller.appspotmail.com
https://syzkaller.appspot.com/bug?id=a4ae17cf66b5bb999182ae77fd3c7ad9ad18c891
2023-04-22  readdir(2), lseek(2): Fix races in access to struct file::f_offset.  (riastradh)
For non-directory vnodes:

- reading f_offset requires a shared or exclusive vnode lock
- writing f_offset requires an exclusive vnode lock

For directory vnodes, access (read or write) requires either:

- a shared vnode lock AND f_lock, or
- an exclusive vnode lock.

This way, two files for the same underlying directory vnode can still
do VOP_READDIR in parallel, but if two readdir(2) or lseek(2) calls
run in parallel on the same file, the load and store of f_offset is
atomic (otherwise, e.g., on 32-bit systems it might be torn and lead
to corrupt offsets).

There is still a potential problem: the _whole transaction_ of
readdir(2) may not be atomic.  For example, if thread A and thread B
read n bytes of directory content, thread A might get bytes [0,n) and
thread B might get bytes [n,2n) but f_offset might end up at n
instead of 2n once both operations complete.  (However, f_offset
wouldn't be some corrupt garbled number like
n & 0xffffffff00000000.)  Fixing this would require either:

(a) using an exclusive vnode lock in vn_readdir,
(b) introducing a new lock that serializes vn_readdir on the same
    file (but not necessarily the same vnode), or
(c) proving it is safe to hold f_lock across VOP_READDIR, VOP_SEEK,
    and VOP_GETATTR.
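For a directory under a shared vnode lock, the rule looks like this
sketch of the vn_readdir path (structure illustrative):

    /*
     * Sketch: with only a shared vnode lock on a directory, f_offset
     * must be read and written under f_lock so the 64-bit load/store
     * can't be torn on 32-bit systems.
     */
    vn_lock(vp, LK_SHARED | LK_RETRY);
    mutex_enter(&fp->f_lock);
    auio.uio_offset = fp->f_offset;
    mutex_exit(&fp->f_lock);
    error = VOP_READDIR(vp, &auio, fp->f_cred, &eofflag, NULL, NULL);
    mutex_enter(&fp->f_lock);
    fp->f_offset = auio.uio_offset;
    mutex_exit(&fp->f_lock);
    VOP_UNLOCK(vp);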
2023-04-21  disk(9): Fix use-after-free race with concurrent disk_set_info.  (riastradh)
This can happen with dk(4), which allows wedges to have their size increased without destroying and recreating the device instance. Drivers which allow concurrent disk_set_info and disk_ioctl must serialize disk_set_info with dk_openlock.
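Concretely, a resize path must look something like this sketch (the
softc layout is hypothetical):

    /*
     * Sketch: serialize a live wedge resize against concurrent
     * disk_ioctl by taking dk_openlock around disk_set_info.
     */
    mutex_enter(&sc->sc_dk.dk_openlock);
    disk_set_info(sc->sc_dev, &sc->sc_dk, NULL);
    mutex_exit(&sc->sc_dk.dk_openlock);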
2023-04-21  autoconf(9): Add a comment where we risk arithmetic overflow.  (riastradh)
2023-04-20  Extend optstr(9) to provide some functions to convert the value.  (skrll)
Proposed on tech-kern some time ago.
2023-04-17  KNF  (skrll)
2023-04-16  autoconf(9): Assert alldevs_lock held in config_unit_nextfree.  (riastradh)
The one caller, config_unit_alloc, guarantees it through config_alldevs_enter/exit.