| Age | Commit message | Author |
|
|
|
This just goes through my recent reference count membar audit and
changes membar_exit to membar_release and membar_enter to
membar_acquire -- this should make everything cheaper on most CPUs
without hurting correctness, because membar_acquire is generally
cheaper than membar_enter.
|
|
If two threads are using an object that is freed when the reference
count goes to zero, we need to ensure that all memory operations
related to the object happen before freeing the object.
Using an atomic_dec_uint_nv(&refcnt) == 0 ensures that only one
thread takes responsibility for freeing, but it's not enough to
ensure that the other thread's memory operations happen before the
freeing.
Consider:
Thread A                                Thread B
obj->foo = 42;                          obj->baz = 73;
mumble(&obj->bar);                      grumble(&obj->quux);
/* membar_exit(); */                    /* membar_exit(); */
atomic_dec -- not last                  atomic_dec -- last
                                        /* membar_enter(); */
                                        KASSERT(invariant(obj->foo,
                                            obj->bar));
                                        free_stuff(obj);
The memory barriers ensure that
obj->foo = 42;
mumble(&obj->bar);
in thread A happens before
KASSERT(invariant(obj->foo, obj->bar));
free_stuff(obj);
in thread B. Without them, this ordering is not guaranteed.
So in general it is necessary to do
    membar_exit();
    if (atomic_dec_uint_nv(&obj->refcnt) != 0)
        return;
    membar_enter();
to release a reference, for the `last one out hit the lights' style
of reference counting. (This is in contrast to the style where one
thread blocks new references and then waits under a lock for existing
ones to drain with a condvar -- no membar needed thanks to mutex(9).)
I searched for atomic_dec to find all these. Obviously we ought to
have a better abstraction for this because there's so much copypasta.
This is a stop-gap measure to fix actual bugs until we have that. It
would be nice if an abstraction could gracefully handle the different
styles of reference counting in use -- some years ago I drafted an
API for this, but making it cover everything got a little out of hand
(particularly with struct vnode::v_usecount) and I ended up setting
it aside to work on psref/localcount instead for better scalability.
I got bored of adding #ifdef __HAVE_ATOMIC_AS_MEMBAR everywhere, so I
only put it on things that look performance-critical on 5sec review.
We should really adopt membar_enter_preatomic/membar_exit_postatomic
or something (except they are applicable only to atomic r/m/w, not to
atomic_load/store_*, making the naming annoying) and get rid of all
the ifdefs.
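A minimal sketch of the updated pattern with the new names (struct obj,
obj_release() and obj_dtor() are illustrative; the membar and atomic calls
are the ones named above):

    #include <sys/atomic.h> /* atomic_dec_uint_nv, membar_release/acquire */

    struct obj {
        unsigned int refcnt;
        /* ... object state ... */
    };

    static void obj_dtor(struct obj *);    /* hypothetical destructor */

    static void
    obj_release(struct obj *obj)
    {
        /* Publish this thread's stores to the object before the drop. */
        membar_release();
        if (atomic_dec_uint_nv(&obj->refcnt) != 0)
            return;
        /* Last one out: order the decrement before tearing it down. */
        membar_acquire();
        obj_dtor(obj);
    }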
|
|
binaries by refusing to execute them.
|
|
- Repeating "modload compat_linux && /emul/linux/bin/ls && modunload compat_linux"
will reproduce this problem.
- The cause is in exec_sigcode_map(): the anon-object for sigcode was
created at the first exec, but it remained even after exec_remove().
- Fixed so that the anon-object for sigcode is created at exec_add() and
its reference is removed at exec_remove().
- sigobject_lock is no longer needed since this path is already serialized
by exec_lock.
- The compat_16 module rewrites the e_sigcode entry in emul_netbsd directly and
does not use exec_add()/exec_remove(), so it needs to call
sigcode_alloc()/sigcode_free() on its own.
|
|
This change was insufficient because es_emul is referenced by multiple execsw entries.
|
|
- Repeating "modload compat_linux && /emul/linux/bin/ls && modunload compat_linux"
will reproduce this problem.
- The cause is in exec_sigcode_map(): the anon-object for sigcode was
created at the first exec, but it remained even after exec_remove().
- Fixed so that the anon-object for sigcode is created at exec_add() and
its reference is removed at exec_remove().
- sigobject_lock is no longer needed since this path is already serialized
by exec_lock.
|
|
|
|
Because the locking protocol around processes is somewhat complex
compared to other events that can be posted on kqueues, introduce
new functions for posting NOTE_EXEC, NOTE_EXIT, and NOTE_FORK,
rather than just using the generic knote() function. These functions
KASSERT() their locking expectations, and deal with other complexities
for each situation.
knote_proc_fork(), in particular, needs to handle NOTE_TRACK, which
requires allocation of a new knote to attach to the child process. We
don't want to be allocating memory while holding the parent's p_lock.
Furthermore, we also have to attach the tracking note to the child
process, which means we have to acquire the child's p_lock.
So, to handle all this, we introduce some additional synchronization
infrastructure around the 'knote' structure:
- Add the ability to mark a knote as being in a state of flux. Knotes
in this state are guaranteed not to be detached/deleted, thus allowing
a code path to drop other locks after putting a knote in this state.
- Code paths that wish to detach/delete a knote must first check if the
knote is in-flux. If so, they must wait for it to quiesce. Because
multiple threads of execution may attempt this concurrently, a mechanism
exists for a single LWP to claim the detach responsibility; all other
threads simply wait for the knote to disappear before they can make
further progress.
- When kqueue_scan() encounters an in-flux knote, it simply treats the
situation just like encountering another thread's queue marker -- wait
for the flux to settle and continue on.
(The "in-flux knote" idea was inspired by FreeBSD, but this works differently
from their implementation, as the two kqueue implementations have diverged
quite a bit.)
knote_proc_fork() uses this infrastructure to implement NOTE_TRACK like so:
- Attempt to put the original tracking knote into a state of flux; if this
fails (because the note has a detach pending), we skip all processing
(the original process has lost interest, and we simply won the race).
- Once the note is in-flux, drop the kq and forking process's locks, and
allocate 2 knotes: one to post the NOTE_CHILD event, and one to attach
a new NOTE_TRACK to the child process. Notably, we do NOT go through
kqueue_register() to do this, but rather do all of the work directly
and KASSERT() our assumptions; this allows us to directly control our
interaction with locks. All memory allocations here are performed with
KM_NOSLEEP, in order to prevent holding the original knote in-flux
indefinitely.
- Because the NOTE_TRACK use case adds knotes to kqueues through a
sort of back-door mechanism, we must serialize with the closing of
the destination kqueue's file descriptor, so steal another bit from
the kq_count field to notify other threads that a kqueue is on its
way out to prevent new knotes from being enqueued while the close
path detaches them.
In addition to fixing EVFILT_PROC's reliance on KERNEL_LOCK, this also
fixes a long-standing bug whereby a NOTE_CHILD event could be dropped
if the child process exited before the interested process received the
NOTE_CHILD event (the same knote would be used to deliver the NOTE_EXIT
event, and would clobber the NOTE_CHILD's 'data' field).
Add a bunch of comments to explain what's going on in various critical
sections, and sprinkle additional KASSERT()s to validate assumptions
in several more locations.
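A rough userland model of the in-flux idea, using a pthread mutex/condvar
rather than the kernel's kqueue locks; all names here are illustrative and
this sketches only the concept, not the kernel implementation:

    #include <pthread.h>
    #include <stdbool.h>

    struct knote_model {
        pthread_mutex_t lock;          /* stands in for the kq lock */
        pthread_cond_t  fluxwait;      /* waiters for flux to settle */
        int             influx;        /* > 0: may not be detached/deleted */
        bool            detachpending; /* a detach has been claimed */
    };

    /* Try to put the knote in flux; fails if a detach is already pending. */
    static bool
    knote_model_flux_enter(struct knote_model *kn)
    {
        bool ok;

        pthread_mutex_lock(&kn->lock);
        ok = !kn->detachpending;
        if (ok)
            kn->influx++;
        pthread_mutex_unlock(&kn->lock);
        return ok;
    }

    static void
    knote_model_flux_exit(struct knote_model *kn)
    {
        pthread_mutex_lock(&kn->lock);
        if (--kn->influx == 0)
            pthread_cond_broadcast(&kn->fluxwait);
        pthread_mutex_unlock(&kn->lock);
    }

    /*
     * A would-be detacher claims responsibility and waits for the flux to
     * settle; only the claimant proceeds to detach/free (in the kernel the
     * losers instead wait for the knote to disappear entirely).
     */
    static bool
    knote_model_detach_claim(struct knote_model *kn)
    {
        bool claimed;

        pthread_mutex_lock(&kn->lock);
        claimed = !kn->detachpending;
        kn->detachpending = true;
        while (kn->influx > 0)
            pthread_cond_wait(&kn->fluxwait, &kn->lock);
        pthread_mutex_unlock(&kn->lock);
        return claimed;
    }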
|
|
will live on with a different program image. (Thanks ryo@ for
pointing out my mistake.)
|
|
is a vestige of an older version of the code. Also, move a KASSERT() that
both futex_release_all_lwp() call sites had into futex_release_all_lwp()
itself.
|
|
by calling exit_lwps(), except for the last LWP. So, dispose of that
LWP's robust futexes right before calling lwp_ctl_exit().
Fixes a "WARNING: ... : unmapped robust futex list head" message when
running bash under Linux emulation on aarch64.
Root caused and patch proposed by ryo@. I have tweaked it slightly,
just to add a comment and a KASSERT().
|
|
The standard is explicit about it and it matters if e.g. RESETIDS is
used as an attribute and file actions depend on the group rights for
opening a file.
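A userland sketch of why the ordering matters (the path used in the file
action is hypothetical): with POSIX_SPAWN_RESETIDS set, the open performed
by the file action must already run with the reset effective IDs, so
whether it succeeds depends on the spawner's real IDs.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <spawn.h>

    extern char **environ;

    static int
    spawn_with_reset_ids(pid_t *pidp)
    {
        posix_spawn_file_actions_t fa;
        posix_spawnattr_t attr;
        char *argv[] = { "cat", NULL };
        int error;

        posix_spawn_file_actions_init(&fa);
        posix_spawnattr_init(&attr);

        /* Attribute: drop effective IDs back to the real IDs. */
        posix_spawnattr_setflags(&attr, POSIX_SPAWN_RESETIDS);
        /* File action: must be evaluated with the already-reset IDs. */
        posix_spawn_file_actions_addopen(&fa, 0, "/tmp/group-readable",
            O_RDONLY, 0);

        error = posix_spawn(pidp, "/bin/cat", &fa, &attr, argv, environ);

        posix_spawnattr_destroy(&attr);
        posix_spawn_file_actions_destroy(&fa);
        return error;
    }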
|
|
|
|
the BSD/POSIX per-process timers:
- "struct ptimer" is split into "struct itimer" (common interval timer
data) and "struct ptimer" (per-process timer data, which contains a
"struct itimer").
- Introduce a new "struct itimer_ops" that supplies information about
the specific kind of interval timer, including its processing
queue, the softint handle used to schedule processing, the function
to call when the timer fires (which adds it to the queue), and an
optional function to call when the CLOCK_REALTIME clock is changed by
a call to clock_settime() or settimeofday().
- Rename some functions to clearly identify what they're operating on
(ptimer vs itimer).
- Use kmem(9) to allocate ptimer-related structures, rather than having
dedicated pools for them.
Welcome to NetBSD 9.99.77.
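For orientation only (these are not the real sys/timevar.h definitions;
the field names are illustrative), the split looks roughly like:

    #include <time.h>    /* struct itimerspec */

    struct itimer_sketch;

    /* Per-type operations: how the timer is queued, fired, and adjusted. */
    struct itimer_ops_sketch {
        void (*ito_fire)(struct itimer_sketch *);             /* expired */
        void (*ito_realtime_changed)(struct itimer_sketch *); /* optional */
        /* plus the per-type processing queue and softint handle */
    };

    /* Common interval-timer state, shared by all timer types. */
    struct itimer_sketch {
        const struct itimer_ops_sketch *it_ops;
        struct itimerspec it_time;    /* current value and interval */
    };

    /* Per-process (BSD/POSIX) timer, embedding the common part. */
    struct ptimer_sketch {
        struct itimer_sketch pt_itimer;
        /* per-process bits: signal to deliver, owning process, ... */
    };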
|
|
|
|
|
|
at the time we had mutex_obj_alloc() but not __cacheline_aligned.
|
|
Introduce PSL_TRACEDCHILD, which indicates that the birth of a process is
being tracked. A freshly forked process checks whether it is traced and,
if so, reports a SIGTRAP + TRAP_CHLD event to the debugger as a result of
tracking fork-like events. There is a time window in which a debugger can
attach to a newly created process and receive SIGTRAP + TRAP_CHLD instead
of SIGSTOP.
Fixes races in t_ptrace_wait* tests when a test hangs or misbehaves,
especially the ones reported in tracer_sysctl_lookup_without_duplicates.
|
|
own LWP ID space, LWP IDs came from the same number space as PIDs. The
lead LWP of a process gets the PID as its LID. If a multi-LWP process's
lead LWP exits, the PID persists for the process.
In addition to providing system-wide unique thread IDs, this also lets us
eliminate the per-process LWP radix tree, and some associated locks.
Remove the separate "global thread ID" map added previously; it is no longer
needed to provide this functionality.
Nudged in this direction by ad@ and chs@.
|
|
which relied on taking extra vnode refs.
Having benchmarked various experimental changes over the past few months it
seems that it's better to avoid vnode refs as much as possible. cwdi_lock
as a RW lock already did that to some extent for getcwd() and will permit
the same for namei() too.
|
|
when allocating a PID.
- Per above, proc_free_pid() no longer decrements nprocs. It's now done
in proc_free() right after proc_free_pid().
- Ensure nprocs is accessed using atomics everywhere.
|
|
PR kern/55151 by Martin Husemann
|
|
Relying on p_opptr is not safe as there is a race between:
- spawner giving birth to a child process and being killed
- spawnee accessing p_opptr and reporting TRAP_CHLD
PR kern/54786 by Andreas Gustafsson
|
|
- Merge the eventswitch parent notification code which was copied in two
places (eventswitchchild)
- Fix bugs in the eventswitch parent notification code:
1. p_slflags should be accessed holding both proc_lock and p->p_lock
2. p->p_opptr can be NULL if the parent was PSL_CHTRACED and exited.
Fixes random crashes in the posix_spawn_kill_spawner unit test, which
tried to dereference a NULL pptr.
|
|
- Have a stab at clustering the members of vnode_t and vnode_impl_t in a
more cache-conscious way. With that done, go back to adjusting v_usecount
with atomics and keep vi_lock directly in vnode_impl_t (saves KVA).
- Allow VOP_LOCK(LK_NONE) for the benefit of VFS_VGET() and VFS_ROOT().
Make sure LK_UPGRADE always comes with LK_NOWAIT.
- Make cwdinfo use mostly lockless.
|
|
triggers vpp != NULL in exit1()->radixtree.c line 674
Create an lwp_renumber() from the code in emulexec() and use in
linux_e_proc_exec() and linux_e_proc_fork() too.
|
|
- put back the compat_linux modules in the exec array (commented out)
- remove extra parens
|
|
single threaded case. Replace scans of p->p_lwps with lookups in the
tree. Find free LIDs for new LWPs in the tree. Replace the hashed sleep
queues for park/unpark with lookups in the tree under cover of a RW lock.
- lwp_wait(): if waiting on a specific LWP, find the LWP via tree lookup and
return EINVAL if it's detached, not ESRCH.
- Group the locks in struct proc at the end of the struct in their own cache
line.
- Add some comments.
|
|
|
|
- Try hard to keep vfork() parent and child on the same CPU until execve(),
failing that on the same core, but in all other cases scatter new LWPs
among the different CPU packages, round robin, to try and get the best out
of the available cache and bus bandwidth.
- Remove attempts at balancing. Replace with a rate-limited skim of other
CPU's run queues in sched_idle(), starting in the current package and
moving outwards. Add a sysctl tunable to change the interval.
- Make the cacheht_time tuneable take a milliseconds value.
- It's possible to configure things such that there's no CPU allowed to run
an LWP. Defeat this by always having a default:
Reported-by: syzbot+46968944dd9359ab93bc@syzkaller.appspotmail.com
Reported-by: syzbot+7f750a4cc230d1e831f9@syzkaller.appspotmail.com
Reported-by: syzbot+88d7675158f5cb4684db@syzkaller.appspotmail.com
Reported-by: syzbot+d409c2338150e9a8ae1e@syzkaller.appspotmail.com
Reported-by: syzbot+e152dc5bff188f67358a@syzkaller.appspotmail.com
|
|
release the locks fewer times. Proposed on tech-kern a very long time ago.
|
|
where curcpu() is defined as curlwp->l_cpu:
- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before
calling cpu_switchto(). It's not safe to let other actors mess with the
LWP (in particular l->l_cpu) while it's still context switching. This
removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since
it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything
is in cache anyway so it wasn't buying much by trying to avoid saving old
state. This means cpu_switchto() will never be called with prevlwp ==
NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
|
|
This seems to take about 3us on my Intel system. Two changes required:
- Have the caller to mi_switch() be responsible for calling spc_lock().
- Avoid using l->l_cpu in mi_switch().
While here:
- Add a couple of calls to membar_enter()
- Have the idle LWP set itself to LSIDL, to match softint_thread().
- Remove unused return value from mi_switch().
|
|
- Adapt to cpu_need_resched() changes. Avoid lost & duplicate IPIs and ASTs.
sched_resched_cpu() and sched_resched_lwp() contain the logic for this.
- Changes for LSIDL to make the locking scheme match the intended design.
- Reduce lock contention and false sharing further.
- Numerous small bugfixes, including some corrections for SCHED_FIFO/RT.
- Use setrunnable() in more places, and merge cut & pasted code.
|
|
This field is not needed as it duplicated p_opptr, which is already safe
to use unless proven otherwise.
eventswitch() already contained a check for != initproc (pid1).
Ride ABI bump for 9.99.16.
|
|
Storing struct ptrace_state information inside struct proc was vulnerable
to synchronization bugs, as multiple events emitted at the same time would
overwrite one another.
Cache the original parent process id in p_oppid. Reusing p_opptr here is
in theory prone to a slight race condition.
Change the semantics of PT_GET_PROCESS_STATE, returning EINVAL for calls
asking for the value when no appropriate event has been registered.
Add an alternative approach to check the ptrace_state information, directly
from the siginfo_t value returned from PT_GET_SIGINFO. The original
PT_GET_PROCESS_STATE approach is kept for compat with older NetBSD and
OpenBSD. New code is recommended to keep using PT_GET_PROCESS_STATE.
Add a couple of compile-time asserts for assumptions in the code.
No functional change intended in existing ptrace(2) software.
All ATF ptrace(2) and ATF GDB tests pass.
This change improves reliability of the threading ptrace(2) code.
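A userland sketch of the two ways a debugger can inspect such an event
(illustrative only; report_child_event() is hypothetical):

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <signal.h>
    #include <stdio.h>

    /* Call after waitpid() has reported the tracee stopped with SIGTRAP. */
    static void
    report_child_event(pid_t tracee)
    {
        struct ptrace_siginfo psi;
        struct ptrace_state pe;

        /* Newer approach: look at the siginfo first. */
        if (ptrace(PT_GET_SIGINFO, tracee, &psi, sizeof(psi)) == -1)
            return;
        if (psi.psi_siginfo.si_signo != SIGTRAP ||
            psi.psi_siginfo.si_code != TRAP_CHLD)
            return;    /* not a child-related ptrace event */

        /* Compat approach: fetch the cached ptrace_state explicitly. */
        if (ptrace(PT_GET_PROCESS_STATE, tracee, &pe, sizeof(pe)) == -1)
            return;
        printf("event %d involves pid %d\n", (int)pe.pe_report_event,
            (int)pe.pe_other_pid);
    }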
|
|
fd/false (fexecve). This is needed to differentiate between them because
NULL/-1 can be readily passed from userland.
|
|
- get the vnode from the fd passed instead of calling namei() on the
path
- try to reverse resolve the vnode to extract the pathname
- deal with not having a resolved path available
- rename variable that was not a pathbuf
|
|
- delete #if 1 and #if 0 code
|
|
in the proc, and can later be obtained by userland.
|
|
|
|
initialized. Pointed out by maxv.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
basics of C programming.
Reported-by: syzbot+8665827f389a9fac5cc9@syzkaller.appspotmail.com
|