|
|
implementation visibility of these relocations.
Currently all implementations resolve local symbol relocations in the first
pass and simply skip them in the second. The RISC-V implementation will
make use of this visibility.
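A minimal sketch of the two-pass shape described above (the function
and helper names here are illustrative, not the actual kobj
interface):

    /*
     * Sketch: MD relocation hook that can now see whether this is
     * the first (local-symbol) or second (global-symbol) pass.
     */
    int
    md_reloc(kobj_t ko, uintptr_t relocbase, const void *data,
        bool isrela, bool local)
    {
        if (local)
            return resolve_local_sym(ko, relocbase, data, isrela);
        /* Local symbols were already handled in the first pass. */
        return resolve_global_sym(ko, relocbase, data, isrela);
    }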
|
|
Details in comments.
PR kern/57240
XXX pullup-8
XXX pullup-9
XXX pullup-10
|
|
- Ensure always at end
- Use tab rather than spaces
- Add consistent comment
"Pull in optional local configuration - always at end"
The only functional change is that a local file which tried to
override an existing setting (e.g. with "no foo") would have failed
in some cases before, but now will work.
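For illustration, the tail of a config file following this convention
could look like the sketch below (the path is an example; cinclude
silently skips a missing file):

    # Pull in optional local configuration - always at end
    cinclude "arch/amd64/conf/GENERIC.local"

A GENERIC.local placed there can then carry overrides such as
"no foo" lines without patching the stock config.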
|
|
Written by Hiroshi Noguchi, of which an updated version was posted to
port-mac68k in 2001.
Attachments were added to kernel configs for platforms that already had
the Cabletron (se.4) driver added, although other platforms may benefit.
Reviewed on tech-net by Izumi Tsutsui.
|
|
This can be included unconditionally, and db_active can then be
queried unconditionally; if DDB is not in the kernel, then db_active
is a constant zero. Reduces the need for #include "opt_ddb.h" and #ifdef DDB.
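A sketch of what callers can now do (assuming the declaration lives in
a header like <ddb/ddbvar.h>; with DDB absent, db_active is a
compile-time zero and the branch folds away):

    #include <ddb/ddbvar.h>    /* no "opt_ddb.h", no #ifdef DDB */

    void
    example_report(void)
    {
        if (db_active)    /* constant 0 without DDB in the kernel */
            return;       /* skip work while in the debugger */
        /* ... normal path ... */
    }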
|
|
Plus a handful of others that I'm familiar with. Lots of special-
purpose kernels should probably have this too but I'm not going
through all the arm, mips, and ppc evaluation board kernels to see
which ones are relevant.
Omitted from systems I know to be very small:
- sun2/GENERIC
- dreamcast/GENERIC
Feel free to remove it from others that need to be kept smaller.
Compile-tested a few of these just in case:
- alpha/GENERIC
- amd64/GENERIC
- evbmips/OCTEON
- i386/GENERIC
- riscv/GENERIC
PR kern/29702
|
|
- Enable UFS_DIRHASH if the architecture or kernel model specific config
file can use 128MB of RAM or more.
- Remove experimental tag from UFS_DIRHASH; it's been used with the RUMP
kernel and by a number of NetBSD developers for years.
- Add LFS_DIRHASH if LFS is enabled.
- Be somewhat consistent with FS options order.
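In config-file terms the change amounts to lines like these (a sketch;
the comments are illustrative):

    options 	UFS_DIRHASH	# hashed lookup in large UFS directories
    options 	LFS_DIRHASH	# same, for kernels with file-system LFS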
|
|
- Note that this function is only used by dumpsys().
- Don't save the PS word; there isn't actually a spot for it in the PCB.
- Don't bother returning anything; savectx() is declared void.
|
|
alpha_wmb() have been audited and fixed-up as necessary).
|
|
order reads with respect to writes. Remove now-redundant tc_wmb()
calls before tc_syncbus().
NFC on MIPS other than removing a redundant wbflush() (tc_wmb() followed
by tc_syncbus()).
|
|
No semantic change is possible because all of these membars are just
mb on alpha -- the change just makes the intent clearer. (Only
membar_producer is weaker: wmb.)
|
|
XXX Maybe this should really use alpha_mb, since it's not writing to
normal MI-type memory, so technically the membar_* semantics don't
apply?
|
|
This just goes through my recent reference count membar audit and
changes membar_exit to membar_release and membar_enter to
membar_acquire -- this should make everything cheaper on most CPUs
without hurting correctness, because membar_acquire is generally
cheaper than membar_enter.
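The rewrite is mechanical; in sketch form, a release site changes like
this (obj/refcnt illustrative; the full pattern is spelled out in an
earlier commit below):

    membar_exit();        /* now: membar_release() */
    if (atomic_dec_uint_nv(&obj->refcnt) != 0)
        return;
    membar_enter();       /* now: membar_acquire() */
    free_stuff(obj);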
|
|
If the pmap is published enough for us to obtain a reference to it
then there's no membar needed. If it's not then something else is
wrong and we can't use pmap_reference here anyway. Membars are
needed only on the destruction side to make sure all use, by any
thread, happens-before all freeing in the last user thread.
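In sketch form (the member name is illustrative):

    /* Acquiring a new reference needs no barrier... */
    void
    pmap_reference(pmap_t pmap)
    {
        atomic_inc_uint(&pmap->pm_count);
    }

...the membars belong only on the destruction side.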
|
|
If two threads are using an object that is freed when the reference
count goes to zero, we need to ensure that all memory operations
related to the object happen before freeing the object.
Using an atomic_dec_uint_nv(&refcnt) == 0 ensures that only one
thread takes responsibility for freeing, but it's not enough to
ensure that the other thread's memory operations happen before the
freeing.
Consider:
    Thread A                          Thread B
    obj->foo = 42;                    obj->baz = 73;
    mumble(&obj->bar);                grumble(&obj->quux);
    /* membar_exit(); */              /* membar_exit(); */
    atomic_dec -- not last            atomic_dec -- last
                                      /* membar_enter(); */
                                      KASSERT(invariant(obj->foo,
                                          obj->bar));
                                      free_stuff(obj);
The memory barriers ensure that
    obj->foo = 42;
    mumble(&obj->bar);
in thread A happens before
    KASSERT(invariant(obj->foo, obj->bar));
    free_stuff(obj);
in thread B. Without them, this ordering is not guaranteed.
So in general it is necessary to do
    membar_exit();
    if (atomic_dec_uint_nv(&obj->refcnt) != 0)
        return;
    membar_enter();
to release a reference, for the `last one out hit the lights' style
of reference counting. (This is in contrast to the style where one
thread blocks new references and then waits under a lock for existing
ones to drain with a condvar -- no membar needed thanks to mutex(9).)
I searched for atomic_dec to find all these. Obviously we ought to
have a better abstraction for this because there's so much copypasta.
This is a stop-gap measure to fix actual bugs until we have that. It
would be nice if an abstraction could gracefully handle the different
styles of reference counting in use -- some years ago I drafted an
API for this, but making it cover everything got a little out of hand
(particularly with struct vnode::v_usecount) and I ended up setting
it aside to work on psref/localcount instead for better scalability.
I got bored of adding #ifdef __HAVE_ATOMIC_AS_MEMBAR everywhere, so I
only put it on things that look performance-critical on 5sec review.
We should really adopt membar_enter_preatomic/membar_exit_postatomic
or something (except they are applicable only to atomic r/m/w, not to
atomic_load/store_*, making the naming annoying) and get rid of all
the ifdefs.
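Concretely, the stop-gap being applied looks like this (a sketch; obj,
refcnt, and free_stuff as in the example above):

    static void
    obj_release(struct obj *obj)
    {
    #ifndef __HAVE_ATOMIC_AS_MEMBAR
        membar_exit();    /* use of obj happens-before the decrement */
    #endif
        if (atomic_dec_uint_nv(&obj->refcnt) != 0)
            return;
    #ifndef __HAVE_ATOMIC_AS_MEMBAR
        membar_enter();   /* all use happens-before the free */
    #endif
        free_stuff(obj);
    }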
|
|
While here, reduce it to membar_exit -- it's obviously not needed for
store-before-load here (although alpha doesn't have anything weaker
than the full sequential consistency `mb'), and although we do need a
store-before-load (and load-before-load) to spin waiting for the CPU
to wake up, that already happens a few lines below with alpha_mb in
the loop anyway. So no need for membar_sync, which is just `mb'
under the hood -- deleting the membar_sync in this place can't hurt.
The membar_sync had been inserted automatically when converting from
an older style of atomic_ops(3) API.
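The resulting shape, as a hedged sketch (the flag names are invented
for illustration):

    membar_exit();                  /* was membar_sync */
    ci->ci_flags |= CPUF_GO;        /* publish: let the CPU go */

    while ((ci->ci_flags & CPUF_RUNNING) == 0)
        alpha_mb();                 /* full barrier in the spin loop */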
|
|
Add missing "cc" and "memory" asm clobbers so the compiler can't
reorder memory accesses around these. The necessary memory barrier
instructions, mb, already appear in all the right places.
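For illustration, an LL/SC sequence with the clobbers in place (a
sketch, not the actual source):

    static inline void
    example_atomic_inc_ulong(volatile unsigned long *p)
    {
        unsigned long t;

        __asm volatile(
        "1:  ldq_l  %0, %1\n"
        "    addq   %0, 1, %0\n"
        "    stq_c  %0, %1\n"
        "    beq    %0, 1b\n"
            : "=&r" (t), "+m" (*p)
            :
            : "cc", "memory");    /* the clobbers being added */
    }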
|
|
It is, and always has been, the caller's responsibility to ensure the
lock is initialized before it can be used -- otherwise the memory
could hold garbage; it is nonsensical to even attempt locking
operations on it before initialization.
So there's no need to issue explicit barriers here. The barrier
seems to have been introduced in sys/arch/alpha/alpha/lock_machdep.c
rev. 1.1 (since moved to inline asm in alpha/include/lock.h) and then
copied & pasted into several other architectures.
|
|
everywhere.
|
|
COPTS=-O0,
sprinkle `__always_inline' so that _mcount() is generated as a single function.
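In sketch form (the MI/MD split is simplified here):

    /*
     * Force the MI helper to be inlined into the MD mcount() stub
     * even at -O0, so the profiler sees a single _mcount function.
     */
    static __always_inline void
    _mcount(u_long frompc, u_long selfpc)
    {
        /* ... record the arc frompc -> selfpc ... */
    }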
|
|
- sparc and sparc64 were not using version 0 sigcontext when there were
no arguments in the signal version. This was probably a bug.
- vax uses version numbers one higher than the other archs.
- Only hppa was defining __LIBC12_SOURCE__, so it was getting a working
sigcontext before. All the other ports that supported sigcontext had
the compat code disabled.
[pointed out by thorpej, thanks!]
If we want to remove sigcontext support from userland, at least now
there is less work to do.
|
|
rather the macros.
|
|
with appropriate memory barriers).
|
|
atomic_cas_ulong().
- For arm, ia64, m68k, mips, or1k, riscv, vax: don't define our own
MUTEX_CAS(), as they either use atomic_cas_ulong() or equivalent
(atomic_cas_uint() on m68k).
- For alpha and sparc64, don't define MUTEX_CAS() in terms of their own
_lock_cas(), which has its own memory barriers; the call sites in
kern_mutex.c already have the appropriate memory barrier calls. Thus,
alpha and sparc64 can use default definition.
- For sh3, don't define MUTEX_CAS() in terms of its own _lock_cas();
atomic_cas_ulong() is strong-aliased to _lock_cas(), therefore defining
our own MUTEX_CAS() is redundant.
Per thread:
https://mail-index.netbsd.org/tech-kern/2021/07/25/msg027562.html
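The default these ports now inherit is, in sketch form, a plain CAS on
the lock word with the barriers left to the call sites:

    /* sketch of the default MUTEX_CAS in kern_mutex.c */
    #define MUTEX_CAS(p, o, n) \
        (atomic_cas_ulong((volatile unsigned long *)(p), (o), (n)) == (o))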
|