|
preemption needs to be disabled more clearly.
|
|
|
|
implementation visibility of these relocations.
Currently all implementations resolve local symbol relocations in the first
pass and simply skip them in the second. The RISC-V implementation will
make use of this visibility.
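As a rough sketch of the two-pass shape described above (illustrative
only; sym_is_local() and md_apply_reloc() are hypothetical stand-ins,
not the kobj(9) internals):

    /*
     * Both passes now show every relocation to the MD hook, with its
     * locality visible, instead of silently skipping local symbols
     * in the second pass.
     */
    for (int pass = 0; pass < 2; pass++) {
            for (size_t i = 0; i < nrela; i++) {
                    bool islocal = sym_is_local(ko, &rela[i]);
                    md_apply_reloc(ko, &rela[i], pass, islocal);
            }
    }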
|
|
arrays to use DEVMAP_ENTRY{,_END}
|
|
|
|
It's fewer letters, matches other similar variables, and will help with
sharing code between the two architectures.
NFCI.
|
|
|
|
|
|
When we trigger a softint, the softint lwp can't already hold any mutexes.
So any path to mutex_exit(mtx) must go via mutex_enter(mtx), which is
always done with atomic r/m/w, and we need not issue any explicit
barrier between ci->ci_curlwp = softlwp and a potential load of
mtx->mtx_owner in mutex_exit.
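As a userland C11 analogue of that argument (a hedged sketch under
assumed names: curlwp stands in for ci->ci_curlwp, owner for
mtx->mtx_owner; the kernel's real r/m/w lives in mutex_enter):

    #include <stdatomic.h>
    #include <stddef.h>

    static _Atomic(void *) owner;       /* ~ mtx->mtx_owner */
    static void *curlwp;                /* ~ ci->ci_curlwp */

    void
    dispatch_then_lock(void *softlwp)
    {
            curlwp = softlwp;           /* (A) plain store, no barrier */

            /* "mutex_enter": every path to mutex_exit goes through
             * this atomic r/m/w, which orders (A) before all later
             * accesses. */
            void *expected = NULL;
            while (!atomic_compare_exchange_weak(&owner, &expected,
                softlwp))
                    expected = NULL;

            /* "mutex_exit": the load of owner (B) cannot move ahead
             * of the r/m/w above, hence not ahead of (A) either. */
            void *o = atomic_load(&owner);      /* (B) */
            (void)o;
            atomic_store(&owner, NULL);
    }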
PR kern/57240
XXX pullup-9
XXX pullup-10
|
|
|
|
Sprinkle KASSERT (or KDASSERT in hot paths) for kpreempt_disabled()
when we use curcpu() and it's not immediately obvious that the caller
has preemption disabled, but closer scrutiny suggests it does.
Note unsafe curcpu()s for syscall event counting. Not sure this is
worth changing.
Possible bugs fixed:
- cpu_irq and cpu_fiq could be preempted while trying to run softints
on this CPU.
- data_abort_handler might incorrectly think it was invoked in
interrupt context when it was only preempted and migrated to
another CPU.
- pmap_fault_fixup might report the wrong CPU in its logs.
(However, we don't currently run with kpreemption on aarch64, so
these are not yet real bugs fixed except if you patch it to build
with __HAVE_PREEMPTION.)
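The pattern being sprinkled, as a minimal sketch (the surrounding
function is hypothetical; the asserts and curcpu() are the kernel
interfaces named above):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/cpu.h>

    void
    example_per_cpu_update(void)
    {
            struct cpu_info *ci;

            KASSERT(kpreempt_disabled());   /* KDASSERT() in hot paths */
            ci = curcpu();                  /* safe: we cannot migrate */
            ci->ci_data.cpu_nsyscall++;     /* e.g. syscall event count */
    }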
|
|
Details in comments.
Note: This is a conservative change that inserts a barrier where
there was a comment saying none is needed, which is probably correct.
The goal of this change is to systematically add barriers to be
confident in correctness; subsequent changes may remove some barriers,
as an optimization, with an explanation of why each barrier is not
needed.
PR kern/57240
XXX pullup-9
XXX pullup-10
|
|
|
|
be printed before.
|
|
|
|
|
|
|
|
pmap_md_pdetab_init.
Call pmap_md_pdetab_fini from pmap_segtab_destroy.
|
|
- Multiple events can now be handled simultaneously.
- Counters should be configured with TPROF_IOC_CONFIGURE_EVENT in advance,
instead of being configured at TPROF_IOC_START.
- The configured counters can be started and stopped repeatedly by
TPROF_IOC_START/TPROF_IOC_STOP.
- The value of a performance counter can be obtained at any time as a 64-bit
value with TPROF_IOC_GETCOUNTS.
- Parts common to all backends are handled in tprof.c as much as possible, and
the functions on the tprof_backend side have been reimplemented as more
primitive operations.
- The counter-overflow reset value used for profiling can now be adjusted.
By default it is calculated from the CPU clock (the speed of the cycle
counter) and TPROF_HZ, but for some events that value is too large to be
useful for profiling. When configuring an event, the reset value can be
specified as a ratio of the default or as an absolute value.
- Due to overall changes, API and ABI have been changed. TPROF_VERSION and
TPROF_BACKEND_VERSION were updated.
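A hedged usage sketch of the new flow (the ioctl names come from this
change, but the struct layouts, field initialization, and the START
argument here are placeholders -- the real ABI is in
sys/dev/tprof/tprof_ioctl.h):

    #include <sys/ioctl.h>
    #include <dev/tprof/tprof_ioctl.h>
    #include <fcntl.h>
    #include <stdint.h>

    int
    main(void)
    {
            int fd = open("/dev/tprof", O_RDWR);

            struct tprof_param p = { 0 };   /* placeholder layout */
            ioctl(fd, TPROF_IOC_CONFIGURE_EVENT, &p);   /* in advance */

            uint32_t mask = ~0u;    /* placeholder: counters to run */
            ioctl(fd, TPROF_IOC_START, &mask);
            /* ... workload; START/STOP may be repeated ... */
            ioctl(fd, TPROF_IOC_STOP, &mask);

            struct tprof_counts c;  /* placeholder layout */
            ioctl(fd, TPROF_IOC_GETCOUNTS, &c);     /* 64-bit, any time */
            return 0;
    }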
|
|
PMCR.E controls not only performance event counters but also the cycle
counter operation, and the cycle counter may be used for cpu_counter.
Similarly, the 31st bit in PMINTENCLR and PMCNTENCLR controls the cycle
counter, not performance event counters, and should not be modified.
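The rule, as a standalone sketch (raw MSR accessors for clarity; the
kernel uses its own register-access macros, and PMCNTEN_C is an
illustrative name for bit 31):

    #include <stdint.h>

    #define PMCNTEN_C   (UINT64_C(1) << 31)     /* cycle counter */

    static inline void
    disable_event_counters_only(void)
    {
            uint64_t mask = ~PMCNTEN_C & UINT64_C(0xffffffff);

            /* Clear only the event-counter enables (bits 0..30) and
             * their overflow interrupts; bit 31 stays untouched so
             * the cycle counter backing cpu_counter keeps running. */
            __asm volatile("msr pmcntenclr_el0, %0" :: "r"(mask));
            __asm volatile("msr pmintenclr_el1, %0" :: "r"(mask));
    }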
|
|
|
|
|
|
|
|
Use the pte bit that says whether this is a PMAP_WIRED page, not the
bit that says whether this is a non-global page.
(Forgot to git commit --amend before exporting to CVS, sorry!)
|
|
Pages mapped with pmap_kenter_pa are necessarily unmanaged, so there
are no P->V records, and pmap_kenter_pa leaves pp->pp_pv.pv_va zero
with no modified/referenced state.
However, pmap_protect erroneously examined pp->pp_pv.pv_va to
ascertain the modified/referenced state -- and if the page was not
marked referenced, pmap_protect would clear the LX_BLKPAG_AF bit
(Access Flag), with the effect that subsequent uses of the page fault
and require a detour through pmap_fault_fixup.
This caused problems for the kernel module loader:
- When loading the text section, kobj_load first allocates kva with
uvm_km_alloc(UVM_KMF_WIRED|UVM_KMF_EXEC), which creates ptes with
pmap_kenter_pa. These ptes are writable, so we can copy the text
section into them, and have LX_BLKPAG_AF set so there will be no
fault when they are used by the kernel.
- But then kobj_affix makes the text section read/execute-only (and
nonwritable) with uvm_km_protect(VM_PROT_READ|VM_PROT_EXECUTE),
which updates the ptes with pmap_protect. This _should_ leave
LX_BLKPAG_AF set, but by inadvertently treating the page as managed
when it should be unmanaged, pmap_protect cleared it instead.
- Most of the time, clearing LX_BLKPAG_AF caused no problem, because
pmap_fault_fixup would silently resolve it. But if a hard
interrupt handler tried to use any page in the module's text (or
rodata, I suspect) that was not yet fixed up, the CPU would fault
and enter pmap_fault_fixup -- which would promptly crash (or hang)
by trying to take the pmap lock in interrupt context, which is
forbidden.
I observed this by loading dtrace.kmod early at boot and trying to
dtrace hard interrupt handlers.
With this change, pmap_protect now recognizes wired mappings (as
created by pmap_kenter_pa) before consulting pp->pp_pv.pv_va, and
preserves the LX_BLKPAG_AF bit in that case.
ok skrll
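The shape of the fix, as an illustrative sketch (not the committed
diff: LX_BLKPAG_AF and LX_BLKPAG_OS_WIRED are aarch64 pte bits, while
protect_pte() and mdattr_referenced() are hypothetical):

    pt_entry_t
    protect_pte(pt_entry_t opte, struct pmap_page *pp)
    {
            if (opte & LX_BLKPAG_OS_WIRED) {
                    /* pmap_kenter_pa mapping: unmanaged, no P->V
                     * record, no mod/ref state -- keep the Access
                     * Flag so the mapping never faults. */
                    return opte;
            }
            /* Managed mapping: keep AF only if marked referenced. */
            if (!mdattr_referenced(pp))
                    opte &= ~LX_BLKPAG_AF;
            return opte;
    }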
|
|
|
|
|
|
|
|
|
|
|
|
This can be included unconditionally, and db_active can then be
queried unconditionally; if DDB is not in the kernel, then db_active
is a constant zero. Reduces the need for #include "opt_ddb.h" and #ifdef DDB.
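The resulting idiom, as a small sketch (assuming db_active is declared
in the header this change makes unconditionally includable):

    #include <ddb/ddbvar.h>     /* no "opt_ddb.h" or #ifdef DDB needed */

    if (!db_active)             /* constant 0 in kernels without DDB */
            printf("not stopped in the debugger\n");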
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Rename the following defines:
- _ARM_BUS_SPACE_MAP_STRONGLY_ORDERED to BUS_SPACE_MAP_NONPOSTED
- PMAP_DEV_SO to PMAP_DEV_NP
- LX_BLKPAG_ATTR_DEVICE_MEM_SO to LX_BLKPAG_ATTR_DEVICE_MEM_NP
Rename the following option:
- AARCH64_DEVICE_MEM_STRONGLY_ORDERED to AARCH64_DEVICE_MEM_NONPOSTED
|
|
<frame-address> is a frame pointer, not a trapframe; it now works correctly (e.g., trace $x29).
|
|
|
|
the stack analysis backtrace (bt/s) would fail because $lr would point
to the beginning of the next function.
|
|
|
|
db_interface.c.
|
|
|
|
Some `__attribute__((__section__(".data")))' hack will no longer be needed.
|
|
It appears that some bootloaders cannot specify the load address, or ignore it.
|
|
These don't work because mutex_enter/exit on a spin lock may raise the
IPL but not lower it, if another spin lock was already held. For
example,
    mutex_enter(some_lock_at_IPL_VM);
    printf("foo\n");
    fpu_kern_enter();
    ...
    fpu_kern_leave();
    mutex_exit(some_lock_at_IPL_VM);
will trigger the panic, because printf takes a lock at IPL_HIGH where
the IPL will remain until the mutex_exit. (This was a nightmare to
track down before I remembered that detail of spin lock IPL
semantics...)
|
|
above 0x0001000000000000 in /dev/mem with mmap().
|
|
prot == PROT_WRITE.
|
|
|