Remove unused functions and parameters from ACPI IORT code.
(Zenghui Yu via Lorenzo Pieralisi)
* for-next/acpi:
ACPI/IORT: Remove the unused inline functions
ACPI/IORT: Drop the unused @ops of iort_add_device_replay()
Remove redundant code and fix documentation of caching behaviour for the
HVC_SOFT_RESTART hypercall.
(Pingfan Liu)
* for-next/boot:
Documentation/kvm/arm: improve description of HVC_SOFT_RESTART
arm64/relocate_kernel: remove redundant code
Improve reporting of unexpected kernel traps due to BPF JIT failure.
(Will Deacon)
* for-next/bpf:
arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM
Improve robustness of user-visible HWCAP strings and their corresponding
numerical constants.
(Anshuman Khandual)
* for-next/cpuinfo:
arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions
Cleanups to handling of SVE and FPSIMD register state in preparation
for potential future optimisation of handling across syscalls.
(Julien Grall)
* for-next/fpsimd:
arm64/sve: Implement a helper to load SVE registers from FPSIMD state
arm64/sve: Implement a helper to flush SVE registers
arm64/fpsimdmacros: Allow the macro "for" to be used in more cases
arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN
arm64/signal: Update the comment in preserve_sve_context
arm64/fpsimd: Update documentation of do_sve_acc
Miscellaneous changes.
(Tian Tao and others)
* for-next/misc:
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
arm64: mm: Fix missing-prototypes in pageattr.c
arm64/fpsimd: Fix missing-prototypes in fpsimd.c
arm64: hibernate: Remove unused including <linux/version.h>
arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR()
arm64: Remove the unused include statements
arm64: get rid of TEXT_OFFSET
arm64: traps: Add str of description to panic() in die()
Memory management updates and cleanups.
(Anshuman Khandual and others)
* for-next/mm:
arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
arm64/mm: Unify CONT_PMD_SHIFT
arm64/mm: Unify CONT_PTE_SHIFT
arm64/mm: Remove CONT_RANGE_OFFSET
arm64/mm: Enable THP migration
arm64/mm: Change THP helpers to comply with generic MM semantics
arm64/mm/ptdump: Add address markers for BPF regions
Allow prefetchable PCI BARs to be exposed to userspace using normal
non-cacheable mappings.
(Clint Sbisa)
* for-next/pci:
arm64: Enable PCI write-combine resources under sysfs
Perf/PMU driver updates.
(Julien Thierry and others)
* for-next/perf:
perf: arm-cmn: Fix conversion specifiers for node type
perf: arm-cmn: Fix unsigned comparison to less than zero
arm_pmu: arm64: Use NMIs for PMU
arm_pmu: Introduce pmu_irq_ops
KVM: arm64: pmu: Make overflow handler NMI safe
arm64: perf: Defer irq_work to IPI_IRQ_WORK
arm64: perf: Remove PMU locking
arm64: perf: Avoid PMXEV* indirection
arm64: perf: Add missing ISB in armv8pmu_enable_counter()
perf: Add Arm CMN-600 PMU driver
perf: Add Arm CMN-600 DT binding
arm64: perf: Add support caps under sysfs
drivers/perf: thunderx2_pmu: Fix memory resource error handling
drivers/perf: xgene_pmu: Fix uninitialized resource struct
perf: arm_dsu: Support DSU ACPI devices
arm64: perf: Remove unnecessary event_idx check
drivers/perf: hisi: Add missing include of linux/module.h
arm64: perf: Add general hardware LLC events for PMUv3
Support for the Armv8.3 Pointer Authentication enhancements.
(Amit Daniel Kachhap)
* for-next/ptrauth:
arm64: kprobe: clarify the comment of steppable hint instructions
arm64: kprobe: disable probe of fault prone ptrauth instruction
arm64: cpufeature: Modify address authentication cpufeature to exact
arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements
arm64: traps: Allow force_signal_inject to pass esr error code
arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
Tonnes of cleanup to the SDEI driver.
(Gavin Shan)
* for-next/sdei:
firmware: arm_sdei: Remove _sdei_event_unregister()
firmware: arm_sdei: Remove _sdei_event_register()
firmware: arm_sdei: Introduce sdei_do_local_call()
firmware: arm_sdei: Cleanup on cross call function
firmware: arm_sdei: Remove while loop in sdei_event_unregister()
firmware: arm_sdei: Remove while loop in sdei_event_register()
firmware: arm_sdei: Remove redundant error message in sdei_probe()
firmware: arm_sdei: Remove duplicate check in sdei_get_conduit()
firmware: arm_sdei: Unregister driver on error in sdei_init()
firmware: arm_sdei: Avoid nested statements in sdei_init()
firmware: arm_sdei: Retrieve event number from event instance
firmware: arm_sdei: Common block for failing path in sdei_event_create()
firmware: arm_sdei: Remove sdei_is_err()
Selftests for Pointer Authentication and FPSIMD/SVE context-switching.
(Mark Brown and Boyan Karatotev)
* for-next/selftests:
selftests: arm64: Add build and documentation for FP tests
selftests: arm64: Add wrapper scripts for stress tests
selftests: arm64: Add utility to set SVE vector lengths
selftests: arm64: Add stress tests for FPSMID and SVE context switching
selftests: arm64: Add test for the SVE ptrace interface
selftests: arm64: Test case for enumeration of SVE vector lengths
kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys
kselftests/arm64: add PAuth test for whether exec() changes keys
kselftests/arm64: add nop checks for PAuth tests
kselftests/arm64: add a basic Pointer Authentication test
Implementation of ARCH_STACKWALK for unwinding.
(Mark Brown)
* for-next/stacktrace:
arm64: Move console stack display code to stacktrace.c
arm64: stacktrace: Convert to ARCH_STACKWALK
arm64: stacktrace: Make stack walk callback consistent with generic code
stacktrace: Remove reliable argument from arch_stack_walk() callback
Support for ASID pinning, which is required when sharing page-tables with
the SMMU.
(Jean-Philippe Brucker)
* for-next/svm:
arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
arm64: mm: Pin down ASIDs for sharing mm with devices
Rely on firmware tables for establishing CPU topology.
(Valentin Schneider)
* for-next/topology:
arm64: topology: Stop using MPIDR for topology information
Spelling fixes.
(Xiaoming Ni and Yanfei Xu)
* for-next/tpyos:
arm64/numa: Fix a typo in comment of arm64_numa_init
arm64: fix some spelling mistakes in the comments by codespell
vDSO cleanups.
(Will Deacon)
* for-next/vdso:
arm64: vdso: Fix unusual formatting in *setup_additional_pages()
arm64: vdso32: Remove a bunch of #ifdef CONFIG_COMPAT_VDSO guards
Will Deacon [Thu, 1 Oct 2020 10:54:54 +0000 (11:54 +0100)]
perf: arm-cmn: Fix unsigned comparison to less than zero
Ensure that the 'irq' field of 'struct arm_cmn_dtc' is a signed int
so that it can be compared '< 0'.
Link: https://lore.kernel.org/r/20200929170835.GA15956@embeddedor
Addresses-Coverity-ID: 1497488 ("Unsigned compared against 0")
Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Thu, 1 Oct 2020 08:48:21 +0000 (09:48 +0100)]
arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
TCR_EL1.HD is permitted to be cached in a TLB, so invalidate the local
TLB after setting the bit when support for the feature is detected. Although
this isn't strictly necessary, since we can happily operate with the bit
effectively clear, the current code uses an ISB in a half-hearted attempt
to make the change effective, so let's just fix that up.
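A minimal sketch of the fixed sequence, assuming the usual arm64 helpers
(sysreg_clear_set() and local_flush_tlb_all()):

	/* enable hardware management of the dirty state (TCR_EL1.HD) */
	sysreg_clear_set(tcr_el1, 0, TCR_HD);
	/* the bit may be cached in the TLB, so invalidate the local TLB
	 * instead of relying on a bare ISB */
	local_flush_tlb_all();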
Will Deacon [Wed, 30 Sep 2020 12:20:40 +0000 (13:20 +0100)]
arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
Our use of broadcast TLB maintenance means that spurious page-faults
that have been handled already by another CPU do not require additional
TLB maintenance.
Make flush_tlb_fix_spurious_fault() a no-op and rely on the existing TLB
invalidation instead. Add an explicit flush_tlb_page() when making a page
dirty, as the TLB is permitted to cache the old read-only entry.
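A sketch of the resulting no-op, assuming the usual macro convention for
this hook:

	/* spurious faults have already been handled by the CPU that
	 * broadcast the TLB invalidation, so there is nothing to do */
	#define flush_tlb_fix_spurious_fault(vma, address)	do { } while (0)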
arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to
share CPU page tables with devices. Allow the driver to be built as a
module by exporting the read_sanitised_ftr_reg() cpufeature symbol.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200918101852.582559-7-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Pin down ASIDs for sharing mm with devices
To enable address space sharing with the IOMMU, introduce
arm64_mm_context_get() and arm64_mm_context_put(), that pin down a
context and ensure that it will keep its ASID after a rollover. Export
the symbols to let the modular SMMUv3 driver use them.
Pinning is necessary because a device constantly needs a valid ASID,
unlike tasks that only require one when running. Without pinning, we would
need to notify the IOMMU when we're about to use a new ASID for a task,
and it would get complicated when a new task is assigned a shared ASID.
Consider the following scenario with no ASID pinned:
1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)
We would now have to immediately generate a new ASID for t1, notify
the IOMMU, and finally enable task tn. We are holding the lock during
all that time, since we can't afford having another CPU trigger a
rollover. The IOMMU issues invalidation commands that can take tens of
milliseconds.
It gets needlessly complicated. All we wanted to do was schedule task tn,
that has no business with the IOMMU. By letting the IOMMU pin tasks when
needed, we avoid stalling the slow path, and let the pinning fail when
we're out of shareable ASIDs.
After a rollover, the allocator expects at least one ASID to be available
in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS -
1) is the maximum number of ASIDs that can be shared with the IOMMU.
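As a quick hedged illustration of that budget (the 16-bit ASID space and
the CPU count below are example assumptions, not values from the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned int nr_asids = 1u << 16;	/* assume 16-bit ASIDs */
		unsigned int nr_cpus  = 8;		/* example CPU count */

		/* one ASID reserved per CPU, plus one spare for the allocator */
		printf("shareable ASIDs: %u\n", nr_asids - nr_cpus - 1);
		return 0;
	}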
_sdei_event_unregister() is called by sdei_event_unregister() and
sdei_device_freeze(). _sdei_event_unregister() covers the shared
and private events, but sdei_device_freeze() only covers the shared
events. So the logic to cover the private events isn't needed by
sdei_device_freeze().
The function _sdei_event_register() is called by sdei_event_register()
and sdei_device_thaw() as the following functional call chain shows.
_sdei_event_register() covers the shared and private events, but
sdei_device_thaw() only covers the shared events. So the logic to
cover the private events in _sdei_event_register() isn't needed by
sdei_device_thaw().
Similarly, sdei_reregister_event_llocked() covers the shared and
private events in the regard of reenablement. The logic to cover
the private events isn't needed by sdei_device_thaw() either.
This removes _sdei_event_register() and sdei_reregister_event_llocked().
Their logic is moved to sdei_event_register() and sdei_reregister_shared().
This shouldn't cause any logical changes.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-13-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
During CPU hotplug, the private events are registered, enabled
or unregistered on the specific CPU. Each path repeats the same steps:
initialize the cross call argument, make the function call on the local
CPU, and check the returned error.
This introduces sdei_do_local_call() to cover those steps. The other
benefit is that CROSSCALL_INIT and struct sdei_crosscall_args become
visible only to sdei_do_{cross, local}_call().
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-12-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Cleanup on cross call function
This applies cleanup on the cross call functions, no functional
changes are introduced:
* Wrap the code block of CROSSCALL_INIT inside "do { } while (0)",
as the kernel usually does (see the sketch after this list);
otherwise, scripts/checkpatch.pl reports a warning about it.
* Use smp_call_func_t for @fn argument in sdei_do_cross_call()
as the function is called on target CPU(s).
* Remove unnecessary space before @event in sdei_do_cross_call()
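For reference, a minimal example of the do { } while (0) idiom mentioned
above (illustrative code, not the SDEI macro itself): it makes a
multi-statement macro expand to a single statement, so it nests safely
under if/else.

	#define PAIR_INIT(a, b)		\
		do {			\
			(a) = 0;	\
			(b) = 0;	\
		} while (0)

	if (reset)
		PAIR_INIT(x, y);	/* safe: expands to one statement */
	else
		do_something();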
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-11-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Remove while loop in sdei_event_unregister()
This removes the unnecessary while loop in sdei_event_unregister()
for the following two reasons. This shouldn't cause any functional
changes.
* The while loop is only ever executed once, meaning it's not needed.
* With the while loop removed, the nested statements can be avoided,
making the code a bit cleaner.
firmware: arm_sdei: Remove while loop in sdei_event_register()
This removes the unnecessary while loop in sdei_event_register()
for the following two reasons. This shouldn't cause any functional
changes.
* The while loop is only ever executed once, meaning it's not needed.
* With the while loop removed, the nested statements can be avoided,
making the code a bit cleaner.
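The pattern removed by these two commits, sketched in miniature
(hypothetical code, not the actual SDEI functions):

	/* before: a loop that always terminates after its first pass */
	do {
		err = register_one(event);	/* register_one() is hypothetical */
	} while (0);

	/* after: the same logic, flat */
	err = register_one(event);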
firmware: arm_sdei: Remove duplicate check in sdei_get_conduit()
The following two checks are duplicates because @acpi_disabled doesn't
depend on CONFIG_ACPI, so the duplicate check (IS_ENABLED(CONFIG_ACPI))
can be dropped. More details are provided to keep the commit log complete:
* @acpi_disabled is defined in arch/arm64/kernel/acpi.c when
CONFIG_ACPI is enabled.
* @acpi_disabled is defined in include/acpi.h when CONFIG_ACPI
is disabled.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-7-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Unregister driver on error in sdei_init()
The SDEI platform device is created from the device-tree node or the
ACPI (SDEI) table. In the latter case, the platform device is created
explicitly by this module. It's better to unregister the driver on
failure to create the device, to keep the symmetry: the driver, owned
by this module, isn't needed if the device doesn't exist.
Besides, the errno (@ret) should be updated accordingly in this
case.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-6-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Avoid nested statements in sdei_init()
In sdei_init(), the nested statements can be avoided by bailing out
on error from platform_driver_register() or on an absent ACPI SDEI
table. This makes the code a bit more readable.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-5-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Retrieve event number from event instance
In sdei_event_create(), the event number is retrieved from the
variable @event_num for the shared event. The event number is already
stored in the event instance, so we can fetch it from there instead,
similar to what we do for the private event.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-4-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
firmware: arm_sdei: Common block for failing path in sdei_event_create()
There are multiple calls of kfree(event) in the failing path of
sdei_event_create() to free the SDEI event. This means the call has to
be repeated whenever more code is added to the failing path, which is
easy to miss and prone to memory leaks.
This introduces a common block for the failing path in
sdei_event_create() to resolve the issue. This shouldn't cause
functional changes.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200922130423.10173-3-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
sdei_is_err() is only called by sdei_to_linux_errno(). The logic of
checking the error number is common to both, so they can be combined.
This removes sdei_is_err() and folds its logic into
sdei_to_linux_errno(). The assignment of @err to zero in
invoke_sdei_fn() is also dropped because it's always overridden
afterwards.
This shouldn't cause functional changes.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20200922130423.10173-2-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMI will introduce two cases for each existing one. It becomes
a mess of 'if's when managing the interrupt.
Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.
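A rough sketch of the callback-set shape (the exact fields are the
patch's choice; these are illustrative):

	struct pmu_irq_ops {
		void (*enable_pmuirq)(unsigned int irq);
		void (*disable_pmuirq)(unsigned int irq);
		void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
	};

One ops table is chosen when the interrupt is requested (normal irq,
percpu irq, or their NMI variants), so later enable/disable/free paths
make a single indirect call instead of re-testing flags.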
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-7-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.
A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
running the irq_work for a non-existent vcpu by calling irq_work_sync() on
the PMU destroy path.
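A condensed sketch of the deferral (simplified; the field names here are
illustrative, not necessarily the patch's):

	static void kvm_pmu_overflow_notify_vcpu(struct irq_work *work)
	{
		struct kvm_pmu *pmu = container_of(work, struct kvm_pmu,
						   overflow_work);

		kvm_vcpu_kick(pmu->vcpu);	/* safe: runs outside NMI context */
	}

	/* in the overflow handler: */
	if (in_nmi())
		irq_work_queue(&pmu->overflow_work);
	else
		kvm_vcpu_kick(pmu->vcpu);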
[Alexandru E.: Added irq_work_sync()]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/20200924110706.254996-6-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
and subsequently calls irq_work_run() to handle any work queued by
perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when
queuing the work, this isn't strictly necessary and the work could be
handled as part of the IPI_IRQ_WORK handler.
In the common case the IPI handler will run immediately after the PMU IRQ
handler, and where the PE is heavily loaded with interrupts other handlers
may run first, widening the window where some counters are disabled.
In practice this window is unlikely to be a significant issue, and removing
the call to irq_work_run() would make the PMU IRQ handler NMI safe in
addition to making it simpler, so let's do that.
[Alexandru E.: Reworded commit message]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-5-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.
The functions to toggle the PMU and to program the PMU counters access the
registers directly and don't access data modified by the interrupt handler.
That, and the fact that they're always called from non-preemptible
contexts, means that we don't need to disable interrupts or use a spinlock.
[Alexandru E.: Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-4-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Thu, 24 Sep 2020 11:07:01 +0000 (12:07 +0100)]
arm64: perf: Avoid PMXEV* indirection
Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.
This is unfortunate, because:
* Under virtualization, accessing one register requires two traps to
the hypervisor, even though we could access the register directly with
a single trap.
* We have to issue an ISB which we could otherwise avoid the cost of.
* When we use NMIs, the NMI handler will have to save/restore the select
register in case the code it preempted was attempting to access a
counter or its type register.
We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.
In armv8pmu_enable_event() we still need the ISB to prevent the PE from
reordering the write to PMINTENSET_EL1 register. If the interrupt is
enabled before we disable the counter and the new event is configured,
we might get an interrupt triggered by the previously programmed event
overflowing, but which we wrongly attribute to the event that we are
enabling. Execute an ISB after we disable the counter.
In the process, remove the comment that refers to the ARMv7 PMU.
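The contrast, sketched with the kernel's sysreg accessors (counter
index 3 is just an example):

	/* indirect: select the counter, synchronize, then access it */
	write_sysreg(3, pmselr_el0);
	isb();
	val = read_sysreg(pmxevcntr_el0);

	/* direct: one access to the per-counter register, no select/ISB */
	val = read_sysreg(pmevcntr3_el0);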
[Julien T.: Don't inline read/write functions to avoid big code-size
increase, remove unused read_pmevtypern function,
fix counter index issue.]
[Alexandru E.: Removed comment, removed trailing semicolons in macros,
added ISB]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-3-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Alexandru Elisei [Thu, 24 Sep 2020 11:07:00 +0000 (12:07 +0100)]
arm64: perf: Add missing ISB in armv8pmu_enable_counter()
Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
armv8pmu_enable_event(), the PE can reorder configuring the event type
after we have enabled the counter and the interrupt. This can lead to an
interrupt being asserted because of the previous event type that we were
counting using the same counter, not the one that we've just configured.
The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
can reorder enabling the interrupt at any point in the future after we have
enabled the event.
Prevent both situations from happening by adding an ISB just before we
enable the event counter.
Fixes: 030896885ade ("arm64: Performance counters support")
Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-2-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Robin Murphy [Fri, 18 Sep 2020 13:28:38 +0000 (14:28 +0100)]
perf: Add Arm CMN-600 PMU driver
Initial driver for PMU event counting on the Arm CMN-600 interconnect.
CMN sports an obnoxiously complex distributed PMU system as part of
its debug and trace features, which can do all manner of things like
sampling, cross-triggering and generating CoreSight trace. This driver
covers the PMU functionality, plus the relevant aspects of watchpoints
for simply counting matching flits.
Tested-by: Tsahi Zidenberg <tsahee@amazon.com>
Tested-by: Tuan Phan <tuanphan@os.amperecomputing.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Robin Murphy [Fri, 18 Sep 2020 18:24:09 +0000 (19:24 +0100)]
perf: Add Arm CMN-600 DT binding
Document the requirements for the CMN-600 DT binding. The internal
topology is almost entirely discoverable by walking a tree of ID
registers, but sadly both the starting point for that walk and the
exact format of those registers are configuration-dependent and not
discoverable from some sane fixed location. Oh well.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
ARMv8.4-PMU introduces the PMMIR_EL1 register, and some new PMU events,
like STALL_SLOT etc., are related to it. Let's add a caps directory to
/sys/bus/event_source/devices/armv8_pmuv3_0/ and expose the slots value
from PMMIR_EL1 in this entry, so user programs can read it from sysfs
directly.
/sys/bus/event_source/devices/armv8_pmuv3_0/caps/slots is exposed
under sysfs. If both ARMv8.4-PMU and the STALL_SLOT event are
implemented, it returns the slots from PMMIR_EL1; otherwise it returns 0.
Zhengyuan Liu [Mon, 21 Sep 2020 02:39:36 +0000 (10:39 +0800)]
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
The @node passed to cpumask_of_node() can be NUMA_NO_NODE, in that
case it will trigger the following WARN_ON(node >= nr_node_ids) due to
mismatched data types of @node and @nr_node_ids. Actually we should
return cpu_all_mask just like most other architectures do if passed
NUMA_NO_NODE.
Also add a similar check to the inline cpumask_of_node() in numa.h.
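A sketch of the guarded helper after the change (simplified):

	const struct cpumask *cpumask_of_node(int node)
	{
		if (node == NUMA_NO_NODE)
			return cpu_all_mask;

		if (WARN_ON(node < 0 || node >= nr_node_ids))
			return cpu_none_mask;

		return node_to_cpumask_map[node];
	}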
Mark Brown [Mon, 21 Sep 2020 12:23:41 +0000 (13:23 +0100)]
arm64: Move console stack display code to stacktrace.c
Currently the code for displaying a stack trace on the console is located
in traps.c rather than stacktrace.c, using the unwinding code that is in
stacktrace.c. This can be confusing and makes the code hard to find,
since such output is often referred to as a stack trace, which might
mislead the unwary. Due to this, and since traps.c doesn't interact
with this code except via the public interfaces, move the code to
stacktrace.c to make it easier to find.
Julien Grall [Fri, 28 Aug 2020 18:11:53 +0000 (19:11 +0100)]
arm64/sve: Implement a helper to load SVE registers from FPSIMD state
In a follow-up patch, we may save the FPSIMD state rather than the full
SVE state when the state has to be zeroed on return to userspace (e.g.
during a syscall).
Introduce a helper to load SVE vectors from the FPSIMD state and zero
the rest of the SVE registers.
Julien Grall [Fri, 28 Aug 2020 18:11:52 +0000 (19:11 +0100)]
arm64/sve: Implement a helper to flush SVE registers
Introduce a new helper that will zero all SVE registers but the first
128 bits of each vector. This will be used by subsequent patches to
avoid costly store/manipulate/reload sequences in places like
do_sve_acc().
Julien Grall [Fri, 28 Aug 2020 18:11:50 +0000 (19:11 +0100)]
arm64/fpsimdmacros: Allow the macro "for" to be used in more cases
The current version of the macro "for" is not able to work when the
counter is used to generate registers using mnemonics. This is because
gas is not able to evaluate the expression generated if it is used in
a register's name (i.e. x\n).
Gas offers a way to evaluate macro arguments by using % in front of
them under the alternate macro mode.
The implementation of "for" is updated to use the alternate macro mode
and %, so we can use the macro in more cases. As the alternate macro
mode may have side-effects, this is disabled when expanding the body.
While it is enough to prefix the argument of the macro "__for_body"
with %, the arguments of "__for" are also prefixed to get a more
bearable value in case of compilation error.
Suggested-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://lore.kernel.org/r/20200828181155.17745-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Julien Grall [Fri, 28 Aug 2020 18:11:49 +0000 (19:11 +0100)]
arm64/signal: Update the comment in preserve_sve_context
The SVE state is saved by fpsimd_signal_preserve_current_state() and not
preserve_fpsimd_context(). Update the comment in preserve_sve_context to
reflect the current behavior.
Julien Grall [Fri, 28 Aug 2020 18:11:48 +0000 (19:11 +0100)]
arm64/fpsimd: Update documentation of do_sve_acc
fpsimd_restore_current_state() enables and disables the SVE access trap
based on TIF_SVE, not task_fpsimd_load(). Update the documentation of
do_sve_acc to reflect this behavior.
Will Deacon [Tue, 15 Sep 2020 14:48:09 +0000 (15:48 +0100)]
arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM
When generating instructions at runtime, for example due to kernel text
patching or the BPF JIT, we can emit a trapping BRK instruction if we
are asked to encode an invalid instruction such as an out-of-range
branch. This is indicative of a bug in the caller, and will result in a
crash on executing the generated code. Unfortunately, the message from
the crash is really unhelpful, and mumbles something about ptrace:
We can do better than this. Install a break handler for FAULT_BRK_IMM,
which is the immediate used to encode the "I've been asked to generate
an invalid instruction" error, and triage the faulting PC to determine
whether or not the failure occurred in the BPF JIT.
In tx2_uncore_pmu_init_dev(), a call to acpi_dev_get_resources() is used
to create a list of _CRS resources which is searched for the device base
address. There is an error check following this:
	if (!rentry->res)
		return NULL;
In no case will rentry->res be NULL, so the test is useless. Even
if the test worked, it comes before the resource list memory is
freed. None of this really matters as long as the ACPI table has
the memory resource. Let's clean it up so that it makes sense and
will give a meaningful error should firmware leave out the memory
resource.
This is due to the use of an uninitialized local resource struct in the
xgene pmu driver. The thunderx2_pmu driver avoids this by using the
resource list
constructed by acpi_dev_get_resources() rather than using a callback from
that function. The callback in the xgene driver didn't fully initialize
the resource. So get rid of the callback and search the resource list as
done by thunderx2.
Tian Tao [Wed, 16 Sep 2020 02:20:47 +0000 (10:20 +0800)]
arm64: mm: Fix missing-prototypes in pageattr.c
Fix the following warnings.
arch/arm64/mm/pageattr.c:138:5: warning: no previous prototype for
‘set_memory_valid’ [-Wmissing-prototypes]
int set_memory_valid(unsigned long addr, int numpages, int enable)
    ^
arch/arm64/mm/pageattr.c:150:5: warning: no previous prototype for
‘set_direct_map_invalid_noflush’ [-Wmissing-prototypes]
int set_direct_map_invalid_noflush(struct page *page)
    ^
arch/arm64/mm/pageattr.c:165:5: warning: no previous prototype for
‘set_direct_map_default_noflush’ [-Wmissing-prototypes]
int set_direct_map_default_noflush(struct page *page)
    ^
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Link: https://lore.kernel.org/r/1600222847-56792-1-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
Tian Tao [Tue, 15 Sep 2020 08:19:59 +0000 (16:19 +0800)]
arm64/fpsimd: Fix missing-prototypes in fpsimd.c
Fix the following warnings.
arch/arm64/kernel/fpsimd.c:935:6: warning: no previous prototype for
‘do_sve_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:962:6: warning: no previous prototype for
‘do_fpsimd_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:971:6: warning: no previous prototype for
‘do_fpsimd_exc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1266:6: warning: no previous prototype for
‘kernel_neon_begin’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1292:6: warning: no previous prototype for
‘kernel_neon_end’ [-Wmissing-prototypes]
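The conventional fix for -Wmissing-prototypes, sketched: declare the
functions in a header that fpsimd.c includes, e.g. (illustrative
declarations; the patch picks the suitable headers):

	void do_sve_acc(unsigned int esr, struct pt_regs *regs);
	void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
	void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);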
Mark Brown [Mon, 14 Sep 2020 15:34:09 +0000 (16:34 +0100)]
arm64: stacktrace: Convert to ARCH_STACKWALK
Historically architectures have had duplicated code in their stack trace
implementations for filtering what gets traced. In order to avoid this
duplication some generic code has been provided using a new interface
arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which
factors all this out into the generic stack trace code. Convert arm64
to use this common infrastructure.
Mark Brown [Mon, 14 Sep 2020 15:34:08 +0000 (16:34 +0100)]
arm64: stacktrace: Make stack walk callback consistent with generic code
As with the generic arch_stack_walk() code, the arm64 stack walk code
takes a callback that is called per stack frame. Currently the arm64
code always passes a struct stackframe to the callback while the generic
code just passes the pc, yet none of the users ever reference anything
in the struct other than the pc value. The arm64 code also uses a return
type of int while the generic code uses bool, though in both cases the
return value is boolean and the sense is inverted between the two.
In order to reduce code duplication when arm64 is converted to use
arch_stack_walk() change the signature and return sense of the arm64
specific callback to match that of the generic code.
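The before/after shape of the callback, sketched:

	/* before: arm64-specific; int return, 0 means "keep walking" */
	int  (*fn)(struct stackframe *frame, void *data);

	/* after: matches the generic consume_entry callback;
	 * bool return, true means "keep walking" */
	bool (*fn)(void *data, unsigned long pc);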
Mark Brown [Mon, 14 Sep 2020 15:34:07 +0000 (16:34 +0100)]
stacktrace: Remove reliable argument from arch_stack_walk() callback
Currently the callback passed to arch_stack_walk() has an argument called
reliable passed to it to indicate if the stack entry is reliable, a comment
says that this is used by some printk() consumers. However in the current
kernel none of the arch_stack_walk() implementations ever set this flag to
true and the only callback implementation we have is in the generic
stacktrace code which ignores the flag. It therefore appears that this
flag is redundant so we can simplify and clarify things by removing it.
Mark Brown [Wed, 19 Aug 2020 11:48:37 +0000 (12:48 +0100)]
selftests: arm64: Add build and documentation for FP tests
Integrate the FP tests with the build system and add some documentation
for the ones run outside the kselftest infrastructure. The content in
the README was largely written by Dave Martin with edits by me.
Mark Brown [Wed, 19 Aug 2020 11:48:36 +0000 (12:48 +0100)]
selftests: arm64: Add wrapper scripts for stress tests
Add wrapper scripts which invoke fpsimd-test and sve-test with several
copies per CPU such that the context switch code will be appropriately
exercised.
Mark Brown [Wed, 19 Aug 2020 11:48:35 +0000 (12:48 +0100)]
selftests: arm64: Add utility to set SVE vector lengths
vlset is a small utility for use in conjunction with tests like the sve-test
stress test which allows another executable to be invoked with a configured
SVE vector length.
Mark Brown [Wed, 19 Aug 2020 11:48:34 +0000 (12:48 +0100)]
selftests: arm64: Add stress tests for FPSMID and SVE context switching
Add programs sve-test and fpsimd-test which spin reading and writing to
the SVE and FPSIMD registers, verifying the operations they perform. The
intended use is to leave them running to stress the context switch
code's handling of these registers. This isn't compatible with what
kselftest does, so they're not integrated into the framework, but
there's no other obvious testsuite where they fit, so let's store them
here.
These tests were written by Dave Martin and lightly adapted by me.
Mark Brown [Wed, 19 Aug 2020 11:48:33 +0000 (12:48 +0100)]
selftests: arm64: Add test for the SVE ptrace interface
Add a test case that does some basic verification of the SVE ptrace
interface, forking off a child with known values in the registers and
then using ptrace to inspect and manipulate the SVE registers of the
child, including in FPSIMD mode to account for sharing between the SVE
and FPSIMD registers.
This program was written by Dave Martin and modified for kselftest by
me.
Mark Brown [Wed, 19 Aug 2020 11:48:32 +0000 (12:48 +0100)]
selftests: arm64: Test case for enumeration of SVE vector lengths
Add a test case that verifies that we can enumerate the SVE vector lengths
on systems where we detect SVE, and that those SVE vector lengths are
valid. This program was written by Dave Martin and adapted to kselftest by
me.
PAuth adds sign/verify controls to enable and disable groups of
instructions in hardware for compatibility with libraries that do not
implement PAuth. The kernel always enables them if it detects PAuth.
Add a test that checks that each group of instructions is enabled, if the
kernel reports PAuth as detected.
Note: for the purpose of this patch, a "group" means the set of
instructions that use a certain key.
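A minimal sketch of a per-key check (hypothetical helper; assumes the
file is built with -march=armv8.3-a so the PAC* encoding is accepted):

	static size_t keyia_sign(size_t ptr)
	{
		/* sign ptr with instruction key A and a zero modifier */
		asm volatile("paciza %0" : "+r"(ptr));
		return ptr;
	}

If the IA key is enabled, the result differs from the input because a
PAC now occupies the pointer's top bits.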
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Acked-by: Shuah Khan <skhan@linuxfoundation.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200918104715.182310-3-boian4o1@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
kselftests/arm64: add a basic Pointer Authentication test
PAuth signs and verifies return addresses on the stack. It does so by
inserting a Pointer Authentication code (PAC) into some of the unused top
bits of an address. This is achieved by adding paciasp/autiasp instructions
at the beginning and end of a function.
This feature is partially backwards compatible with earlier versions of the
ARM architecture. To coerce the compiler into emitting fully backwards
compatible code the main file is compiled to target an earlier ARM version.
This allows the tests to check for the feature and print meaningful error
messages instead of crashing.
Add a test to verify that corrupting the return address results in a
SIGSEGV on return.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Acked-by: Shuah Khan <skhan@linuxfoundation.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200918104715.182310-2-boian4o1@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Enable PCI write-combine resources under sysfs
This change exposes write-combine mappings under sysfs for
prefetchable PCI resources on arm64.
Originally, the usage of "write combine" here was driven by the x86
definition of write combine. This definition is specific to x86 and
does not generalize to other architectures. However, the usage of WC
has mutated to "write combine" semantics, which is implemented
differently on each arch.
Generally, prefetchable BARs are accepted to allow speculative
accesses, write combining, and re-ordering -- from the PCI perspective,
this means there are no read side effects.
spec which allows prefetchable BARs to have read side effects, but
this definition is ill-advised as it is impossible to meet.) On x86,
prefetchable BARs are mapped as WC as originally defined (with some
conditionals on arch features). On arm64, WC is taken to mean normal
non-cacheable memory.
In practice, write combine semantics are used to minimize write
operations. A common usage of this is minimizing PCI TLPs which can
significantly improve performance with PCI devices. In order to
provide the same benefits to userspace, we need to allow userspace to
map prefetchable BARs with write combine semantics. The resourceX_wc
mapping is used today by userspace programs and libraries.
While this model is flawed as "write combine" is very ill-defined, it
is already used by multiple non-x86 archs to expose write combine
semantics to user space. We enable this on arm64 to give userspace on
arm64 an equivalent mechanism for utilizing write combining with PCI
devices.
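A hypothetical userspace use of the new file (the device address and
BAR size below are made up):

	#include <fcntl.h>
	#include <sys/mman.h>

	void *map_bar0_wc(void)
	{
		/* error handling omitted for brevity */
		int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0_wc",
			      O_RDWR);

		/* stores through this mapping may be combined and reordered */
		return mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	}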
Signed-off-by: Clint Sbisa <csbisa@amazon.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bjorn Helgaas <helgaas@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200918033312.ddfpibgfylfjpex2@amazon.com
Signed-off-by: Will Deacon <will@kernel.org>
The functions __{pgd, pud, pmd, pte}_error() were introduced so that
they can be called by {pgd, pud, pmd, pte}_ERROR(). However, some
of the functions can never be called when the corresponding page
table level isn't enabled. For example, __{pud, pmd}_error() are
unused when PUD and PMD are folded into PGD.
This removes __{pgd, pud, pmd, pte}_error() and calls pr_err() from
{pgd, pud, pmd, pte}_ERROR() directly, similar to what x86/powerpc
do. With this, the code is a bit simpler.
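The resulting shape, sketched for the PTE level:

	#define pte_ERROR(e)	\
		pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))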
arm64: kprobe: clarify the comment of steppable hint instructions
The existing comment about steppable hint instructions is incomplete:
it only describes NOP instructions as steppable. As the function
aarch64_insn_is_steppable_hint allows all white-listed instructions
to be probed, the comment is updated to reflect this.
arm64: kprobe: disable probe of fault prone ptrauth instruction
With the addition of the ARMv8.3-FPAC feature, probing the authenticate
ptrauth instructions (AUT*) may cause a ptrauth fault exception on
authentication failure, so they cannot be safely single-stepped.
Hence probing the authenticate instructions is disallowed, while the
corresponding pac ptrauth instructions (PAC*) are unaffected and can
still be probed. Also, AUT* instructions do not make sense at function
entry points, so most realistic probes would be unaffected by this change.
arm64: cpufeature: Modify address authentication cpufeature to exact
The current address authentication cpufeature levels are set as
LOWER_SAFE, which is not compatible with the different configurations
added for the Armv8.3 ptrauth enhancements, as the different levels have
different behaviour and there is no tunable to enable the lower safe
versions. This is rectified by setting those cpufeature types as EXACT.
The current cpufeature framework also does not interfere with the
booting of non-exact secondary cpus but rather marks them as tainted. As
a workaround, this is fixed by replacing the generic match handler with
a new handler specific to ptrauth.
After this change, if there is any variation in the ptrauth
configuration of a secondary cpu relative to the boot cpu, the
mismatched cpu is parked in an infinite loop.
The following ptrauth crash log is observed in the Arm fast model with
simulated mismatched cpus without this fix.
After this fix, the mismatched CPU4 is parked as:
CPU features: CPU4: Detected conflict for capability 39 (Address authentication (IMP DEF algorithm)), System: 1, CPU: 0
CPU4: will not boot
CPU4: failed to come online
CPU4: died during early boot
[Suzuki: Introduce new matching function for address authentication]
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200914083656.21428-5-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Some Armv8.3 Pointer Authentication enhancements have been introduced
which are mandatory for Armv8.6 and optional for Armv8.3. These features
are:
* ARMv8.3-PAuth2 - An enhanced PAC generation logic is added which hardens
finding the correct PAC value of the authenticated pointer.
* ARMv8.3-FPAC - Fault is generated now when the ptrauth authentication
instruction fails in authenticating the PAC present in the address.
This differs from the earlier behaviour, where such failures just added
an error code in the top byte and waited for a subsequent load/store to
abort.
The ptrauth instructions which may cause this fault are autiasp, retaa
etc.
The above features are now represented by additional configurations
for the Address Authentication cpufeature and a new ESR exception class.
The userspace fault received in the kernel due to ARMv8.3-FPAC is
treated as an illegal instruction, and hence signal SIGILL is injected
with ILL_ILLOPN as the signal code. Note that this is different from
earlier ARMv8.3 ptrauth, where signal SIGSEGV was issued on pointer
authentication failures. An in-kernel PAC fault causes the kernel to
crash.
arm64: traps: Allow force_signal_inject to pass esr error code
Some error signals need to pass a proper ARM ESR error code to userspace
to better identify the cause of the signal. So the function
force_signal_inject is extended to pass this as a parameter. The
existing code is not affected by this change.
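The signature change, sketched:

	/* before */
	void force_signal_inject(int signal, int code, unsigned long address);

	/* after: the ESR value travels with the signal */
	void force_signal_inject(int signal, int code, unsigned long address,
				 unsigned int err);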
arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions
Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa
etc.) are not simulated for out-of-line execution with a handler. Hence the
uprobe of such instructions leads to kernel warnings in a loop as they are
not explicitly checked and fall into INSN_GOOD categories. Other combined
instructions like LDRAA and LDRAB can be probed.
The issue of the combined branch instructions is fixed by adding
group definitions of all such instructions and rejecting their probes.
The instruction groups added are br_auth(braa, brab, braaz and brabz),
blr_auth(blraa, blrab, blraaz and blrabz), ret_auth(retaa and retab) and
eret_auth(eretaa and eretab).
Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://lore.kernel.org/r/20200914083656.21428-2-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Pingfan Liu [Fri, 28 Aug 2020 13:39:57 +0000 (21:39 +0800)]
arm64/relocate_kernel: remove redundant code
The kernel startup entry point requires the MMU and D-cache to be
disabled. For kexec-reboot, take a close look at "msr sctlr_el1, x12" in
__cpu_soft_restart:
-1. booted at EL1
The instruction is enough to disable the MMU and I/D cache for the
EL1 regime.
-2. booted at EL2, using VHE
Access to SCTLR_EL1 is redirected to SCTLR_EL2 in EL2, so the instruction
is enough to disable the MMU and clear the I+C bits for the EL2 regime.
-3. booted at EL2, not using VHE
The instruction itself can not affect the EL2 regime. But the hyp-stub
doesn't enable the MMU and I/D cache for the EL2 regime, and KVM also
disables them for the EL2 regime when it is unloaded or executes a
HVC_SOFT_RESTART call. So by kexec-reboot time, the code in KVM has
already prepared things.
In conclusion, disabling the MMU and clearing the I+C bits in
SYM_CODE_START(arm64_relocate_new_kernel) is redundant and can be
removed.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: James Morse <james.morse@arm.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/1598621998-20563-1-git-send-email-kernelfans@gmail.com
To: linux-arm-kernel@lists.infradead.org
Signed-off-by: Will Deacon <will@kernel.org>
The macro was originally introduced by commit <ecf35a237a85> ("arm64:
PTE/PMD contiguous bit definition"). It's only used by commit
<348a65cdcbbf> ("arm64: Mark kernel page ranges contiguous"), which was
later reverted by commit <667c27597ca8>. This leaves the macro unused.
This removes the unused macro (CONT_RANGE_OFFSET).
arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions
The HWCAP name arrays (hwcap_str, compat_hwcap_str, compat_hwcap2_str)
that are scanned for /proc/cpuinfo are detached from their bit
definitions, making them error-prone and difficult to correlate. It is
also a bit problematic because during a /proc/cpuinfo dump these arrays
get traversed sequentially, assuming they reflect and match the actual
HWCAP bit sequence, to test the various features for a given CPU.
This redefines the name arrays per their HWCAP bit definitions. It also
warns after detecting any feature which is not expected on arm64.
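A sketch of the keyed layout (entries abbreviated):

	static const char *const hwcap_str[] = {
		[KERNEL_HWCAP_FP]	= "fp",
		[KERNEL_HWCAP_ASIMD]	= "asimd",
		/* ... */
	};

A hole in the bit space now shows up as a NULL entry instead of silently
shifting every later name.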
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1599630535-29337-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
In certain page migration situations, a THP page can be migrated without
being split into its constituent subpages. This saves the time required
to split a THP and put it back together when required. It also preserves
the wider address range translation covered by a single TLB entry,
reducing future page fault costs.
A previous patch changed platform THP helpers per generic memory semantics,
clearing the path for THP migration support. This adds two more THP helpers
required to create PMD migration swap entries. Now enable THP migration via
ARCH_ENABLE_THP_MIGRATION.
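The two helpers, sketched (mirroring the usual PTE-level pattern):

	#define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val(pmd) })
	#define __swp_entry_to_pmd(swp)		__pmd((swp).val)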
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1599627183-14453-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64/mm: Change THP helpers to comply with generic MM semantics
pmd_present() and pmd_trans_huge() are expected to behave in the following
manner during various phases of a given PMD. It is derived from a previous
detailed discussion on this topic [1] and present THP documentation [2].
pmd_present(pmd):
- Returns true if pmd refers to system RAM with a valid pmd_page(pmd)
- Returns false if pmd refers to a migration or swap entry
pmd_trans_huge(pmd):
- Returns true if pmd refers to system RAM and is a trans huge mapping
Once PMD_SECT_VALID gets cleared, pmd_present() returns false on the
PMD entry. Another bit apart from PMD_SECT_VALID is needed to re-affirm
pmd_present() as true during the THP split process. To comply with the
above semantics, pmd_trans_huge() should also check pmd_present() first
before testing for an actual transparent huge mapping.
The solution:
Ideally PMD_TYPE_SECT should have been used here instead. But it shares the
bit position with PMD_SECT_VALID which is used for THP invalidation. Hence
it will not be there for pmd_present() check after pmdp_invalidate().
A new software defined PMD_PRESENT_INVALID (bit 59) can be set on the PMD
entry during invalidation which can help pmd_present() return true and in
recognizing the fact that it still points to memory.
This bit is transient. During the split process it will be overridden by a
page table page representing normal pages in place of erstwhile huge page.
Other pmdp_invalidate() callers always write a fresh PMD value on the entry
overriding this transient PMD_PRESENT_INVALID bit, which makes it safe.
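A sketch of the scheme (the bit position comes from the text above; the
helper shape is simplified):

	#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 59)

	static inline int pmd_present(pmd_t pmd)
	{
		return pte_present(pmd_pte(pmd)) ||
		       (pmd_val(pmd) & PMD_PRESENT_INVALID);
	}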
arm64: topology: Stop using MPIDR for topology information
In the absence of ACPI or DT topology data, we fall back to haphazardly
decoding *something* out of MPIDR. Sadly, the contents of that register
are mostly unusable due to the implementation leniency and things like
Aff0 having to be capped to 15 (despite being encoded on 8 bits).
Consider a simple system with a single package of 32 cores, all under the
same LLC. We ought to be shoving them in the same core_sibling mask, but
MPIDR is going to look like:
The scheduler's MC domain (all CPUs with same LLC) is going to be built via
arch_topology.c::cpu_coregroup_mask()
In there we try to figure out a sensible mask out of the topology
information we have. In short, here we'll pick the smallest of NUMA or
core sibling mask.
MC mask for CPU16 will thus be 16-19... Uh oh. CPUs 16-19 are in two
different unique MC spans, and the scheduler has no idea what to make of
that. That triggers the WARN_ON() added by commit
We could try to come up with some cleverer scheme to figure out which of
the available masks to pick, but really if one of those masks resulted from
MPIDR then it should be discarded because it's bound to be bogus.
I was hoping to give MPIDR a chance for SMT, to figure out which threads are
in the same core using Aff1-3 as core ID, but Sudeep and Robin pointed out
to me that there are systems out there where *all* cores have non-zero
values in their higher affinity fields (e.g. RK3288 has "5" in all of its
cores' MPIDR.Aff1), which would expose a bogus core ID to userspace.
Stop using MPIDR for topology information. When no other source of topology
information is available, mark each CPU as its own core and its NUMA node
as its LLC domain.
arm64/mm/ptdump: Add address markers for BPF regions
The kernel virtual region [BPF_JIT_REGION_START..BPF_JIT_REGION_END] is
missing from address_markers[], hence the relevant page table entries
are not displayed by /sys/kernel/debug/kernel_page_tables. This adds the
missing markers.
While here, also rename arch/arm64/mm/dump.c, which sounds a bit
ambiguous, to arch/arm64/mm/ptdump.c.
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1599208259-11191-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Qi Liu [Fri, 4 Sep 2020 09:57:38 +0000 (17:57 +0800)]
arm64: perf: Remove unnecessary event_idx check
event_idx is obtained from armv8pmu_get_event_idx(), and the resulting idx is
guaranteed to lie between ARMV8_IDX_CYCLE_COUNTER and cpu_pmu->num_events, so
the additional validity check is unnecessary. Let's remove it.
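The now-redundant check on the counter read/write path looked roughly like
this (a sketch; the exact WARN wording and call sites may differ):

    if (!armv8pmu_counter_valid(cpu_pmu, idx))
            WARN_ONCE(1, "CPU%u reading wrong counter %d\n",
                      smp_processor_id(), idx);

Since armv8pmu_get_event_idx() can only hand out indices in the valid
range, this branch is dead code.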
Ard Biesheuvel [Tue, 25 Aug 2020 13:54:40 +0000 (15:54 +0200)]
arm64: get rid of TEXT_OFFSET
TEXT_OFFSET serves no purpose, and for this reason, it was redefined
as 0x0 in the v5.8 timeframe. Since this does not appear to have caused
any issues that require us to revisit that decision, let's get rid of the
macro entirely, along with any references to it.
Zenghui Yu [Tue, 18 Aug 2020 06:36:25 +0000 (14:36 +0800)]
ACPI/IORT: Remove the unused inline functions
Since commit 8212688600ed ("ACPI/IORT: Fix build error when IOMMU_SUPPORT
is disabled"), iort_fwspec_iommu_ops() and iort_add_device_replay() are not
needed anymore when CONFIG_IOMMU_API is not selected. Let's remove them.
Zenghui Yu [Tue, 18 Aug 2020 06:36:24 +0000 (14:36 +0800)]
ACPI/IORT: Drop the unused @ops of iort_add_device_replay()
Since commit d2e1a003af56 ("ACPI/IORT: Don't call iommu_ops->add_device
directly"), we use the IOMMU core API instead of directly invoking the
specified callback. The parameter @ops has therefore become unused. Let's
drop it.
drivers/perf: hisi: Add missing include of linux/module.h
The MODULE_*() macros used in the HiSilicon uncore PMU drivers are provided
by linux/module.h, but that header is not directly included. Add the
missing include.
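The fix is simply to include the header explicitly, e.g.:

    #include <linux/module.h>	/* provides the MODULE_*() macros */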
Yue Hu [Tue, 4 Aug 2020 08:53:47 +0000 (16:53 +0800)]
arm64: traps: Add str of description to panic() in die()
Currently, die() is called with different description strings, such as
die("Oops", ...) and die("Oops - BUG", ...), yet the panic() called by die()
always shows just "Fatal exception" or "Fatal exception in interrupt".
Note that panic() runs any registered panic handlers via
panic_notifier_list, and the string above is formatted into a static buf[]
which is passed to each handler.
A panic handler therefore cannot tell from buf which Oops it is dealing
with, which defeats uses such as preserving the string in memory or
gathering panic statistics, and makes debugging harder than it needs to be.
Let's reuse the existing description string to make debugging simpler.
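A sketch of the change in die(), following the description above (the
surrounding code is elided):

    void die(const char *str, struct pt_regs *regs, int err)
    {
            /* ... */
            if (in_interrupt())
                    panic("%s: Fatal exception in interrupt", str);
            if (panic_on_oops)
                    panic("%s: Fatal exception", str);
            /* ... */
    }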
Will Deacon [Mon, 22 Jun 2020 12:37:09 +0000 (13:37 +0100)]
arm64: vdso: Fix unusual formatting in *setup_additional_pages()
There's really no need to put every parameter on a new line when calling
a function with a long name, so reformat the *setup_additional_pages()
functions in the vDSO setup code to follow the usual conventions.
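For instance (illustrative only; the variable names here are hypothetical),
a call like

    ret = _install_special_mapping(mm,
                                   vdso_base,
                                   vdso_text_len,
                                   VM_READ|VM_EXEC,
                                   vdso_mapping);

becomes

    ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
                                   VM_READ|VM_EXEC, vdso_mapping);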
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Linus Torvalds [Sun, 30 Aug 2020 22:53:44 +0000 (15:53 -0700)]
Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- fix regression in af_alg that affects iwd
- restore polling delay in qat
- fix double free in ingenic on error path
- fix potential build failure in sa2ul due to missing Kconfig dependency
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: af_alg - Work around empty control messages without MSG_MORE
crypto: sa2ul - add Kconfig selects to fix build error
crypto: ingenic - Drop kfree for memory allocated with devm_kzalloc
crypto: qat - add delay before polling mailbox
Linus Torvalds [Sun, 30 Aug 2020 19:01:23 +0000 (12:01 -0700)]
Merge tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
"Three interrupt related fixes for X86:
- Move disabling of the local APIC after invoking fixup_irqs() to
ensure that incoming interrupts are noted in the IRR and not ignored.
- Unbreak affinity setting.
The rework of the entry code reused the regular exception entry
code for device interrupts. The vector number is pushed into the
error-code slot on the stack, which is then lifted into an argument
and set to -1, because that's regs->orig_ax, which is used in quite
a few places to check whether the entry came from a syscall.
But it was overlooked that orig_ax is used in the affinity cleanup
code to validate whether the interrupt has arrived on the new
target. It turned out that this vector check is pointless, because
interrupts are never moved from one vector to another on the same
CPU. That check is a historical leftover from the time when x86
supported multi-CPU affinities, but it is no longer needed with the
now strict single-CPU affinity. Famous last words ...
- Add a missing check for an empty cpumask to the matrix allocator.
The affinity change added a warning to catch the case where an
interrupt is moved on the same CPU to a different vector. This
triggers because the allocator can return an assignment even for an
empty cpumask, as it uses for_each_cpu() without checking whether
the cpumask is empty. The historically inconsistent for_each_cpu()
behaviour of ignoring the cpumask and unconditionally claiming that
CPU0 is in the mask struck again. Sigh.
plus a new entry in the MAINTAINERS file for the HPE/UV platform"
* tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
x86/irq: Unbreak interrupt affinity setting
x86/hotplug: Silence APIC only after all interrupts are migrated
MAINTAINERS: Add entry for HPE Superdome Flex (UV) maintainers
Linus Torvalds [Sun, 30 Aug 2020 18:56:54 +0000 (11:56 -0700)]
Merge tag 'irq-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Thomas Gleixner:
"A set of fixes for interrupt chip drivers:
- Revert the platform driver conversion of interrupt chip drivers,
as it turned out to create more problems than it solved.
- Fix a trivial typo in the new module helpers which made probing
reliably fail.
- Small fixes in the STM32 and MIPS Ingenic drivers
- The TI firmware rework, which had badly managed dependencies and
had to wait until after -rc1"
* tag 'irq-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/ingenic: Leave parent IRQ unmasked on suspend
irqchip/stm32-exti: Avoid losing interrupts due to clearing pending bits by mistake
irqchip: Revert modular support for drivers using IRQCHIP_PLATFORM_DRIVER helpers
irqchip: Fix probing deferral when using IRQCHIP_PLATFORM_DRIVER helpers
arm64: dts: k3-am65: Update the RM resource types
arm64: dts: k3-am65: ti-sci-inta/intr: Update to latest bindings
arm64: dts: k3-j721e: ti-sci-inta/intr: Update to latest bindings
irqchip/ti-sci-inta: Add support for INTA directly connecting to GIC
irqchip/ti-sci-inta: Do not store TISCI device id in platform device id field
dt-bindings: irqchip: Convert ti,sci-inta bindings to yaml
dt-bindings: irqchip: ti,sci-inta: Update docs to support different parent.
irqchip/ti-sci-intr: Add support for INTR being a parent to INTR
dt-bindings: irqchip: Convert ti,sci-intr bindings to yaml
dt-bindings: irqchip: ti,sci-intr: Update bindings to drop the usage of gic as parent
firmware: ti_sci: Add support for getting resource with subtype
firmware: ti_sci: Drop unused structure ti_sci_rm_type_map
firmware: ti_sci: Drop the device id to resource type translation
Linus Torvalds [Sun, 30 Aug 2020 18:50:53 +0000 (11:50 -0700)]
Merge tag 'sched-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fix from Thomas Gleixner:
"A single fix for the scheduler:
- Make is_idle_task() __always_inline to prevent the compiler from
putting it out of line into the wrong section because it's used
inside noinstr sections"
* tag 'sched-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched: Use __always_inline on is_idle_task()
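For reference, the fix boils down to forcing the inline (a sketch of the
upstream helper; PF_IDLE is the idle-task flag):

    /* is_idle_task - is the specified task an idle task? */
    static __always_inline bool is_idle_task(const struct task_struct *p)
    {
            return !!(p->flags & PF_IDLE);
    }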
Linus Torvalds [Sun, 30 Aug 2020 17:56:12 +0000 (10:56 -0700)]
Merge tag 'powerpc-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- Revert our removal of PROT_SAO, at least one user expressed an
interest in using it on Power9. Instead don't allow it to be used in
guests unless enabled explicitly at compile time.
- A fix for a crash introduced by a recent change to FP handling.
- Revert a change to our idle code that left Power10 with no idle
support.
- One minor fix for the new scv system call path to set PPR.
- Fix a crash in our "generic" PMU if branch stack events were enabled.
- A fix for the IMC PMU, to correctly identify host kernel samples.
- The ADB_PMU powermac code was found to be incompatible with
VMAP_STACK, so make them incompatible in Kconfig until the code can
be fixed.
- A build fix in drivers/video/fbdev/controlfb.c, and a documentation
fix.
Thanks to Alexey Kardashevskiy, Athira Rajeev, Christophe Leroy,
Giuseppe Sacco, Madhavan Srinivasan, Milton Miller, Nicholas Piggin,
Pratik Rajesh Sampat, Randy Dunlap, Shawn Anastasio, Vaidyanathan
Srinivasan.
* tag 'powerpc-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/32s: Disable VMAP stack with CONFIG_ADB_PMU
Revert "powerpc/powernv/idle: Replace CPU feature check with PVR check"
powerpc/perf: Fix reading of MSR[HV/PR] bits in trace-imc
powerpc/perf: Fix crashes with generic_compat_pmu & BHRB
powerpc/64s: Fix crash in load_fp_state() due to fpexc_mode
powerpc/64s: scv entry should set PPR
Documentation/powerpc: fix malformed table in syscall64-abi
video: fbdev: controlfb: Fix build for COMPILE_TEST=y && PPC_PMAC=n
selftests/powerpc: Update PROT_SAO test to skip ISA 3.1
powerpc/64s: Disallow PROT_SAO in LPARs by default
Revert "powerpc/64s: Remove PROT_SAO support"
Linus Torvalds [Sun, 30 Aug 2020 17:51:03 +0000 (10:51 -0700)]
Merge tag 'usb-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Let's try this again... Here are some USB fixes for 5.9-rc3.
This differs from the previous pull request for this release in that
the usb gadget patch now does not break some systems, and actually
does what it was intended to do. Many thanks to Marek Szyprowski for
quickly noticing and testing the patch from Andy Shevchenko to resolve
this issue.
Additionally, some more new USB quirks have been added to get some new
devices to work properly based on user reports.
Other than that, the patches are all here, and they contain:
- usb gadget driver fixes
- xhci driver fixes
- typec fixes
- new quirks and ids
- fixes for USB patches that went into 5.9-rc1.
All of these have been tested in linux-next with no reported issues"
* tag 'usb-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (33 commits)
usb: storage: Add unusual_uas entry for Sony PSZ drives
USB: Ignore UAS for JMicron JMS567 ATA/ATAPI Bridge
usb: host: ohci-exynos: Fix error handling in exynos_ohci_probe()
USB: gadget: u_f: Unbreak offset calculation in VLAs
USB: quirks: Ignore duplicate endpoint on Sound Devices MixPre-D
usb: typec: tcpm: Fix source hard reset response for TDA 2.3.1.1 and TDA 2.3.1.2 failures
USB: PHY: JZ4770: Fix static checker warning.
USB: gadget: f_ncm: add bounds checks to ncm_unwrap_ntb()
USB: gadget: u_f: add overflow checks to VLA macros
xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed
xhci: Do warm-reset when both CAS and XDEV_RESUME are set
usb: host: xhci: fix ep context print mismatch in debugfs
usb: uas: Add quirk for PNY Pro Elite
tools: usb: move to tools buildsystem
USB: Fix device driver race
USB: Also match device drivers using the ->match vfunc
usb: host: xhci-tegra: fix tegra_xusb_get_phy()
usb: host: xhci-tegra: otg usb2/usb3 port init
usb: hcd: Fix use after free in usb_hcd_pci_remove()
usb: typec: ucsi: Hold con->lock for the entire duration of ucsi_register_port()
...
Linus Torvalds [Sun, 30 Aug 2020 17:47:23 +0000 (10:47 -0700)]
Merge tag 'edac_urgent_for_v5.9_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull EDAC fix from Borislav Petkov:
"A fix to properly clear ghes_edac driver state on driver remove so
that a subsequent load can probe the system properly (Shiju Jose)"
* tag 'edac_urgent_for_v5.9_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/ghes: Fix NULL pointer dereference in ghes_edac_register()
Thomas Gleixner [Sun, 30 Aug 2020 17:07:53 +0000 (19:07 +0200)]
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
Most of the CPU mask operations behave the same way, but for_each_cpu() and
its variants ignore the cpumask argument and claim that CPU0 is always in
the mask. This is historical, inconsistent and annoying behaviour.
The matrix allocator uses for_each_cpu() and can be called on UP with an
empty cpumask. The calling code does not expect that this succeeds but
until commit e027fffff799 ("x86/irq: Unbreak interrupt affinity setting")
this went unnoticed. That commit added a WARN_ON() to catch cases which
move an interrupt from one vector to another on the same CPU. The warning
triggers on UP.
Add a check for the cpumask being empty to prevent this.
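The guard amounts to the following, placed at the top of the allocation
path (a sketch matching the description above):

    int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk,
                         bool reserved, unsigned int *mapped_cpu)
    {
            unsigned int cpu, bit;
            struct cpumap *cm;

            /*
             * Not required in theory, but matrix_find_best_cpu() uses
             * for_each_cpu(), which ignores the cpumask on UP.
             */
            if (cpumask_empty(msk))
                    return -EINVAL;
            /* ... proceed with the normal allocation ... */
    }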
Fixes: 2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org