2020-07-17  drivers/perf: Prevent forced unbinding of PMU drivers  (Qi Liu)

Forcefully unbinding PMU drivers during perf sampling leads to a kernel panic, because the perf upper-layer framework ends up calling through a NULL pointer in this situation. To solve this issue, set "suppress_bind_attrs" to true, so that bind/unbind via sysfs is disabled and PMU drivers cannot be unbound during perf sampling.

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1594975763-32966-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
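For illustration, the pattern at the heart of this fix looks roughly as follows; the foo_* driver and callback names are hypothetical, not taken from the patch:

    static struct platform_driver foo_pmu_driver = {
            .driver = {
                    .name                = "foo-pmu",
                    /*
                     * Hide the sysfs bind/unbind attributes so userspace
                     * cannot force-unbind the driver while perf events
                     * may still be referencing it.
                     */
                    .suppress_bind_attrs = true,
            },
            .probe  = foo_pmu_probe,
            .remove = foo_pmu_remove,
    };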
2020-07-17  asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()  (Will Deacon)

Although mmiowb() is concerned only with serialising MMIO writes occurring in contexts where a spinlock is held, the call to mmiowb_set_pending() from the MMIO write accessors can occur in preemptible contexts, such as during driver probe() functions, where ordering between CPUs is not usually a concern, assuming that the task migration path provides the necessary ordering guarantees.

Unfortunately, the default implementation of mmiowb_set_pending() is not preempt-safe, as it makes use of a per-cpu variable to track its internal state. This has been reported to generate the following splat on riscv:

| BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
| caller is regmap_mmio_write32le+0x1c/0x46
| CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3-hfu+ #1
| Call Trace:
| walk_stackframe+0x0/0x7a
| dump_stack+0x6e/0x88
| regmap_mmio_write32le+0x18/0x46
| check_preemption_disabled+0xa4/0xaa
| regmap_mmio_write32le+0x18/0x46
| regmap_mmio_write+0x26/0x44
| regmap_write+0x28/0x48
| sifive_gpio_probe+0xc0/0x1da

Although it's possible to fix the driver in this case, other splats have been seen from other drivers, including the infamous 8250 UART, so it's better to address this problem in the mmiowb core itself.

Fix mmiowb_set_pending() by using raw_cpu_ptr() to get at the mmiowb state and then only updating the 'mmiowb_pending' field if we are not preemptible (i.e. we have a non-zero nesting count).

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Palmer Dabbelt <palmer@dabbelt.com>
Reported-by: Emil Renner Berthing <kernel@esmil.dk>
Tested-by: Emil Renner Berthing <kernel@esmil.dk>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Link: https://lore.kernel.org/r/20200716112816.7356-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
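The fixed helper is roughly of this shape; a sketch based on the description above (the per-cpu variable name and field layout are assumptions, not necessarily the exact upstream code):

    static inline void mmiowb_set_pending(void)
    {
            struct mmiowb_state *ms = raw_cpu_ptr(&__mmiowb_state);

            /*
             * Only record a pending mmiowb() if a spinlock is held;
             * nesting_count is zero in preemptible contexts, where no
             * ordering is required and the per-cpu access may race.
             */
            if (likely(ms->nesting_count))
                    ms->mmiowb_pending = ms->nesting_count;
    }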
2020-07-16  arm64: Use test_tsk_thread_flag() for checking TIF_SINGLESTEP  (Will Deacon)

Rather than open-code test_tsk_thread_flag() at each callsite, simply replace the couple of offenders with calls to test_tsk_thread_flag() directly.

Signed-off-by: Will Deacon <will@kernel.org>
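For example, the transformation looks like this; a sketch, with the surrounding call to user_enable_single_step() invented for context:

    /* Open-coded form being replaced: */
    if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
            user_enable_single_step(task);

    /* Purpose-built helper, used directly: */
    if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
            user_enable_single_step(task);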
2020-07-16  arm64: ptrace: Use NO_SYSCALL instead of -1 in syscall_trace_enter()  (Will Deacon)

Setting a system call number of -1 is special, as it indicates that the current system call should be skipped. Use NO_SYSCALL instead of -1 when checking for this scenario, which is different from the -1 returned due to a seccomp failure.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Cc: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
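In code, the check reads along these lines; a sketch, where NO_SYSCALL is arm64's existing spelling of -1 and the error path shown is illustrative:

    /* A tracer asked us to skip this system call entirely. */
    if (scno == NO_SYSCALL)
            syscall_set_return_value(current, regs, -ENOSYS, 0);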
2020-07-16  arm64: syscall: Expand the comment about ptrace and syscall(-1)  (Will Deacon)

If a task executes syscall(-1), we intercept this early and force x0 to be -ENOSYS so that we don't need to distinguish this scenario from one where the scno is -1 because a tracer wants to skip the system call using ptrace. With the return value set, the return path is the same as the skip case.

Although there is a one-line comment noting this in el0_svc_common(), it misses out most of the detail. Expand the comment to describe a bit more about what is going on.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Cc: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-16  arm64: ptrace: Add a comment describing our syscall entry/exit trap ABI  (Will Deacon)

Our tracehook logic for syscall entry/exit raises a SIGTRAP back to the tracer following a ptrace request such as PTRACE_SYSCALL. As part of this procedure, we clobber the reported value of one of the tracee's general purpose registers (x7 for native tasks, r12 for compat) to indicate whether the stop occurred on syscall entry or exit. This is a slightly unfortunate ABI, as it prevents the tracer from accessing the real register value and is at odds with other similar stops such as seccomp traps.

Since we're stuck with this ABI, expand the comment in our tracehook logic to acknowledge the issue and describe the behaviour in more detail.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Luis Machado <luis.machado@linaro.org>
Reported-by: Keno Fischer <keno@juliacomputing.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-16  arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return  (Will Deacon)

Although we zero the upper bits of x0 on entry to the kernel from an AArch32 task, we do not clear them on the exception return path and can therefore expose 64-bit sign extended syscall return values to userspace via interfaces such as the 'perf_regs' ABI, which deal exclusively with 64-bit registers.

Explicitly clear the upper 32 bits of x0 on return from a compat system call.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Cc: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
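One way to express the fix; a sketch, with the exact location in the compat return path elided:

    /* Compat syscalls return a 32-bit value; don't leak sign extension. */
    regs->regs[0] = lower_32_bits(regs->regs[0]);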
2020-07-16  arm64: ptrace: Override SPSR.SS when single-stepping is enabled  (Will Deacon)

Luis reports that, when reverse debugging with GDB, single-step does not function as expected on arm64:

| I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP
| request by GDB won't execute the underlying instruction. As a consequence,
| the PC doesn't move, but we return a SIGTRAP just like we would for a
| regular successful PTRACE_SINGLESTEP request.

The underlying problem is that when the CPU register state is restored as part of a reverse step, the SPSR.SS bit is cleared and so the hardware single-step state can transition to the "active-pending" state, causing an unexpected step exception to be taken immediately if a step operation is attempted.

In hindsight, we probably shouldn't have exposed SPSR.SS in the pstate accessible by the GPR regset, but it's a bit late for that now. Instead, simply prevent userspace from configuring the bit to a value which is inconsistent with the TIF_SINGLESTEP state for the task being traced.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Link: https://lore.kernel.org/r/1eed6d69-d53d-9657-1fc9-c089be07f98c@linaro.org
Reported-by: Luis Machado <luis.machado@linaro.org>
Tested-by: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-16  arm64: ptrace: Consistently use pseudo-singlestep exceptions  (Will Deacon)

Although the arm64 single-step state machine can be fast-forwarded in cases where we wish to generate a SIGTRAP without actually executing an instruction, this has two major limitations outside of simply skipping an instruction due to emulation.

1. Stepping out of a ptrace signal stop into a signal handler where SIGTRAP is blocked. Fast-forwarding the stepping state machine in this case will result in a forced SIGTRAP, with the handler reset to SIG_DFL.

2. The hardware implicitly fast-forwards the state machine when executing an SVC instruction for issuing a system call. This can interact badly with subsequent ptrace stops signalled during the execution of the system call (e.g. SYSCALL_EXIT or seccomp traps), as they may corrupt the stepping state by updating the PSTATE for the tracee.

Resolve both of these issues by injecting a pseudo-singlestep exception on entry to a signal handler and also on return to userspace following a system call.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Tested-by: Luis Machado <luis.machado@linaro.org>
Reported-by: Keno Fischer <keno@juliacomputing.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-16  drivers/perf: Fix kernel panic when rmmod PMU modules during perf sampling  (Qi Liu)

When users try to remove PMU modules during perf sampling, a kernel panic will happen because pmu->read() is a NULL pointer here. Info on the HiSilicon hip08 platform is as follows:

pc : hisi_uncore_pmu_event_update+0x30/0xa4 [hisi_uncore_pmu]
lr : hisi_uncore_pmu_read+0x20/0x2c [hisi_uncore_pmu]
sp : ffff800010103e90
x29: ffff800010103e90 x28: ffff0027db0c0e40 x27: ffffa29a76f129d8
x26: ffffa29a77ceb000 x25: ffffa29a773a5000 x24: ffffa29a77392000
x23: ffffddffe5943f08 x22: ffff002784285960 x21: ffff002784285800
x20: ffff0027d2e76c80 x19: ffff0027842859e0 x18: ffff80003498bcc8
x17: ffffa29a76afe910 x16: ffffa29a7583f530 x15: 16151a1512061a1e
x14: 0000000000000000 x13: ffffa29a76f1e238 x12: 0000000000000001
x11: 0000000000000400 x10: 00000000000009f0 x9 : ffff8000107b3e70
x8 : ffff0027db0c1890 x7 : ffffa29a773a7000 x6 : 00000007f5131013
x5 : 00000007f5131013 x4 : 09f257d417c00000 x3 : 00000002187bd7ce
x2 : ffffa29a38f0f0d8 x1 : ffffa29a38eae268 x0 : ffff0027d2e76c80
Call trace:
 hisi_uncore_pmu_event_update+0x30/0xa4 [hisi_uncore_pmu]
 hisi_uncore_pmu_read+0x20/0x2c [hisi_uncore_pmu]
 __perf_event_read+0x1a0/0x1f8
 flush_smp_call_function_queue+0xa0/0x160
 generic_smp_call_function_single_interrupt+0x18/0x20
 handle_IPI+0x31c/0x4dc
 gic_handle_irq+0x2c8/0x310
 el1_irq+0xcc/0x180
 arch_cpu_idle+0x4c/0x20c
 default_idle_call+0x20/0x30
 do_idle+0x1b4/0x270
 cpu_startup_entry+0x28/0x30
 secondary_start_kernel+0x1a4/0x1fc

To solve the above issue, the current module should be registered with the kernel, so that try_module_get() can be invoked when perf sampling starts. This adds reference counting of the module and prevents users from removing modules during sampling.

Reported-by: Haifeng Wang <wang.wanghaifeng@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1594891165-8228-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
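The hook for this is the 'module' field of struct pmu, which the perf core pins with try_module_get() while events are live. A minimal sketch of the registration pattern (the foo_* names are hypothetical):

    static struct pmu foo_pmu = {
            .task_ctx_nr = perf_invalid_context,
            .read        = foo_pmu_read,
            /* ... other callbacks ... */
    };

    static int __init foo_pmu_init(void)
    {
            /*
             * Tie the PMU to its module so the perf core takes a
             * reference for active events; rmmod then fails instead
             * of leaving pmu->read() dangling.
             */
            foo_pmu.module = THIS_MODULE;
            return perf_pmu_register(&foo_pmu, "foo_pmu", -1);
    }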
2020-07-13  efi/libstub/arm64: Retain 2MB kernel Image alignment if !KASLR  (Will Deacon)

Since commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with alignment check"), loading a relocatable arm64 kernel at a physical address which is not 2MB aligned and subsequently booting with EFI will leave the Image in place, relying on the kernel to relocate itself early during boot. In conjunction with commit dd4bc6076587 ("arm64: warn on incorrect placement of the kernel by the bootloader"), which enables CONFIG_RELOCATABLE by default, this effectively means that entering an arm64 kernel loaded at an alignment smaller than 2MB with EFI (e.g. using QEMU) will result in silent relocation at runtime.

Unfortunately, this has a subtle but confusing effect for developers trying to inspect the PC value during a crash and comparing it to the symbol addresses in vmlinux using tools such as 'nm' or 'addr2line'; all text addresses will be displaced by a sub-2MB offset, resulting in the wrong symbol being identified in many cases.

Passing "nokaslr" on the command line or disabling CONFIG_RANDOMIZE_BASE does not help, since the EFI stub only copies the kernel Image to a 2MB boundary if it is not relocatable.

Adjust the EFI stub for arm64 so that the minimum Image alignment is 2MB unless KASLR is in use.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-09  arm64/alternatives: don't patch up internal branches  (Ard Biesheuvel)

Commit f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") moved the alternatives replacement sequences into subsections, in order to keep them as close as possible to the code that they replace.

Unfortunately, this broke the logic in branch_insn_requires_update(), which assumed that any branch into kernel executable code was a branch that required updating, which is no longer the case now that the code sequences that are patched in are in the same section as the patch site itself.

So the only way to discriminate between branches that require updating and ones that don't is to check whether the branch targets the replacement sequence itself, and so we can drop the call to kernel_text_address() entirely.

Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences")
Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20200709125953.30918-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-09  arm64: Add missing sentinel to erratum_1463225  (Florian Fainelli)

When the erratum_1463225 array was introduced, a sentinel at the end was missing, causing a KASAN global-out-of-bounds error in is_affected_midr_range_list() on arm64.

Fixes: a9e821b89daa ("arm64: Add KRYO4XX gold CPU cores to erratum list 1463225 and 1418040")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/linux-arm-kernel/CA+G9fYs3EavpU89-rTQfqQ9GgxAMgMAk7jiiVrfP0yxj5s+Q6g@mail.gmail.com/
Link: https://lore.kernel.org/r/20200709051345.14544-1-f.fainelli@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
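The fix is the classic terminating entry for a MIDR range list; a sketch with the actual array entries elided:

    static const struct midr_range erratum_1463225[] = {
            MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
            /* ... other affected cores ... */
            {},     /* sentinel: is_affected_midr_range_list() stops here */
    };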
2020-07-08  arm64: Documentation: Fix broken table in generated HTML  (Suzuki K Poulose)

cpu-feature-registers.rst is missing a new line before a couple of tables listing the visible fields, causing broken tables in the HTML documentation generated by "make htmldocs". Fix this by adding the missing new line.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200707143152.154541-1-suzuki.poulose@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: kgdb: Fix single-step exception handling oops  (Wei Li)

After entering kdb due to a breakpoint, executing 'ss' or 'go' (which delays installing breakpoints and single-steps first) does not work correctly: kdb is re-entered due to an oops. This is because the reason obtained in kdb_stub() is not as expected; the ex_vector for single-step should be 0, as arch powerpc/sh/parisc have implemented.

Before the patch:

Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry
[0]kdb> bp printk
Instruction(i) BP #0 at 0xffff8000101486cc (printk)
    is enabled   addr at ffff8000101486cc, hardtype=0 installed=0
[0]kdb> g
/ # echo h > /proc/sysrq-trigger
Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 due to Breakpoint @ 0xffff8000101486cc
[3]kdb> ss
Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 Oops: (null)
due to oops @ 0xffff800010082ab8
CPU: 3 PID: 266 Comm: sh Not tainted 5.7.0-rc4-13839-gf0e5ad491718 #6
Hardware name: linux,dummy-virt (DT)
pstate: 00000085 (nzcv daIf -PAN -UAO)
pc : el1_irq+0x78/0x180
lr : __handle_sysrq+0x80/0x190
sp : ffff800015003bf0
x29: ffff800015003d20 x28: ffff0000fa878040 x27: 0000000000000000
x26: ffff80001126b1f0 x25: ffff800011b6a0d8 x24: 0000000000000000
x23: 0000000080200005 x22: ffff8000101486cc x21: ffff800015003d30
x20: 0000ffffffffffff x19: ffff8000119f2000 x18: 0000000000000000
x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
x8 : ffff800015003e50 x7 : 0000000000000002 x6 : 00000000380b9990
x5 : ffff8000106e99e8 x4 : ffff0000fadd83c0 x3 : 0000ffffffffffff
x2 : ffff800011b6a0d8 x1 : ffff800011b6a000 x0 : ffff80001130c9d8
Call trace:
 el1_irq+0x78/0x180
 printk+0x0/0x84
 write_sysrq_trigger+0xb0/0x118
 proc_reg_write+0xb4/0xe0
 __vfs_write+0x18/0x40
 vfs_write+0xb0/0x1b8
 ksys_write+0x64/0xf0
 __arm64_sys_write+0x14/0x20
 el0_svc_common.constprop.2+0xb0/0x168
 do_el0_svc+0x20/0x98
 el0_sync_handler+0xec/0x1a8
 el0_sync+0x140/0x180
[3]kdb>

After the patch:

Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry
[0]kdb> bp printk
Instruction(i) BP #0 at 0xffff8000101486cc (printk)
    is enabled   addr at ffff8000101486cc, hardtype=0 installed=0
[0]kdb> g
/ # echo h > /proc/sysrq-trigger
Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc
[0]kdb> g
Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc
[0]kdb> ss
Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to SS trap @ 0xffff800010082ab8
[0]kdb>

Fixes: 44679a4f142b ("arm64: KGDB: Add step debugging support")
Signed-off-by: Wei Li <liwei391@huawei.com>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20200509214159.19680-2-liwei391@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: entry: Tidy up block comments and label numbers  (Will Deacon)

Continually butchering our entry code with CPU errata workarounds has led to it looking a little scruffy. Consistently use the /* */ comment style for multi-line block comments and ensure that small numeric labels use consecutive integers.

No functional change, but the state of things was irritating.

Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: Rework ARM_ERRATUM_1414080 handling  (Marc Zyngier)

The current handling of erratum 1414080 has the side effect that cntkctl_el1 can get changed for both 32 and 64bit tasks. This isn't a problem so far, but if we ever need to mitigate another of these errata on the 64bit side, we'd better keep the messing with cntkctl_el1 local to 32bit tasks.

For that, make sure that on entering the kernel from a 32bit task, userspace access to cntvct gets enabled, and disabled when returning to userspace, while it never gets changed for 64bit tasks.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200706163802.1836732-5-maz@kernel.org
[will: removed branch instructions per Mark's review comments]
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: arch_timer: Disable the compat vdso for cores affected by ARM64_WORKAROUND_1418040  (Marc Zyngier)

ARM64_WORKAROUND_1418040 requires that AArch32 EL0 accesses to the virtual counter register are trapped and emulated by the kernel. This makes the vdso pretty pointless, and in some cases livelock prone.

Provide a workaround entry that limits the vdso to 64bit tasks.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200706163802.1836732-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: arch_timer: Allow a workaround descriptor to disable compat vdso  (Marc Zyngier)

As we are about to disable the vdso for compat tasks in some circumstances, let's allow a workaround descriptor to express exactly that.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200706163802.1836732-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: Introduce a way to disable the 32bit vdso  (Marc Zyngier)

We have a class of errata (grouped under the ARM64_WORKAROUND_1418040 banner) that force the trapping of counter access from 32bit EL0. We would normally disable the whole vdso for such a defect, except that it would disable it for 64bit userspace as well, which is a shame.

Instead, add a new vdso_clock_mode, which signals that the vdso isn't usable for compat tasks. This gets checked in the new vdso_clocksource_ok() helper, now provided for the 32bit vdso.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200706163802.1836732-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  arm64: entry: Fix the typo in the comment of el1_dbg()  (Kevin Hao)

The function name should be local_daif_mask().

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200417103212.45812-2-haokexin@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  drivers/firmware/psci: Assign @err directly in hotplug_tests()  (Gavin Shan)

The return value of down_and_up_cpus() can be assigned to @err directly. With that, the useless assignment of zero to @err can be dropped.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20200630075943.203954-1-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-08  drivers/firmware/psci: Fix memory leakage in alloc_init_cpu_groups()  (Gavin Shan)

The CPU mask (@tmp) should be released on failing to allocate @cpu_groups or any of its elements. Otherwise, memory is leaked, because the CPU mask variable is dynamically allocated when CONFIG_CPUMASK_OFFSTACK is enabled.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20200630075227.199624-1-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
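A sketch of the corrected error path, simplified from the description above (nb_available_cpus is assumed from the surrounding psci_checker code):

    cpumask_var_t tmp;
    cpumask_var_t *cpu_groups;

    if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
            return NULL;

    cpu_groups = kcalloc(nb_available_cpus, sizeof(*cpu_groups), GFP_KERNEL);
    if (!cpu_groups) {
            /* Previously leaked when CONFIG_CPUMASK_OFFSTACK=y. */
            free_cpumask_var(tmp);
            return NULL;
    }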
2020-07-08  KVM: arm64: Fix definition of PAGE_HYP_DEVICE  (Will Deacon)

PAGE_HYP_DEVICE is intended to encode attribute bits for an EL2 stage-1 pte mapping a device. Unfortunately, it includes PROT_DEVICE_nGnRE, which encodes attributes for EL1 stage-1 mappings such as UXN and nG, which are RES0 for EL2, and DBM, which is meaningless as TCR_EL2.HD is not set.

Fix the definition of PAGE_HYP_DEVICE so that it doesn't set RES0 bits at EL2.

Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200708162546.26176-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
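A sketch of the kind of definition implied by the description (the attribute macro names come from arm64's pgtable-prot.h; the exact upstream definition may differ):

    /*
     * EL2 device mapping: default attributes plus the device memory
     * type and execute-never at EL2, without the EL1-only bits
     * (nG, UXN, DBM) carried by PROT_DEVICE_nGnRE.
     */
    #define PAGE_HYP_DEVICE __pgprot(_PROT_DEFAULT | \
                                     PTE_ATTRINDX(MT_DEVICE_nGnRE) | \
                                     PTE_HYP | PTE_HYP_XN)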
2020-07-03  arm64: Add KRYO4XX silver CPU cores to erratum list 1530923 and 1024718  (Sai Prakash Ranjan)

KRYO4XX silver/LITTLE CPU cores with revision r1p0 are affected by errata 1530923 and 1024718, so add them to the respective list. The variant and revision bits are implementation defined and are different from those of the Cortex CPU counterparts on which they are based, i.e. r1p0 is equivalent to rdpe.

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/7013e8a3f857ca7e82863cc9e34a614293d7f80c.1593539394.git.saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-03  arm64: Add KRYO4XX gold CPU cores to erratum list 1463225 and 1418040  (Sai Prakash Ranjan)

KRYO4XX gold/big CPU core revisions r0p0 to r3p1 are affected by errata 1463225 and 1418040, so add them to the respective list. The variant and revision bits are implementation defined and are different from those of the Cortex CPU counterparts on which they are based, i.e. (r0p0 to r3p1) is equivalent to (rcpe to rfpf).

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/83780e80c6377c12ca51b5d53186b61241685e49.1593539394.git.saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-07-03  arm64: Add MIDR value for KRYO4XX gold CPU cores  (Sai Prakash Ranjan)

Add the MIDR value for KRYO4XX gold/big CPU cores which are used in Qualcomm Technologies, Inc. SoCs. This will be used to identify and apply errata applicable to these CPU cores.

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/9093fb82e22441076280ca1b729242ffde80c432.1593539394.git.saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
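A sketch of what such a MIDR definition looks like in cputype.h; the part number shown is an assumption for illustration, not quoted from the patch:

    #define QCOM_CPU_PART_KRYO_4XX_GOLD     0x804

    #define MIDR_QCOM_KRYO_4XX_GOLD \
            MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD)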
2020-07-02  arm64/alternatives: use subsections for replacement sequences  (Ard Biesheuvel)

When building very large kernels, the logic that emits replacement sequences for alternatives fails when relative branches are present in the code that is emitted into the .altinstr_replacement section and patched in at the original site and fixed up. The reason is that the linker will insert veneers if relative branches go out of range, and due to the relative distance of the .altinstr_replacement from the .text section where its branch targets usually live, veneers may be emitted at the end of the .altinstr_replacement section, with the relative branches in the sequence pointed at the veneers instead of the actual target.

The alternatives patching logic will attempt to fix up the branch to point to its original target, which will be the veneer in this case, but given that the patch site is likely to be far away as well, it will be out of range and so patching will fail. There are other cases where these veneers are problematic, e.g. when the target of the branch is in .text while the patch site is in .init.text, in which case putting the replacement sequence inside .text may not help either.

So let's use subsections to emit the replacement code as closely as possible to the patch site, to ensure that veneers are only likely to be emitted if they are required at the patch site as well, in which case they will be in range for the replacement sequence both before and after it is transported to the patch site.

This will prevent alternative sequences in non-init code from being released from memory after boot, but this is tolerable given that the entire section is only 512 KB on an allyesconfig build (which weighs in at 500+ MB for the entire Image). Also, note that modules today carry the replacement sequences in non-init sections as well, and any of those that target init code will be emitted into init sections after this change.

This fixes an early crash when booting an allyesconfig kernel on a system where any of the alternatives sequences containing relative branches are activated at boot (e.g. ARM64_HAS_PAN on TX2).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Link: https://lore.kernel.org/r/20200630081921.13443-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-25  arm64: Add KRYO{3,4}XX silver CPU cores to SSB safelist  (Sai Prakash Ranjan)

QCOM KRYO{3,4}XX silver/LITTLE CPU cores are based on Cortex-A55 and are SSB safe, hence add them to the SSB safelist -> arm64_ssb_cpus[].

Reported-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20200625103123.7240-1-saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-25  arm64: perf: Report the PC value in REGS_ABI_32 mode  (Jiping Ma)

A 32-bit perf querying the registers of a compat task using REGS_ABI_32 will receive zeroes from w15, when it expects to find the PC.

Return the PC value for dwarf register 15 when returning register values for a compat task to perf.

Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jiping Ma <jiping.ma2@windriver.com>
Link: https://lore.kernel.org/r/1589165527-188401-1-git-send-email-jiping.ma2@windriver.com
[will: Shuffled code and added a comment]
Signed-off-by: Will Deacon <will@kernel.org>
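The fix amounts to a special case in perf_reg_value(); a sketch based on the description above, with comments added (not necessarily the exact upstream code):

    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
            if (WARN_ON_ONCE((u32)idx >= PERF_REG_ARM64_MAX))
                    return 0;

            /*
             * Compat (AArch32) tasks expect dwarf register 15 to hold
             * the PC, which aliases pc rather than a GPR on arm64.
             */
            if (compat_user_mode(regs) && idx == 15)
                    return regs->pc;

            return regs->regs[idx];
    }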
2020-06-24  kselftest: arm64: Remove redundant clean target  (Mark Brown)

The arm64 signal tests generate warnings during build since both they and the toplevel lib.mk define a clean target:

Makefile:25: warning: overriding recipe for target 'clean'
../../lib.mk:126: warning: ignoring old recipe for target 'clean'

Since the inclusion of lib.mk is in the signal Makefile, there is no situation where this warning could be avoided, so just remove the redundant clean target.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200624104933.21125-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-24  arm64: kpti: Add KRYO{3,4}XX silver CPU cores to kpti safelist  (Sai Prakash Ranjan)

QCOM KRYO{3,4}XX silver/LITTLE CPU cores are based on Cortex-A55 and are meltdown safe, hence add them to kpti_safe_list[].

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Link: https://lore.kernel.org/r/20200624123406.3472-1-saiprakash.ranjan@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-24  arm64: Don't insert a BTI instruction at inner labels  (Jean-Philippe Brucker)

Some ftrace features are broken since commit 714a8d02ca4d ("arm64: asm: Override SYM_FUNC_START when building the kernel with BTI"). For example the function_graph tracer:

$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
[ 36.107016] WARNING: CPU: 0 PID: 115 at kernel/trace/ftrace.c:2691 ftrace_modify_all_code+0xc8/0x14c

When ftrace_modify_graph_caller() attempts to write a branch at ftrace_graph_call, it finds the "BTI J" instruction inserted by SYM_INNER_LABEL() instead of a NOP, and aborts.

It turns out we don't currently need the BTI landing pads inserted by SYM_INNER_LABEL:

* ftrace_call and ftrace_graph_call are only used for runtime patching of the active tracer. The patched code is not reached from a branch.

* install_el2_stub is reached from a CBZ instruction, which doesn't change PSTATE.BTYPE.

* __guest_exit is reached from B instructions in the hyp-entry vectors, which aren't subject to BTI checks either.

Remove the BTI annotation from SYM_INNER_LABEL.

Fixes: 714a8d02ca4d ("arm64: asm: Override SYM_FUNC_START when building the kernel with BTI")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200624112253.1602786-1-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-24  arm64: vdso: Don't use gcc plugins for building vgettimeofday.c  (Alexander Popov)

Don't use gcc plugins for building arch/arm64/kernel/vdso/vgettimeofday.c to avoid unneeded instrumentation.

Signed-off-by: Alexander Popov <alex.popov@linux.com>
Link: https://lore.kernel.org/r/20200624123330.83226-4-alex.popov@linux.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-24  arm64: vdso: Only pass --no-eh-frame-hdr when linker supports it  (Will Deacon)

Commit 87676cfca141 ("arm64: vdso: Disable dwarf unwinding through the sigreturn trampoline") unconditionally passes the '--no-eh-frame-hdr' option to the linker when building the native vDSO in an attempt to prevent generation of the .eh_frame_hdr section, the presence of which has been implicated in segfaults originating from the libgcc unwinder.

Unfortunately, not all versions of binutils support this option, which has been shown to cause build failures in linux-next:

|   CALL    scripts/atomic/check-atomics.sh
|   CALL    scripts/checksyscalls.sh
|   LD      arch/arm64/kernel/vdso/vdso.so.dbg
| ld: unrecognized option '--no-eh-frame-hdr'
| ld: use the --help option for usage information
| arch/arm64/kernel/vdso/Makefile:64: recipe for target
| 'arch/arm64/kernel/vdso/vdso.so.dbg' failed
| make[1]: *** [arch/arm64/kernel/vdso/vdso.so.dbg] Error 1
| arch/arm64/Makefile:175: recipe for target 'vdso_prepare' failed
| make: *** [vdso_prepare] Error 2

Only link the vDSO with '--no-eh-frame-hdr' when the linker supports it. If we end up with the section due to linker defaults, the absence of CFI information in the sigreturn trampoline will prevent the unwinder from breaking.

Link: https://lore.kernel.org/r/7a7e31a8-9a7b-2428-ad83-2264f20bdc2d@hisilicon.com
Fixes: 87676cfca141 ("arm64: vdso: Disable dwarf unwinding through the sigreturn trampoline")
Reported-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-23  arm64: Depend on newer binutils when building PAC  (Mark Brown)

Versions of binutils prior to 2.33.1 don't understand the ELF notes that are added by modern compilers to indicate the PAC and BTI options used to build the code. This causes them to emit large numbers of warnings in the form:

aarch64-linux-gnu-nm: warning: .tmp_vmlinux.kallsyms2: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000

during the kernel build, which is currently causing quite a bit of disruption for automated build testing using clang.

In commit 15cd0e675f3f76b ("arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch") we added a dependency on binutils to avoid this issue when building with versions of GCC that emit the notes, but did not do so for clang, as it was believed that the existing check for .cfi_negate_ra_state already required a new enough binutils. This does not appear to be the case for some versions of binutils (e.g. the binutils in Debian 10), so instead refactor so that we require a new enough GNU binutils in all cases other than when we are using an old GCC version that does not emit notes.

Other, more exotic combinations of tools, such as using clang, lld and gas together, may have further problems. Rather than adding further version checks, the most robust thing looks to be to simply test that we can build cleanly with the configured tools, but that will require more review and discussion, so do this for now to address the immediate problem disrupting build testing.

Reported-by: KernelCI <bot@kernelci.org>
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1054
Link: https://lore.kernel.org/r/20200619123550.48098-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-23  arm64: compat: Remove 32-bit sigreturn code from the vDSO  (Will Deacon)

The sigreturn code in the compat vDSO is unused. Remove it.

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-23  arm64: compat: Always use sigpage for sigreturn trampoline  (Will Deacon)

The 32-bit sigreturn trampoline in the compat sigpage matches the binary representation of the arch/arm/ sigpage exactly. This is important for debuggers (e.g. GDB) and unwinders (e.g. libunwind), since they rely on matching the instruction sequence in order to identify that they are unwinding through a signal. The same cannot be said for the sigreturn trampoline in the compat vDSO, which defeats the unwinder heuristics and instead attempts to use unwind directives for the unwinding. This is in contrast to arch/arm/, which never uses the vDSO for sigreturn.

Ensure compatibility with arch/arm/ and existing unwinders by always using the sigpage for the sigreturn trampoline, regardless of the presence of the compat vDSO.

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-23  arm64: compat: Allow 32-bit vdso and sigpage to co-exist  (Will Deacon)

In preparation for removing the signal trampoline from the compat vDSO, allow the sigpage and the compat vDSO to co-exist.

For the moment, the vDSO signal trampoline will still be used when built. Subsequent patches will move to the sigpage consistently.

Acked-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-23  arm64: vdso: Disable dwarf unwinding through the sigreturn trampoline  (Will Deacon)

Commit 7e9f5e6629f6 ("arm64: vdso: Add --eh-frame-hdr to ldflags") results in a .eh_frame_hdr section for the vDSO, which in turn causes the libgcc unwinder to unwind out of signal handlers using the .eh_frame information populated by our .cfi directives. In conjunction with a4eb355a3fda ("arm64: vdso: Fix CFI directives in sigreturn trampoline"), this has been shown to cause segmentation faults originating from within the unwinder during thread cancellation:

| Thread 14 "virtio-net-rx" received signal SIGSEGV, Segmentation fault.
| 0x0000000000435e24 in uw_frame_state_for ()
| (gdb) bt
| #0 0x0000000000435e24 in uw_frame_state_for ()
| #1 0x0000000000436e88 in _Unwind_ForcedUnwind_Phase2 ()
| #2 0x00000000004374d8 in _Unwind_ForcedUnwind ()
| #3 0x0000000000428400 in __pthread_unwind (buf=<optimized out>) at unwind.c:121
| #4 0x0000000000429808 in __do_cancel () at ./pthreadP.h:304
| #5 sigcancel_handler (sig=32, si=0xffff33c743f0, ctx=<optimized out>) at nptl-init.c:200
| #6 sigcancel_handler (sig=<optimized out>, si=0xffff33c743f0, ctx=<optimized out>) at nptl-init.c:165
| #7 <signal handler called>
| #8 futex_wait_cancelable (private=0, expected=0, futex_word=0x3890b708) at ../sysdeps/unix/sysv/linux/futex-internal.h:88

After considerable bashing of heads, it appears that our CFI directives for unwinding out of the sigreturn trampoline are only processed by libgcc when both a .eh_frame_hdr section is present *and* the mysterious NOP is covered by an entry in .eh_frame. With both of these now in place, it has highlighted that our CFI directives are not comprehensive enough to restore the stack pointer of the interrupted context. This results in libgcc falling back to an arm64-specific unwinder after computing a bogus PC value from the unwind tables. The unwinder promptly dereferences this bogus address in an attempt to see if the pointed-to instruction sequence looks like the sigreturn trampoline.

Restore the old unwind behaviour, which relied solely on heuristics in the unwinder, by removing the .eh_frame_hdr section from the vDSO and commenting out the insufficient CFI directives for now. Add comments to explain the current, miserable state of affairs.

Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Cc: Szabolcs Nagy <szabolcs.nagy@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-18  arm64: hw_breakpoint: Don't invoke overflow handler on uaccess watchpoints  (Will Deacon)

Unprivileged memory accesses generated by the so-called "translated" instructions (e.g. STTR) at EL1 can cause EL0 watchpoints to fire unexpectedly if kernel debugging is enabled. In such cases, the hw_breakpoint logic will invoke the user overflow handler, which will typically raise a SIGTRAP back to the current task. This is futile when returning back to the kernel because (a) the signal won't have been delivered and (b) userspace can't handle the thing anyway.

Avoid invoking the user overflow handler for watchpoints triggered by kernel uaccess routines, and instead single-step over the faulting instruction as we would if no overflow handler had been installed.

(Fixes tag identifies the introduction of unprivileged memory accesses, which exposed this latent bug in the hw_breakpoint code)

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Fixes: 57f4959bad0a ("arm64: kernel: Add support for User Access Override")
Reported-by: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-18  arm64: kexec_file: Use struct_size() in kmalloc()  (Gustavo A. R. Silva)

Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes.

This code was detected with the help of Coccinelle and audited and fixed manually.

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20200617213407.GA1385@embeddedor
Signed-off-by: Will Deacon <will@kernel.org>
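For example, with a hypothetical flexible-array struct (not the actual kexec_file types):

    struct seg_list {
            unsigned long nr;
            struct kexec_segment segs[];
    };

    /* Open-coded size calculation, prone to type mistakes: */
    list = kmalloc(sizeof(*list) + nr * sizeof(struct kexec_segment),
                   GFP_KERNEL);

    /*
     * struct_size() derives the element type from 'segs' itself and
     * saturates on arithmetic overflow:
     */
    list = kmalloc(struct_size(list, segs, nr), GFP_KERNEL);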
2020-06-18  arm64: mm: reserve hugetlb CMA after numa_init  (Barry Song)

hugetlb_cma_reserve() is called at the wrong place: numa_init() has not been done yet, so all reserved memory will be located at node 0.

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200617215828.25296-1-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
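The ordering fix, sketched; the call site and the gigantic-page order argument are illustrative of arm64's mm/init.c, not quoted from the patch:

    arm64_numa_init();

    /*
     * Must come after NUMA init so the per-node CMA areas land on
     * the right nodes instead of all on node 0.
     */
    #ifdef CONFIG_ARM64_4K_PAGES
    hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
    #endif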
2020-06-17  arm64: bti: Require clang >= 10.0.1 for in-kernel BTI support  (Will Deacon)

Unfortunately, most versions of clang that support BTI are capable of miscompiling the kernel when converting a switch statement into a jump table. As an example, attempting to spawn a KVM guest results in a panic:

[ 56.253312] Kernel panic - not syncing: bad mode
[ 56.253834] CPU: 0 PID: 279 Comm: lkvm Not tainted 5.8.0-rc1 #2
[ 56.254225] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 56.254712] Call trace:
[ 56.254952]  dump_backtrace+0x0/0x1d4
[ 56.255305]  show_stack+0x1c/0x28
[ 56.255647]  dump_stack+0xc4/0x128
[ 56.255905]  panic+0x16c/0x35c
[ 56.256146]  bad_el0_sync+0x0/0x58
[ 56.256403]  el1_sync_handler+0xb4/0xe0
[ 56.256674]  el1_sync+0x7c/0x100
[ 56.256928]  kvm_vm_ioctl_check_extension_generic+0x74/0x98
[ 56.257286]  __arm64_sys_ioctl+0x94/0xcc
[ 56.257569]  el0_svc_common+0x9c/0x150
[ 56.257836]  do_el0_svc+0x84/0x90
[ 56.258083]  el0_sync_handler+0xf8/0x298
[ 56.258361]  el0_sync+0x158/0x180

This is because the switch in kvm_vm_ioctl_check_extension_generic() is executed as an indirect branch to tail-call through a jump table:

ffff800010032dc8: 3869694c  ldrb  w12, [x10, x9]
ffff800010032dcc: 8b0c096b  add   x11, x11, x12, lsl #2
ffff800010032dd0: d61f0160  br    x11

However, where the target case uses the stack, the landing pad is elided due to the presence of a paciasp instruction:

ffff800010032e14: d503233f  paciasp
ffff800010032e18: a9bf7bfd  stp   x29, x30, [sp, #-16]!
ffff800010032e1c: 910003fd  mov   x29, sp
ffff800010032e20: aa0803e0  mov   x0, x8
ffff800010032e24: 940017c0  bl    ffff800010038d24 <kvm_vm_ioctl_check_extension>
ffff800010032e28: 93407c00  sxtw  x0, w0
ffff800010032e2c: a8c17bfd  ldp   x29, x30, [sp], #16
ffff800010032e30: d50323bf  autiasp
ffff800010032e34: d65f03c0  ret

Unfortunately, this results in a fatal exception because paciasp is compatible only with branch-and-link (call) instructions and not simple indirect branches.

A fix is being merged into Clang 10.0.1 so that a 'bti j' instruction is emitted as an explicit landing pad in this situation. Make in-kernel BTI depend on that compiler version when building with clang.

Cc: Tom Stellard <tstellar@redhat.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20200615105524.GA2694@willie-the-truck
Link: https://lore.kernel.org/r/20200616183630.2445-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-16  arm64: sve: Fix build failure when ARM64_SVE=y and SYSCTL=n  (Will Deacon)

When I squashed the 'allnoconfig' compiler warning about the set_sve_default_vl() function being defined but not used in commit 1e570f512cbd ("arm64/sve: Eliminate data races on sve_default_vl"), I accidentally broke the build for configs where ARM64_SVE is enabled, but SYSCTL is not.

Fix this by only compiling the SVE sysctl support if both CONFIG_ARM64_SVE=y and CONFIG_SYSCTL=y.

Cc: Dave Martin <Dave.Martin@arm.com>
Reported-by: Qian Cai <cai@lca.pw>
Link: https://lore.kernel.org/r/20200616131808.GA1040@lca.pw
Signed-off-by: Will Deacon <will@kernel.org>
2020-06-16  arm64: pgtable: Clear the GP bit for non-executable kernel pages  (Will Deacon)

Commit cca98e9f8b5e ("mm: enforce that vmap can't map pages executable") introduced 'pgprot_nx(prot)' for arm64, but collided silently with the BTI support during the merge window, which endeavours to clear the GP bit for non-executable kernel mappings in set_memory_nx(). For consistency between the two APIs, clear the GP bit in pgprot_nx().

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200615154642.3579-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
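The resulting definition is roughly of this shape; a sketch using macro names from arm64's pgtable headers (the exact upstream definition may differ):

    /* Knock out the BTI guarded-page bit as well as setting PXN. */
    #define pgprot_nx(prot) \
            __pgprot_modify(prot, PTE_MAYBE_GP, PTE_PXN)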
2020-06-15  arm64: mm: reset address tag set by kasan sw tagging  (Shyam Thombre)

KASAN sw tagging sets a random 8-bit tag in the top byte of the pointer returned by the memory allocating functions. So, for functions unaware of this change, the top 8 bits of the address must be reset, which is done by arch_kasan_reset_tag().

Signed-off-by: Shyam Thombre <sthombre@codeaurora.org>
Link: https://lore.kernel.org/r/1591787384-5823-1-git-send-email-sthombre@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
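Conceptually, resetting the tag means forcing the top byte back to the canonical kernel value of 0xff; a sketch, not the exact arm64 helper:

    static inline void *kasan_reset_tag_sketch(const void *addr)
    {
            /* Kernel addresses canonically carry 0xff in bits 63..56. */
            return (void *)((u64)addr | (0xffUL << 56));
    }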
2020-06-15  arm64: traps: Dump registers prior to panic() in bad_mode()  (Will Deacon)

When panicking due to an unknown/unhandled exception at EL1, dump the registers of the faulting context so that it's easier to figure out what went wrong. In particular, this makes it a lot easier to debug in-kernel BTI failures, since it pretty-prints PSTATE.BTYPE in the crash log.

Cc: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200615113458.2884-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>