path: root/arch/powerpc/platforms
Age  Commit message  Author
2019-11-13  powerpc/pseries/cmm: Simulation mode  (David Hildenbrand)
Let's allow testing the implementation without needing HW support. When "simulate=1" is specified when loading the module, we bypass all HW checks and HW calls. The sysfs file "simulate_loan_target_kb" can be used to simulate HW requests. The simulation mode can be activated using: modprobe cmm debug=1 simulate=1 And the requested loan target can be changed using: echo X > /sys/devices/system/cmm/cmm0/simulate_loan_target_kb Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-11-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Switch to balloon_page_alloc()  (David Hildenbrand)
balloon_page_alloc() will use GFP_HIGHUSER_MOVABLE in case we have CONFIG_BALLOON_COMPACTION. This is now possible, as balloon pages are movable with CONFIG_BALLOON_COMPACTION. Without CONFIG_BALLOON_COMPACTION, GFP_HIGHUSER is used. Note that apart from that, balloon_page_alloc() uses the following flags: __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN And current code used: GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC GFP_HIGHUSER/GFP_HIGHUSER_MOVABLE include __GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM GFP_NOIO is __GFP_RECLAIM. With CONFIG_BALLOON_COMPACTION, we essentially add: __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM | __GFP_MOVABLE Without CONFIG_BALLOON_COMPACTION, we essentially add: __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM I assume this is fine, as this is what all other balloon compaction users use. If it turns out to be a problem, we could add __GFP_MOVABLE manually if we have CONFIG_BALLOON_COMPACTION. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-10-david@redhat.com
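For illustration, the allocation change boils down to something like the following (a sketch, not the verbatim diff; balloon_page_alloc() takes no arguments and picks the GFP mask internally):

    struct page *page;

    /* before: explicit mask passed by the cmm driver */
    page = alloc_page(GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC);

    /* after: the balloon core chooses GFP_HIGHUSER(_MOVABLE) plus
     * __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN */
    page = balloon_page_alloc();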
2019-11-13  powerpc/pseries/cmm: Implement balloon compaction  (David Hildenbrand)
We can now get rid of the cmm_lock and completely rely on the balloon compaction internals, which now also manage the page list and the lock. Inflated/"loaned" pages are now movable. Memory blocks that contain such pages can get offlined. Also, all such pages will be marked PageOffline() and can therefore be excluded in memory dumps using recent versions of makedumpfile. Don't switch to balloon_page_alloc() yet (due to the GFP_NOIO). Will do that separately to discuss this change in detail. Signed-off-by: David Hildenbrand <david@redhat.com> [mpe: Add isolated_pages-- in cmm_migratepage() as suggested by David] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-9-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Convert loaned_pages to an atomic_long_t  (David Hildenbrand)
When switching to balloon compaction, we want to drop the cmm_lock and completely rely on the balloon compaction list lock internally. loaned_pages is currently protected under the cmm_lock. Note: Right now cmm_alloc_pages() and cmm_free_pages() can be called at the same time, e.g., via the thread and a concurrent OOM notifier. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-8-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Rip out memory isolate notifier  (David Hildenbrand)
The memory isolate notifier was added to allow offlining memory blocks that contain inflated/"loaned" pages. We can achieve the same using the balloon compaction framework. Get rid of the memory isolate notifier. Also, we can get rid of cmm_mem_going_offline(), as we will never reach that code path now when we have allocated memory in the balloon (allocated pages are unmovable and will no longer be special-cased using the memory isolation notifier). Leave the memory notifier in place, so we can still back off in case memory gets offlined. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-7-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Use adjust_managed_page_count() instead of totalram_pages_*  (David Hildenbrand)
adjust_managed_page_count() performs a totalram_pages_add(), but also adjusts the managed pages of the zone. Let's use that instead, similar to virtio-balloon. Use it before freeing a page. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-6-david@redhat.com
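The resulting accounting pattern mirrors virtio-balloon; an illustrative sketch only (__free_page() stands in for the driver's actual free path):

    /* page loaned to the hypervisor: it is no longer managed by the zone */
    adjust_managed_page_count(page, -1);

    /* page returned from the hypervisor: re-account it before freeing it */
    adjust_managed_page_count(page, 1);
    __free_page(page);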
2019-11-13  powerpc/pseries/cmm: Drop page array  (David Hildenbrand)
We can simply store the pages in a list (page->lru), no need for a separate data structure (+ complicated handling). This is how most other balloon drivers store allocated pages without additional tracking data. For the notifiers, use page_to_pfn() to check if a page is in the applicable range. Use page_to_phys() in plpar_page_set_loaned() and plpar_page_set_active() (I assume due to the __pa() that's the right thing to do). Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-5-david@redhat.com
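Schematically, the idiom looks like this (the helper and list names below are made up for illustration; only the page->lru / pfn pattern is the point):

    static LIST_HEAD(cmm_page_list);

    /* the page itself provides the list linkage; no separate array needed */
    static void cmm_track_page(struct page *page)
    {
            list_add(&page->lru, &cmm_page_list);
    }

    /* notifier range check based on the pfn */
    static bool cmm_page_in_range(struct page *page, unsigned long start_pfn,
                                  unsigned long nr_pages)
    {
            unsigned long pfn = page_to_pfn(page);

            return pfn >= start_pfn && pfn < start_pfn + nr_pages;
    }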
2019-11-13  powerpc/pseries/cmm: Cleanup rc handling in cmm_init()  (David Hildenbrand)
No need to initialize rc. Also, let's return 0 directly when succeeding. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-4-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Report errors when registering notifiers fails  (David Hildenbrand)
If we don't set the rc, we will return "0", making it look like we succeeded. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-3-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Implement release() function for sysfs device  (David Hildenbrand)
When unloading the module, one gets ------------[ cut here ]------------ Device 'cmm0' does not have a release() function, it is broken and must be fixed. See Documentation/kobject.txt. WARNING: CPU: 0 PID: 19308 at drivers/base/core.c:1244 .device_release+0xcc/0xf0 ... We only have one static fake device. There is nothing to do when releasing the device (via cmm_exit()). Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031142933.10779-2-david@redhat.com
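The fix amounts to giving the static device an empty release callback wired up via the device's .release member; roughly (the function name here is illustrative):

    static void cmm_release_device(struct device *dev)
    {
            /* only one statically allocated device; nothing to free here,
             * teardown happens in cmm_exit() */
    }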
2019-11-13  powerpc/pseries: Add cpu DLPAR support for drc-info property  (Tyrel Datwyler)
Older firmwares provided information about Dynamic Reconfig Connectors (DRC) through several device tree properties, namely ibm,drc-types, ibm,drc-indexes, ibm,drc-names, and ibm,drc-power-domains. New firmwares have the ability to present this same information in a much condensed format through a device tree property called ibm,drc-info. The existing cpu DLPAR hotplug code only understands the older DRC property format when validating the drc-index of a cpu during a hotplug add. This updates those code paths to instead use the ibm,drc-info property, when present, for validation. Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1573449697-5448-4-git-send-email-tyreld@linux.ibm.com
2019-11-13  powerpc/pseries: Fix drc-info mappings of logical cpus to drc-index  (Tyrel Datwyler)
There are a couple of subtle errors in the mapping between cpu-ids and a cpu's associated drc-index when using the new ibm,drc-info property. The first is that while drc-info may have been a supported firmware feature at boot, it is possible we have migrated to a CEC with older firmware that doesn't support the ibm,drc-info property. In that case the device tree would have been updated after migration to remove the ibm,drc-info property and replace it with the older style ibm,drc-* properties for types, indexes, names, and power-domains. PAPR even goes as far as dictating that if we advertise support for drc-info, we are capable of supporting either property type at runtime. The second is that the first value of the ibm,drc-info property is the int-encoded count of drc-info entries. As such, the "value" returned by of_prop_next_u32() is pointing at that count, and not at the first element of the first drc-info entry as is expected by the of_read_drc_info_cell() helper. Fix the first by ignoring the DRC-INFO firmware feature and instead testing directly for ibm,drc-info, and then falling back to the old style ibm,drc-indexes in the case it doesn't exist. Fix the second by incrementing value to the next element prior to parsing drc-info entries. Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1573449697-5448-3-git-send-email-tyreld@linux.ibm.com
2019-11-13  powerpc/pseries: Fix bad drc_index_start value parsing of drc-info entry  (Tyrel Datwyler)
The ibm,drc-info property is an array property that contains drc-info entries such that each entry is made up of 2 string-encoded elements followed by 5 int-encoded elements. The of_read_drc_info_cell() helper contains comments that correctly name the expected elements and their encoding. However, the usage of of_prop_next_string() and of_prop_next_u32() introduced a subtle skip of the first u32. This is a result of of_prop_next_string() returning a pointer to the next property value, which is not a string, but actually a (__be32 *). As a result, the following call to of_prop_next_u32() passes over the current int-encoded value and wrongly stores the next one instead. Simply endian swap the current value in place after reading the first two string values. The remaining int-encoded values can then be read correctly using of_prop_next_u32(). Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1573449697-5448-2-git-send-email-tyreld@linux.ibm.com
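In other words, of_read_drc_info_cell() should consume the first int-encoded cell in place rather than skipping it; a hedged reconstruction of the idea (not the verbatim patch):

    u32 drc_index_start;

    /* after the two of_prop_next_string() calls, 'value' already points at
     * the first int-encoded cell; endian swap it in place instead of
     * letting of_prop_next_u32() step over it */
    drc_index_start = be32_to_cpu(*(const __be32 *)value);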
2019-11-13  Merge branch 'topic/secureboot' into next  (Michael Ellerman)
Merge the secureboot support, as well as the IMA changes needed to support it. From Nayna's cover letter: In order to verify the OS kernel on PowerNV systems, secure boot requires X.509 certificates trusted by the platform. These are stored in secure variables controlled by OPAL, called OPAL secure variables. In order to enable users to manage the keys, the secure variables need to be exposed to userspace. OPAL provides the runtime services for the kernel to be able to access the secure variables. This patchset defines the kernel interface for the OPAL APIs. These APIs are used by the hooks, which load these variables to the keyring and expose them to the userspace for reading/writing. Overall, this patchset adds the following support: * expose secure variables to the kernel via OPAL Runtime API interface * expose secure variables to the userspace via kernel sysfs interface * load kernel verification and revocation keys to .platform and .blacklist keyring respectively. The secure variables can be read/written using simple linux utilities cat/hexdump. For example: Path to the secure variables is: /sys/firmware/secvar/vars Each secure variable is listed as directory. $ ls -l total 0 drwxr-xr-x. 2 root root 0 Aug 20 21:20 db drwxr-xr-x. 2 root root 0 Aug 20 21:20 KEK drwxr-xr-x. 2 root root 0 Aug 20 21:20 PK The attributes of each of the secure variables are (for example: PK): $ ls -l total 0 -r--r--r--. 1 root root 4096 Oct 1 15:10 data -r--r--r--. 1 root root 65536 Oct 1 15:10 size --w-------. 1 root root 4096 Oct 1 15:12 update The "data" is used to read the existing variable value using hexdump. The data is stored in ESL format. The "update" is used to write a new value using cat. The update is to be submitted as AUTH file.
2019-11-13  powerpc/powernv: Add OPAL API interface to access secure variable  (Nayna Jain)
The X.509 certificates trusted by the platform and required to secure boot the OS kernel are wrapped in secure variables, which are controlled by OPAL. This patch adds firmware/kernel interface to read and write OPAL secure variables based on the unique key. This support can be enabled using CONFIG_OPAL_SECVAR. Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com> Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Signed-off-by: Eric Richter <erichte@linux.ibm.com> [mpe: Make secvar_ops __ro_after_init, only build opal-secvar.c if PPC_SECURE_BOOT=y] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1573441836-3632-2-git-send-email-nayna@linux.ibm.com
2019-11-05  powerpc/pseries: safely roll back failed DLPAR cpu add  (Nathan Lynch)
dlpar_online_cpu() attempts to online all threads of a core that has been added to an LPAR. If onlining a non-primary thread fails (e.g. due to an allocation failure), the core is left with at least one thread online. dlpar_cpu_add() attempts to roll back the whole operation, releasing the core back to the platform. However, since some threads of the core being removed are still online, the BUG_ON(cpu_online(cpu)) in pseries_remove_processor() strikes: LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries Modules linked in: CPU: 3 PID: 8587 Comm: drmgr Not tainted 5.3.0-rc2-00190-g9b123d1ea237-dirty #46 NIP: c0000000000eeb2c LR: c0000000000eeac4 CTR: c0000000000ee9e0 REGS: c0000001f745b6c0 TRAP: 0700 Not tainted (5.3.0-rc2-00190-g9b123d1ea237-dirty) MSR: 800000010282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 44002448 XER: 00000000 CFAR: c00000000195d718 IRQMASK: 0 GPR00: c0000000000eeac4 c0000001f745b950 c0000000032f6200 0000000000000008 GPR04: 0000000000000008 c000000003349c78 0000000000000040 00000000000001ff GPR08: 0000000000000008 0000000000000000 0000000000000001 0007ffffffffffff GPR12: 0000000084002844 c00000001ecacb80 0000000000000000 0000000000000000 GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000008 GPR24: c000000003349ee0 c00000000334a2e4 c0000000fca4d7a8 c000000001d20048 GPR28: 0000000000000001 ffffffffffffffff ffffffffffffffff c0000000fca4d7c4 NIP [c0000000000eeb2c] pseries_smp_notifier+0x14c/0x2e0 LR [c0000000000eeac4] pseries_smp_notifier+0xe4/0x2e0 Call Trace: [c0000001f745b950] [c0000000000eeac4] pseries_smp_notifier+0xe4/0x2e0 (unreliable) [c0000001f745ba10] [c0000000001ac774] notifier_call_chain+0xb4/0x190 [c0000001f745bab0] [c0000000001ad62c] blocking_notifier_call_chain+0x7c/0xb0 [c0000001f745baf0] [c00000000167bda0] of_detach_node+0xc0/0x110 [c0000001f745bb50] [c0000000000e7ae4] dlpar_detach_node+0x64/0xa0 [c0000001f745bb80] [c0000000000edefc] dlpar_cpu_add+0x31c/0x360 [c0000001f745bc10] [c0000000000ee980] dlpar_cpu_probe+0x50/0xb0 [c0000001f745bc50] [c00000000002cf70] arch_cpu_probe+0x40/0x70 [c0000001f745bc70] [c000000000ccd808] cpu_probe_store+0x48/0x80 [c0000001f745bcb0] [c000000000cbcef8] dev_attr_store+0x38/0x60 [c0000001f745bcd0] [c00000000059c980] sysfs_kf_write+0x70/0xb0 [c0000001f745bd10] [c00000000059afb8] kernfs_fop_write+0xf8/0x280 [c0000001f745bd60] [c0000000004b437c] __vfs_write+0x3c/0x70 [c0000001f745bd80] [c0000000004b8710] vfs_write+0xd0/0x220 [c0000001f745bdd0] [c0000000004b8acc] ksys_write+0x7c/0x140 [c0000001f745be20] [c00000000000bbd8] system_call+0x5c/0x68 Move dlpar_offline_cpu() up in the file so that dlpar_online_cpu() can use it to re-offline any threads that have been onlined when an error is encountered. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Fixes: e666ae0b10aa ("powerpc/pseries: Update CPU hotplug error recovery") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191016183611.10867-3-nathanl@linux.ibm.com
2019-11-05  powerpc/pseries: address checkpatch warnings in dlpar_offline_cpu  (Nathan Lynch)
Remove some stray blank lines, convert a printk to pr_warn, and address a line length violation. One functional change: use WARN_ON instead of BUG_ON in case H_PROD of a ceded thread yields an unexpected result from the platform. We can expect this code path to get uninterruptibly stuck in __cpu_die() if this happens, but that's more desirable than crashing. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Fixes: b6db63d1a7f0 ("pseries/pseries: Add code to online/offline CPUs of a DLPAR node") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191016183611.10867-2-nathanl@linux.ibm.com
2019-11-04  Merge branch 'fixes' into next  (Michael Ellerman)
Merge our fixes branch, primarily to bring in the powernv CPU hotplug warning fix.
2019-10-29  powerpc/powernv: Fix CPU idle to be called with IRQs disabled  (Nicholas Piggin)
Commit e78a7614f3876 ("idle: Prevent late-arriving interrupts from disrupting offline") changes arch_cpu_idle_dead to be called with interrupts disabled, which triggers the WARN in pnv_smp_cpu_kill_self. Fix this by fixing up irq_happened after hard disabling, rather than requiring there are no pending interrupts, similarly to what was done until commit 2525db04d1cc5 ("powerpc/powernv: Simplify lazy IRQ handling in CPU offline"). Fixes: e78a7614f3876 ("idle: Prevent late-arriving interrupts from disrupting offline") Reported-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Add unexpected_mask rather than checking for known bad values, change the WARN_ON() to a WARN_ON_ONCE()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191022115814.22456-1-npiggin@gmail.com
2019-10-28  powerpc/book3s64/hash: Use secondary hash for bolted mapping if the primary is full  (Aneesh Kumar K.V)
With a bolted hash page table entry, the kernel currently only uses the primary hash group when inserting the hash page table entry. In the rare case where the kernel finds all 8 primary hash slots occupied by bolted entries, this can result in a hash page table insert failure for bolted entries. Avoid this by using the secondary hash group. This is different from what the kernel does for non-bolted mappings. With non-bolted entries, the kernel will try the secondary group before removing an existing entry from the hash page table group. With bolted entries, prefer the primary hash group, and hence try to insert the page table entry by removing a slot from the primary group before trying the secondary hash group. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024093542.29777-3-aneesh.kumar@linux.ibm.com
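Schematically, the bolted insert path gains a secondary-group fallback along these lines (a rough sketch; mmu_hash_ops, HPTE_V_SECONDARY, HPTES_PER_GROUP and htab_hash_mask are names from the hash MMU code, not quoted from this patch):

    slot = mmu_hash_ops.hpte_insert(hpte_group, vpn, paddr, tprot,
                                    HPTE_V_BOLTED, psize, psize, ssize);
    if (slot == -1) {
            /* primary group full: compute the secondary group and retry
             * there before resorting to evicting a primary slot */
            hpte_group = (~hash & htab_hash_mask) * HPTES_PER_GROUP;
            slot = mmu_hash_ops.hpte_insert(hpte_group, vpn, paddr, tprot,
                                            HPTE_V_BOLTED | HPTE_V_SECONDARY,
                                            psize, psize, ssize);
    }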
2019-10-28  powerpc/pseries: Don't opencode HPTE_V_BOLTED  (Aneesh Kumar K.V)
No functional change in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024093542.29777-1-aneesh.kumar@linux.ibm.com
2019-10-25  powerpc/powernv/eeh: Fix oops when probing cxl devices  (Frederic Barrat)
Recent cleanup in the way EEH support is added to a device causes a kernel oops when the cxl driver probes a device and creates virtual devices discovered on the FPGA: BUG: Kernel NULL pointer dereference at 0x000000a0 Faulting instruction address: 0xc000000000048070 Oops: Kernel access of bad area, sig: 7 [#1] ... NIP eeh_add_device_late.part.9+0x50/0x1e0 LR eeh_add_device_late.part.9+0x3c/0x1e0 Call Trace: _dev_info+0x5c/0x6c (unreliable) pnv_pcibios_bus_add_device+0x60/0xb0 pcibios_bus_add_device+0x40/0x60 pci_bus_add_device+0x30/0x100 pci_bus_add_devices+0x64/0xd0 cxl_pci_vphb_add+0xe0/0x130 [cxl] cxl_probe+0x504/0x5b0 [cxl] local_pci_probe+0x6c/0x110 work_for_cpu_fn+0x38/0x60 The root cause is that those cxl virtual devices don't have a representation in the device tree and therefore no associated pci_dn structure. In eeh_add_device_late(), pdn is NULL, so edev is NULL and we oops. We never had explicit support for EEH for those virtual devices. Instead, EEH events are reported to the (real) pci device and handled by the cxl driver. Which can then forward to the virtual devices and handle dependencies. The fact that we try adding EEH support for the virtual devices is new and a side-effect of the recent cleanup. This patch fixes it by skipping adding EEH support on powernv for devices which don't have a pci_dn structure. The cxl driver doesn't create virtual devices on pseries so this patch doesn't fix it there intentionally. Fixes: b905f8cdca77 ("powerpc/eeh: EEH for pSeries hot plug") Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191016162833.22509-1-fbarrat@linux.ibm.com
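The fix is essentially an early bail-out when no pci_dn exists for the device; along these lines (a sketch, placement paraphrased; pci_get_pdn() is the existing powerpc helper, not quoted from this patch):

    struct pci_dn *pdn = pci_get_pdn(pdev);

    /* cxl virtual devices have no device-tree node and hence no pci_dn;
     * skip EEH setup for them, EEH events are handled via the real device */
    if (!pdn)
            return;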
2019-10-13  Merge branch 'fixes' into next  (Michael Ellerman)
Merge our fixes branch, to bring in the fixes for the KVM PCR bug and the spufs crash.
2019-10-11  powerpc/powernv: Add queue mechanism for early messages  (Deb McLemore)
When issuing a BMC soft poweroff during IPL, the poweroff can be lost so the machine would not poweroff. This is because opal messages can be received before the opal-power code registered its notifiers. Fix it by buffering messages. If we receive a message and do not yet have a handler for that type, store the message and replay when a handler for that type is registered. Signed-off-by: Deb McLemore <debmc@linux.vnet.ibm.com> [mpe: Single unlock path in opal_message_notifier_register(), tweak comments/formatting and change log.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1526868278-4204-1-git-send-email-debmc@linux.vnet.ibm.com
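Conceptually the buffering looks like this (paraphrased; the list and helper names are illustrative, not necessarily the exact code, and replay happens from opal_message_notifier_register()):

    static LIST_HEAD(unhandled_msg_list);

    struct opal_msg_node {
            struct list_head        list;
            struct opal_msg         msg;
    };

    /* no handler registered for this message type yet: stash the message
     * so it can be replayed once a notifier shows up */
    static void queue_replay_msg(struct opal_msg *msg)
    {
            struct opal_msg_node *msg_node;

            msg_node = kzalloc(sizeof(*msg_node), GFP_ATOMIC);
            if (msg_node) {
                    memcpy(&msg_node->msg, msg, sizeof(*msg));
                    list_add_tail(&msg_node->list, &unhandled_msg_list);
            }
    }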
2019-10-11  spufs: fix a crash in spufs_create_root()  (Emmanuel Nicolet)
The spu_fs_context was not set in fc->fs_private, this caused a crash when accessing ctx->mode in spufs_create_root(). Fixes: d2e0981c3b9a ("vfs: Convert spufs to use the new mount API") Signed-off-by: Emmanuel Nicolet <emmanuel.nicolet@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20191008141342.GA266797@gmail.com
2019-10-10  powerpc/papr_scm: Fix an off-by-one check in papr_scm_meta_{get, set}  (Vaibhav Jain)
A validation check to prevent out-of-bounds read/write inside the functions papr_scm_meta_{get,set}() is off by one, which prevents reads and writes to the last byte of the label area. This bug manifests as a failure to probe a dimm when libnvdimm is unable to read the entire config-area as advertised by ND_CMD_GET_CONFIG_SIZE. This usually happens when there are a large number of namespaces created in the region backed by the dimm and the label-index spans the max possible config-area. An error of the form below is usually reported in the kernel logs: [ 255.293912] nvdimm: probe of nmem0 failed with error -22 The patch fixes these validation checks, thereby letting libnvdimm access the entire config-area. Fixes: 53e80bd042773 ("powerpc/nvdimm: Add support for multibyte read/write for metadata") Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190927062002.3169-1-vaibhav@linux.ibm.com
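Illustratively, the kind of boundary check involved (variable names here are made up):

    /* buggy: rejects an access that ends exactly at the end of the label area */
    if (offset + len >= config_size)
            return -EINVAL;

    /* fixed: only reject accesses that actually run past the label area */
    if (offset + len > config_size)
            return -EINVAL;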
2019-10-09  powerpc/pseries: Remove confusing warning message.  (Laurent Dufour)
Since commit 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate Characteristics"), a warning message is displayed when booting a guest on top of KVM: lpar: arch/powerpc/platforms/pseries/lpar.c pseries_lpar_read_hblkrm_characteristics Error calling get-system-parameter (0xfffffffd) This message is displayed because this hypervisor does not support the H_BLOCK_REMOVE hcall and thus does not expose the corresponding feature. Reading the TLB Block Invalidate Characteristics should not be done if the feature is not exposed. Fixes: 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate Characteristics") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191001132928.72555-1-ldufour@linux.ibm.com
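In effect, the characteristics read is now skipped when the hypervisor does not expose the feature, something like the sketch below (the feature-flag name is assumed from the related H_BLOCK_REMOVE support and may differ):

    if (!firmware_has_feature(FW_FEATURE_BLOCK_REMOVE))
            return;

    pseries_lpar_read_hblkrm_characteristics();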
2019-09-28  Merge tag 'powerpc-5.4-2'  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc fixes from Michael Ellerman: "An assortment of fixes that were either missed by me, or didn't arrive quite in time for the first v5.4 pull. - Most notable is a fix for an issue with tlbie (broadcast TLB invalidation) on Power9, when using the Radix MMU. The tlbie can race with an mtpid (move to PID register, essentially MMU context switch) on another thread of the core, which can cause stores to continue to go to a page after it's unmapped. - A fix in our KVM code to add a missing barrier, the lack of which has been observed to cause missed IPIs and subsequently stuck CPUs in the host. - A change to the way we initialise PCR (Processor Compatibility Register) to make it forward compatible with future CPUs. - On some older PowerVM systems our H_BLOCK_REMOVE support could oops, fix it to detect such systems and fallback to the old invalidation method. - A fix for an oops seen on some machines when using KASAN on 32-bit. - A handful of other minor fixes, and two new selftests. Thanks to: Alistair Popple, Aneesh Kumar K.V, Christophe Leroy, Gustavo Romero, Joel Stanley, Jordan Niethe, Laurent Dufour, Michael Roth, Oliver O'Halloran" * tag 'powerpc-5.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/eeh: Fix eeh eeh_debugfs_break_device() with SRIOV devices powerpc/nvdimm: use H_SCM_QUERY hcall on H_OVERLAP error powerpc/nvdimm: Use HCALL error as the return value selftests/powerpc: Add test case for tlbie vs mtpidr ordering issue powerpc/mm: Fixup tlbie vs mtpidr/mtlpidr ordering issue on POWER9 powerpc/book3s64/radix: Rename CPU_FTR_P9_TLBIE_BUG feature flag powerpc/book3s64/mm: Don't do tlbie fixup for some hardware revisions powerpc/pseries: Call H_BLOCK_REMOVE when supported powerpc/pseries: Read TLB Block Invalidate Characteristics KVM: PPC: Book3S HV: use smp_mb() when setting/clearing host_ipi flag powerpc/mm: Fix an Oops in kasan_mmu_init() powerpc/mm: Add a helper to select PAGE_KERNEL_RO or PAGE_READONLY powerpc/64s: Set reserved PCR bits powerpc: Fix definition of PCR bits to work with old binutils powerpc/book3s64/radix: Remove WARN_ON in destroy_context() powerpc/tm: Add tm-poison test
2019-09-25  powerpc/nvdimm: use H_SCM_QUERY hcall on H_OVERLAP error  (Aneesh Kumar K.V)
Right now we force an unbind of SCM memory at drcindex on H_OVERLAP error. This really slows down operations like kexec where we get the H_OVERLAP error because we don't go through a full hypervisor re init. H_OVERLAP error for a H_SCM_BIND_MEM hcall indicates that SCM memory at drc index is already bound. Since we don't specify a logical memory address for bind hcall, we can use the H_SCM_QUERY hcall to query the already bound logical address. Boot time difference with and without patch is: [ 5.583617] IOMMU table initialized, virtual merging enabled [ 5.603041] papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Retrying bind after unbinding [ 301.514221] papr_scm ibm,persistent-memory:ibm,pmemory@44108001: Retrying bind after unbinding [ 340.057238] hv-24x7: read 1530 catalog entries, created 537 event attrs (0 failures), 275 descs after fix [ 5.101572] IOMMU table initialized, virtual merging enabled [ 5.116984] papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Querying SCM details [ 5.117223] papr_scm ibm,persistent-memory:ibm,pmemory@44108001: Querying SCM details [ 5.120530] hv-24x7: read 1530 catalog entries, created 537 event attrs (0 failures), 275 descs Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190903123452.28620-2-aneesh.kumar@linux.ibm.com
2019-09-25  powerpc/nvdimm: Use HCALL error as the return value  (Aneesh Kumar K.V)
This simplifies the error handling and also enables us to switch to the H_SCM_QUERY hcall in a later patch on H_OVERLAP error. We also do some kernel print formatting fixups in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190903123452.28620-1-aneesh.kumar@linux.ibm.com
2019-09-24  Merge branch 'work.mount3'  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull more mount API conversions from Al Viro: "Assorted conversions of options parsing to new API. gfs2 is probably the most serious one here; the rest is trivial stuff. Other things in what used to be #work.mount are going to wait for the next cycle (and preferably go via git trees of the filesystems involved)" * 'work.mount3' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: gfs2: Convert gfs2 to fs_context vfs: Convert spufs to use the new mount API vfs: Convert hypfs to use the new mount API hypfs: Fix error number left in struct pointer member vfs: Convert functionfs to use the new mount API vfs: Convert bpf to use the new mount API
2019-09-24  powerpc/pseries: Call H_BLOCK_REMOVE when supported  (Laurent Dufour)
Depending on the hardware and the hypervisor, the hcall H_BLOCK_REMOVE may not be able to process all the page sizes for a segment base page size, as reported by the TLB Invalidate Characteristics. For each pair of base segment page size and actual page size, this characteristic tells us the size of the block the hcall supports. In the case, the hcall is not supporting a pair of base segment page size, actual page size, it is returning H_PARAM which leads to a panic like this: kernel BUG at /home/srikar/work/linux.git/arch/powerpc/platforms/pseries/lpar.c:466! Oops: Exception in kernel mode, sig: 5 [#1] BE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries Modules linked in: CPU: 28 PID: 583 Comm: modprobe Not tainted 5.2.0-master #5 NIP: c0000000000be8dc LR: c0000000000be880 CTR: 0000000000000000 REGS: c0000007e77fb130 TRAP: 0700 Not tainted (5.2.0-master) MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI> CR: 42224824 XER: 20000000 CFAR: c0000000000be8fc IRQMASK: 0 GPR00: 0000000022224828 c0000007e77fb3c0 c000000001434d00 0000000000000005 GPR04: 9000000004fa8c00 0000000000000000 0000000000000003 0000000000000001 GPR08: c0000007e77fb450 0000000000000000 0000000000000001 ffffffffffffffff GPR12: c0000007e77fb450 c00000000edfcb80 0000cd7d3ea30000 c0000000016022b0 GPR16: 00000000000000b0 0000cd7d3ea30000 0000000000000001 c080001f04f00105 GPR20: 0000000000000003 0000000000000004 c000000fbeb05f58 c000000001602200 GPR24: 0000000000000000 0000000000000004 8800000000000000 c000000000c5d148 GPR28: c000000000000000 8000000000000000 a000000000000000 c0000007e77fb580 NIP [c0000000000be8dc] .call_block_remove+0x12c/0x220 LR [c0000000000be880] .call_block_remove+0xd0/0x220 Call Trace: 0xc000000fb8c00240 (unreliable) .pSeries_lpar_flush_hash_range+0x578/0x670 .flush_hash_range+0x44/0x100 .__flush_tlb_pending+0x3c/0xc0 .zap_pte_range+0x7ec/0x830 .unmap_page_range+0x3f4/0x540 .unmap_vmas+0x94/0x120 .exit_mmap+0xac/0x1f0 .mmput+0x9c/0x1f0 .do_exit+0x388/0xd60 .do_group_exit+0x54/0x100 .__se_sys_exit_group+0x14/0x20 system_call+0x5c/0x70 Instruction dump: 39400001 38a00000 4800003c 60000000 60420000 7fa9e800 38e00000 419e0014 7d29d278 7d290074 7929d182 69270001 <0b070000> 7d495378 394a0001 7fa93040 The call to H_BLOCK_REMOVE should only be made for the supported pair of base segment page size, actual page size and using the correct maximum block size. Due to the required complexity in do_block_remove() and call_block_remove(), and the fact that currently a block size of 8 is returned by the hypervisor, we are only supporting 8 size block to the H_BLOCK_REMOVE hcall. In order to identify this limitation easily in the code, a local define HBLKR_SUPPORTED_SIZE defining the currently supported block size, and a dedicated checking helper is_supported_hlbkr() are introduced. For regular pages and hugetlb, the assumption is made that the page size is equal to the base page size. For THP the page size is assumed to be 16M. Fixes: ba2dd8a26baa ("powerpc/pseries/mm: call H_BLOCK_REMOVE") Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190920130523.20441-3-ldufour@linux.ibm.com
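Using the names mentioned above, the gating is conceptually as follows (a hedged sketch; the per-size lookup table shape is assumed, with hblkr_size being the table introduced by the characteristics-reading patch):

    #define HBLKR_SUPPORTED_SIZE 8

    static inline bool is_supported_hlbkr(int bpsize, int psize)
    {
            /* only use H_BLOCK_REMOVE when the reported block size for this
             * (base page size, actual page size) pair is the one we handle */
            return hblkr_size[bpsize][psize] == HBLKR_SUPPORTED_SIZE;
    }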
2019-09-24  powerpc/pseries: Read TLB Block Invalidate Characteristics  (Laurent Dufour)
The PAPR document specifies the TLB Block Invalidate Characteristics which tells for each pair of segment base page size, actual page size, the size of the block the hcall H_BLOCK_REMOVE supports. These characteristics are loaded at boot time in a new table hblkr_size. The table is separate from the mmu_psize_def because this is specific to the pseries platform. A new init function, pseries_lpar_read_hblkrm_characteristics() is added to read the characteristics. It is called from pSeries_setup_arch(). Fixes: ba2dd8a26baa ("powerpc/pseries/mm: call H_BLOCK_REMOVE") Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190920130523.20441-2-ldufour@linux.ibm.com
2019-09-24  KVM: PPC: Book3S HV: use smp_mb() when setting/clearing host_ipi flag  (Michael Roth)
On a 2-socket Power9 system with 32 cores/128 threads (SMT4) and 1TB of memory running the following guest configs: guest A: - 224GB of memory - 56 VCPUs (sockets=1,cores=28,threads=2), where: VCPUs 0-1 are pinned to CPUs 0-3, VCPUs 2-3 are pinned to CPUs 4-7, ... VCPUs 54-55 are pinned to CPUs 108-111 guest B: - 4GB of memory - 4 VCPUs (sockets=1,cores=4,threads=1) with the following workloads (with KSM and THP enabled in all): guest A: stress --cpu 40 --io 20 --vm 20 --vm-bytes 512M guest B: stress --cpu 4 --io 4 --vm 4 --vm-bytes 512M host: stress --cpu 4 --io 4 --vm 2 --vm-bytes 256M the below soft-lockup traces were observed after an hour or so and persisted until the host was reset (this was found to be reliably reproducible for this configuration, for kernels 4.15, 4.18, 5.0, and 5.3-rc5): [ 1253.183290] rcu: INFO: rcu_sched self-detected stall on CPU [ 1253.183319] rcu: 124-....: (5250 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=1941 [ 1256.287426] watchdog: BUG: soft lockup - CPU#105 stuck for 23s! [CPU 52/KVM:19709] [ 1264.075773] watchdog: BUG: soft lockup - CPU#24 stuck for 23s! [worker:19913] [ 1264.079769] watchdog: BUG: soft lockup - CPU#31 stuck for 23s! [worker:20331] [ 1264.095770] watchdog: BUG: soft lockup - CPU#45 stuck for 23s! [worker:20338] [ 1264.131773] watchdog: BUG: soft lockup - CPU#64 stuck for 23s! [avocado:19525] [ 1280.408480] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791] [ 1316.198012] rcu: INFO: rcu_sched self-detected stall on CPU [ 1316.198032] rcu: 124-....: (21003 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=8243 [ 1340.411024] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791] [ 1379.212609] rcu: INFO: rcu_sched self-detected stall on CPU [ 1379.212629] rcu: 124-....: (36756 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=14714 [ 1404.413615] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791] [ 1442.227095] rcu: INFO: rcu_sched self-detected stall on CPU [ 1442.227115] rcu: 124-....: (52509 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=21403 [ 1455.111787] INFO: task worker:19907 blocked for more than 120 seconds. [ 1455.111822] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.111833] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.111884] INFO: task worker:19908 blocked for more than 120 seconds. [ 1455.111905] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.111925] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.111966] INFO: task worker:20328 blocked for more than 120 seconds. [ 1455.111986] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.111998] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.112048] INFO: task worker:20330 blocked for more than 120 seconds. [ 1455.112068] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.112097] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.112138] INFO: task worker:20332 blocked for more than 120 seconds. [ 1455.112159] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.112179] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.112210] INFO: task worker:20333 blocked for more than 120 seconds. [ 1455.112231] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.112242] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.112282] INFO: task worker:20335 blocked for more than 120 seconds. 
[ 1455.112303] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 [ 1455.112332] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1455.112372] INFO: task worker:20336 blocked for more than 120 seconds. [ 1455.112392] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1 CPUs 45, 24, and 124 are stuck on spin locks, likely held by CPUs 105 and 31. CPUs 105 and 31 are stuck in smp_call_function_many(), waiting on target CPU 42. For instance: # CPU 105 registers (via xmon) R00 = c00000000020b20c R16 = 00007d1bcd800000 R01 = c00000363eaa7970 R17 = 0000000000000001 R02 = c0000000019b3a00 R18 = 000000000000006b R03 = 000000000000002a R19 = 00007d537d7aecf0 R04 = 000000000000002a R20 = 60000000000000e0 R05 = 000000000000002a R21 = 0801000000000080 R06 = c0002073fb0caa08 R22 = 0000000000000d60 R07 = c0000000019ddd78 R23 = 0000000000000001 R08 = 000000000000002a R24 = c00000000147a700 R09 = 0000000000000001 R25 = c0002073fb0ca908 R10 = c000008ffeb4e660 R26 = 0000000000000000 R11 = c0002073fb0ca900 R27 = c0000000019e2464 R12 = c000000000050790 R28 = c0000000000812b0 R13 = c000207fff623e00 R29 = c0002073fb0ca808 R14 = 00007d1bbee00000 R30 = c0002073fb0ca800 R15 = 00007d1bcd600000 R31 = 0000000000000800 pc = c00000000020b260 smp_call_function_many+0x3d0/0x460 cfar= c00000000020b270 smp_call_function_many+0x3e0/0x460 lr = c00000000020b20c smp_call_function_many+0x37c/0x460 msr = 900000010288b033 cr = 44024824 ctr = c000000000050790 xer = 0000000000000000 trap = 100 CPU 42 is running normally, doing VCPU work: # CPU 42 stack trace (via xmon) [link register ] c00800001be17188 kvmppc_book3s_radix_page_fault+0x90/0x2b0 [kvm_hv] [c000008ed3343820] c000008ed3343850 (unreliable) [c000008ed33438d0] c00800001be11b6c kvmppc_book3s_hv_page_fault+0x264/0xe30 [kvm_hv] [c000008ed33439d0] c00800001be0d7b4 kvmppc_vcpu_run_hv+0x8dc/0xb50 [kvm_hv] [c000008ed3343ae0] c00800001c10891c kvmppc_vcpu_run+0x34/0x48 [kvm] [c000008ed3343b00] c00800001c10475c kvm_arch_vcpu_ioctl_run+0x244/0x420 [kvm] [c000008ed3343b90] c00800001c0f5a78 kvm_vcpu_ioctl+0x470/0x7c8 [kvm] [c000008ed3343d00] c000000000475450 do_vfs_ioctl+0xe0/0xc70 [c000008ed3343db0] c0000000004760e4 ksys_ioctl+0x104/0x120 [c000008ed3343e00] c000000000476128 sys_ioctl+0x28/0x80 [c000008ed3343e20] c00000000000b388 system_call+0x5c/0x70 --- Exception: c00 (System Call) at 00007d545cfd7694 SP (7d53ff7edf50) is in userspace It was subsequently found that ipi_message[PPC_MSG_CALL_FUNCTION] was set for CPU 42 by at least 1 of the CPUs waiting in smp_call_function_many(), but somehow the corresponding call_single_queue entries were never processed by CPU 42, causing the callers to spin in csd_lock_wait() indefinitely. 
Nick Piggin suggested something similar to the following sequence as a possible explanation (interleaving of CALL_FUNCTION/RESCHEDULE IPI messages seems to be most common, but any mix of CALL_FUNCTION and !CALL_FUNCTION messages could trigger it): CPU X: smp_muxed_ipi_set_message(): X: smp_mb() X: message[RESCHEDULE] = 1 X: doorbell_global_ipi(42): X: kvmppc_set_host_ipi(42, 1) X: ppc_msgsnd_sync()/smp_mb() X: ppc_msgsnd() -> 42 42: doorbell_exception(): // from CPU X 42: ppc_msgsync() 105: smp_muxed_ipi_set_message(): 105: smb_mb() // STORE DEFERRED DUE TO RE-ORDERING --105: message[CALL_FUNCTION] = 1 | 105: doorbell_global_ipi(42): | 105: kvmppc_set_host_ipi(42, 1) | 42: kvmppc_set_host_ipi(42, 0) | 42: smp_ipi_demux_relaxed() | 42: // returns to executing guest | // RE-ORDERED STORE COMPLETES ->105: message[CALL_FUNCTION] = 1 105: ppc_msgsnd_sync()/smp_mb() 105: ppc_msgsnd() -> 42 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored 105: // hangs waiting on 42 to process messages/call_single_queue This can be prevented with an smp_mb() at the beginning of kvmppc_set_host_ipi(), such that stores to message[<type>] (or other state indicated by the host_ipi flag) are ordered vs. the store to to host_ipi. However, doing so might still allow for the following scenario (not yet observed): CPU X: smp_muxed_ipi_set_message(): X: smp_mb() X: message[RESCHEDULE] = 1 X: doorbell_global_ipi(42): X: kvmppc_set_host_ipi(42, 1) X: ppc_msgsnd_sync()/smp_mb() X: ppc_msgsnd() -> 42 42: doorbell_exception(): // from CPU X 42: ppc_msgsync() // STORE DEFERRED DUE TO RE-ORDERING -- 42: kvmppc_set_host_ipi(42, 0) | 42: smp_ipi_demux_relaxed() | 105: smp_muxed_ipi_set_message(): | 105: smb_mb() | 105: message[CALL_FUNCTION] = 1 | 105: doorbell_global_ipi(42): | 105: kvmppc_set_host_ipi(42, 1) | // RE-ORDERED STORE COMPLETES -> 42: kvmppc_set_host_ipi(42, 0) 42: // returns to executing guest 105: ppc_msgsnd_sync()/smp_mb() 105: ppc_msgsnd() -> 42 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored 105: // hangs waiting on 42 to process messages/call_single_queue Fixing this scenario would require an smp_mb() *after* clearing host_ipi flag in kvmppc_set_host_ipi() to order the store vs. subsequent processing of IPI messages. To handle both cases, this patch splits kvmppc_set_host_ipi() into separate set/clear functions, where we execute smp_mb() prior to setting host_ipi flag, and after clearing host_ipi flag. These functions pair with each other to synchronize the sender and receiver sides. With that change in place the above workload ran for 20 hours without triggering any lock-ups. Fixes: 755563bc79c7 ("powerpc/powernv: Fixes for hypervisor doorbell handling") # v4.0 Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190911223155.16045-1-mdroth@linux.vnet.ibm.com
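The resulting set/clear pairing is, in outline, the following (a sketch consistent with the description above; paca_ptrs/kvm_hstate are the usual powerpc paca fields, and the real patch also converts all callers):

    static inline void kvmppc_set_host_ipi(int cpu)
    {
            /* order prior stores (e.g. to message[]) before setting host_ipi */
            smp_mb();
            paca_ptrs[cpu]->kvm_hstate.host_ipi = 1;
    }

    static inline void kvmppc_clear_host_ipi(int cpu)
    {
            paca_ptrs[cpu]->kvm_hstate.host_ipi = 0;
            /* order the clear before any subsequent message processing */
            smp_mb();
    }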
2019-09-20  Merge tag 'powerpc-5.4-1'  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc updates from Michael Ellerman: "This is a bit late, partly due to me travelling, and partly due to a power outage knocking out some of my test systems *while* I was travelling. - Initial support for running on a system with an Ultravisor, which is software that runs below the hypervisor and protects guests against some attacks by the hypervisor. - Support for building the kernel to run as a "Secure Virtual Machine", ie. as a guest capable of running on a system with an Ultravisor. - Some changes to our DMA code on bare metal, to allow devices with medium sized DMA masks (> 32 && < 59 bits) to use more than 2GB of DMA space. - Support for firmware assisted crash dumps on bare metal (powernv). - Two series fixing bugs in and refactoring our PCI EEH code. - A large series refactoring our exception entry code to use gas macros, both to make it more readable and also enable some future optimisations. As well as many cleanups and other minor features & fixups. Thanks to: Adam Zerella, Alexey Kardashevskiy, Alistair Popple, Andrew Donnellan, Aneesh Kumar K.V, Anju T Sudhakar, Anshuman Khandual, Balbir Singh, Benjamin Herrenschmidt, Cédric Le Goater, Christophe JAILLET, Christophe Leroy, Christopher M. Riedl, Christoph Hellwig, Claudio Carvalho, Daniel Axtens, David Gibson, David Hildenbrand, Desnes A. Nunes do Rosario, Ganesh Goudar, Gautham R. Shenoy, Greg Kurz, Guerney Hunt, Gustavo Romero, Halil Pasic, Hari Bathini, Joakim Tjernlund, Jonathan Neuschafer, Jordan Niethe, Leonardo Bras, Lianbo Jiang, Madhavan Srinivasan, Mahesh Salgaonkar, Mahesh Salgaonkar, Masahiro Yamada, Maxiwell S. Garcia, Michael Anderson, Nathan Chancellor, Nathan Lynch, Naveen N. Rao, Nicholas Piggin, Oliver O'Halloran, Qian Cai, Ram Pai, Ravi Bangoria, Reza Arbab, Ryan Grimm, Sam Bobroff, Santosh Sivaraj, Segher Boessenkool, Sukadev Bhattiprolu, Thiago Bauermann, Thiago Jung Bauermann, Thomas Gleixner, Tom Lendacky, Vasant Hegde" * tag 'powerpc-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (264 commits) powerpc/mm/mce: Keep irqs disabled during lockless page table walk powerpc: Use ftrace_graph_ret_addr() when unwinding powerpc/ftrace: Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR ftrace: Look up the address of return_to_handler() using helpers powerpc: dump kernel log before carrying out fadump or kdump docs: powerpc: Add missing documentation reference powerpc/xmon: Fix output of XIVE IPI powerpc/xmon: Improve output of XIVE interrupts powerpc/mm/radix: remove useless kernel messages powerpc/fadump: support holes in kernel boot memory area powerpc/fadump: remove RMA_START and RMA_END macros powerpc/fadump: update documentation about option to release opalcore powerpc/fadump: consider f/w load area powerpc/opalcore: provide an option to invalidate /sys/firmware/opal/core file powerpc/opalcore: export /sys/firmware/opal/core for analysing opal crashes powerpc/fadump: update documentation about CONFIG_PRESERVE_FA_DUMP powerpc/fadump: add support to preserve crash data on FADUMP disabled kernel powerpc/fadump: improve how crashed kernel's memory is reserved powerpc/fadump: consider reserved ranges while releasing memory powerpc/fadump: make crash memory ranges array allocation generic ...
2019-09-18  vfs: Convert spufs to use the new mount API  (David Howells)
Convert the spufs filesystem to the new internal mount API as the old one will be obsoleted and removed. This allows greater flexibility in communication of mount parameters between userspace, the VFS and the filesystem. See Documentation/filesystems/mount_api.txt for more information. Signed-off-by: David Howells <dhowells@redhat.com> cc: Jeremy Kerr <jk@ozlabs.org> cc: Arnd Bergmann <arnd@arndb.de> cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-09-14  powerpc/fadump: support holes in kernel boot memory area  (Hari Bathini)
With support for copying multiple kernel boot memory regions, owing to the copy size limitation, also handle holes in the memory area to be preserved. Support as many as 128 kernel boot memory regions. This allows having an adequate FADump capture kernel size for different scenarios. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/156821385448.5656.6124791213910877759.stgit@hbathini.in.ibm.com
2019-09-14  powerpc/fadump: remove RMA_START and RMA_END macros  (Hari Bathini)
RMA_START is defined as '0' and there is even a BUILD_BUG_ON() to make sure it is never anything else. Remove this macro and use '0' instead, as a code change is needed anyway if it ever has to be something else. Also, remove the unused RMA_END macro. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/156821384096.5656.15026984053970204652.stgit@hbathini.in.ibm.com
2019-09-14  powerpc/fadump: consider f/w load area  (Hari Bathini)
OPAL loads kernel & initrd at 512MB offset (256MB size), also exported as ibm,opal/dump/fw-load-area. So, if boot memory size of FADump is less than 768MB, kernel memory to be exported as '/proc/vmcore' would be overwritten by f/w while loading kernel & initrd. To avoid such a scenario, enforce a minimum boot memory size of 768MB on OPAL platform and skip using FADump if a newer F/W version loads kernel & initrd above 768MB. Also, irrespective of RMA size, set the minimum boot memory size expected on pseries platform at 320MB. This is to avoid inflating the minimum memory requirements on systems with 512M/1024M RMA size. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/156821381414.5656.1592867278535469652.stgit@hbathini.in.ibm.com
2019-09-14  powerpc/opalcore: provide an option to invalidate /sys/firmware/opal/core file  (Hari Bathini)