path: root/drivers
2020-09-11  RDMA/qedr: Use rdma_umem_for_each_dma_block() instead of open-coding  (Jason Gunthorpe)

This loop splits the DMA SGL into pg_shift-sized pages; use the core code for this directly.

Link: https://lore.kernel.org/r/9-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
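For reference, a minimal sketch of what this kind of conversion looks like, assuming a hypothetical pbl_set_entry() helper standing in for the driver's page-list programming:

struct ib_block_iter biter;

/* Walk the umem in HW-page-sized DMA blocks with the core iterator
 * instead of open-coding the SGL split; BIT(pg_shift) is the HW page size.
 */
rdma_umem_for_each_dma_block(umem, &biter, BIT(pg_shift)) {
	u64 pg_addr = rdma_block_iter_dma_address(&biter);

	pbl_set_entry(pbl, pg_addr);	/* hypothetical: program one HW page */
}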
2020-09-11  RDMA/i40iw: Use ib_umem_num_dma_pages()  (Jason Gunthorpe)

If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is not correct. 'start' should be 'virt'. Change it to use the core code for page_num and the canonical calculation of page_shift.

Fixes: eb52c0333f06 ("RDMA/i40iw: Use core helpers to get aligned DMA address within a supported page size")
Link: https://lore.kernel.org/r/8-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-11  RDMA/efa: Use ib_umem_num_dma_pages()  (Jason Gunthorpe)

If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is not correct. 'start' should be 'virt'. Change it to use the core code for page_num and the canonical calculation of page_shift.

Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
Link: https://lore.kernel.org/r/7-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Tested-by: Gal Pressman <galpress@amazon.com>
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
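A minimal sketch of the canonical pattern these conversions move to, using the ib_umem_num_dma_blocks() helper from this series; supported_page_sizes is a hypothetical per-device bitmap of HW page sizes and the real drivers differ in detail:

unsigned long pg_sz;
unsigned int page_shift;
size_t page_num;

/* Pick the largest HW page size that can map this umem at IOVA 'virt'. */
pg_sz = ib_umem_find_best_pgsz(umem, supported_page_sizes, virt);
if (!pg_sz)
	return -EINVAL;		/* no supported page size fits this mapping */

page_shift = order_base_2(pg_sz);		/* canonical page_shift */
page_num = ib_umem_num_dma_blocks(umem, pg_sz);	/* core code for page_num */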
2020-09-11  RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()  (Jason Gunthorpe)

ib_umem_num_pages() should only be used by things working with the SGL in CPU pages directly. Drivers building DMA lists should use the new ib_umem_num_dma_blocks(), which returns the number of blocks rdma_umem_for_each_dma_block() will return.

Making this general for DMA drivers requires a different implementation. Computing the DMA block count based on umem->address only works if the requested page size is < PAGE_SIZE and/or the IOVA == umem->address. Instead the number of DMA pages should be computed in the IOVA address space, not umem->address. Thus the IOVA has to be stored inside the umem so it can be used for these calculations. For now set it to umem->address by default and fix it up if ib_umem_find_best_pgsz() was called. This allows drivers to be converted to ib_umem_num_dma_blocks() safely.

Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
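A sketch of the helper described above, counting blocks in the IOVA address space (the in-tree version in include/rdma/ib_umem.h may differ in detail):

static inline size_t ib_umem_num_dma_blocks(struct ib_umem *umem,
					    unsigned long pgsz)
{
	/* Round the IOVA range out to pgsz boundaries, then count blocks. */
	return (size_t)((ALIGN(umem->iova + umem->length, pgsz) -
			 ALIGN_DOWN(umem->iova, pgsz))) / pgsz;
}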
2020-09-11  Merge tag 'drm-misc-fixes-2020-09-09' of git://anongit.freedesktop.org/drm/drm-misc into drm-fixes  (Dave Airlie)

drm-misc-fixes for v5.9-rc5:
- Fix double free in virtio.
- Add missing put_device in sun4i, and other fixes.
- Small ingenic fixes.
- Handle sun4i alpha on lowest plane correctly.
- Remove output->enabled from virtio, as it should use crtc_state.
- Fix tve200 enable/disable.
- Documentation fix.
- Fix virtio unblank.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/478b49d1-b1b3-c983-7056-8a89249be435@mblankhorst.nl
2020-09-11  Merge tag 'drm-intel-fixes-2020-09-10' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes  (Dave Airlie)

drm/i915 fixes for v5.9-rc5:
- Fix regression leading to audio probe failure

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/875z8m2hss.fsf@intel.com
2020-09-10  powercap/intel_rapl: add support for AlderLake  (Zhang Rui)

Add intel_rapl support for the AlderLake platform.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

2020-09-10  powercap/intel_rapl: add support for RocketLake  (Zhang Rui)

Add intel_rapl support for the RocketLake platform.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

2020-09-10  powercap/intel_rapl: add support for TigerLake Desktop  (Zhang Rui)

Add intel_rapl support for the TigerLake desktop platform.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-09-10  test_firmware: Test platform fw loading on non-EFI systems  (Kees Cook)

On non-EFI systems it wasn't possible to test the platform firmware loader, because it would never have set "checked_fw" during __init. Instead, allow the test code to override this check. Additionally, split the declarations into a private symbol namespace so there is greater enforcement of the symbol visibility.

Fixes: 548193cba2a7 ("test_firmware: add support for firmware_request_platform")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200909225354.3118328-1-keescook@chromium.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
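A minimal sketch of the private-namespace pattern; the namespace name TEST_FIRMWARE is chosen here for illustration, and the symbol/namespace actually used by the patch may differ:

/* Exporting side: the flag the text above calls "checked_fw". */
bool checked_fw;
EXPORT_SYMBOL_NS_GPL(checked_fw, TEST_FIRMWARE);	/* visible only to explicit importers */

/* Importing side (the test module): opt in to the namespace explicitly. */
MODULE_IMPORT_NS(TEST_FIRMWARE);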
2020-09-09  RDMA/umem: Replace for_each_sg_dma_page with rdma_umem_for_each_dma_block  (Jason Gunthorpe)

Generally drivers should be using this core helper to split up the umem into DMA pages. These drivers are all probably wrong in some way to pass PAGE_SIZE in as the HW page size. Either the driver doesn't support other page sizes and it should use 4096, or the driver does support other page sizes and should use ib_umem_find_best_pgsz() to select the best HW page size from the HW-supported set. The only case where it could be correct is if the HW has a global setting for PAGE_SIZE set at driver initialization time.

Link: https://lore.kernel.org/r/5-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/umem: Add rdma_umem_for_each_dma_block()  (Jason Gunthorpe)

This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites.

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
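A sketch of roughly what the wrapper looks like; it simply feeds the umem's SGL and nmap into the existing rdma_for_each_block() iterator (the in-tree definition may differ in detail):

#define rdma_umem_for_each_dma_block(umem, biter, pgsz)                \
	rdma_for_each_block((umem)->sg_head.sgl, biter, (umem)->nmap, pgsz)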
2020-09-09  RDMA/umem: Use simpler logic for ib_umem_find_best_pgsz()  (Jason Gunthorpe)

The calculation in rdma_find_pg_bit() is fairly complicated, and the function is never called anywhere else. Inline a simpler version into ib_umem_find_best_pgsz().

Link: https://lore.kernel.org/r/3-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()  (Jason Gunthorpe)

rdma_for_each_block() makes assumptions about how the SGL is constructed that don't work if the block size is below the page size used to build the SGL.

The rules for umem SGL construction require that the SGs all be PAGE_SIZE aligned and we don't encode the actual byte offset of the VA range inside the SGL using offset and length. So rdma_for_each_block() has no idea where the actual starting/ending point is to compute the first/last block boundary if the starting address falls within an SGL entry.

Fixing the SGL construction turns out to be really hard, and will be the subject of other patches. For now block smaller pages.

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/2-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary  (Jason Gunthorpe)

It is possible for a single SGL to span an aligned boundary, eg if the SGL is

  61440 -> 90112

then the length is 28672, which currently limits the block size to 32k. With a 32k page size the two covering blocks will be:

  32768 -> 65536 and 65536 -> 98304

However, the correct answer is a 128k block size, which will span the whole 28672 bytes in a single block.

Instead of limiting based on length, figure out which high IOVA bits don't change between the start and end addresses. That is the highest useful page size.

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
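A worked illustration of that computation using the numbers from the text above (illustrative arithmetic only, not the in-tree code):

unsigned long start = 61440;			/* 0x0F000, first byte of the SGL */
unsigned long last  = 90112 - 1;		/* 0x15FFF, last byte of the SGL */
unsigned long diff  = start ^ last;		/* 0x1AFFF: bits 0..16 differ */

/* fls64(diff) == 17, so bits 17 and above are identical across the whole
 * range: a 2^17 = 128K block aligned down from 'start' covers all 28672
 * bytes, whereas limiting by the length alone capped the block at 32K.
 */
unsigned long best_blk = 1UL << fls64(diff);	/* 131072, i.e. 128K */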
2020-09-09  RDMA: Make counters destroy symmetrical  (Leon Romanovsky)

Change counters to return failure like any other verbs destroy; however, this flow shouldn't return an error at all.

Link: https://lore.kernel.org/r/20200907120921.476363-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Restore ability to return error for destroy WQ  (Leon Romanovsky)

Make this interface symmetrical to other destroy paths.

Fixes: a49b1dc7ae44 ("RDMA: Convert destroy_wq to be void")
Link: https://lore.kernel.org/r/20200907120921.476363-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Change XRCD destroy return value  (Leon Romanovsky)

Update the XRCD destroy flow to allow command failure.

Fixes: 28ad5f65c314 ("RDMA: Move XRCD to be under ib_core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Allow fail of destroy CQ  (Leon Romanovsky)

Like any other verbs object, a CQ shouldn't fail during destroy, but mlx5_ib didn't follow this contract when it mixed IB verbs objects with DEVX. Such a mix leads to a situation where FW and kernel are fully interdependent on the reference counting of each side.

Kernel verbs and drivers that don't have DEVX flows shouldn't fail.

Fixes: e39afe3d6dbd ("RDMA: Convert CQ allocations to be under core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/core: Delete function indirection for alloc/free kernel CQ  (Leon Romanovsky)

The ib_alloc_cq*() and ib_free_cq*() helpers are solely kernel verbs to manage CQs and don't need extra indirection just to call the same functions with a constant NULL udata parameter.

Link: https://lore.kernel.org/r/20200907120921.476363-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Restore ability to fail on SRQ destroy  (Leon Romanovsky)

In a similar way to other IB objects, restore the ability to return an error on SRQ destroy. Strictly speaking, this change is not necessary, and is provided here to ensure a symmetrical interface like the other destroy functions.

Fixes: 68e326dea1db ("RDMA: Handle SRQ allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/mlx5: Issue FW command to destroy SRQ on reentry  (Leon Romanovsky)

The HW release can fail and leave the system in a limbo state, where the SRQ is removed from the table but can't be destroyed later. In every reentry, the initial xa_erase_irq() check will fail.

Rewrite the erase logic to keep the index but not store the entry itself. By doing this, we can safely reinsert the entry back in the case of a destroy failure.

Link: https://lore.kernel.org/r/20200907120921.476363-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Restore ability to fail on AH destroy  (Leon Romanovsky)

Like any other IB verbs objects, AHs are refcounted by ib_core. The release of those objects is controlled by ib_core with the promise that AH destroy can't fail. Since the AH is a SW object for now, this change makes dealloc_ah() behave like any other IB destroy flow.

Fixes: d345691471b4 ("RDMA: Handle AH allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA: Restore ability to fail on PD deallocate  (Leon Romanovsky)

The IB verbs objects are counted by the kernel, and ib_core ensures that PD deallocation will succeed: it is called only once all other objects that depend on the PD have been released. This is achieved by managing various reference counters on such objects.

The mlx5 driver didn't follow this standard flow when it allowed DEVX objects, which are not managed by ib_core, to be interleaved with the ones under ib_core responsibility. In such interleaved scenarios the deallocate command can fail, and ib_core will leave the uobject in its internal DB and attempt to clean it later to free resources anyway.

This change partially restores the returned value from dealloc_pd() for all drivers, keeping in mind that non-DEVX devices and kernel verbs paths shouldn't fail.

Fixes: 21a428a019c9 ("RDMA: Handle PD allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  IB/isert: Fix unaligned immediate-data handling  (Sagi Grimberg)

Currently we allocate rx buffers as a single contiguous buffer for the headers (iser and iscsi) and the data trailer. This means that most likely the data starting offset is aligned to 76 bytes (the size of both headers).

This worked fine for years, but at some point this broke, resulting in data corruptions in isert when a command comes with immediate data and the underlying backend device assumes 512-byte buffer alignment. We assume a hard requirement for all direct I/O buffers to be 512-byte aligned. To fix this, we should avoid passing unaligned buffers for I/O.

Instead, we allocate our recv buffers with some extra space such that we can have the data portion aligned to a 512-byte boundary. This also means that we cannot reference headers or data using a structure, but rather via accessors (as they may move based on alignment). Also, get rid of the wrong __packed annotation from iser_rx_desc as this has only harmful effects (not aligned to anything). This affects the rx descriptors for iscsi login and the data plane.

Fixes: 3d75ca0adef4 ("block: introduce multi-page bvec helpers")
Link: https://lore.kernel.org/r/20200904195039.31687-1-sagi@grimberg.me
Reported-by: Stephen Rust <srust@blockbridge.com>
Tested-by: Doug Dumitru <doug@dumitru.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/rtrs-srv: Incorporate ib_register_client into rtrs server init  (Md Haris Iqbal)

The rnbd_server module's communication manager (cm) initialization depends on the registration of the "network namespace subsystem" of the RDMA CM agent module. As such, when the kernel is configured to load the rnbd_server and the RDMA cma module during initialization, and the rnbd_server module is initialized before the RDMA cma module, a NULL pointer dereference occurs during the RDMA bind operation.

Call Trace:
 ? xas_load+0xd/0x80
 xa_load+0x47/0x80
 cma_ps_find+0x44/0x70
 rdma_bind_addr+0x782/0x8b0
 ? get_random_bytes+0x35/0x40
 rtrs_srv_cm_init+0x50/0x80
 rtrs_srv_open+0x102/0x180
 ? rnbd_client_init+0x6e/0x6e
 rnbd_srv_init_module+0x34/0x84
 ? rnbd_client_init+0x6e/0x6e
 do_one_initcall+0x4a/0x200
 kernel_init_freeable+0x1f1/0x26e
 ? rest_init+0xb0/0xb0
 kernel_init+0xe/0x100
 ret_from_fork+0x22/0x30
Modules linked in:
CR2: 0000000000000015

All this happens because the cm init is in the call chain of the module init, which is not a preferred practice. So remove the call to rdma_create_id() from the module init call chain. Instead, register rtrs-srv as an ib client, which makes sure that rdma_create_id() is called only when an ib device is added.

Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
Link: https://lore.kernel.org/r/20200907103106.104530-1-haris.iqbal@cloud.ionos.com
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Reviewed-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
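A minimal sketch of the ib_client pattern the fix moves to; the callback names are illustrative and rtrs_srv_cm_init() is treated here as a hypothetical wrapper around rdma_create_id():

static int rtrs_srv_add_one(struct ib_device *device)
{
	/* An RDMA device exists now, so creating the CM id is safe. */
	return rtrs_srv_cm_init();
}

static void rtrs_srv_remove_one(struct ib_device *device, void *client_data)
{
	/* Tear down the per-client state here. */
}

static struct ib_client rtrs_srv_client = {
	.name	= "rtrs_server",
	.add	= rtrs_srv_add_one,
	.remove	= rtrs_srv_remove_one,
};

static int __init rtrs_srv_init_module(void)
{
	/* No rdma_create_id() in the module init chain any more. */
	return ib_register_client(&rtrs_srv_client);
}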
2020-09-09  RDMA/rtrs-srv: Set .release function for rtrs srv device during device init  (Md Haris Iqbal)

The device .release function was not being set during the device initialization. This was leading to the below warning in error cases, when put_srv was called before device_add was called.

Warning:
  Device '(null)' does not have a release() function, it is broken and must
  be fixed. See Documentation/kobject.txt.

So, set the device .release function during device initialization in the __alloc_srv() function.

Fixes: baa5b28b7a47 ("RDMA/rtrs-srv: Replace device_register with device_initialize and device_add")
Link: https://lore.kernel.org/r/20200907102216.104041-1-haris.iqbal@cloud.ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
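A sketch of the device_initialize()/.release pattern being referred to; the structure layout here is illustrative, not the literal rtrs-srv code:

static void rtrs_srv_dev_release(struct device *dev)
{
	struct rtrs_srv *srv = container_of(dev, struct rtrs_srv, dev);

	kfree(srv);		/* the final put_device() frees the object here */
}

static struct rtrs_srv *__alloc_srv(void)
{
	struct rtrs_srv *srv = kzalloc(sizeof(*srv), GFP_KERNEL);

	if (!srv)
		return NULL;

	device_initialize(&srv->dev);
	/* Set .release before any error path can call put_device(). */
	srv->dev.release = rtrs_srv_dev_release;
	return srv;
}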
2020-09-09  RDMA/hns: Avoid unnecessary initialization  (Lijun Ou)

Some variables are already assigned when they are used, so remove the unnecessary initial assignments.

Link: https://lore.kernel.org/r/1599547944-30671-1-git-send-email-oulijun@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/bnxt_re: Remove set but not used variable 'qplib_ctx'  (YueHaibing)

drivers/infiniband/hw/bnxt_re/main.c:1012:25: warning: variable ‘qplib_ctx’ set but not used [-Wunused-but-set-variable]

Fixes: f86b31c6a28f ("RDMA/bnxt_re: Static NQ depth allocation")
Link: https://lore.kernel.org/r/20200905121624.32776-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  RDMA/core: Change how failing destroy is handled during uobj abort  (Jason Gunthorpe)

Currently it triggers a WARN_ON and then goes ahead and destroys the uobject anyhow, leaking any driver memory.

The only place that should leak driver memory is during FD close() in uverbs_destroy_ufile_hw(). Drivers are only allowed to fail destroying uobjects if they guarantee destroy will eventually succeed. uverbs_destroy_ufile_hw() provides the loop to give the driver that chance.

Link: https://lore.kernel.org/r/20200902081708.746631-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09  spi: stm32: fix pm_runtime_get_sync() error checking  (Dan Carpenter)

The pm_runtime_get_sync() call can return either 0 or 1 on success, but this code treats 1 as a failure.

Fixes: db96bf976a4f ("spi: stm32: fixes suspend/resume management")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Alain Volmat <alain.volmat@st.com>
Link: https://lore.kernel.org/r/20200909094304.GA420136@mwanda
Signed-off-by: Mark Brown <broonie@kernel.org>
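The usual check looks roughly like the following sketch; the stm32 driver's surrounding code differs:

ret = pm_runtime_get_sync(&pdev->dev);
if (ret < 0) {
	/* Only negative values are errors; 1 just means "already active". */
	pm_runtime_put_noidle(&pdev->dev);
	return ret;
}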
2020-09-09  spi: Fix memory leak on split transfers  (Gustav Wiklander)

In the prepare_message callback the bus driver has the opportunity to split a transfer into smaller chunks. spi_map_msg() is done after prepare_message.

spi_res_release() releases the split transfers in the message. Therefore spi_res_release() should be called after spi_map_msg().

The previous try at this was commit c9ba7a16d0f1, which released the split transfers after spi_finalize_current_message() had been called. This introduced a race, since the message struct could be out of scope because the spi_sync() call got completed.

This fixes the following leak in the spi-bcm2835.c bus driver when the transfer size is greater than 65532:

Kmemleak:
  sg_alloc_table+0x28/0xc8
  spi_map_buf+0xa4/0x300
  __spi_pump_messages+0x370/0x748
  __spi_sync+0x1d4/0x270
  spi_sync+0x34/0x58
  spi_test_execute_msg+0x60/0x340 [spi_loopback_test]
  spi_test_run_iter+0x548/0x578 [spi_loopback_test]
  spi_test_run_test+0x94/0x140 [spi_loopback_test]
  spi_test_run_tests+0x150/0x180 [spi_loopback_test]
  spi_loopback_test_probe+0x50/0xd0 [spi_loopback_test]
  spi_drv_probe+0x84/0xe0

Signed-off-by: Gustav Wiklander <gustavwi@axis.com>
Link: https://lore.kernel.org/r/20200908151129.15915-1-gustav.wiklander@axis.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-09  i2c: algo: pca: Reapply i2c bus settings after reset  (Evan Nimmo)

If something goes wrong (such as the SCL being stuck low) then we need to reset the PCA chip. The issue with this is that on reset we lose all config settings and the chip ends up in a disabled state, which results in a lock up/high CPU usage. We need to re-apply any configuration that had previously been set and re-enable the chip.

Signed-off-by: Evan Nimmo <evan.nimmo@alliedtelesis.co.nz>
Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
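A sketch of the idea only; the cached-settings field and the re-init helper named here are hypothetical, not the i2c-algo-pca API:

static void pca_recover_bus(struct i2c_algo_pca_data *adap)
{
	adap->reset_chip(adap->data);		/* existing reset callback */

	/* The reset cleared the clock/mode registers and disabled the chip,
	 * so replay the cached configuration and re-enable it.
	 */
	pca_apply_bus_settings(adap, &adap->saved_settings);	/* hypothetical */
}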
2020-09-09  Merge tag 'at24-fixes-for-v5.9-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux into i2c/for-current  (Wolfram Sang)

at24 fixes for v5.9-rc5:
- delay registration of the nvmem provider until after power is enabled
2020-09-09  nvme-fabrics: allow to queue requests for live queues  (Sagi Grimberg)

Right now we are failing requests based on the controller state (which is checked inline in nvmf_check_ready); however, we should definitely accept requests if the queue is live.

When entering controller reset, we transition the controller into NVME_CTRL_RESETTING, and then return BLK_STS_RESOURCE for non-mpath requests (which have blk_noretry_request set). This is also the case for NVME_REQ_USER, for the wrong reason. There shouldn't be any reason for us to reject this I/O in a controller reset. We do want to prevent passthru commands on the admin queue because we need the controller to fully initialize first before we let user passthru admin commands be issued.

In a non-mpath setup, this means that the requests will simply be requeued over and over forever, not allowing the q_usage_counter to drop its final reference, causing controller reset to hang if running concurrently with heavy I/O.

Fixes: 35897b920c8a ("nvme-fabrics: fix and refine state checks in __nvmf_check_ready")
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-09  Merge tag 'drm-xlnx-dpsub-fixes-20200905' of git://linuxtv.org/pinchartl/media into drm-fixes  (Dave Airlie)

Kconfig fixes for DRM_ZYNQMP_DPSUB DMA engine dependency

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200905172751.GC6319@pendragon.ideasonboard.com
2020-09-08  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)

Pull SCSI fixes from James Bottomley:
 "Eleven fixes, mostly in drivers or minor fixes in driver related
  infrastructure libraries (target, libfc and libsas). Most of the bugs
  fixed only show up under rare circumstances, the exception being the
  endianness problem in qla2xxx which is used as a device on some sparc
  systems"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: mpt3sas: Don't call disable_irq from IRQ poll handler
  scsi: megaraid_sas: Don't call disable_irq from process IRQ poll
  scsi: target: iscsi: Fix hang in iscsit_access_np() when getting tpg->np_login_sem
  scsi: libsas: Set data_dir as DMA_NONE if libata marks qc as NODATA
  scsi: target: iscsi: Fix data digest calculation
  scsi: lpfc: Update lpfc version to 12.8.0.4
  scsi: lpfc: Extend the RDF FPIN Registration descriptor for additional events
  scsi: lpfc: Fix FLOGI/PLOGI receive race condition in pt2pt discovery
  scsi: lpfc: Fix setting IRQ affinity with an empty CPU mask
  scsi: qla2xxx: Fix regression on sparc64
  scsi: libfc: Fix for double free()
  scsi: pm8001: Fix memleak in pm8001_exec_internal_task_abort
2020-09-08  nvme-tcp: cancel async events before freeing event struct  (David Milburn)

Cancel async event work in case an async event has been queued up and nvme_tcp_submit_async_event() runs after the event has been freed.

Signed-off-by: David Milburn <dmilburn@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
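The shape of this fix (and of the rdma and fc variants below) is roughly the following sketch; the surrounding teardown function is illustrative, while async_event_work is the work item in struct nvme_ctrl:

static void nvme_tcp_free_async_resources(struct nvme_tcp_ctrl *ctrl)
{
	/* Make sure queued async event work cannot run after the free. */
	cancel_work_sync(&ctrl->ctrl.async_event_work);

	/* ... now it is safe to free the async event request/buffer ... */
}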
2020-09-08  nvme-rdma: cancel async events before freeing event struct  (David Milburn)

Cancel async event work in case an async event has been queued up and nvme_rdma_submit_async_event() runs after the event has been freed.

Signed-off-by: David Milburn <dmilburn@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>

2020-09-08  nvme-fc: cancel async events before freeing event struct  (David Milburn)

Cancel async event work in case an async event has been queued up and nvme_fc_submit_async_event() runs after the event has been freed.

Signed-off-by: David Milburn <dmilburn@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-08  nvme: Revert: Fix controller creation races with teardown flow  (James Smart)

The indicated patch introduced a barrier in the sysfs_delete attribute for the controller that rejects the request if the controller isn't created. "Created" is defined as at least one call to nvme_start_ctrl().

This is problematic in error-injection testing. If an error occurs on the initial attempt to create an association and the controller enters reconnect attempts, the admin cannot delete the controller until either there is a successful association created or ctrl_loss_tmo times out.

Where this issue is particularly hurtful is when the "admin" is the nvme-cli, it is performing a connection to a discovery controller, and it is initiated via auto-connect scripts. With the FC transport, if the first connection attempt fails, the controller enters a normal reconnect state but returns control to the cli thread that created the controller. In this scenario, the cli attempts to read the discovery log via ioctl, which fails, causing the cli to see it as an empty log and then proceed to delete the discovery controller. The delete is rejected and the controller is left live. If the discovery controller reconnect then succeeds, there is no action to delete it, and it sits live doing nothing.

Cc: <stable@vger.kernel.org> # v5.7+
Fixes: ce1518139e69 ("nvme: Fix controller creation races with teardown flow")
Signed-off-by: James Smart <james.smart@broadcom.com>
CC: Israel Rukshin <israelr@mellanox.com>
CC: Max Gurtovoy <maxg@mellanox.com>
CC: Christoph Hellwig <hch@lst.de>
CC: Keith Busch <kbusch@kernel.org>
CC: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-08  Merge tag 'usb-serial-5.9-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial into usb-linus  (Greg Kroah-Hartman)

Johan writes:

USB-serial fixes for 5.9-rc5

Here are some new device ids for 5.9. All have been in linux-next with no reported issues.

* tag 'usb-serial-5.9-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial:
  USB: serial: option: support dynamic Quectel USB compositions
  USB: serial: option: add support for SIM7070/SIM7080/SIM7090 modules
  USB: serial: ftdi_sio: add IDs for Xsens Mti USB converter
2020-09-08  spi: spi-cadence-quadspi: Fix mapping of buffers for DMA reads  (Vignesh Raghavendra)

Buffers need to be mapped with the DMA channel's device pointer instead of the SPI controller's device pointer, as it is the system DMA that actually does the data transfer. Data inconsistencies have been reported when reading from flash without this fix.

Fixes: ffa639e069fb ("mtd: spi-nor: cadence-quadspi: Add DMA support for direct mode reads")
Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Link: https://lore.kernel.org/r/20200831130720.4524-1-vigneshr@ti.com
Signed-off-by: Mark Brown <broonie@kernel.org>
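A sketch of the mapping change; variable names are illustrative, but chan->device->dev is how a dmaengine channel exposes the device that streaming mappings must be created against:

struct device *dma_dev = cqspi->rx_chan->device->dev;	/* DMA channel's device */
dma_addr_t dma_dst;

dma_dst = dma_map_single(dma_dev, buf, len, DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dma_dst))
	return -ENOMEM;

/* ... run the dmaengine transfer ... */

dma_unmap_single(dma_dev, dma_dst, len, DMA_FROM_DEVICE);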
2020-09-08  drm/i915: fix regression leading to display audio probe failure on GLK  (Kai Vehmanen)

In commit 4f0b4352bd26 ("drm/i915: Extract cdclk requirements checking to separate function") the order of the force_min_cdclk_changed check and intel_modeset_checks() was reversed. This broke the mechanism to immediately force a new CDCLK minimum, and led to driver probe errors for display audio on the GLK platform with the 5.9-rc1 kernel. Fix the issue by moving the intel_modeset_checks() call later.

[vsyrjala: It also broke the ability of planes to bump up the cdclk and thus could lead to underruns when e.g. flipping from a 32bpp to a 64bpp framebuffer. To be clear, we still compute the new cdclk correctly but fail to actually program it to the hardware due to intel_set_cdclk_{pre,post}_plane_update() not getting called on account of state->modeset==false.]

Fixes: 4f0b4352bd26 ("drm/i915: Extract cdclk requirements checking to separate function")
BugLink: https://github.com/thesofproject/linux/issues/2410
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200901151036.1312357-1-kai.vehmanen@linux.intel.com
(cherry picked from commit cf696856bc54a31f78e6538b84c8f7a006b6108b)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2020-09-08  i2c: npcm7xx: Fix timeout calculation  (Tali Perry)

The timeout_usec value calculation was wrong: the calculated value was in msec instead of usec.

Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver")
Signed-off-by: Tali Perry <tali.perry1@gmail.com>
Reviewed-by: Avi Fishman <avifishman70@gmail.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Alex Qiu <xqiu@google.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
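An illustration of the unit-conversion bug class being fixed; variable names are hypothetical, not the npcm7xx driver's:

u32 timeout_ms = 35;			/* timeout requested in milliseconds */
u64 timeout_usec;
unsigned long timeout;

/* Wrong: a millisecond count stored into a variable consumed as usec. */
timeout_usec = timeout_ms;

/* Right: convert to microseconds before deriving jiffies from it. */
timeout_usec = (u64)timeout_ms * USEC_PER_MSEC;
timeout = usecs_to_jiffies(timeout_usec);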
2020-09-08  Revert "drm/i915/gem: Delete unused code"  (Dave Airlie)

These commits caused a regression on a Lenovo T520 Sandybridge machine belonging to the reporter. We are reverting them for 5.10 for other reasons, so just do it for 5.9 as well.

This reverts commit 7ac2d2536dfa71c275a74813345779b1e7522c91.

Reported-by: Harald Arnesen <harald@skogtun.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>

2020-09-08  Revert "drm/i915/gem: Async GPU relocations only"  (Dave Airlie)

These commits caused a regression on a Lenovo T520 Sandybridge machine belonging to the reporter. We are reverting them for 5.10 for other reasons, so just do it for 5.9 as well.

This reverts commit 9e0f9464e2ab36b864359a59b0e9058fdef0ce47.

Reported-by: Harald Arnesen <harald@skogtun.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>

2020-09-08  Revert "drm/i915: Remove i915_gem_object_get_dirty_page()"  (Dave Airlie)

These commits caused a regression on a Lenovo T520 Sandybridge machine belonging to the reporter. We are reverting them for 5.10 for other reasons, so just do it for 5.9 as well.

This reverts commit 763fedd6a216f94c2eb98d2f7ca21be3d3806e69.

Reported-by: Harald Arnesen <harald@skogtun.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2020-09-08  Merge tag 'drm-msm-fixes-2020-09-04' of https://gitlab.freedesktop.org/drm/msm into drm-fixes  (Dave Airlie)

A few fixes for a potential RPTR corruption issue.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGvnr6Nhz2J0sjv2G+j7iceVtaDiJDT8T88uW6jiBfOGKQ@mail.gmail.com
2020-09-07  usb: typec: intel_pmc_mux: Do not configure SBU and HSL Orientation in Alternate modes  (Utkarsh Patel)

According to the PMC Type C Subsystem (TCSS) Mux programming guide rev 0.7, bits 4 and 5 are reserved in Alternate modes. SBU Orientation and HSL Orientation need to be configured only during initial cable detection in the USB connect flow, based on the device properties "sbu-orientation" and "hsl-orientation". Configuring these reserved bits in the Alternate modes may result in a delay in display link training or some unexpected behaviour. So do not configure them while issuing Alternate Mode requests.

Fixes: ff4a30d5e243 ("usb: typec: mux: intel_pmc_mux: Support for static SBU/HSL orientation")
Signed-off-by: Utkarsh Patel <utkarsh.h.patel@intel.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Link: https://lore.kernel.org/r/20200907142152.35678-3-heikki.krogerus@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>