path: root/drivers/net
Age  Commit message  Author
2020-07-02  xen-netfront: remove redundant assignment to variable 'act'  (Colin Ian King)
The variable act is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
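For illustration, a minimal sketch of this kind of cleanup; the variable and helper shown are generic stand-ins, not a quote of the xen-netfront code:

    u32 act;                                 /* no initializer: it would never be read */

    act = bpf_prog_run_xdp(xdp_prog, &xdp);  /* the first write precedes any read */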
2020-07-02  net: ipa: simplify ipa_endpoint_program()  (Alex Elder)
Have functions that write endpoint configuration registers return immediately if they are not valid for the direction of transfer for the endpoint. This allows most of the calls in ipa_endpoint_program() to be made unconditionally. Reorder the register writes to match the order of their definition (based on offset). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: ipa: move version test inside ipa_endpoint_program_suspend()  (Alex Elder)
IPA version 4.0+ does not support endpoint suspend. Put a test at the top of ipa_endpoint_program_suspend() that returns immediately if suspend is not supported rather than making that check in the caller. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
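As a rough sketch of the pattern (the names and version check are invented for illustration, not lifted from the ipa source):

    /* Hypothetical example: the capability test lives inside the callee, so the
     * caller can invoke it unconditionally for every endpoint.
     */
    static void example_program_suspend(struct example_endpoint *endpoint, bool enable)
    {
        if (endpoint->ipa_version >= EXAMPLE_IPA_V4_0)
            return;        /* endpoint suspend is not supported on IPA v4.0+ */

        /* ... read-modify-write the endpoint suspend register here ... */
    }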
2020-07-02  net: ipa: always handle suspend workaround  (Alex Elder)
IPA version 3.5.1 has a hardware quirk that requires special handling if an RX endpoint is suspended while aggregation is active. This handling is implemented by ipa_endpoint_suspend_aggr(). Have ipa_endpoint_program_suspend() be responsible for calling ipa_endpoint_suspend_aggr() if suspend mode is being enabled on an endpoint. If the endpoint does not support aggregation, or if aggregation isn't active, this call will continue to have no effect. Move the definition of ipa_endpoint_suspend_aggr() up in the file so its definition precedes the new earlier reference to it. This requires ipa_endpoint_aggr_active() and ipa_endpoint_force_close() to be moved as well. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: ipa: move version test inside ipa_endpoint_program_delay()  (Alex Elder)
IPA version 4.2 has a hardware quirk that affects endpoint delay mode, so it isn't used there. Isolate the test that avoids using delay mode for that version inside ipa_endpoint_program_delay(), rather than making that check in the caller. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  mlx4: Mark PM functions as __maybe_unused  (Wei Yongjun)
In certain configurations without power management support, the following warnings happen: drivers/net/ethernet/mellanox/mlx4/main.c:4388:12: warning: 'mlx4_resume' defined but not used [-Wunused-function] 4388 | static int mlx4_resume(struct device *dev_d) | ^~~~~~~~~~~ drivers/net/ethernet/mellanox/mlx4/main.c:4373:12: warning: 'mlx4_suspend' defined but not used [-Wunused-function] 4373 | static int mlx4_suspend(struct device *dev_d) | ^~~~~~~~~~~~ Mark these functions as __maybe_unused to make it clear to the compiler that this is going to happen based on the configuration, which is the standard for these types of functions. Fixes: 0e3e206a3e12 ("mlx4: use generic power management") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
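A generic sketch of the annotation (the function name is a placeholder, not the mlx4 code):

    /* Placeholder example: __maybe_unused tells the compiler the function may
     * legitimately be unreferenced when CONFIG_PM_SLEEP is not set, silencing
     * -Wunused-function without an #ifdef.
     */
    static int __maybe_unused foo_suspend(struct device *dev_d)
    {
        /* device-specific quiesce only; the PCI core handles the rest */
        return 0;
    }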
2020-07-02  ksz884x: mark pcidev_suspend() as __maybe_unused  (Wei Yongjun)
In certain configurations without power management support, gcc reports the following warning: drivers/net/ethernet/micrel/ksz884x.c:7182:12: warning: 'pcidev_suspend' defined but not used [-Wunused-function] 7182 | static int pcidev_suspend(struct device *dev_d) | ^~~~~~~~~~~~~~ Mark pcidev_suspend() as __maybe_unused to make this clear. Fixes: 64120615d140 ("ksz884x: use generic power management") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: macb: remove is_udp variable  (Claudiu Beznea)
Remove is_udp variable that is used in only one place and use ip_hdr(skb)->protocol == IPPROTO_UDP check instead. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: macb: do not initialize queue variable  (Claudiu Beznea)
Do not initialize the queue variable; it is already initialized in the for loops. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: macb: use hweight32() to count set bits in queue_mask  (Claudiu Beznea)
Use hweight32() to count set bits in queue_mask. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-02  net: macb: do not set again bit 0 of queue_mask  (Claudiu Beznea)
Bit 0 of queue_mask is set at the beginning of the macb_probe_queues() function. Do not set it again after reading DCFG6, but instead use the "|=" operator. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
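For illustration, a self-contained userspace sketch of the two macb cleanups above; the register value is made up, and the kernel would use hweight32() where this uses __builtin_popcount():

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t dcfg6_queues = 0x0e;      /* pretend the HW reports queues 1-3 */
        uint32_t queue_mask = 1u << 0;     /* queue 0 is always present */

        queue_mask |= dcfg6_queues;        /* "|=" keeps bit 0 without re-setting it */

        /* count the set bits; in the kernel this is hweight32(queue_mask) */
        printf("num_queues = %u\n", (unsigned int)__builtin_popcount(queue_mask));
        return 0;
    }

Compiled and run, this prints num_queues = 4 (queue 0 plus the three hardware-reported queues).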
2020-07-01  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue  (David S. Miller)
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2020-07-01 This series contains updates to all Intel drivers, but a majority of the changes are to the i40e driver. Jeff converts 'fall through' comments to the 'fallthrough;' keyword for all Intel drivers and removes an unnecessary delay in the ixgbe ethtool diagnostics test. Arkadiusz implements Total Port Shutdown for i40e. This is the revised patch based on Jakub's feedback from an earlier submission of this patch, where additional code comments and description were needed to describe the functionality. Wei Yongjun fixes the return error code for iavf_init_get_resources(). Magnus optimizes XDP code in i40e, starting with the AF_XDP zero-copy transmit completion path, then by only executing a division when necessary in the napi_poll data path, and by moving the check for a full transmit ring outside the send loop to increase performance. Ciara adds XDP ring statistics to i40e and the ability to dump these statistics and descriptors. Tony fixes the reporting of iavf statistics. Radoslaw adds support for 2.5 and 5 Gbps by implementing the newer ethtool ksettings API in ixgbe. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue  (David S. Miller)
Tony Nguyen says: ==================== 100GbE Intel Wired LAN Driver Updates 2020-07-01 This series contains updates to the ice driver only. Jacob implements a devlink region for device capabilities. Bruce removes structs containing only one-element arrays that are either unused or only used for indexing. Instead, use pointer arithmetic or other indexing to access the elements. Converts "C struct hack" variable-length types to the preferred C99 flexible array member. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  ice: replace single-element array used for C struct hack  (Bruce Allan)
Convert the pre-C90-extension "C struct hack" method (using a single- element array at the end of a structure for implementing variable-length types) to the preferred use of C99 flexible array member. Additional code cleanups were done near areas affected by this change. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
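A self-contained sketch of the conversion, using generic structures rather than the actual ice definitions; in-kernel code would typically use struct_size() and kzalloc() for the allocation:

    #include <stdint.h>
    #include <stdlib.h>

    /* Old "C struct hack": a single-element array standing in for N entries. */
    struct entries_old {
        uint16_t count;
        uint32_t entry[1];          /* really 'count' entries follow */
    };

    /* Preferred C99 flexible array member. */
    struct entries_new {
        uint16_t count;
        uint32_t entry[];           /* flexible array member */
    };

    static struct entries_new *entries_alloc(uint16_t count)
    {
        struct entries_new *e = malloc(sizeof(*e) + count * sizeof(e->entry[0]));

        if (e)
            e->count = count;
        return e;
    }

The flexible array member contributes no size of its own, so sizeof(*e) covers only the header and the allocation arithmetic stays honest.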
2020-07-01  ice: avoid unnecessary single-member variable-length structs  (Bruce Allan)
There are a number of structures that consist of a one-element array as the only struct member. Some of those are unused so remove them. Others are used to index into a buffer/array consisting of a variable number of a different data or structure type. Those are unnecessary since we can use simple pointer arithmetic or index directly into the buffer to access individual elements of the buffer/array. Additional code cleanups were done near areas affected by this change. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  bonding: allow xfrm offload setup post-module-load  (Jarod Wilson)
At the moment, bonding xfrm crypto offload can only be set up if the bonding module is loaded with active-backup mode already set. We need to be able to make this work with bonds set to AB after the bonding driver has already been loaded. So what's done here is: 1) move #define BOND_XFRM_FEATURES to net/bonding.h so it can be used by both bond_main.c and bond_options.c 2) set BOND_XFRM_FEATURES in bond_dev->hw_features universally, rather than only when loading in AB mode 3) wire up xfrmdev_ops universally too 4) disable BOND_XFRM_FEATURES in bond_dev->features if not AB 5) exit early (non-AB case) from bond_ipsec_offload_ok, to prevent a performance hit from traversing into the underlying drivers 6) toggle BOND_XFRM_FEATURES in bond_dev->wanted_features and call netdev_change_features() from bond_option_mode_set() In my local testing, I can change bonding modes back and forth on the fly, have hardware offload work when I'm in AB, and see no performance penalty to non-AB software encryption, despite having xfrm bits all wired up for all modes now. Fixes: 18cb261afd7b ("bonding: support hardware encryption offload to slaves") Reported-by: Huy Nguyen <huyn@mellanox.com> CC: Saeed Mahameed <saeedm@mellanox.com> CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: "David S. Miller" <davem@davemloft.net> CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com> CC: Jakub Kicinski <kuba@kernel.org> CC: Steffen Klassert <steffen.klassert@secunet.com> CC: Herbert Xu <herbert@gondor.apana.org.au> CC: netdev@vger.kernel.org CC: intel-wired-lan@lists.osuosl.org Signed-off-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
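A rough sketch of step 6 above; the structure and helper names follow common netdev conventions but are not quoted from the patch:

    /* In the bonding mode-change handler: toggle the xfrm feature bits and let
     * the core recompute the active feature set.
     */
    if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
        bond_dev->wanted_features |= BOND_XFRM_FEATURES;
    else
        bond_dev->wanted_features &= ~BOND_XFRM_FEATURES;
    netdev_change_features(bond_dev);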
2020-07-01  ice: implement snapshot for device capabilities  (Jacob Keller)
Add a new devlink region used for capturing a snapshot of the device capabilities buffer which is reported by the firmware over the AdminQ. This information can be useful in debugging driver and firmware interactions. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  net: ipa: HOL_BLOCK_EN_FMASK is a 1-bit mask  (Alex Elder)
The convention throughout the IPA driver is to directly use single-bit field mask values, rather than using (for example) u32_encode_bits() to set or clear them. Fix the one place that doesn't follow that convention, which sets HOL_BLOCK_EN_FMASK in ipa_endpoint_init_hol_block_enable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
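As a sketch of the convention being followed (the surrounding code is illustrative; only the single-bit-mask usage is the point):

    u32 val;

    /* before: encoding a 0/1 value into a one-bit field */
    val = u32_encode_bits(enable ? 1 : 0, HOL_BLOCK_EN_FMASK);

    /* after: use the single-bit field mask directly */
    val = enable ? HOL_BLOCK_EN_FMASK : 0;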
2020-07-01  net: ipa: clarify endpoint register macro constraints  (Alex Elder)
A handful of registers are valid only for RX endpoints, and some others are valid only for TX endpoints. For these endpoints, add a comment above their defined offset macro that indicates the endpoints to which they apply. Extend the endpoint parameter naming convention as well, to make these constraints more explicit. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: mode register is TX only  (Alex Elder)
The INIT_MODE endpoint configuration register is only valid for TX endpoints. Rather than writing a zero to that register for RX endpoints, avoid writing the register at all. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: metadata_mask register is RX only  (Alex Elder)
The INIT_HDR_METADATA_MASK endpoint configuration register is only valid for RX endpoints. Rather than writing a zero to that register for TX endpoints, avoid writing the register at all. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: head-of-line block registers are RX only  (Alex Elder)
The INIT_HOL_BLOCK_EN and INIT_HOL_BLOCK_TIMER endpoint registers are only valid for RX endpoints. Have ipa_endpoint_modem_hol_block_clear_all() skip writing these registers for TX endpoints. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: kill IPA_MEM_UC_OFFSET  (Alex Elder)
The microcontroller shared memory area is at the beginning of the IPA resident memory. IPA_MEM_UC_OFFSET was defined as the offset within that region where it's found, but it's 0, and it's never actually used. Just get rid of the definition, and move some of the description it had to be above the definition of the ipa_uc_mem_area structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: standardize more GSI error messages  (Alex Elder)
Make minor updates to error messages reported in "gsi.c": - Use local variables to reduce multi-line function calls - Don't use parentheses in messages - Do some slight rewording in a few cases Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: always report GSI state errors  (Alex Elder)
We check the state of an event ring or channel both before and after any GSI command issued that will change that state. In most--but not all--cases, if the state is something different than expected we report an error message. Add error messages where missing, so that all unexpected states provide information about what went wrong. Drop the parentheses around the state value shown in all cases. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: reuse a local variable in ipa_endpoint_init_aggr()  (Alex Elder)
Reuse the "limit" local variable in ipa_endpoint_init_aggr() when setting the aggregation size limit. Simple cleanup. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: reduce aggregation time limit  (Alex Elder)
Halve the time limit used when aggregation is enabled on an RX endpoint, to half a millisecond. Use DIV_ROUND_CLOSEST() to compute the value that represents the time period, to get better accuracy in the event the time limit is not an even multiple of the granularity. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  net: ipa: rework ipa_aggr_granularity_val()  (Alex Elder)
The timer used for aggregation makes use of an internal 32 KHz clock. The granularity of the timer is programmed by a field whose value is computed by ipa_aggr_granularity_val(). Redefine the way that value is computed by using a new TIMER_FREQUENCY constant representing the underlying clock frequency. Add two BUILD_BUG_ON() calls to ensure the value used is valid. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
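As a rough sketch of the computation (the constant names and the chosen granularity value are illustrative, not necessarily those used by the driver):

    /* The aggregation timer ticks at a 32 KHz internal clock. */
    #define TIMER_FREQUENCY     (32 * 1000)   /* Hz */
    #define AGGR_GRANULARITY    500           /* chosen granularity, microseconds */

    /* Convert microseconds to "32 KHz ticks minus one", as the register expects. */
    static u32 aggr_granularity_val(u32 microseconds)
    {
        return DIV_ROUND_CLOSEST(microseconds * TIMER_FREQUENCY, 1000 * 1000) - 1;
    }

    static void aggr_validate_build(void)
    {
        /* rough compile-time check that the granularity maps to a usable value */
        BUILD_BUG_ON((AGGR_GRANULARITY * TIMER_FREQUENCY) / (1000 * 1000) < 2);
    }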
2020-07-01  xen networking: add XDP offset adjustment to xen-netback  (Denis Kirjanov)
The patch adds the offset adjustment and netfront state reading needed to make XDP work on the netfront side. Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  xen networking: add basic XDP support for xen-netfront  (Denis Kirjanov)
The patch adds basic XDP processing to the xen-netfront driver. An XDP program is run for each RX response received from the netback driver. We also request xen-netback to adjust the data offset so that bpf_xdp_adjust_head() has header space for custom headers. Synchronization between the frontend and backend parts is done by using xenbus state switching: Reconfiguring -> Reconfigured -> Connected. The UDP packet drop rate with an XDP program is around 310 kpps using ./pktgen_sample04_many_flows.sh, compared to 160 kpps without the patch. Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org> Signed-off-by: David S. Miller <davem@davemloft.net>
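For context, a minimal XDP program of the kind a frontend can now run; this is a generic example built with clang and libbpf headers, not part of the patch itself:

    /* drop_all.bpf.c: drops every frame on the interface it is attached to */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_all(struct xdp_md *ctx)
    {
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";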
2020-07-01  ixgbe: Add ethtool support to enable 2.5 and 5.0 Gbps support  (Radoslaw Tyl)
Add full support for the new version of the ethtool API. The new API allows 2500base-T and 5000base-T to be reported as supported and advertised link speed modes. Signed-off-by: Radoslaw Tyl <radoslawx.tyl@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  ixgbe: Cleanup unneeded delay in ethtool test  (Jeff Kirsher)
There is a 4 second delay in ixgbe_diag_test() that is holding up other ioctls such as SIOCGIFCONF that Oracle database applications use. One of Oracle's products runs "ethtool -t ethX online" periodically for system monitoring, and that is impacting database applications that use SIOCGIFCONF at the same time. This 4 second delay was needed in our early 1 GbE parts to give the PHY time to recover from a reset. This code was carried forward to the 10 GbE driver even though it was not needed for the supported PHYs in the ixgbe driver. CC: Aleksandr Loktionov <aleksandr.loktionov@intel.com> CC: Jack Vogel <jack.vogel@oracle.com> Reported-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  iavf: Fix updating statistics  (Tony Nguyen)
Commit bac8486116b0 ("iavf: Refactor the watchdog state machine") inverted the logic for when to update statistics. Statistics should be updated when no other commands are pending, instead they were only requested when a command was processed. iavf_request_stats() would see a pending request and not request statistics to be updated. This caused statistics to never be updated; fix the logic. Fixes: bac8486116b0 ("iavf: Refactor the watchdog state machine") Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
2020-07-01  i40e: introduce new dump desc XDP command  (Ciara Loftus)
Interfaces already exist for dumping Rx and Tx descriptor information. Introduce another for doing the same for XDP descriptors. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  i40e: add XDP ring statistics to dump VSI debug output  (Ciara Loftus)
Prior to this, only the Rx and Tx ring statistics were dumped. The XDP ring statistics are now dumped as well. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  i40e: add XDP ring statistics to VSI stats  (Ciara Loftus)
Prior to this, only Rx and Tx ring statistics were accounted for. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  i40e: move check of full Tx ring to outside of send loop  (Magnus Karlsson)
Move the check of whether the HW Tx ring is full to outside the send loop. Currently it is checked for every single descriptor that we send. Instead, tell the send loop to only process a maximum number of packets equal to the number of available slots in the Tx ring. This way, we can remove the check inside the send loop and gain some performance. Suggested-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
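A self-contained sketch of the idea, using a generic power-of-two ring rather than the i40e descriptor ring:

    #include <stdio.h>

    #define RING_SIZE 512u    /* must be a power of two for the masking below */

    static unsigned int ring_free(unsigned int head, unsigned int tail)
    {
        return (tail - head - 1u) & (RING_SIZE - 1u);
    }

    static unsigned int send_batch(unsigned int *head, unsigned int tail,
                                   unsigned int budget)
    {
        unsigned int todo = budget;

        if (todo > ring_free(*head, tail))          /* one check, before the loop */
            todo = ring_free(*head, tail);

        for (unsigned int i = 0; i < todo; i++)     /* no per-descriptor check */
            *head = (*head + 1u) & (RING_SIZE - 1u);

        return todo;
    }

    int main(void)
    {
        unsigned int head = 0, tail = 10;           /* only 9 free slots */

        printf("sent %u of 64 requested\n", send_batch(&head, tail, 64));
        return 0;
    }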
2020-07-01  i40e: eliminate division in napi_poll data path  (Magnus Karlsson)
Eliminate a division in the napi_poll data path. This division is executed even though it is only needed in the rare case when there are not enough interrupt lines so they have to be shared between queue pairs. Instead, just test for this case and only execute the division if needed. The code has been lifted from the ice driver. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
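A rough sketch of the pattern; the field names approximate the Intel drivers but are not quoted from the patch:

    unsigned int budget_per_ring;

    if (q_vector->num_ringpairs > 1) {
        /* rare case: several queue pairs share one interrupt line */
        budget_per_ring = max(budget / q_vector->num_ringpairs, 1U);
    } else {
        /* common case: one ring pair per vector, no division needed */
        budget_per_ring = budget;
    }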
2020-07-01  i40e: optimize AF_XDP Tx completion path  (Magnus Karlsson)
Improve the performance of the AF_XDP zero-copy Tx completion path. When there are no XDP buffers being sent using XDP_TX or XDP_REDIRECT, we do not have to go through the SW ring to clean up any entries since the AF_XDP path does not use these. In these cases, just fast forward the next-to-use counter and skip going through the SW ring. The limit on the maximum number of entries to complete is also removed since the algorithm is now O(1). To simplify the code path, the maximum number of entries to complete for the XDP path is therefore also increased from 256 to 512 (the default number of Tx HW descriptors). This should be fine since the completion in the XDP path is faster than in the SKB path that has 256 as the maximum number. This patch provides around 4% throughput improvement for the l2fwd application in xdpsock on my machine. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  iavf: fix error return code in iavf_init_get_resources()  (Wei Yongjun)
Fix to return negative error code -ENOMEM from the error handling case instead of 0, as done elsewhere in this function. Fixes: b66c7bc1cd4d ("iavf: Refactor init state machine") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
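The general shape of this kind of fix, with generic labels rather than the iavf code:

    int err = 0;

    buf = kzalloc(len, GFP_KERNEL);
    if (!buf) {
        err = -ENOMEM;          /* set a negative errno instead of leaving 0 */
        goto err_alloc;
    }
    /* ... use buf ... */
    err_alloc:
        return err;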
2020-07-01  i40e: Add support for a new feature Total Port Shutdown  (Arkadiusz Kubalewski)
After the OS requests to bring a link down on a physical network port, traffic is no longer processed but the physical link with the link partner is still established. Currently there is a feature (Link down on close) which allows the link to be physically brought down (after an OS request). This patch introduces a new feature with a similar capability: TOTAL_PORT_SHUTDOWN. It allows physically disabling the link on the NIC's port; if enabled, after a link-down request from the OS no link, traffic or LED activity is possible on that port. If I40E_FLAG_TOTAL_PORT_SHUTDOWN is enabled, I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED must be explicitly forced to true and cannot be disabled at that time. The features are exclusive in terms of configuration, but they have similar behavior (both allow disabling the physical link of the port), with the following differences: - LINK_DOWN_ON_CLOSE_ENABLED is configurable at host OS run-time and is supported by the whole family of 7xx Intel Ethernet Controllers - TOTAL_PORT_SHUTDOWN may be enabled only before the OS loads (in the BIOS) and only if the motherboard's BIOS and the NIC's FW support it - when LINK_DOWN_ON_CLOSE_ENABLED is used, the link is brought down by sending phy_type=0 to the NIC's FW - when TOTAL_PORT_SHUTDOWN is used, phy_type is not altered; instead the link is brought down by clearing the I40E_AQ_PHY_ENABLE_LINK bit in the abilities field of the i40e_aq_set_phy_config structure Introduced changes: - new private flag I40E_FLAG_TOTAL_PORT_SHUTDOWN for handling the feature - probing of the NVM if the feature was enabled at the driver's port initialization - special handling in the link-down procedure to let the FW physically shut down the port if the feature was enabled Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-01  ethernet/intel: Convert fallthrough code comments  (Jeff Kirsher)
Convert all the remaining 'fall through' code comments to the newer 'fallthrough;' keyword. Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
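The conversion looks roughly like this; the switch below is illustrative, with made-up enum values and helpers rather than code from any particular Intel driver:

    switch (hw->mac.type) {
    case MAC_TYPE_A:
        setup_a(hw);
        fallthrough;            /* replaces the old "fall through" comment */
    case MAC_TYPE_B:
        setup_b(hw);
        break;
    default:
        break;
    }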
2020-07-01  natsemi: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Thus, there is no need to call PCI helper functions like pci_enable_device(), which is not recommended; they are removed. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
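A generic sketch of the shape these drivers end up with after conversion (placeholder names, not the natsemi code):

    static int __maybe_unused foo_suspend(struct device *dev) { return 0; }
    static int __maybe_unused foo_resume(struct device *dev)  { return 0; }

    /* bind the callbacks through dev_pm_ops instead of the legacy hooks */
    static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

    static struct pci_driver foo_driver = {
        .name       = "foo",
        .id_table   = foo_pci_tbl,
        .probe      = foo_probe,
        .remove     = foo_remove,
        .driver.pm  = &foo_pm_ops,   /* replaces legacy .suspend/.resume entries */
    };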
2020-07-01  vxge: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Use a "struct dev_pm_ops" variable to bind the callbacks. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  ksz884x: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Thus, there is no need to call PCI helper functions like pci_enable_wake(), pci_save/restore_state() and pci_set_power_state(). Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  mlx4: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Use a "struct dev_pm_ops" variable to bind the callbacks. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  benet: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Thus, there is no need to call PCI helper functions like pci_enable/disable_device(), pci_save/restore_state() and pci_set_power_state(). Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  sundance: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Thus, there is no need to call PCI helper functions like pci_enable/disable_device(), pci_save/restore_state() and pci_set_power_state(). Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  liquidio: use generic power management  (Vaibhav Gupta)
Drivers should not use legacy power management, as they then have to manage power states and related operations for the device themselves. This driver was handling them with the help of PCI helper functions. With generic PM, all essentials will be handled by the PCI core; the driver needs to do only device-specific operations. The driver previously defined empty-body .suspend() and .resume() callbacks. They can now be defined as NULL and bound with a "struct dev_pm_ops" variable. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-01  ena_netdev: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core will take care of the required tasks and drivers should do only device-specific operations. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>