From 4cb2f2817c9307eb1ed73c11459c409414075230 Mon Sep 17 00:00:00 2001
From: MellanoxBSP <32340777+MellanoxBSP@users.noreply.github.com>
Date: Fri, 1 Feb 2019 11:18:48 +0200
Subject: [PATCH] Mellanox drivers backport patching from kernels 4.20-5.0-next
 (#79)

* Mellanox drivers backport patching from kernels 4.20-5.1

Backport from upstream kernels v4.20, v5.0. Patches include:
- Amendment for cooling device
- Fix for mlxreg-fan driver
- Fixes for Mellanox systems DMI names.

The list of commits is below:

commit fb7255a923115188ac134bb562d1c44f4f3a413b
Author: Vadim Pasternak
Date: Thu Nov 15 17:27:00 2018 +0000

platform/x86: mlx-platform: Convert to use SPDX identifier

Reduce the size of duplicated comments by switching to use of an SPDX
identifier.

Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit e2883859dd0b4ee6fc70151e417fed8680efaa4b
Author: Vadim Pasternak
Date: Thu Nov 15 17:26:58 2018 +0000

platform/x86: mlx-platform: Allow mlxreg-io driver activation for new systems

Allow mlxreg-io platform driver activation for the next generation
systems, in particular for the MQM87xx, MSN34xx, MSN37xx types, which
have:
- extended reset cause bits related to ComEx reset, voltage devices
  firmware upgrade and system platform reset;
- an additional CPLD device;
- JTAG select capability.

Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit 440f343df1996302d9a3904647ff11b689bf27bc
Author: Vadim Pasternak
Date: Thu Nov 15 17:26:57 2018 +0000

platform/x86: mlx-platform: Fix LED configuration

Exchange the LED configuration between the MSN201x and next generation
system types. The bug was introduced when LED driver activation was
added to mlx-platform: the LED configuration for the three new systems
MQMB7, MSN37, MSN34 was assigned to MSN21 and vice versa. This bug
affects MSN21 only and likely requires a backport to v4.19.
Fixes: 1189456b1cce ("platform/x86: mlx-platform: Add LED platform driver activation")
Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit edd45cba5ed7f53974475ddc9a1453c2c87b3328
Author: Vadim Pasternak
Date: Thu Nov 15 17:26:56 2018 +0000

platform/x86: mlx-platform: Fix tachometer registers

Shift by one the registers for tachometers (7 - 12). This fix is
relevant for the same new systems MQMB7, MSN37, MSN34, which are about
to be released to the customers. At the moment, none of them is at
customer sites, so the customers will not suffer from this change.
This fix is necessary because the register previously used for
tachometer 7 has since been reserved for the second PWM on newer
systems, which are not yet supported in the mlx-platform driver, so
the registers of tachometers 7-12 have been shifted by one.

Fixes: 0378123c5800 ("platform/x86: mlx-platform: Add mlxreg-fan platform driver activation")
Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit 3752e5c764b4fb1abe43c78f635bf019c8e98db2
Author: Vadim Pasternak
Date: Thu Nov 15 17:26:55 2018 +0000

platform/x86: mlx-platform: Rename new systems product names

Rename the product names for next generation systems QMB7, SN37, SN34
to MQMB7, MSN37, MSN34 respectively. All these systems are about to be
released to the customers. At the moment, none of them is at customer
sites, so the customers will not suffer from this change. The names
have been changed due to a marketing decision.
Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit 59e96ec85e8e59170f6d5cba028e199a2e5dfe67
Author: Vadim Pasternak
Date: Thu Nov 15 17:26:54 2018 +0000

platform/x86: mlx-platform: Add definitions for new registers

Add definitions for new registers:
- CPLD3 version - next generation systems are equipped with three CPLDs;
- two reset cause registers, which store the system reset reason (like
  system failures, upgrade failures and so on).

Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit a421ce088ac8eb3591d2a1ae0ded2dcece72018f
Author: Vadim Pasternak
Date: Tue Nov 20 06:52:03 2018 +0000

mlxsw: core: Extend cooling device with cooling levels

Extend the cooling device with a cooling levels vector to allow more
flexibility in PWM setting. The thermal zone algorithm operates with
numerical states for PWM setting. Each state is an index, defined in
the range from 0 to 10, and is mapped to the relevant duty cycle value
written to the PWM controller. With the current definition the fan
speed is set to 0% for state 0, 10% for state 1, and so on up to 100%
for the maximum state 10.
Some systems have a limitation on the minimum PWM speed. On such
systems, setting the PWM speed to 0% just removes the ability to
increase the speed again, and the device will stall at zero speed.
Cooling levels allow configuring the state vector according to the
particular system requirements. For example, if the PWM speed is not
allowed to go below 30%, the cooling levels could be configured as
30%, 30%, 30%, 30%, 40%, 50% and so on.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
Signed-off-by: David S. Miller

commit 3f9ffa5c3a25bf2a3c880b07f620c8ef029dc261
Author: Vadim Pasternak
Date: Tue Nov 20 23:16:36 2018 +0000

hwmon: (mlxreg-fan) Modify macros for tachometer fault status reading

Modify the macros for tachometer fault status reading to make them
simpler and clearer.
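The cooling-levels mapping described in the "Extend cooling device with cooling levels" commit above can be sketched in a few lines of C. This is a simplified illustration with hypothetical names, not the mlxsw driver's actual code; the array values follow the 30%-minimum example from the commit message:

```c
#include <assert.h>

#define MAX_COOLING_STATE 10	/* hypothetical name; states 0..10 as in the text */

/*
 * Cooling levels: thermal-zone state -> PWM duty cycle in percent.
 * States 0..3 are clamped to the 30% hardware minimum; above that the
 * mapping is the usual linear state * 10%.
 */
static const unsigned int cooling_levels[MAX_COOLING_STATE + 1] = {
	30, 30, 30, 30, 40, 50, 60, 70, 80, 90, 100
};

static unsigned int state_to_duty(unsigned int state)
{
	if (state > MAX_COOLING_STATE)
		state = MAX_COOLING_STATE;
	return cooling_levels[state];
}
```

With this vector, the thermal governor can still request state 0, but the fan is never driven below the platform's minimum speed and therefore cannot stall at 0%.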
Signed-off-by: Vadim Pasternak
Signed-off-by: Guenter Roeck

commit 243cfe3fb8978c7eec24511aba7dac98819ed896
Author: Vadim Pasternak
Date: Fri Nov 16 13:47:11 2018 +0000

hwmon: (mlxreg-fan) Fix macros for tacho fault reading

Fix the macros for tachometer fault reading. This fix is relevant for
three Mellanox systems MQMB7, MSN37, MSN34, which are about to be
released to the customers. At the moment, none of them is at customer
sites.

Fixes: 65afb4c8e7e4 ("hwmon: (mlxreg-fan) Add support for Mellanox FAN driver")
Signed-off-by: Vadim Pasternak
Signed-off-by: Guenter Roeck

Signed-off-by: Vadim Pasternak

- Mellanox drivers backport patching from next kernels 5.1

Backport from upstream kernels accepted for v5.1. Patches include:
- Amendments for thermal zone and hwmon in the mlxsw module.
- Support for new Mellanox platforms.

The list of commits is below:

commit a36d97a368f41a6d13e5c0e8b12d2d18ee81b680
Author: Vadim Pasternak
Date: Wed Dec 12 23:59:16 2018 +0000

platform/x86: mlx-platform: Add support for new VMOD0007 board name

Add support for the new Mellanox system type MSN3700C, which is a cost
reduced flavor of the MSN37 system class.

Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit 572b2266b9005e8561d1942b238ae2b1c90e5758
Author: Vadim Pasternak
Date: Wed Dec 12 23:59:15 2018 +0000

platform/x86: mlx-platform: Add support for fan capability registers

Provide support for the fan capability registers for the next
generation systems of types MQM87xx, MSN34xx, MSN37xx. These new
registers provide the configuration for tachometers and fan drawers
connectivity. Use these registers in the next generation led, fan and
hotplug structures in order to distinguish between systems which have
minor configuration differences. This reduces the amount of code used
to describe such systems.
Signed-off-by: Vadim Pasternak
Signed-off-by: Darren Hart (VMware)

commit 1c10399c663df363846fb70cf1cc871afaadcf59 (HEAD, combined_queue)
Author: Vadim Pasternak
Date: Thu Jan 17 14:54:11 2019 +0000

squash to: "mlxsw: minimal: Add support for ethtool interface"

Fix a compilation error reported by the kbuild test robot. In
mlxsw_m_port_switchdev_init(), replace
mlxsw_m_port->dev->switchdev_ops = &mlxsw_m_port_switchdev_ops;
with
SWITCHDEV_SET_OPS(mlxsw_m_port->dev, &mlxsw_m_port_switchdev_ops);
to make the minimal module independent of NET_SWITCHDEV, since
SWITCHDEV is not mandatory for the minimal driver.

Signed-off-by: Vadim Pasternak

commit 90818ac3344d31bf9eadb466243bf7e13d058107
Author: Vadim Pasternak
Date: Sun Dec 30 13:25:18 2018 +0000

mlxsw: core: Extend thermal module with per QSFP module thermal zones

Add a dedicated thermal zone for each QSFP/SFP module. The current
temperature is obtained from the module's temperature sensor, and the
trip points are set based on the warning and critical thresholds read
from the module. A cooling device (fan) is bound to all the thermal
zones. The thermal zone governor is set to user space in order to
avoid collisions between thermal zones. For example, one thermal zone
might want to increase the speed of the fan, whereas another one would
like to decrease it. Deferring this decision to user space allows the
user to take the most suitable decision.
Signed-off-by: Vadim Pasternak
Signed-off-by: Ido Schimmel

commit 2c68cc78f1b6a2d6908e84131e1325fd0af45755
Author: Vadim Pasternak
Date: Sun Jan 13 14:10:00 2019 +0000

mlxsw: i2c: Extend initialization by querying firmware data

Extend the initialization flow with query requests for firmware
configuration data in order to obtain the current firmware version.

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit ba41e5bfd3cb7e14b32bb46a5006262dfec3605d
Author: Vadim Pasternak
Date: Sun Jan 13 14:09:59 2019 +0000

mlxsw: i2c: Extend initialization by querying resources data

Extend the initialization flow with query requests for chip resources
data in order to obtain chip-specific capabilities, like the number of
ports.

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit cdfe1c083d0869757239ed1c2fc09ef74ad60141
Author: Vadim Pasternak
Date: Sun Jan 13 14:09:58 2019 +0000

mlxsw: i2c: Extend input parameters list of command API

Extend the input parameters list of the command API in mlxsw_i2c_cmd()
in order to support initialization commands. Up until now, only access
commands were supported by the I2C driver.

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit bb92562d37e08de040f75c604e3e9bd96d830b3a
Author: Vadim Pasternak
Date: Sun Jan 13 14:09:57 2019 +0000

mlxsw: i2c: Modify input parameter name in initialization API

Change the input parameter name "resource" to "res" in
mlxsw_i2c_init() in order to align it with mlxsw_pci_init().

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit e8d4edb71443616911f36a8f45cb93e15a47ad4c
Author: Vadim Pasternak
Date: Sun Jan 13 14:09:56 2019 +0000

mlxsw: i2c: Fix comment misspelling

Fix the comment for mlxsw_i2c_write_cmd().
Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit d1908ba110713493f9a100b58b1fe12335178fbd
Author: Vadim Pasternak
Date: Sun Jan 13 14:09:55 2019 +0000

mlxsw: core: Move resource query API to common location

Move mlxsw_pci_resources_query() to a common location to allow reuse by
the different drivers and over all the supported physical buses. Rename
it to mlxsw_core_resources_query().

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 534d3d5c66565a6d64daf8accb7e5db00eaeb9a3
Author: Vadim Pasternak
Date: Sun Dec 30 17:27:35 2018 +0000

mlxsw: minimal: Add support for ethtool interface

Add support for the ethtool interface to allow reading QSFP/SFP module
content through the 'ethtool -m' command. The minimal driver is chip
independent and uses the I2C bus for chip access. Its purpose is to
support chassis management on systems equipped with a Mellanox network
switch device, for example from a BMC (Board Management Controller)
device. The patch allows obtaining QSFP/SFP module info through
ethtool.

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 216cc3945f481a53f9063c5c399ac8aac69aed27
Author: Vadim Pasternak
Date: Sun Dec 30 17:27:34 2018 +0000

mlxsw: minimal: Make structures and variables names shorter

Replace "mlxsw_minimal" with "mlxsw_m" in order to improve code
readability.

Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit b772d504c3c04deb0fad51ba69d1af29cbeba303
Author: Vadim Pasternak
Date: Sun Dec 30 17:27:33 2018 +0000

mlxsw: core: Move ethtool modules callbacks bodies to common location

Move the bodies of the ethtool callbacks get_module_info() and
get_module_eeprom() to a common location to allow reuse by the
different mlxsw drivers.
Signed-off-by: Vadim Pasternak
Acked-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit cf7f555463b0370f517f3cb0bfb547fa3bf57305
Author: Vadim Pasternak
Date: Thu Nov 22 09:58:33 2018 +0000

mlxsw: core: thermal zone binding to an external cooling device

Allow thermal zone binding to an external cooling device from the
cooling devices white list. It provides support for Mellanox next
generation systems on which the cooling device logic is not controlled
through the switch registers.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 1076dc7da451b1770efcd0a6e56c6ca59930bdf2
Author: Vadim Pasternak
Date: Tue Nov 20 22:53:27 2018 +0000

mlxsw: core: Add QSFP module temperature label attribute to hwmon

Add a label attribute to the hwmon object for exposing the QSFP
module's temperature sensor name. Modules are labeled as "front panel
xxx". The label is used by utilities such as "sensors":
front panel 001: +0.0C (crit = +0.0C, emerg = +0.0C)
..
front panel 020: +31.0C (crit = +70.0C, emerg = +80.0C)
..
front panel 056: +41.0C (crit = +70.0C, emerg = +80.0C)

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit a44463600400715b95d2e950f3b1e6f93cbf724b
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:23 2018 +0000

mlxsw: core: Extend hwmon interface with QSFP module temperature attributes

Add new attributes to the hwmon object for exposing QSFP module
temperature input, fault indication, and critical and emergency
thresholds. The temperature input and fault indication are read from
the Management Temperature Bulk Register. The temperature thresholds
are read from the Management Cable Info Access Register.
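The "front panel xxx" label format quoted above is simple zero-padded formatting; a minimal C sketch (illustrative helper, not the driver's code) could be:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/*
 * Build an hwmon label like "front panel 001" for the i-th module.
 * The index is zero-padded to three digits, matching the sensors
 * output quoted in the commit message.
 */
static void module_label(char *buf, size_t len, int index)
{
	snprintf(buf, len, "front panel %03d", index);
}
```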
Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit fbc40d0c2e0490f1f3f57e7b19eccf8279a62d7e
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:22 2018 +0000

mlxsw: core: Extend hwmon interface with fan fault attribute

Add a new fan hwmon attribute for exposing fan faults (the fault
indication is read from the Fan Out of Range Event Register).

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 840ffb0c319eeafdad6b862b1aa687966991b2da
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:21 2018 +0000

mlxsw: core: Rename cooling device

Rename the cooling device from "Fan" to "mlxsw_fan". "Fan" is too
common a name and is misleading when interpreted by the user; for
example, the name "Fan" could be used by ACPI.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 125a0a232afda25362c83f9b984c8fa71458b051
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:20 2018 +0000

mlxsw: core: Replace thermal temperature trips with defines

Replace hardcoded thermal temperature trip values with defines.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 6a7d5f9cf01d24581fb5a533c609d4534f2585c3
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:19 2018 +0000

mlxsw: core: Modify thermal zone definition

Modify the thermal zone trip point settings for better alignment with
the system thermal requirements. Add hysteresis thresholds for the
thermal trips in order to avoid throttling around a thermal trip
point. If a hysteresis temperature is not considered, the PWM can flip
up and down around the thermal trip point boundary.
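The hysteresis behavior described in the "Modify thermal zone definition" commit above can be illustrated with a small C sketch (hypothetical names and thresholds, not the driver's code): the fan only steps back down once the temperature drops below trip - hyst, so a reading hovering right at the trip point does not toggle the PWM.

```c
#include <assert.h>

/* Hypothetical trip point with hysteresis, in millidegrees Celsius. */
#define TRIP_TEMP 75000
#define TRIP_HYST  5000

/*
 * Return the new "high speed" decision given the previous one:
 * switch on at TRIP_TEMP, but only switch off again below
 * TRIP_TEMP - TRIP_HYST. Inside the band, keep the current state.
 */
static int fan_high(int prev_high, int temp)
{
	if (temp >= TRIP_TEMP)
		return 1;
	if (temp < TRIP_TEMP - TRIP_HYST)
		return 0;
	return prev_high;	/* inside the hysteresis band */
}
```

Without the hysteresis band, a temperature oscillating around 75 °C would flip the fan speed on every reading, which is exactly the side effect the commit avoids.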
Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit fd6bdf864a1f3cd5035de84ebceed573b7451374
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:18 2018 +0000

mlxsw: core: Set different thermal polling time based on bus frequency capability

Add a low frequency bus capability in order to allow core
functionality separation based on bus type. The driver can run over
PCIe, which is considered a high frequency bus, or I2C, which is
considered a low frequency bus. In the latter case time settings, for
example the thermal polling interval, should be increased. Use
different thermal monitoring based on bus type: for the I2C bus the
time is set to 20 seconds, while for PCIe a 1 second polling interval
is used.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit 2d382b8a7a63268506fa9b1d869b7139d9ee06e0
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:17 2018 +0000

mlxsw: core: Add core environment module for QSFP module temperature thresholds reading

Add a new core_env module to allow reading of module temperature
warning and critical thresholds. A new internal API reads the
temperature thresholds from the modules which are equipped with a
thermal sensor. These thresholds are to be exposed by the hwmon module
and to be used by the thermal module.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit a8a14ab2228eb04b4b9a38e389c1326fb634cbd3
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:16 2018 +0000

mlxsw: reg: Add Fan Out of Range Event Register

Add the FORE (Fan Out of Range Event Register), which is used for fan
fault reading.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit b0a01ca026839e5e93c7c53a822b407ae5ac0a7b
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:15 2018 +0000

mlxsw: reg: Add Management Temperature Bulk Register

Add the MTBR (Management Temperature Bulk Register), which is used for
port temperature reading in bulk mode.
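The bus-dependent polling choice above amounts to a one-line selection; a minimal sketch with hypothetical constant names (the 20 s / 1 s values come from the commit message):

```c
#include <assert.h>

/* Hypothetical constants: thermal polling interval in milliseconds. */
#define THERMAL_POLL_SLOW_MS 20000	/* I2C: low frequency bus */
#define THERMAL_POLL_FAST_MS  1000	/* PCIe: high frequency bus */

/*
 * Pick the thermal polling interval from the bus capability flag,
 * as the low-frequency capability in the text implies.
 */
static int thermal_poll_ms(int bus_is_low_frequency)
{
	return bus_is_low_frequency ? THERMAL_POLL_SLOW_MS
				    : THERMAL_POLL_FAST_MS;
}
```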
Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

commit d7bb8405da07703b8466fcb1fdc7a86d5cf30ae9
Author: Vadim Pasternak
Date: Tue Nov 20 13:04:14 2018 +0000

mlxsw: spectrum: Move QSFP EEPROM definitions to common location

Move the QSFP EEPROM definitions from the spectrum driver to a common
location in order to make them available to other mlxsw modules. They
are common to all kinds of chips and relate to the SFF specifications
8024, 8436, 8472 and 8636 rather than to a chip type.

Signed-off-by: Vadim Pasternak
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel

Signed-off-by: Vadim Pasternak

Mellanox drivers backport patching from kernels 4.20-5.0

Backport from upstream kernels v4.20, v5.0. Patches include:
- Amendment for cooling device
- Fix for mlxreg-fan driver
- Fixes for Mellanox systems DMI names.

The list of commits is below:

hwmon: (mlxreg-fan) Add support for fan capability registers

Add support for the fan capability registers in order to distinguish
between systems which have minor fan configuration differences. This
reduces the amount of code used to describe such systems. The
capability registers provide system specific information about the
number of physically connected tachometers and a system specific fan
speed scale parameter. For example, one system can be equipped with
twelve fan tachometers, while another with, for example, eight or six.
Or one system should use the default fan speed divider value, while
another has a scale parameter defined in hardware, which should be
used for the divider setting. Reading this information from the
capability registers allows using the same fan structure for systems
with such differences.

Signed-off-by: Vadim Pasternak

hwmon: (pmbus) Fix driver info initialization in probe routine

Fix tps53679_probe() by using a dynamically allocated
"pmbus_driver_info" structure instead of a static one. Usage of a
static structure causes the field "vrm_version" to be overwritten
- when several tps53679 devices with different "vrm_version" values
are used within the same system, the last probed device overwrites
this field for all the others.

Fixes: 610526527a13e4c9 ("hwmon: (pmbus) Add support for Texas Instruments tps53679 device")
Signed-off-by: Vadim Pasternak

Signed-off-by: Vadim Pasternak

Mellanox drivers backport patching for next kernel 5.2

Backport from upstream kernels v4.20, v5.0. Patches include:
- Support capability registers for Mellanox platform driver.
- Support capability registers for leds-mlxreg driver.
- Support capability registers for mlxreg-fan driver.
- Fix for probe routine of tps53679 driver.

platform/x86: mlx-platform: Add support for fan speed capability register

Provide support for the fan speed capability register for the next
generation systems of types MQM87xx, MSN34xx, MSN37xx. This new
register provides configuration for tachometers and fan drawers
connectivity. Use this scale value for fan speed calculation.

Signed-off-by: Vadim Pasternak

platform/x86: mlx-platform: Add support for CPLD4 register

Provide support for the CPLD4 version register, relevant for the
MSN8xx system family.

Signed-off-by: Vadim Pasternak

leds: mlxreg: Add support for capability register

Add support for the capability register in order to distinguish
between systems which have minor LED configuration differences. This
reduces the amount of code used to describe such systems. For example,
one system can be equipped with six LEDs, while another with only
four. Reading this information from the capability register allows
using the same LED structure for such systems.
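The tps53679 static-info bug fixed above is a generic C pitfall and can be sketched outside the kernel (a hypothetical miniature, not the pmbus code): if one shared static info structure is patched per device, the last probe wins for every device, while a per-device copy preserves each device's value. The real fix duplicates a template per device (e.g. via devm_kmemdup()).

```c
#include <assert.h>
#include <stdlib.h>

/* Miniature stand-in for pmbus_driver_info. */
struct drv_info {
	int vrm_version;
};

static struct drv_info shared_info;	/* buggy: one copy for all devices */

/* Buggy probe: every device ends up pointing at the same structure. */
static struct drv_info *probe_buggy(int vrm)
{
	shared_info.vrm_version = vrm;
	return &shared_info;
}

/* Fixed probe: allocate and fill a per-device copy. */
static struct drv_info *probe_fixed(int vrm)
{
	struct drv_info *info = malloc(sizeof(*info));

	if (info)
		info->vrm_version = vrm;
	return info;
}
```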
Signed-off-by: Vadim Pasternak

Signed-off-by: Vadim Pasternak

* Add patch for CPLD4
---
 ...-Support-extended-port-numbers-for-S.patch |    75 +
 ...-mlxsw-thermal-monitoring-amendments.patch |  4072 +++++
 ...-introduce-watchdog-driver-for-Mella.patch |   839 +
 ...-Support-port-numbers-initialization.patch |    86 +
 patch/0030-update-kernel-config.patch         |    25 +
 ...1-mlxsw-Align-code-with-kernel-v-5.0.patch | 13374 ++++++++++++++++
 ...sonic-update-kernel-config-mlsxw-pci.patch |    26 +
 ...driver-info-initialization-in-probe-.patch |    42 +
 ...mal-disable-highest-zone-calculation.patch |    35 +
 ...-x86-mlx-platform-Add-CPLD4-register.patch |    63 +
 patch/series                                  |    10 +
 11 files changed, 18647 insertions(+)
 create mode 100644 patch/0026-mlxsw-qsfp_sysfs-Support-extended-port-numbers-for-S.patch
 create mode 100644 patch/0027-mlxsw-thermal-monitoring-amendments.patch
 create mode 100644 patch/0028-watchdog-mlx-wdt-introduce-watchdog-driver-for-Mella.patch
 create mode 100644 patch/0029-mlxsw-qsfp_sysfs-Support-port-numbers-initialization.patch
 create mode 100644 patch/0030-update-kernel-config.patch
 create mode 100644 patch/0031-mlxsw-Align-code-with-kernel-v-5.0.patch
 create mode 100644 patch/0032-sonic-update-kernel-config-mlsxw-pci.patch
 create mode 100644 patch/0033-hwmon-pmbus-Fix-driver-info-initialization-in-probe-.patch
 create mode 100644 patch/0034-mlxsw-thermal-disable-highest-zone-calculation.patch
 create mode 100644 patch/0035-platform-x86-mlx-platform-Add-CPLD4-register.patch

diff --git a/patch/0026-mlxsw-qsfp_sysfs-Support-extended-port-numbers-for-S.patch b/patch/0026-mlxsw-qsfp_sysfs-Support-extended-port-numbers-for-S.patch
new file mode 100644
index 000000000000..0783cffb8efa
--- /dev/null
+++ b/patch/0026-mlxsw-qsfp_sysfs-Support-extended-port-numbers-for-S.patch
@@ -0,0 +1,75 @@
+From c6a95c1ea4518a19cf46e8d0c844ae980df4c5da Mon Sep 17 00:00:00 2001
+From: Vadim Pasternak
+Date: Thu, 3 Jan 2019 18:05:01 +0000
+Subject: [PATCH v1] mlxsw: qsfp_sysfs: Support extended port numbers for
+ Spectrum-2 chip
+
+Add system type detection through the DMI table in order to distinguish
+between the systems supporting up to 64 and up to 128 ports.
+
+Signed-off-by: Vadim Pasternak
+---
+ drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c | 19 +++++++++++++++++--
+ 1 file changed, 17 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c
+index c072b91..bee2a08 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c
+@@ -47,7 +47,7 @@
+ #define MLXSW_QSFP_SUB_PAGE_NUM 3
+ #define MLXSW_QSFP_SUB_PAGE_SIZE 48
+ #define MLXSW_QSFP_LAST_SUB_PAGE_SIZE 32
+-#define MLXSW_QSFP_MAX_NUM 64
++#define MLXSW_QSFP_MAX_NUM 128
+ #define MLXSW_QSFP_MIN_REQ_LEN 4
+ #define MLXSW_QSFP_STATUS_VALID_TIME (120 * HZ)
+ #define MLXSW_QSFP_MAX_CPLD_NUM 3
+@@ -88,6 +88,7 @@ struct mlxsw_qsfp {
+ };
+
+ static int mlxsw_qsfp_cpld_num = MLXSW_QSFP_MIN_CPLD_NUM;
++static int mlxsw_qsfp_num = MLXSW_QSFP_MAX_NUM / 2;
+
+ static int
+ mlxsw_qsfp_query_module_eeprom(struct mlxsw_qsfp *mlxsw_qsfp, u8 index,
+@@ -238,6 +239,13 @@ static int mlxsw_qsfp_dmi_set_cpld_num(const struct dmi_system_id *dmi)
+ 	return 1;
+ };
+
++static int mlxsw_qsfp_dmi_set_qsfp_num(const struct dmi_system_id *dmi)
++{
++	mlxsw_qsfp_num = MLXSW_QSFP_MAX_NUM;
++
++	return 1;
++};
++
+ static const struct dmi_system_id mlxsw_qsfp_dmi_table[] = {
+ 	{
+ 		.callback = mlxsw_qsfp_dmi_set_cpld_num,
+@@ -253,6 +261,13 @@ static const struct dmi_system_id mlxsw_qsfp_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MSN27"),
+ 		},
+ 	},
++	{
++		.callback = mlxsw_qsfp_dmi_set_qsfp_num,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Mellanox Technologies"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MSN37"),
++		},
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(dmi, mlxsw_qsfp_dmi_table);
+@@ -283,7 +298,7 @@ int mlxsw_qsfp_init(struct mlxsw_core *mlxsw_core,
+ 	mlxsw_qsfp->bus_info = mlxsw_bus_info;
+ 	mlxsw_bus_info->dev->platform_data = mlxsw_qsfp;
+
+-	for (i = 1; i <= MLXSW_QSFP_MAX_NUM; i++) {
++	for (i = 1; i <= mlxsw_qsfp_num; i++) {
+ 		mlxsw_reg_pmlp_pack(pmlp_pl, i);
+ 		err = mlxsw_reg_query(mlxsw_qsfp->core, MLXSW_REG(pmlp),
+ 				      pmlp_pl);
+--
+2.1.4
+
diff --git a/patch/0027-mlxsw-thermal-monitoring-amendments.patch b/patch/0027-mlxsw-thermal-monitoring-amendments.patch
new file mode 100644
index 000000000000..9abbc574e9c6
--- /dev/null
+++ b/patch/0027-mlxsw-thermal-monitoring-amendments.patch
@@ -0,0 +1,4072 @@
+From e663de0d7c184c605db27519e3f23e9ec835d845 Mon Sep 17 00:00:00 2001
+From: Vadim Pasternak
+Date: Fri, 14 Dec 2018 01:38:16 +0000
+Subject: [PATCH mellanox 4.20-4.21 backport] mlxsw thermal monitoring
+ amendments
+
+This patchset extends mlxsw hwmon and thermal with module temperature
+attributes (input, fault, critical and emergency thresholds) and adds
+an hwmon fault for FAN.
+
+The new hwmon attributes, such as FAN faults and port temperature
+fault, will improve system monitoring abilities.
+
+Introduction of per QSFP module thermal zones
+
+The motivation is:
+- To support a multiport network switch equipped with a big number of
+  temperature sensors (128+) and with a single cooling device.
+- To provide a user interface that allows optimizing the thermal
+  monitoring flow.
+
+When multiple sensors are mapped to the same cooling device, the
+cooling device should be set according to the worst sensor from the
+sensors associated with this cooling device. The system shall implement
+cooling control based on thermal monitoring of the critical temperature
+sensors. In many cases, in order to achieve an optimal thermal
+solution, user involvement is required.
+
+Add support for the ethtool interface to allow reading QSFP/SFP module
+content through the 'ethtool -m' command.
+
+It adds the next additional attributes to sysfs:
+per each port:
+- tempX_crit (read from Management Cable Info Access Register);
+- tempX_fault (read from Management Temperature Bulk Register);
+- tempX_emergency (read from Management Cable Info Access Register);
+- tempX_input (read from Management Temperature Bulk Register);
+  where X is from 2 (1 is for ASIC ambient temperature) to the number
+  of ports equipped within the system.
+per each tachometer:
+- fanY_fault (read from Fan Out of Range Event Register);
+  where Y is from 1 to the number of rotors (FANs) equipped within the
+  system.
+Temperature input, critical and emergency attributes are supposed to be
+exposed by the sensors utilities of the lm-sensors package, like:
+front panel 001: +51.0C (highest = +52.0C)
+front panel 002: +62.0C (crit = +70.0C, emerg = +80.0C)
+...
+front panel 055: +60.0C (crit = +70.0C, emerg = +80.0C)
+
+Add definitions for new registers:
+- CPLD3 version - next generation systems are equipped with three CPLDs;
+- two reset cause registers, which store the system reset reason (like
+  system failures, upgrade failures and so on).
+
+Below is the list of the commits included in the patchset.
+
+mlxsw: spectrum: Move QSFP EEPROM definitions to common location
+
+Move the QSFP EEPROM definitions from the spectrum driver to a common
+location in order to make them available to other mlxsw modules. They
+are common to all kinds of chips and relate to the SFF specifications
+8024, 8436, 8472 and 8636 rather than to a chip type.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: reg: Add Management Temperature Bulk Register
+
+Add the MTBR (Management Temperature Bulk Register), which is used for
+port temperature reading in bulk mode.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: reg: Add Fan Out of Range Event Register
+
+Add the FORE (Fan Out of Range Event Register), which is used for fan
+fault reading.
+
+Signed-off-by: Vadim Pasternak
+---
+
+mlxsw: core: Add core environment module for QSFP module temperature thresholds reading
+
+Add a new core_env module to allow reading of module temperature
+warning and critical thresholds.
+
+A new internal API reads the temperature thresholds from the modules
+which are equipped with a thermal sensor. These thresholds are to be
+exposed by the hwmon module and to be used by the thermal module.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Set different thermal polling time based on bus frequency capability
+
+Add a low frequency bus capability in order to allow core
+functionality separation based on bus type. The driver can run over
+PCIe, which is considered a high frequency bus, or I2C, which is
+considered a low frequency bus. In the latter case time settings, for
+example the thermal polling interval, should be increased.
+
+Use different thermal monitoring based on bus type: for the I2C bus
+the time is set to 20 seconds, while for PCIe a 1 second polling
+interval is used.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Modify thermal zone definition
+
+Modify the thermal zone trip point settings for better alignment with
+the system thermal requirements.
+Add hysteresis thresholds for the thermal trips in order to avoid
+throttling around a thermal trip point. If a hysteresis temperature is
+not considered, the PWM can flip up and down around the thermal trip
+point boundary.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Replace thermal temperature trips with defines
+
+Replace hardcoded thermal temperature trip values with defines.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Extend cooling device with cooling levels
+
+Extend the cooling device with a cooling levels vector to allow more
+flexibility in PWM setting.
+The thermal zone algorithm operates with numerical states for PWM
+setting. Each state is an index, defined in the range from 0 to 10,
+and is mapped to the relevant duty cycle value written to the PWM
+controller. With the current definition the fan speed is set to 0% for
+state 0, 10% for state 1, and so on up to 100% for the maximum state
+10.
+Some systems have a limitation on the minimum PWM speed. On such
+systems, setting the PWM speed to 0% just removes the ability to
+increase the speed again, and the device will stall at zero speed.
+Cooling levels allow configuring the state vector according to the
+particular system requirements. For example, if the PWM speed is not
+allowed to go below 30%, the cooling levels could be configured as
+30%, 30%, 30%, 30%, 40%, 50% and so on.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Rename cooling device
+
+Rename the cooling device from "Fan" to "mlxsw_fan".
+"Fan" is too common a name and is misleading when interpreted by the
+user; for example, the name "Fan" could be used by ACPI.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Extend hwmon interface with fan fault attribute
+
+Add a new fan hwmon attribute for exposing fan faults (the fault
+indication is read from the Fan Out of Range Event Register).
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Extend hwmon interface with QSFP module temperature attributes
+
+Add new attributes to the hwmon object for exposing QSFP module
+temperature input, fault indication, and critical and emergency
+thresholds. The temperature input and fault indication are read from
+the Management Temperature Bulk Register. The temperature thresholds
+are read from the Management Cable Info Access Register.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: thermal zone binding to an external cooling device
+
+Allow thermal zone binding to an external cooling device from the
+cooling devices white list.
+It provides support for Mellanox next generation systems on which
+the cooling device logic is not controlled through the switch
+registers.
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Add QSFP module temperature label attribute to hwmon
+
+Add a label attribute to the hwmon object for exposing the QSFP
+module's temperature sensor name. Modules are labeled as
+"front panel xxx".
+It will be exposed by the "sensors" utility as:
+front panel 001: +0.0C (crit = +0.0C, emerg = +0.0C)
+..
+front panel 020: +31.0C (crit = +70.0C, emerg = +80.0C)
+..
+front panel 056: +41.0C (crit = +70.0C, emerg = +80.0C)
+
+Signed-off-by: Vadim Pasternak
+Reviewed-by: Jiri Pirko
+---
+
+mlxsw: core: Extend thermal module with per QSFP module thermal zones
+
+Add a dedicated thermal zone for each QSFP/SFP module.
+Implement a per QSFP/SFP module thermal zone for mlxsw based hardware.
+Use the module temperature sensor and the module warning and critical
+temperature thresholds, obtained through the mlxsw hardware, for the
+thermal zone current and trip temperatures.
+Bind a cooling device to all of these thermal zones for fan speed
+control.
+Set the thermal zone governor to user space.
+Since all these zones share the same cooling device, this allows the
+user to take the most suitable thermal control decision, avoids
+competition between the thermal zones for control of the cooling
+device, and avoids collisions between thermal zones, when one of them
+could require increasing the cooling device speed while another could
+require reducing it.
+
+Signed-off-by: Vadim Pasternak
+---
+
+mlxsw: core: Extend thermal module with highest thermal zone detection
+
+Add detection of the highest thermal zone and user notification about
+which thermal zone currently has the highest score. It allows the
+user to make an optimal decision about thermal control management, in
+case the user intends to be involved in the thermal monitoring
+process. Otherwise the thermal flow is not affected.
+
+Thermal zone score is represented by a 32-bit unsigned integer and
+calculated according to the following formula:
+For T < TZ(t), where t is from {normal trip = 0, high trip = 1, hot
+trip = 2, critical = 3}:
+TZ(t) score = (T + (TZ(t) - T) / 2) / (TZ(t) - T) * 256 ** t;
+The highest thermal zone score s is set as MAX(TZ(t) score);
+Following this formula, if one thermal zone is at a higher trip point
+than another, the higher score is always assigned to the former.
+
+For two thermal zones located at the same kind of trip point, the
+higher score will be assigned to the zone which is closer to its next
+trip point. Thus, the highest score is always assigned objectively to
+the hottest thermal zone.
+
+The user is notified through a udev event when a new thermal zone
+reaches the highest score.
+
+Signed-off-by: Vadim Pasternak
+---
+
+mlxsw: minimal: Add support for ethtool interface
+
+Add support for the ethtool interface to allow reading QSFP/SFP
+module content through the 'ethtool -m' command.
+The minimal driver is chip independent and uses the I2C bus for chip
+access. Its purpose is to support chassis management on systems
+equipped with a Mellanox network switch device, for example from a
+BMC (Board Management Controller) device.
+The patch allows obtaining QSFP/SFP module info through ethtool.
+
+Signed-off-by: Vadim Pasternak
+---
+
+platform/x86: mlx-platform: Add support for fan direction register
+
+Provide support for the fan direction register.
+This register shows the configuration of the system fans' direction,
+which can be forward or reversed.
+For forward direction - the relevant bit is set to 0;
+For reversed direction - the relevant bit is set to 1.
+
+Signed-off-by: Vadim Pasternak
+---
+
+platform_data/mlxreg: Document fixes for core platform data
+
+Remove "led" from the description, since the structure
+"mlxreg_core_platform_data" is used not only for LED data.
+
+Signed-off-by: Vadim Pasternak
+---
+
+platform_data/mlxreg: Add capability field to core platform data
+
+Add a capability field to the "mlxreg_core_platform_data" structure.
+The purpose of this field is to provide additional info to the
+platform driver through the attribute-related capability register.
+
+Signed-off-by: Vadim Pasternak
+---
+
+platform/x86: mlx-platform: Add support for fan capability registers
+
+Provide support for the fan capability registers for next generation
+systems of types MQM87xx, MSN34xx, MSN37xx. These new registers
+provide configuration for tachometer connectivity, fan drawer
+connectivity and the tachometer speed divider.
+Use these registers for next generation led, fan and hotplug
+structures in order to distinguish between systems which have minor
+configuration differences. This reduces the amount of code used to
+describe such systems.
+
+Signed-off-by: Vadim Pasternak
+---
+
+platform/x86: mlx-platform: Add support for new VMOD0007 board name
+
+Add support for new Mellanox system type MSN3700C, which is
+a cost reduction flavor of the MSN37 system's class.
+
+Signed-off-by: Vadim Pasternak
+---
+
+hwmon: (mlxreg-fan) Add support for fan capability registers
+
+Add support for fan capability registers in order to distinguish
+between systems which have minor fan configuration differences. This
+reduces the amount of code used to describe such systems.
+The capability registers provide system specific information about
+the number of physically connected tachometers and a system specific
+fan speed scale parameter.
+For example, one system can be equipped with twelve fan tachometers,
+while another with, for example, eight or six. Or one system should
+use the default fan speed divider value, while another has a scale
+parameter defined in hardware, which should be used for the divider
+setting.
+Reading this information from the capability registers allows using
+the same fan structure for systems with such differences.
+
+Signed-off-by: Vadim Pasternak
+---
+
+leds: mlxreg: Add support for capability register
+
+Add support for the capability register in order to distinguish
+between systems which have minor LED configuration differences. This
+reduces the amount of code used to describe such systems.
+For example, one system can be equipped with six LEDs, while another
+with only four. Reading this information from the capability register
+allows using the same LED structure for such systems.
+
+Signed-off-by: Vadim Pasternak
+---
+---
+ drivers/hwmon/mlxreg-fan.c | 78 ++-
+ drivers/leds/leds-mlxreg.c | 55 +-
+ drivers/net/ethernet/mellanox/mlxsw/Makefile | 2 +-
+ drivers/net/ethernet/mellanox/mlxsw/core.h | 14 +
+ drivers/net/ethernet/mellanox/mlxsw/core_env.c | 238 +++++++
+ drivers/net/ethernet/mellanox/mlxsw/core_env.h | 17 +
+ drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c | 337 +++++++--
+ drivers/net/ethernet/mellanox/mlxsw/core_thermal.c | 761 +++++++++++++++++++--
+ drivers/net/ethernet/mellanox/mlxsw/i2c.c | 43 +-
+ drivers/net/ethernet/mellanox/mlxsw/i2c.h | 35 +-
+ drivers/net/ethernet/mellanox/mlxsw/minimal.c | 356 ++++++++--
+ drivers/net/ethernet/mellanox/mlxsw/reg.h | 139 +++-
+ drivers/platform/mellanox/mlxreg-hotplug.c | 23 +-
+ drivers/platform/mellanox/mlxreg-io.c | 4 +-
+ drivers/platform/x86/mlx-platform.c | 138 +++-
+ include/linux/platform_data/mlxreg.h | 6 +-
+ include/linux/sfp.h | 564 +++++++++++++++
+ 17 files changed, 2506 insertions(+), 304 deletions(-)
+ create mode 100644 drivers/net/ethernet/mellanox/mlxsw/core_env.c
+ create mode 100644 drivers/net/ethernet/mellanox/mlxsw/core_env.h
+ create mode 100644 include/linux/sfp.h
+
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index d8fa4be..11388c5 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -27,7 +27,10 @@
+ #define MLXREG_FAN_SPEED_MAX (MLXREG_FAN_MAX_STATE * 2)
+ #define MLXREG_FAN_SPEED_MIN_LEVEL 2 /* 20 percent */
+ #define
MLXREG_FAN_TACHO_SAMPLES_PER_PULSE_DEF 44 +-#define MLXREG_FAN_TACHO_DIVIDER_DEF 1132 ++#define MLXREG_FAN_TACHO_DIVIDER_MIN 283 ++#define MLXREG_FAN_TACHO_DIVIDER_DEF (MLXREG_FAN_TACHO_DIVIDER_MIN \ ++ * 4) ++#define MLXREG_FAN_TACHO_DIVIDER_SCALE_MAX 32 + /* + * FAN datasheet defines the formula for RPM calculations as RPM = 15/t-high. + * The logic in a programmable device measures the time t-high by sampling the +@@ -51,7 +54,7 @@ + */ + #define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \ + ((rval) + (s)) * (d))) +-#define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask))) ++#define MLXREG_FAN_GET_FAULT(val, mask) ((val) == (mask)) + #define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \ + MLXREG_FAN_MAX_STATE, \ + MLXREG_FAN_MAX_DUTY)) +@@ -360,12 +363,57 @@ static const struct thermal_cooling_device_ops mlxreg_fan_cooling_ops = { + .set_cur_state = mlxreg_fan_set_cur_state, + }; + ++static int mlxreg_fan_connect_verify(struct mlxreg_fan *fan, ++ struct mlxreg_core_data *data, ++ bool *connected) ++{ ++ u32 regval; ++ int err; ++ ++ err = regmap_read(fan->regmap, data->capability, ®val); ++ if (err) { ++ dev_err(fan->dev, "Failed to query capability register 0x%08x\n", ++ data->capability); ++ return err; ++ } ++ ++ *connected = (regval & data->bit) ? true : false; ++ ++ return 0; ++} ++ ++static int mlxreg_fan_speed_divider_get(struct mlxreg_fan *fan, ++ struct mlxreg_core_data *data) ++{ ++ u32 regval; ++ int err; ++ ++ err = regmap_read(fan->regmap, data->capability, ®val); ++ if (err) { ++ dev_err(fan->dev, "Failed to query capability register 0x%08x\n", ++ data->capability); ++ return err; ++ } ++ ++ /* ++ * Set divider value according to the capability register, in case it ++ * contains valid value. Otherwise use default value. The purpose of ++ * this validation is to protect against the old hardware, in which ++ * this register can be un-initialized. 
++ */ ++ if (regval > 0 && regval <= MLXREG_FAN_TACHO_DIVIDER_SCALE_MAX) ++ fan->divider = regval * MLXREG_FAN_TACHO_DIVIDER_MIN; ++ ++ return 0; ++} ++ + static int mlxreg_fan_config(struct mlxreg_fan *fan, + struct mlxreg_core_platform_data *pdata) + { + struct mlxreg_core_data *data = pdata->data; +- bool configured = false; ++ bool configured = false, connected = false; + int tacho_num = 0, i; ++ int err; + + fan->samples = MLXREG_FAN_TACHO_SAMPLES_PER_PULSE_DEF; + fan->divider = MLXREG_FAN_TACHO_DIVIDER_DEF; +@@ -376,6 +424,18 @@ static int mlxreg_fan_config(struct mlxreg_fan *fan, + data->label); + return -EINVAL; + } ++ ++ if (data->capability) { ++ err = mlxreg_fan_connect_verify(fan, data, ++ &connected); ++ if (err) ++ return err; ++ if (!connected) { ++ tacho_num++; ++ continue; ++ } ++ } ++ + fan->tacho[tacho_num].reg = data->reg; + fan->tacho[tacho_num].mask = data->mask; + fan->tacho[tacho_num++].connected = true; +@@ -394,13 +454,19 @@ static int mlxreg_fan_config(struct mlxreg_fan *fan, + return -EINVAL; + } + /* Validate that conf parameters are not zeros. */ +- if (!data->mask || !data->bit) { ++ if (!data->mask && !data->bit && !data->capability) { + dev_err(fan->dev, "invalid conf entry params: %s\n", + data->label); + return -EINVAL; + } +- fan->samples = data->mask; +- fan->divider = data->bit; ++ if (data->capability) { ++ err = mlxreg_fan_speed_divider_get(fan, data); ++ if (err) ++ return err; ++ } else { ++ fan->samples = data->mask; ++ fan->divider = data->bit; ++ } + configured = true; + } else { + dev_err(fan->dev, "invalid label: %s\n", data->label); +diff --git a/drivers/leds/leds-mlxreg.c b/drivers/leds/leds-mlxreg.c +index 036c214..2db2000 100644 +--- a/drivers/leds/leds-mlxreg.c ++++ b/drivers/leds/leds-mlxreg.c +@@ -1,35 +1,7 @@ +-/* +- * Copyright (c) 2017 Mellanox Technologies. All rights reserved. 
+- * Copyright (c) 2017 Vadim Pasternak +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) ++// ++// Copyright (c) 2018 Mellanox Technologies. All rights reserved. 
++// Copyright (c) 2018 Vadim Pasternak + + #include + #include +@@ -217,6 +189,7 @@ static int mlxreg_led_config(struct mlxreg_led_priv_data *priv) + struct mlxreg_led_data *led_data; + struct led_classdev *led_cdev; + int brightness; ++ u32 regval; + int i; + int err; + +@@ -226,6 +199,17 @@ static int mlxreg_led_config(struct mlxreg_led_priv_data *priv) + if (!led_data) + return -ENOMEM; + ++ if (data->capability) { ++ err = regmap_read(led_pdata->regmap, data->capability, ++ ®val); ++ if (err) { ++ dev_err(&priv->pdev->dev, "Failed to query capability register\n"); ++ return err; ++ } ++ if (!(regval & data->bit)) ++ continue; ++ } ++ + led_cdev = &led_data->led_cdev; + led_data->data_parent = priv; + if (strstr(data->label, "red") || +@@ -295,16 +279,9 @@ static int mlxreg_led_remove(struct platform_device *pdev) + return 0; + } + +-static const struct of_device_id mlxreg_led_dt_match[] = { +- { .compatible = "mellanox,leds-mlxreg" }, +- { }, +-}; +-MODULE_DEVICE_TABLE(of, mlxreg_led_dt_match); +- + static struct platform_driver mlxreg_led_driver = { + .driver = { + .name = "leds-mlxreg", +- .of_match_table = of_match_ptr(mlxreg_led_dt_match), + }, + .probe = mlxreg_led_probe, + .remove = mlxreg_led_remove, +diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile +index b58ea1b..c62ba64 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/Makefile ++++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile +@@ -1,5 +1,5 @@ + obj-$(CONFIG_MLXSW_CORE) += mlxsw_core.o +-mlxsw_core-objs := core.o ++mlxsw_core-objs := core.o core_env.o + mlxsw_core-$(CONFIG_MLXSW_CORE_HWMON) += core_hwmon.o + mlxsw_core-$(CONFIG_MLXSW_CORE_THERMAL) += core_thermal.o + mlxsw_core-$(CONFIG_MLXSW_CORE_QSFP) += qsfp_sysfs.o +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h +index ffaacc9..4fb104e 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core.h ++++ 
b/drivers/net/ethernet/mellanox/mlxsw/core.h +@@ -63,6 +63,13 @@ struct mlxsw_driver; + struct mlxsw_bus; + struct mlxsw_bus_info; + ++#define MLXSW_PORT_MAX_PORTS_DEFAULT 0x40 ++static inline unsigned int ++mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core) ++{ ++ return MLXSW_PORT_MAX_PORTS_DEFAULT; ++} ++ + void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core); + + int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver); +@@ -161,6 +168,8 @@ mlxsw_core_port_driver_priv(struct mlxsw_core_port *mlxsw_core_port) + return mlxsw_core_port; + } + ++int mlxsw_core_port_get_phys_port_name(struct mlxsw_core *mlxsw_core, ++ u8 local_port, char *name, size_t len); + int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core, + struct mlxsw_core_port *mlxsw_core_port, u8 local_port, + struct net_device *dev, bool split, u32 split_group); +@@ -331,6 +340,7 @@ struct mlxsw_bus_info { + } fw_rev; + u8 vsd[MLXSW_CMD_BOARDINFO_VSD_LEN]; + u8 psid[MLXSW_CMD_BOARDINFO_PSID_LEN]; ++ u8 low_frequency; + }; + + struct mlxsw_hwmon; +@@ -351,6 +361,10 @@ static inline int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core, + return 0; + } + ++static inline void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon) ++{ ++} ++ + #endif + + struct mlxsw_thermal; +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.c b/drivers/net/ethernet/mellanox/mlxsw/core_env.c +new file mode 100644 +index 0000000..7a15e93 +--- /dev/null ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.c +@@ -0,0 +1,238 @@ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2018 Mellanox Technologies. 
All rights reserved */ ++ ++#include ++#include ++ ++#include "core.h" ++#include "core_env.h" ++#include "item.h" ++#include "reg.h" ++ ++static int mlxsw_env_validate_cable_ident(struct mlxsw_core *core, int id, ++ bool *qsfp) ++{ ++ char eeprom_tmp[MLXSW_REG_MCIA_EEPROM_SIZE]; ++ char mcia_pl[MLXSW_REG_MCIA_LEN]; ++ u8 ident; ++ int err; ++ ++ mlxsw_reg_mcia_pack(mcia_pl, id, 0, MLXSW_REG_MCIA_PAGE0_LO_OFF, 0, 1, ++ MLXSW_REG_MCIA_I2C_ADDR_LOW); ++ err = mlxsw_reg_query(core, MLXSW_REG(mcia), mcia_pl); ++ if (err) ++ return err; ++ mlxsw_reg_mcia_eeprom_memcpy_from(mcia_pl, eeprom_tmp); ++ ident = eeprom_tmp[0]; ++ switch (ident) { ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_SFP: ++ *qsfp = false; ++ break; ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP: /* fall-through */ ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_PLUS: /* fall-through */ ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP28: /* fall-through */ ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_DD: ++ *qsfp = true; ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ return 0; ++} ++ ++static int ++mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, int module, ++ u16 offset, u16 size, void *data, ++ unsigned int *p_read_size) ++{ ++ char eeprom_tmp[MLXSW_REG_MCIA_EEPROM_SIZE]; ++ char mcia_pl[MLXSW_REG_MCIA_LEN]; ++ u16 i2c_addr; ++ int status; ++ int err; ++ ++ size = min_t(u16, size, MLXSW_REG_MCIA_EEPROM_SIZE); ++ ++ if (offset < MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH && ++ offset + size > MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) ++ /* Cross pages read, read until offset 256 in low page */ ++ size = MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH - offset; ++ ++ i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_LOW; ++ if (offset >= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) { ++ i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_HIGH; ++ offset -= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH; ++ } ++ ++ mlxsw_reg_mcia_pack(mcia_pl, module, 0, 0, offset, size, i2c_addr); ++ ++ err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(mcia), mcia_pl); ++ if (err) 
++ return err; ++ ++ status = mlxsw_reg_mcia_status_get(mcia_pl); ++ if (status) ++ return -EIO; ++ ++ mlxsw_reg_mcia_eeprom_memcpy_from(mcia_pl, eeprom_tmp); ++ memcpy(data, eeprom_tmp, size); ++ *p_read_size = size; ++ ++ return 0; ++} ++ ++int mlxsw_env_module_temp_thresholds_get(struct mlxsw_core *core, int module, ++ int off, int *temp) ++{ ++ char eeprom_tmp[MLXSW_REG_MCIA_EEPROM_SIZE]; ++ union { ++ u8 buf[MLXSW_REG_MCIA_TH_ITEM_SIZE]; ++ u16 temp; ++ } temp_thresh; ++ char mcia_pl[MLXSW_REG_MCIA_LEN] = {0}; ++ char mtbr_pl[MLXSW_REG_MTBR_LEN] = {0}; ++ u16 module_temp; ++ bool qsfp; ++ int err; ++ ++ mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX + module, ++ 1); ++ err = mlxsw_reg_query(core, MLXSW_REG(mtbr), mtbr_pl); ++ if (err) ++ return err; ++ ++ /* Don't read temperature thresholds for module with no valid info. */ ++ mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &module_temp, NULL); ++ switch (module_temp) { ++ case MLXSW_REG_MTBR_BAD_SENS_INFO: /* fall-through */ ++ case MLXSW_REG_MTBR_NO_CONN: /* fall-through */ ++ case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */ ++ case MLXSW_REG_MTBR_INDEX_NA: ++ *temp = 0; ++ return 0; ++ default: ++ /* Do not consider thresholds for zero temperature. */ ++ if (!MLXSW_REG_MTMP_TEMP_TO_MC(module_temp)) { ++ *temp = 0; ++ return 0; ++ } ++ break; ++ } ++ ++ /* Read Free Side Device Temperature Thresholds from page 03h ++ * (MSB at lower byte address). ++ * Bytes: ++ * 128-129 - Temp High Alarm (SFP_TEMP_HIGH_ALARM); ++ * 130-131 - Temp Low Alarm (SFP_TEMP_LOW_ALARM); ++ * 132-133 - Temp High Warning (SFP_TEMP_HIGH_WARN); ++ * 134-135 - Temp Low Warning (SFP_TEMP_LOW_WARN); ++ */ ++ ++ /* Validate module identifier value. 
*/ ++ err = mlxsw_env_validate_cable_ident(core, module, &qsfp); ++ if (err) ++ return err; ++ ++ if (qsfp) ++ mlxsw_reg_mcia_pack(mcia_pl, module, 0, ++ MLXSW_REG_MCIA_TH_PAGE_NUM, ++ MLXSW_REG_MCIA_TH_PAGE_OFF + off, ++ MLXSW_REG_MCIA_TH_ITEM_SIZE, ++ MLXSW_REG_MCIA_I2C_ADDR_LOW); ++ else ++ mlxsw_reg_mcia_pack(mcia_pl, module, 0, ++ MLXSW_REG_MCIA_PAGE0_LO, ++ off, MLXSW_REG_MCIA_TH_ITEM_SIZE, ++ MLXSW_REG_MCIA_I2C_ADDR_HIGH); ++ ++ err = mlxsw_reg_query(core, MLXSW_REG(mcia), mcia_pl); ++ if (err) ++ return err; ++ ++ mlxsw_reg_mcia_eeprom_memcpy_from(mcia_pl, eeprom_tmp); ++ memcpy(temp_thresh.buf, eeprom_tmp, MLXSW_REG_MCIA_TH_ITEM_SIZE); ++ *temp = temp_thresh.temp * 1000; ++ ++ return 0; ++} ++ ++int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module, ++ struct ethtool_modinfo *modinfo) ++{ ++ u8 module_info[MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE]; ++ u16 offset = MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE; ++ u8 module_rev_id, module_id; ++ unsigned int read_size; ++ int err; ++ ++ err = mlxsw_env_query_module_eeprom(mlxsw_core, module, 0, offset, ++ module_info, &read_size); ++ if (err) ++ return err; ++ ++ if (read_size < offset) ++ return -EIO; ++ ++ module_rev_id = module_info[MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID]; ++ module_id = module_info[MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID]; ++ ++ switch (module_id) { ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP: ++ modinfo->type = ETH_MODULE_SFF_8436; ++ modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN; ++ break; ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_PLUS: /* fall-through */ ++ case MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP28: ++ if (module_id == MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP28 || ++ module_rev_id >= ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_8636) { ++ modinfo->type = ETH_MODULE_SFF_8636; ++ modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN; ++ } else { ++ modinfo->type = ETH_MODULE_SFF_8436; ++ modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN; ++ } ++ break; ++ case 
MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_SFP: ++ modinfo->type = ETH_MODULE_SFF_8472; ++ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN; ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ return 0; ++} ++EXPORT_SYMBOL(mlxsw_env_get_module_info); ++ ++int mlxsw_env_get_module_eeprom(struct net_device *netdev, ++ struct mlxsw_core *mlxsw_core, int module, ++ struct ethtool_eeprom *ee, u8 *data) ++{ ++ int offset = ee->offset; ++ unsigned int read_size; ++ int i = 0; ++ int err; ++ ++ if (!ee->len) ++ return -EINVAL; ++ ++ memset(data, 0, ee->len); ++ ++ while (i < ee->len) { ++ err = mlxsw_env_query_module_eeprom(mlxsw_core, module, offset, ++ ee->len - i, data + i, ++ &read_size); ++ if (err) { ++ netdev_err(netdev, "Eeprom query failed\n"); ++ return err; ++ } ++ ++ i += read_size; ++ offset += read_size; ++ } ++ ++ return 0; ++} ++EXPORT_SYMBOL(mlxsw_env_get_module_eeprom); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.h b/drivers/net/ethernet/mellanox/mlxsw/core_env.h +new file mode 100644 +index 0000000..064d0e7 +--- /dev/null ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.h +@@ -0,0 +1,17 @@ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2018 Mellanox Technologies. 
All rights reserved */ ++ ++#ifndef _MLXSW_CORE_ENV_H ++#define _MLXSW_CORE_ENV_H ++ ++int mlxsw_env_module_temp_thresholds_get(struct mlxsw_core *core, int module, ++ int off, int *temp); ++ ++int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module, ++ struct ethtool_modinfo *modinfo); ++ ++int mlxsw_env_get_module_eeprom(struct net_device *netdev, ++ struct mlxsw_core *mlxsw_core, int module, ++ struct ethtool_eeprom *ee, u8 *data); ++ ++#endif +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c +index ab710e3..f1ada4cd 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c +@@ -1,36 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2015 Jiri Pirko +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. 
+- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2015-2018 Mellanox Technologies. All rights reserved */ + + #include + #include +@@ -38,8 +7,10 @@ + #include + #include + #include ++#include + + #include "core.h" ++#include "core_env.h" + + #define MLXSW_HWMON_TEMP_SENSOR_MAX_COUNT 127 + #define MLXSW_HWMON_ATTR_COUNT (MLXSW_HWMON_TEMP_SENSOR_MAX_COUNT * 4 + \ +@@ -61,6 +32,7 @@ struct mlxsw_hwmon { + struct attribute *attrs[MLXSW_HWMON_ATTR_COUNT + 1]; + struct mlxsw_hwmon_attr hwmon_attrs[MLXSW_HWMON_ATTR_COUNT]; + unsigned int attrs_count; ++ u8 sensor_count; + }; + + static ssize_t mlxsw_hwmon_temp_show(struct device *dev, +@@ -152,6 +124,27 @@ static ssize_t mlxsw_hwmon_fan_rpm_show(struct device *dev, + return sprintf(buf, "%u\n", mlxsw_reg_mfsm_rpm_get(mfsm_pl)); + } + ++static ssize_t mlxsw_hwmon_fan_fault_show(struct device *dev, ++ struct device_attribute *attr, ++ char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon; ++ char fore_pl[MLXSW_REG_FORE_LEN]; ++ bool fault; ++ int err; ++ ++ err = 
mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(fore), fore_pl); ++ if (err) { ++ dev_err(mlxsw_hwmon->bus_info->dev, "Failed to query fan\n"); ++ return err; ++ } ++ mlxsw_reg_fore_unpack(fore_pl, mlwsw_hwmon_attr->type_index, &fault); ++ ++ return sprintf(buf, "%u\n", fault); ++} ++ + static ssize_t mlxsw_hwmon_pwm_show(struct device *dev, + struct device_attribute *attr, + char *buf) +@@ -198,12 +191,160 @@ static ssize_t mlxsw_hwmon_pwm_store(struct device *dev, + return len; + } + ++static ssize_t mlxsw_hwmon_module_temp_show(struct device *dev, ++ struct device_attribute *attr, ++ char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon; ++ char mtbr_pl[MLXSW_REG_MTBR_LEN] = {0}; ++ u16 temp; ++ u8 module; ++ int err; ++ ++ module = mlwsw_hwmon_attr->type_index - mlxsw_hwmon->sensor_count; ++ mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX + module, ++ 1); ++ err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtbr), mtbr_pl); ++ if (err) { ++ dev_err(dev, "Failed to query module temprature sensor\n"); ++ return err; ++ } ++ ++ mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &temp, NULL); ++ /* Update status and temperature cache. */ ++ switch (temp) { ++ case MLXSW_REG_MTBR_NO_CONN: /* fall-through */ ++ case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */ ++ case MLXSW_REG_MTBR_INDEX_NA: ++ temp = 0; ++ break; ++ case MLXSW_REG_MTBR_BAD_SENS_INFO: ++ /* Untrusted cable is connected. Reading temperature from its ++ * sensor is faulty. 
++ */ ++ temp = 0; ++ break; ++ default: ++ temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp); ++ break; ++ } ++ ++ return sprintf(buf, "%u\n", temp); ++} ++ ++static ssize_t mlxsw_hwmon_module_temp_fault_show(struct device *dev, ++ struct device_attribute *attr, ++ char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon; ++ char mtbr_pl[MLXSW_REG_MTBR_LEN] = {0}; ++ u8 module, fault; ++ u16 temp; ++ int err; ++ ++ module = mlwsw_hwmon_attr->type_index - mlxsw_hwmon->sensor_count; ++ mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX + module, ++ 1); ++ err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtbr), mtbr_pl); ++ if (err) { ++ dev_err(dev, "Failed to query module temprature sensor\n"); ++ return err; ++ } ++ ++ mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &temp, NULL); ++ ++ /* Update status and temperature cache. */ ++ switch (temp) { ++ case MLXSW_REG_MTBR_BAD_SENS_INFO: ++ /* Untrusted cable is connected. Reading temperature from its ++ * sensor is faulty. 
++ */ ++ fault = 1; ++ break; ++ case MLXSW_REG_MTBR_NO_CONN: /* fall-through */ ++ case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */ ++ case MLXSW_REG_MTBR_INDEX_NA: ++ default: ++ fault = 0; ++ break; ++ } ++ ++ return sprintf(buf, "%u\n", fault); ++} ++ ++static ssize_t ++mlxsw_hwmon_module_temp_critical_show(struct device *dev, ++ struct device_attribute *attr, char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon; ++ int temp; ++ u8 module; ++ int err; ++ ++ module = mlwsw_hwmon_attr->type_index - mlxsw_hwmon->sensor_count; ++ err = mlxsw_env_module_temp_thresholds_get(mlxsw_hwmon->core, module, ++ SFP_TEMP_HIGH_WARN, &temp); ++ if (err) { ++ dev_err(dev, "Failed to query module temprature thresholds\n"); ++ return err; ++ } ++ ++ return sprintf(buf, "%u\n", temp); ++} ++ ++static ssize_t ++mlxsw_hwmon_module_temp_emergency_show(struct device *dev, ++ struct device_attribute *attr, ++ char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon; ++ u8 module; ++ int temp; ++ int err; ++ ++ module = mlwsw_hwmon_attr->type_index - mlxsw_hwmon->sensor_count; ++ err = mlxsw_env_module_temp_thresholds_get(mlxsw_hwmon->core, module, ++ SFP_TEMP_HIGH_ALARM, &temp); ++ if (err) { ++ dev_err(dev, "Failed to query module temprature thresholds\n"); ++ return err; ++ } ++ ++ return sprintf(buf, "%u\n", temp); ++} ++ ++static ssize_t ++mlxsw_hwmon_module_temp_label_show(struct device *dev, ++ struct device_attribute *attr, ++ char *buf) ++{ ++ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr = ++ container_of(attr, struct mlxsw_hwmon_attr, dev_attr); ++ ++ return sprintf(buf, "front panel %03u\n", ++ mlwsw_hwmon_attr->type_index); ++} ++ + enum mlxsw_hwmon_attr_type { + MLXSW_HWMON_ATTR_TYPE_TEMP, + MLXSW_HWMON_ATTR_TYPE_TEMP_MAX, + 
MLXSW_HWMON_ATTR_TYPE_TEMP_RST, + MLXSW_HWMON_ATTR_TYPE_FAN_RPM, ++ MLXSW_HWMON_ATTR_TYPE_FAN_FAULT, + MLXSW_HWMON_ATTR_TYPE_PWM, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_FAULT, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_CRIT, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_EMERG, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_LABEL, + }; + + static void mlxsw_hwmon_attr_add(struct mlxsw_hwmon *mlxsw_hwmon, +@@ -218,35 +359,75 @@ static void mlxsw_hwmon_attr_add(struct mlxsw_hwmon *mlxsw_hwmon, + switch (attr_type) { + case MLXSW_HWMON_ATTR_TYPE_TEMP: + mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_temp_show; +- mlxsw_hwmon_attr->dev_attr.attr.mode = S_IRUGO; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; + snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), + "temp%u_input", num + 1); + break; + case MLXSW_HWMON_ATTR_TYPE_TEMP_MAX: + mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_temp_max_show; +- mlxsw_hwmon_attr->dev_attr.attr.mode = S_IRUGO; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; + snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), + "temp%u_highest", num + 1); + break; + case MLXSW_HWMON_ATTR_TYPE_TEMP_RST: + mlxsw_hwmon_attr->dev_attr.store = mlxsw_hwmon_temp_rst_store; +- mlxsw_hwmon_attr->dev_attr.attr.mode = S_IWUSR; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0200; + snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), + "temp%u_reset_history", num + 1); + break; + case MLXSW_HWMON_ATTR_TYPE_FAN_RPM: + mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_fan_rpm_show; +- mlxsw_hwmon_attr->dev_attr.attr.mode = S_IRUGO; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; + snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), + "fan%u_input", num + 1); + break; ++ case MLXSW_HWMON_ATTR_TYPE_FAN_FAULT: ++ mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_fan_fault_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "fan%u_fault", num + 
1); ++ break; + case MLXSW_HWMON_ATTR_TYPE_PWM: + mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_pwm_show; + mlxsw_hwmon_attr->dev_attr.store = mlxsw_hwmon_pwm_store; +- mlxsw_hwmon_attr->dev_attr.attr.mode = S_IWUSR | S_IRUGO; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0644; + snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), + "pwm%u", num + 1); + break; ++ case MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE: ++ mlxsw_hwmon_attr->dev_attr.show = mlxsw_hwmon_module_temp_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "temp%u_input", num + 1); ++ break; ++ case MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_FAULT: ++ mlxsw_hwmon_attr->dev_attr.show = ++ mlxsw_hwmon_module_temp_fault_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "temp%u_fault", num + 1); ++ break; ++ case MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_CRIT: ++ mlxsw_hwmon_attr->dev_attr.show = ++ mlxsw_hwmon_module_temp_critical_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "temp%u_crit", num + 1); ++ break; ++ case MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_EMERG: ++ mlxsw_hwmon_attr->dev_attr.show = ++ mlxsw_hwmon_module_temp_emergency_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "temp%u_emergency", num + 1); ++ break; ++ case MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_LABEL: ++ mlxsw_hwmon_attr->dev_attr.show = ++ mlxsw_hwmon_module_temp_label_show; ++ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444; ++ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name), ++ "temp%u_label", num + 1); ++ break; + default: + WARN_ON(1); + } +@@ -264,7 +445,6 @@ static int mlxsw_hwmon_temp_init(struct mlxsw_hwmon *mlxsw_hwmon) + { + char mtcap_pl[MLXSW_REG_MTCAP_LEN] = {0}; + char mtmp_pl[MLXSW_REG_MTMP_LEN]; +- u8 sensor_count; + int i; + 
int err; + +@@ -273,8 +453,8 @@ static int mlxsw_hwmon_temp_init(struct mlxsw_hwmon *mlxsw_hwmon) + dev_err(mlxsw_hwmon->bus_info->dev, "Failed to get number of temp sensors\n"); + return err; + } +- sensor_count = mlxsw_reg_mtcap_sensor_count_get(mtcap_pl); +- for (i = 0; i < sensor_count; i++) { ++ mlxsw_hwmon->sensor_count = mlxsw_reg_mtcap_sensor_count_get(mtcap_pl); ++ for (i = 0; i < mlxsw_hwmon->sensor_count; i++) { + mlxsw_reg_mtmp_pack(mtmp_pl, i, true, true); + err = mlxsw_reg_write(mlxsw_hwmon->core, + MLXSW_REG(mtmp), mtmp_pl); +@@ -311,10 +491,14 @@ static int mlxsw_hwmon_fans_init(struct mlxsw_hwmon *mlxsw_hwmon) + mlxsw_reg_mfcr_unpack(mfcr_pl, &freq, &tacho_active, &pwm_active); + num = 0; + for (type_index = 0; type_index < MLXSW_MFCR_TACHOS_MAX; type_index++) { +- if (tacho_active & BIT(type_index)) ++ if (tacho_active & BIT(type_index)) { + mlxsw_hwmon_attr_add(mlxsw_hwmon, + MLXSW_HWMON_ATTR_TYPE_FAN_RPM, ++ type_index, num); ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_FAN_FAULT, + type_index, num++); ++ } + } + num = 0; + for (type_index = 0; type_index < MLXSW_MFCR_PWMS_MAX; type_index++) { +@@ -326,6 +510,53 @@ static int mlxsw_hwmon_fans_init(struct mlxsw_hwmon *mlxsw_hwmon) + return 0; + } + ++static int mlxsw_hwmon_module_init(struct mlxsw_hwmon *mlxsw_hwmon) ++{ ++ unsigned int module_count = mlxsw_core_max_ports(mlxsw_hwmon->core); ++ char pmlp_pl[MLXSW_REG_PMLP_LEN] = {0}; ++ int i, index; ++ u8 width; ++ int err; ++ ++ /* Add extra attributes for module temperature. Sensor index is ++ * assigned to sensor_count value, while all indexed before ++ * sensor_count are already utilized by the sensors connected through ++ * mtmp register by mlxsw_hwmon_temp_init(). 
++ */ ++ index = mlxsw_hwmon->sensor_count; ++ for (i = 1; i < module_count; i++) { ++ mlxsw_reg_pmlp_pack(pmlp_pl, i); ++ err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(pmlp), ++ pmlp_pl); ++ if (err) { ++ dev_err(mlxsw_hwmon->bus_info->dev, "Failed to read module index %d\n", ++ i); ++ return err; ++ } ++ width = mlxsw_reg_pmlp_width_get(pmlp_pl); ++ if (!width) ++ continue; ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE, index, ++ index); ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_FAULT, ++ index, index); ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_CRIT, ++ index, index); ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_EMERG, ++ index, index); ++ mlxsw_hwmon_attr_add(mlxsw_hwmon, ++ MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_LABEL, ++ index, index); ++ index++; ++ } ++ ++ return 0; ++} ++ + int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core, + const struct mlxsw_bus_info *mlxsw_bus_info, + struct mlxsw_hwmon **p_hwmon) +@@ -334,8 +565,7 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core, + struct device *hwmon_dev; + int err; + +- mlxsw_hwmon = devm_kzalloc(mlxsw_bus_info->dev, sizeof(*mlxsw_hwmon), +- GFP_KERNEL); ++ mlxsw_hwmon = kzalloc(sizeof(*mlxsw_hwmon), GFP_KERNEL); + if (!mlxsw_hwmon) + return -ENOMEM; + mlxsw_hwmon->core = mlxsw_core; +@@ -349,13 +579,16 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core, + if (err) + goto err_fans_init; + ++ err = mlxsw_hwmon_module_init(mlxsw_hwmon); ++ if (err) ++ goto err_temp_module_init; ++ + mlxsw_hwmon->groups[0] = &mlxsw_hwmon->group; + mlxsw_hwmon->group.attrs = mlxsw_hwmon->attrs; + +- hwmon_dev = devm_hwmon_device_register_with_groups(mlxsw_bus_info->dev, +- "mlxsw", +- mlxsw_hwmon, +- mlxsw_hwmon->groups); ++ hwmon_dev = hwmon_device_register_with_groups(mlxsw_bus_info->dev, ++ "mlxsw", mlxsw_hwmon, ++ mlxsw_hwmon->groups); + if (IS_ERR(hwmon_dev)) { + err = PTR_ERR(hwmon_dev); + goto 
err_hwmon_register; +@@ -366,7 +599,15 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core, + return 0; + + err_hwmon_register: ++err_temp_module_init: + err_fans_init: + err_temp_init: ++ kfree(mlxsw_hwmon); + return err; + } ++ ++void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon) ++{ ++ hwmon_device_unregister(mlxsw_hwmon->hwmon_dev); ++ kfree(mlxsw_hwmon); ++} +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +index 8047556..c047b61 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +@@ -1,34 +1,6 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/core_thermal.c ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2016-2018 Mellanox Technologies. All rights reserved + * Copyright (c) 2016 Ivan Vecera +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. 
+- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. + */ + + #include +@@ -37,44 +9,79 @@ + #include + #include + #include ++#include + + #include "core.h" ++#include "core_env.h" + + #define MLXSW_THERMAL_POLL_INT 1000 /* ms */ +-#define MLXSW_THERMAL_MAX_TEMP 110000 /* 110C */ ++#define MLXSW_THERMAL_SLOW_POLL_INT 20000 /* ms */ ++#define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */ ++#define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */ ++#define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */ ++#define MLXSW_THERMAL_ASIC_TEMP_CRIT 110000 /* 110C */ ++#define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */ ++#define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2) ++#define MLXSW_THERMAL_ZONE_MAX_NAME 16 ++#define MLXSW_THERMAL_TEMP_SCORE_MAX 0xffffffff + #define MLXSW_THERMAL_MAX_STATE 10 + #define MLXSW_THERMAL_MAX_DUTY 255 ++/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values ++ * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for ++ * setting fan speed dynamic minimum. For example, if value is set to 14 (40%) ++ * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to ++ * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 
70, 80, 90, 100. ++ */ ++#define MLXSW_THERMAL_SPEED_MIN (MLXSW_THERMAL_MAX_STATE + 2) ++#define MLXSW_THERMAL_SPEED_MAX (MLXSW_THERMAL_MAX_STATE * 2) ++#define MLXSW_THERMAL_SPEED_MIN_LEVEL 2 /* 20% */ ++ ++/* External cooling devices, allowed for binding to mlxsw thermal zones. */ ++static char * const mlxsw_thermal_external_allowed_cdev[] = { ++ "mlxreg_fan", ++}; ++ ++enum mlxsw_thermal_trips { ++ MLXSW_THERMAL_TEMP_TRIP_NORM, ++ MLXSW_THERMAL_TEMP_TRIP_HIGH, ++ MLXSW_THERMAL_TEMP_TRIP_HOT, ++ MLXSW_THERMAL_TEMP_TRIP_CRIT, ++}; + + struct mlxsw_thermal_trip { + int type; + int temp; ++ int hyst; + int min_state; + int max_state; + }; + + static const struct mlxsw_thermal_trip default_thermal_trips[] = { +- { /* Above normal - 60%-100% PWM */ ++ { /* In range - 0-40% PWM */ + .type = THERMAL_TRIP_ACTIVE, +- .temp = 75000, +- .min_state = (6 * MLXSW_THERMAL_MAX_STATE) / 10, +- .max_state = MLXSW_THERMAL_MAX_STATE, ++ .temp = MLXSW_THERMAL_ASIC_TEMP_NORM, ++ .hyst = MLXSW_THERMAL_HYSTERESIS_TEMP, ++ .min_state = 0, ++ .max_state = (4 * MLXSW_THERMAL_MAX_STATE) / 10, + }, + { +- /* Very high - 100% PWM */ ++ /* In range - 40-100% PWM */ + .type = THERMAL_TRIP_ACTIVE, +- .temp = 85000, +- .min_state = MLXSW_THERMAL_MAX_STATE, ++ .temp = MLXSW_THERMAL_ASIC_TEMP_HIGH, ++ .hyst = MLXSW_THERMAL_HYSTERESIS_TEMP, ++ .min_state = (4 * MLXSW_THERMAL_MAX_STATE) / 10, + .max_state = MLXSW_THERMAL_MAX_STATE, + }, + { /* Warning */ + .type = THERMAL_TRIP_HOT, +- .temp = 105000, ++ .temp = MLXSW_THERMAL_ASIC_TEMP_HOT, ++ .hyst = MLXSW_THERMAL_HYSTERESIS_TEMP, + .min_state = MLXSW_THERMAL_MAX_STATE, + .max_state = MLXSW_THERMAL_MAX_STATE, + }, + { /* Critical - soft poweroff */ + .type = THERMAL_TRIP_CRITICAL, +- .temp = MLXSW_THERMAL_MAX_TEMP, ++ .temp = MLXSW_THERMAL_ASIC_TEMP_CRIT, + .min_state = MLXSW_THERMAL_MAX_STATE, + .max_state = MLXSW_THERMAL_MAX_STATE, + } +@@ -85,13 +92,29 @@ static const struct mlxsw_thermal_trip default_thermal_trips[] = { + /* Make sure all trips 
are writable */ + #define MLXSW_THERMAL_TRIP_MASK (BIT(MLXSW_THERMAL_NUM_TRIPS) - 1) + ++struct mlxsw_thermal; ++ ++struct mlxsw_thermal_module { ++ struct mlxsw_thermal *parent; ++ struct thermal_zone_device *tzdev; ++ struct mlxsw_thermal_trip trips[MLXSW_THERMAL_NUM_TRIPS]; ++ enum thermal_device_mode mode; ++ int module; ++}; ++ + struct mlxsw_thermal { + struct mlxsw_core *core; + const struct mlxsw_bus_info *bus_info; + struct thermal_zone_device *tzdev; ++ int polling_delay; + struct thermal_cooling_device *cdevs[MLXSW_MFCR_PWMS_MAX]; ++ u8 cooling_levels[MLXSW_THERMAL_MAX_STATE + 1]; + struct mlxsw_thermal_trip trips[MLXSW_THERMAL_NUM_TRIPS]; + enum thermal_device_mode mode; ++ struct mlxsw_thermal_module *tz_module_arr; ++ unsigned int tz_module_num; ++ int tz_highest; ++ struct mutex tz_update_lock; + }; + + static inline u8 mlxsw_state_to_duty(int state) +@@ -115,9 +138,201 @@ static int mlxsw_get_cooling_device_idx(struct mlxsw_thermal *thermal, + if (thermal->cdevs[i] == cdev) + return i; + ++ /* Allow mlxsw thermal zone binding to an external cooling device */ ++ for (i = 0; i < ARRAY_SIZE(mlxsw_thermal_external_allowed_cdev); i++) { ++ if (strnstr(cdev->type, mlxsw_thermal_external_allowed_cdev[i], ++ sizeof(cdev->type))) ++ return 0; ++ } ++ + return -ENODEV; + } + ++static void ++mlxsw_thermal_module_trips_reset(struct mlxsw_thermal_module *tz) ++{ ++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = 0; ++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = 0; ++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = 0; ++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = 0; ++} ++ ++static int ++mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core, ++ struct mlxsw_thermal_module *tz) ++{ ++ int crit_temp, emerg_temp; ++ int err; ++ ++ err = mlxsw_env_module_temp_thresholds_get(core, tz->module, ++ SFP_TEMP_HIGH_WARN, ++ &crit_temp); ++ if (err) ++ return err; ++ ++ err = mlxsw_env_module_temp_thresholds_get(core, tz->module, ++ 
SFP_TEMP_HIGH_ALARM,
++ &emerg_temp);
++ if (err)
++ return err;
++
++ /* According to the system thermal requirements, the thermal zones are
++ * defined with four trip points. The critical and emergency
++ * temperature thresholds, provided by QSFP module are set as "active"
++ * and "hot" trip points, "normal" and "critical" trip points are
++ * derived from "active" and "hot" by subtracting or adding double
++ * hysteresis value.
++ */
++ if (crit_temp >= MLXSW_THERMAL_MODULE_TEMP_SHIFT)
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp -
++ MLXSW_THERMAL_MODULE_TEMP_SHIFT;
++ else
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp;
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp;
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp;
++ if (emerg_temp > crit_temp)
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
++ MLXSW_THERMAL_MODULE_TEMP_SHIFT;
++ else
++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp;
++
++ return 0;
++}
++
++static void mlxsw_thermal_tz_score_get(struct mlxsw_thermal_trip *trips,
++ int temp, int *score)
++{
++ struct mlxsw_thermal_trip *trip = trips;
++ int delta, i, shift = 1;
++
++ /* Calculate thermal zone score, if temperature is above the critical
++ * threshold score is set to MLXSW_THERMAL_TEMP_SCORE_MAX.
++ */
++ *score = MLXSW_THERMAL_TEMP_SCORE_MAX;
++ for (i = MLXSW_THERMAL_TEMP_TRIP_NORM; i < MLXSW_THERMAL_NUM_TRIPS;
++ i++, trip++) {
++ if (temp < trip->temp) {
++ delta = DIV_ROUND_CLOSEST(temp, trip->temp - temp);
++ *score = delta * shift;
++ break;
++ }
++ shift *= 256;
++ }
++}
++
++static int
++mlxsw_thermal_highest_tz_get(struct device *dev, struct mlxsw_thermal *thermal,
++ int module_count, unsigned int seed_temp,
++ int *max_tz, int *max_score)
++{
++ char mtbr_pl[MLXSW_REG_MTBR_LEN];
++ struct mlxsw_thermal_module *tz;
++ int i, j, index, off, score;
++ u16 temp;
++ int err;
++
++ mlxsw_thermal_tz_score_get(thermal->trips, seed_temp, max_score);
++ /* Read modules temperature. */
++ index = 0;
++ while (index < module_count) {
++ off = min_t(u8, MLXSW_REG_MTBR_REC_MAX_COUNT,
++ module_count - index);
++ mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX +
++ index, off);
++ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtbr), mtbr_pl);
++ if (err) {
++ dev_err(dev, "Failed to get temp from index %d\n",
++ off);
++ return err;
++ }
++
++ for (i = 0, j = index; i < off; i++, j++) {
++ mlxsw_reg_mtbr_temp_unpack(mtbr_pl, i, &temp, NULL);
++ /* Update status and temperature cache. */
++ switch (temp) {
++ case MLXSW_REG_MTBR_NO_CONN: /* fall-through */
++ case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */
++ case MLXSW_REG_MTBR_INDEX_NA: /* fall-through */
++ case MLXSW_REG_MTBR_BAD_SENS_INFO:
++ temp = 0;
++ break;
++ default:
++ tz = &thermal->tz_module_arr[j];
++ if (!tz)
++ break;
++ /* Reset all trip points. */
++ mlxsw_thermal_module_trips_reset(tz);
++ temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp);
++ /* Do not consider zero temperature.
*/
++ if (!temp)
++ break;
++
++ err = mlxsw_thermal_module_trips_update(dev,
++ thermal->core,
++ tz);
++ if (err) {
++ dev_err(dev, "Failed to update trips for %s\n",
++ tz->tzdev->type);
++ return err;
++ }
++
++ score = 0;
++ mlxsw_thermal_tz_score_get(tz->trips, temp,
++ &score);
++ if (score > *max_score) {
++ *max_score = score;
++ *max_tz = j + 1;
++ }
++ break;
++ }
++ }
++ index += off;
++ }
++
++ return 0;
++}
++
++static int
++mlxsw_thermal_highest_tz_notify(struct device *dev,
++ struct thermal_zone_device *tzdev,
++ struct mlxsw_thermal *thermal,
++ int module_count, unsigned int temp)
++{
++ char env_record[24];
++ char *envp[2] = { env_record, NULL };
++ struct mlxsw_thermal_module *tz_module;
++ struct thermal_zone_device *tz;
++ int max_tz = 0, max_score = 0;
++ int err;
++
++ err = mlxsw_thermal_highest_tz_get(dev, thermal,
++ thermal->tz_module_num, temp,
++ &max_tz, &max_score);
++ if (err) {
++ dev_err(dev, "Failed to query module temp sensor\n");
++ return err;
++ }
++
++ if (thermal->tz_highest != max_tz) {
++ sprintf(env_record, "TZ_HIGHEST=%u", max_score);
++ if (max_tz && (&thermal->tz_module_arr[max_tz - 1])) {
++ tz_module = &thermal->tz_module_arr[max_tz - 1];
++ tz = tz_module->tzdev;
++ err = kobject_uevent_env(&tz->device.kobj, KOBJ_CHANGE,
++ envp);
++ } else {
++ err = kobject_uevent_env(&tzdev->device.kobj,
++ KOBJ_CHANGE, envp);
++ }
++ if (err)
++ dev_err(dev, "Error sending uevent %s\n", envp[0]);
++ else
++ thermal->tz_highest = max_tz;
++ }
++
++ return 0;
++}
++
+ static int mlxsw_thermal_bind(struct thermal_zone_device *tzdev,
+ struct thermal_cooling_device *cdev)
+ {
+@@ -183,15 +398,20 @@ static int mlxsw_thermal_set_mode(struct thermal_zone_device *tzdev,
+
+ mutex_lock(&tzdev->lock);
+
+- if (mode == THERMAL_DEVICE_ENABLED)
+- tzdev->polling_delay = MLXSW_THERMAL_POLL_INT;
+- else
++ if (mode == THERMAL_DEVICE_ENABLED) {
++ thermal->tz_highest = 0;
++ tzdev->polling_delay = thermal->polling_delay;
++ } else {
+
tzdev->polling_delay = 0; ++ } + + mutex_unlock(&tzdev->lock); + + thermal->mode = mode; ++ ++ mutex_lock(&thermal->tz_update_lock); + thermal_zone_device_update(tzdev, THERMAL_EVENT_UNSPECIFIED); ++ mutex_unlock(&thermal->tz_update_lock); + + return 0; + } +@@ -214,6 +434,14 @@ static int mlxsw_thermal_get_temp(struct thermal_zone_device *tzdev, + } + mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL); + ++ if (thermal->tz_module_arr) { ++ err = mlxsw_thermal_highest_tz_notify(dev, tzdev, thermal, ++ thermal->tz_module_num, ++ temp); ++ if (err) ++ dev_err(dev, "Failed to query module temp sensor\n"); ++ } ++ + *p_temp = (int) temp; + return 0; + } +@@ -249,13 +477,31 @@ static int mlxsw_thermal_set_trip_temp(struct thermal_zone_device *tzdev, + struct mlxsw_thermal *thermal = tzdev->devdata; + + if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS || +- temp > MLXSW_THERMAL_MAX_TEMP) ++ temp > MLXSW_THERMAL_ASIC_TEMP_CRIT) + return -EINVAL; + + thermal->trips[trip].temp = temp; + return 0; + } + ++static int mlxsw_thermal_get_trip_hyst(struct thermal_zone_device *tzdev, ++ int trip, int *p_hyst) ++{ ++ struct mlxsw_thermal *thermal = tzdev->devdata; ++ ++ *p_hyst = thermal->trips[trip].hyst; ++ return 0; ++} ++ ++static int mlxsw_thermal_set_trip_hyst(struct thermal_zone_device *tzdev, ++ int trip, int hyst) ++{ ++ struct mlxsw_thermal *thermal = tzdev->devdata; ++ ++ thermal->trips[trip].hyst = hyst; ++ return 0; ++} ++ + static struct thermal_zone_device_ops mlxsw_thermal_ops = { + .bind = mlxsw_thermal_bind, + .unbind = mlxsw_thermal_unbind, +@@ -265,6 +511,250 @@ static struct thermal_zone_device_ops mlxsw_thermal_ops = { + .get_trip_type = mlxsw_thermal_get_trip_type, + .get_trip_temp = mlxsw_thermal_get_trip_temp, + .set_trip_temp = mlxsw_thermal_set_trip_temp, ++ .get_trip_hyst = mlxsw_thermal_get_trip_hyst, ++ .set_trip_hyst = mlxsw_thermal_set_trip_hyst, ++}; ++ ++static int mlxsw_thermal_module_bind(struct thermal_zone_device *tzdev, ++ struct 
thermal_cooling_device *cdev) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ struct mlxsw_thermal *thermal = tz->parent; ++ int i, j, err; ++ ++ /* If the cooling device is one of ours bind it */ ++ if (mlxsw_get_cooling_device_idx(thermal, cdev) < 0) ++ return 0; ++ ++ for (i = 0; i < MLXSW_THERMAL_NUM_TRIPS; i++) { ++ const struct mlxsw_thermal_trip *trip = &tz->trips[i]; ++ ++ err = thermal_zone_bind_cooling_device(tzdev, i, cdev, ++ trip->max_state, ++ trip->min_state, ++ THERMAL_WEIGHT_DEFAULT); ++ if (err < 0) ++ goto err_bind_cooling_device; ++ } ++ return 0; ++ ++err_bind_cooling_device: ++ for (j = i - 1; j >= 0; j--) ++ thermal_zone_unbind_cooling_device(tzdev, j, cdev); ++ return err; ++} ++ ++static int mlxsw_thermal_module_unbind(struct thermal_zone_device *tzdev, ++ struct thermal_cooling_device *cdev) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ struct mlxsw_thermal *thermal = tz->parent; ++ int i; ++ int err; ++ ++ /* If the cooling device is one of ours unbind it */ ++ if (mlxsw_get_cooling_device_idx(thermal, cdev) < 0) ++ return 0; ++ ++ for (i = 0; i < MLXSW_THERMAL_NUM_TRIPS; i++) { ++ err = thermal_zone_unbind_cooling_device(tzdev, i, cdev); ++ WARN_ON(err); ++ } ++ return err; ++} ++ ++static int mlxsw_thermal_module_mode_get(struct thermal_zone_device *tzdev, ++ enum thermal_device_mode *mode) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ *mode = tz->mode; ++ ++ return 0; ++} ++ ++static int mlxsw_thermal_module_mode_set(struct thermal_zone_device *tzdev, ++ enum thermal_device_mode mode) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ struct mlxsw_thermal *thermal = tz->parent; ++ ++ mutex_lock(&tzdev->lock); ++ ++ if (mode == THERMAL_DEVICE_ENABLED) ++ tzdev->polling_delay = thermal->polling_delay; ++ else ++ tzdev->polling_delay = 0; ++ ++ mutex_unlock(&tzdev->lock); ++ ++ tz->mode = mode; ++ ++ mutex_lock(&thermal->tz_update_lock); ++ thermal_zone_device_update(tzdev, 
THERMAL_EVENT_UNSPECIFIED);
++ mutex_unlock(&thermal->tz_update_lock);
++
++ return 0;
++}
++
++static int mlxsw_thermal_module_temp_get(struct thermal_zone_device *tzdev,
++ int *p_temp)
++{
++ struct mlxsw_thermal_module *tz = tzdev->devdata;
++ struct mlxsw_thermal *thermal = tz->parent;
++ struct device *dev = thermal->bus_info->dev;
++ char mtbr_pl[MLXSW_REG_MTBR_LEN];
++ u16 temp;
++ int err;
++
++ /* Read module temperature. */
++ mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX +
++ tz->module, 1);
++ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtbr), mtbr_pl);
++ if (err)
++ return err;
++
++ mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &temp, NULL);
++ /* Update temperature. */
++ switch (temp) {
++ case MLXSW_REG_MTBR_NO_CONN: /* fall-through */
++ case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */
++ case MLXSW_REG_MTBR_INDEX_NA: /* fall-through */
++ case MLXSW_REG_MTBR_BAD_SENS_INFO:
++ temp = 0;
++ break;
++ default:
++ temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp);
++ /* Reset all trip points. */
++ mlxsw_thermal_module_trips_reset(tz);
++ /* Update trip points.
*/ ++ err = mlxsw_thermal_module_trips_update(dev, thermal->core, ++ tz); ++ if (err) ++ return err; ++ break; ++ } ++ ++ *p_temp = (int) temp; ++ return 0; ++} ++ ++static int ++mlxsw_thermal_module_trip_type_get(struct thermal_zone_device *tzdev, int trip, ++ enum thermal_trip_type *p_type) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS) ++ return -EINVAL; ++ ++ *p_type = tz->trips[trip].type; ++ return 0; ++} ++ ++static int ++mlxsw_thermal_module_trip_temp_get(struct thermal_zone_device *tzdev, ++ int trip, int *p_temp) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS) ++ return -EINVAL; ++ ++ *p_temp = tz->trips[trip].temp; ++ return 0; ++} ++ ++static int ++mlxsw_thermal_module_trip_temp_set(struct thermal_zone_device *tzdev, ++ int trip, int temp) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS || ++ temp > tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp) ++ return -EINVAL; ++ ++ tz->trips[trip].temp = temp; ++ return 0; ++} ++ ++static int ++mlxsw_thermal_module_trip_hyst_get(struct thermal_zone_device *tzdev, int trip, ++ int *p_hyst) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ *p_hyst = tz->trips[trip].hyst; ++ return 0; ++} ++ ++static int ++mlxsw_thermal_module_trip_hyst_set(struct thermal_zone_device *tzdev, int trip, ++ int hyst) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ ++ tz->trips[trip].hyst = hyst; ++ return 0; ++} ++ ++static int mlxsw_thermal_module_trend_get(struct thermal_zone_device *tzdev, ++ int trip, enum thermal_trend *trend) ++{ ++ struct mlxsw_thermal_module *tz = tzdev->devdata; ++ struct mlxsw_thermal *thermal = tz->parent; ++ struct device *dev = thermal->bus_info->dev; ++ char *envp[2] = { "TZ_DOWN=1", NULL }; ++ int delta, window; ++ int err; ++ ++ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS) ++ return 
-EINVAL; ++ ++ delta = tzdev->last_temperature - tzdev->temperature; ++ window = tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp - ++ tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp; ++ if (delta > window && !window && !tzdev->last_temperature) { ++ /* Notify user about fast temperature decreasing by sending ++ * hwmon uevent. Fast decreasing could happen when some hot ++ * module is removed. In this situation temperature trend could ++ * go down once, and then stay in a stable state. ++ * Notification will allow user to handle such case, if user ++ * supposes to optimize PWM state. ++ */ ++ err = kobject_uevent_env(&tzdev->device.kobj, KOBJ_CHANGE, ++ envp); ++ if (err) ++ dev_err(dev, "Error sending uevent %s\n", envp[0]); ++ } ++ ++ if (tzdev->temperature > tzdev->last_temperature) ++ *trend = THERMAL_TREND_RAISING; ++ else if (tzdev->temperature < tzdev->last_temperature) ++ *trend = THERMAL_TREND_DROPPING; ++ else ++ *trend = THERMAL_TREND_STABLE; ++ ++ return 0; ++} ++ ++static struct thermal_zone_params mlxsw_thermal_module_params = { ++ .governor_name = "user_space", ++}; ++ ++static struct thermal_zone_device_ops mlxsw_thermal_module_ops = { ++ .bind = mlxsw_thermal_module_bind, ++ .unbind = mlxsw_thermal_module_unbind, ++ .get_mode = mlxsw_thermal_module_mode_get, ++ .set_mode = mlxsw_thermal_module_mode_set, ++ .get_temp = mlxsw_thermal_module_temp_get, ++ .get_trip_type = mlxsw_thermal_module_trip_type_get, ++ .get_trip_temp = mlxsw_thermal_module_trip_temp_get, ++ .set_trip_temp = mlxsw_thermal_module_trip_temp_set, ++ .get_trip_hyst = mlxsw_thermal_module_trip_hyst_get, ++ .set_trip_hyst = mlxsw_thermal_module_trip_hyst_set, ++ .get_trend = mlxsw_thermal_module_trend_get, + }; + + static int mlxsw_thermal_get_max_state(struct thermal_cooling_device *cdev, +@@ -307,12 +797,51 @@ static int mlxsw_thermal_set_cur_state(struct thermal_cooling_device *cdev, + struct mlxsw_thermal *thermal = cdev->devdata; + struct device *dev = thermal->bus_info->dev; + char 
mfsc_pl[MLXSW_REG_MFSC_LEN]; +- int err, idx; ++ unsigned long cur_state, i; ++ int idx; ++ u8 duty; ++ int err; + + idx = mlxsw_get_cooling_device_idx(thermal, cdev); + if (idx < 0) + return idx; + ++ /* Verify if this request is for changing allowed fan dynamical ++ * minimum. If it is - update cooling levels accordingly and update ++ * state, if current state is below the newly requested minimum state. ++ * For example, if current state is 5, and minimal state is to be ++ * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed ++ * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be ++ * overwritten. ++ */ ++ if (state >= MLXSW_THERMAL_SPEED_MIN && ++ state <= MLXSW_THERMAL_SPEED_MAX) { ++ state -= MLXSW_THERMAL_MAX_STATE; ++ for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++) ++ thermal->cooling_levels[i] = max(state, i); ++ ++ mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0); ++ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl); ++ if (err) ++ return err; ++ ++ duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl); ++ cur_state = mlxsw_duty_to_state(duty); ++ ++ /* If current fan state is lower than requested dynamical ++ * minimum, increase fan speed up to dynamical minimum. ++ */ ++ if (state < cur_state) ++ return 0; ++ ++ state = cur_state; ++ } ++ ++ if (state > MLXSW_THERMAL_MAX_STATE) ++ return -EINVAL; ++ ++ /* Normalize the state to the valid speed range. 
*/
++ state = thermal->cooling_levels[state];
+ mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state));
+ err = mlxsw_reg_write(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
+ if (err) {
+@@ -328,6 +857,122 @@ static const struct thermal_cooling_device_ops mlxsw_cooling_ops = {
+ .set_cur_state = mlxsw_thermal_set_cur_state,
+ };
+
++static int
++mlxsw_thermal_module_tz_init(struct mlxsw_thermal_module *module_tz)
++{
++ char tz_name[MLXSW_THERMAL_ZONE_MAX_NAME];
++ int err;
++
++ snprintf(tz_name, sizeof(tz_name), "mlxsw-module%d",
++ module_tz->module + 1);
++ module_tz->tzdev = thermal_zone_device_register(tz_name,
++ MLXSW_THERMAL_NUM_TRIPS,
++ MLXSW_THERMAL_TRIP_MASK,
++ module_tz,
++ &mlxsw_thermal_module_ops,
++ &mlxsw_thermal_module_params,
++ 0, 0);
++ if (IS_ERR(module_tz->tzdev)) {
++ err = PTR_ERR(module_tz->tzdev);
++ return err;
++ }
++
++ return 0;
++}
++
++static void mlxsw_thermal_module_tz_fini(struct thermal_zone_device *tzdev)
++{
++ thermal_zone_device_unregister(tzdev);
++}
++
++static int
++mlxsw_thermal_module_init(struct device *dev, struct mlxsw_core *core,
++ struct mlxsw_thermal *thermal, u8 local_port)
++{
++ struct mlxsw_thermal_module *module_tz;
++ char pmlp_pl[MLXSW_REG_PMLP_LEN];
++ u8 width, module;
++ int err;
++
++ mlxsw_reg_pmlp_pack(pmlp_pl, local_port);
++ err = mlxsw_reg_query(core, MLXSW_REG(pmlp), pmlp_pl);
++ if (err)
++ return err;
++
++ width = mlxsw_reg_pmlp_width_get(pmlp_pl);
++ if (!width)
++ return 0;
++
++ module = mlxsw_reg_pmlp_module_get(pmlp_pl, 0);
++ module_tz = &thermal->tz_module_arr[module];
++ module_tz->module = module;
++ module_tz->parent = thermal;
++ memcpy(module_tz->trips, default_thermal_trips,
++ sizeof(thermal->trips));
++ /* Initialize all trip points. */
++ mlxsw_thermal_module_trips_reset(module_tz);
++ /* Update trip points according to the module data.
*/ ++ err = mlxsw_thermal_module_trips_update(dev, core, module_tz); ++ if (err) ++ return err; ++ ++ thermal->tz_module_num++; ++ ++ return 0; ++} ++ ++static void mlxsw_thermal_module_fini(struct mlxsw_thermal_module *module_tz) ++{ ++ if (module_tz && module_tz->tzdev) { ++ mlxsw_thermal_module_tz_fini(module_tz->tzdev); ++ module_tz->tzdev = NULL; ++ } ++} ++ ++static int ++mlxsw_thermal_modules_init(struct device *dev, struct mlxsw_core *core, ++ struct mlxsw_thermal *thermal) ++{ ++ unsigned int module_count = mlxsw_core_max_ports(core); ++ int i, err; ++ ++ thermal->tz_module_arr = kcalloc(module_count, ++ sizeof(*thermal->tz_module_arr), ++ GFP_KERNEL); ++ if (!thermal->tz_module_arr) ++ return -ENOMEM; ++ ++ for (i = 1; i <= module_count; i++) { ++ err = mlxsw_thermal_module_init(dev, core, thermal, i); ++ if (err) ++ goto err_unreg_tz_module_arr; ++ } ++ ++ for (i = 0; i < thermal->tz_module_num; i++) { ++ err = mlxsw_thermal_module_tz_init(&thermal->tz_module_arr[i]); ++ if (err) ++ goto err_unreg_tz_module_arr; ++ } ++ ++ return 0; ++ ++err_unreg_tz_module_arr: ++ for (i = thermal->tz_module_num - 1; i >= 0; i--) ++ mlxsw_thermal_module_fini(&thermal->tz_module_arr[i]); ++ kfree(thermal->tz_module_arr); ++ return err; ++} ++ ++static void ++mlxsw_thermal_modules_fini(struct mlxsw_thermal *thermal) ++{ ++ int i; ++ ++ for (i = thermal->tz_module_num - 1; i >= 0; i--) ++ mlxsw_thermal_module_fini(&thermal->tz_module_arr[i]); ++ kfree(thermal->tz_module_arr); ++} ++ + int mlxsw_thermal_init(struct mlxsw_core *core, + const struct mlxsw_bus_info *bus_info, + struct mlxsw_thermal **p_thermal) +@@ -347,6 +992,7 @@ int mlxsw_thermal_init(struct mlxsw_core *core, + + thermal->core = core; + thermal->bus_info = bus_info; ++ mutex_init(&thermal->tz_update_lock); + memcpy(thermal->trips, default_thermal_trips, sizeof(thermal->trips)); + + err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfcr), mfcr_pl); +@@ -380,7 +1026,8 @@ int mlxsw_thermal_init(struct mlxsw_core 
*core, + if (pwm_active & BIT(i)) { + struct thermal_cooling_device *cdev; + +- cdev = thermal_cooling_device_register("Fan", thermal, ++ cdev = thermal_cooling_device_register("mlxsw_fan", ++ thermal, + &mlxsw_cooling_ops); + if (IS_ERR(cdev)) { + err = PTR_ERR(cdev); +@@ -391,22 +1038,41 @@ int mlxsw_thermal_init(struct mlxsw_core *core, + } + } + ++ /* Initialize cooling levels per PWM state. */ ++ for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++) ++ thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL, ++ i); ++ ++ thermal->polling_delay = bus_info->low_frequency ? ++ MLXSW_THERMAL_SLOW_POLL_INT : ++ MLXSW_THERMAL_POLL_INT; ++ + thermal->tzdev = thermal_zone_device_register("mlxsw", + MLXSW_THERMAL_NUM_TRIPS, + MLXSW_THERMAL_TRIP_MASK, + thermal, + &mlxsw_thermal_ops, + NULL, 0, +- MLXSW_THERMAL_POLL_INT); ++ thermal->polling_delay); + if (IS_ERR(thermal->tzdev)) { + err = PTR_ERR(thermal->tzdev); + dev_err(dev, "Failed to register thermal zone\n"); + goto err_unreg_cdevs; + } + ++ err = mlxsw_thermal_modules_init(dev, core, thermal); ++ if (err) ++ goto err_unreg_tzdev; ++ + thermal->mode = THERMAL_DEVICE_ENABLED; + *p_thermal = thermal; + return 0; ++ ++err_unreg_tzdev: ++ if (thermal->tzdev) { ++ thermal_zone_device_unregister(thermal->tzdev); ++ thermal->tzdev = NULL; ++ } + err_unreg_cdevs: + for (i = 0; i < MLXSW_MFCR_PWMS_MAX; i++) + if (thermal->cdevs[i]) +@@ -420,6 +1086,7 @@ void mlxsw_thermal_fini(struct mlxsw_thermal *thermal) + { + int i; + ++ mlxsw_thermal_modules_fini(thermal); + if (thermal->tzdev) { + thermal_zone_device_unregister(thermal->tzdev); + thermal->tzdev = NULL; +diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +index 5c31665..f1b95d5 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +@@ -1,36 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/i2c.c +- * Copyright (c) 2016 Mellanox Technologies. All rights reserved. 
+- * Copyright (c) 2016 Vadim Pasternak +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2016-2018 Mellanox Technologies. 
All rights reserved */ + + #include + #include +@@ -46,8 +15,6 @@ + #include "core.h" + #include "i2c.h" + +-static const char mlxsw_i2c_driver_name[] = "mlxsw_i2c"; +- + #define MLXSW_I2C_CIR2_BASE 0x72000 + #define MLXSW_I2C_CIR_STATUS_OFF 0x18 + #define MLXSW_I2C_CIR2_OFF_STATUS (MLXSW_I2C_CIR2_BASE + \ +@@ -364,10 +331,7 @@ mlxsw_i2c_cmd(struct device *dev, size_t in_mbox_size, u8 *in_mbox, + if (reg_size % MLXSW_I2C_BLK_MAX) + num++; + +- if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) { +- dev_err(&client->dev, "Could not acquire lock"); +- return -EINVAL; +- } ++ mutex_lock(&mlxsw_i2c->cmd.lock); + + err = mlxsw_i2c_write(dev, reg_size, in_mbox, num, status); + if (err) +@@ -536,6 +500,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client, + mlxsw_i2c->bus_info.device_kind = id->name; + mlxsw_i2c->bus_info.device_name = client->name; + mlxsw_i2c->bus_info.dev = &client->dev; ++ mlxsw_i2c->bus_info.low_frequency = true; + mlxsw_i2c->dev = &client->dev; + + err = mlxsw_core_bus_device_register(&mlxsw_i2c->bus_info, +diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.h b/drivers/net/ethernet/mellanox/mlxsw/i2c.h +index daa24b2..17e059d 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.h +@@ -1,36 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/i2c.h +- * Copyright (c) 2016 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2016 Vadim Pasternak +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. 
Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2016-2018 Mellanox Technologies. All rights reserved */ + + #ifndef _MLXSW_I2C_H + #define _MLXSW_I2C_H +diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c +index 3dd1626..c47949c 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c +@@ -1,37 +1,9 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/minimal.c +- * Copyright (c) 2016 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2016 Vadim Pasternak +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. 
Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2016-2018 Mellanox Technologies. 
All rights reserved */ + ++#include ++#include ++#include + #include + #include + #include +@@ -39,59 +11,339 @@ + #include + + #include "core.h" ++#include "core_env.h" + #include "i2c.h" + +-static const char mlxsw_minimal_driver_name[] = "mlxsw_minimal"; ++static const char mlxsw_m_driver_name[] = "mlxsw_minimal"; + +-static const struct mlxsw_config_profile mlxsw_minimal_config_profile; ++struct mlxsw_m_port; + +-static struct mlxsw_driver mlxsw_minimal_driver = { +- .kind = mlxsw_minimal_driver_name, +- .priv_size = 1, +- .profile = &mlxsw_minimal_config_profile, ++struct mlxsw_m { ++ struct mlxsw_m_port **modules; ++ int *module_to_port; ++ struct mlxsw_core *core; ++ const struct mlxsw_bus_info *bus_info; ++ u8 base_mac[ETH_ALEN]; ++ u8 max_modules; + }; + +-static const struct i2c_device_id mlxsw_minimal_i2c_id[] = { ++struct mlxsw_m_port { ++ struct mlxsw_core_port core_port; /* must be first */ ++ struct net_device *dev; ++ struct mlxsw_m *mlxsw_m; ++ u8 local_port; ++ u8 module; ++}; ++ ++static int mlxsw_m_port_dummy_open_stop(struct net_device *dev) ++{ ++ return 0; ++} ++ ++static const struct net_device_ops mlxsw_m_port_netdev_ops = { ++ .ndo_open = mlxsw_m_port_dummy_open_stop, ++ .ndo_stop = mlxsw_m_port_dummy_open_stop, ++}; ++ ++static int mlxsw_m_get_module_info(struct net_device *netdev, ++ struct ethtool_modinfo *modinfo) ++{ ++ struct mlxsw_m_port *mlxsw_m_port = netdev_priv(netdev); ++ struct mlxsw_core *core = mlxsw_m_port->mlxsw_m->core; ++ int err; ++ ++ err = mlxsw_env_get_module_info(core, mlxsw_m_port->module, modinfo); ++ ++ return err; ++} ++ ++static int ++mlxsw_m_get_module_eeprom(struct net_device *netdev, struct ethtool_eeprom *ee, ++ u8 *data) ++{ ++ struct mlxsw_m_port *mlxsw_m_port = netdev_priv(netdev); ++ struct mlxsw_core *core = mlxsw_m_port->mlxsw_m->core; ++ int err; ++ ++ err = mlxsw_env_get_module_eeprom(netdev, core, ++ mlxsw_m_port->module, ee, data); ++ ++ return err; ++} ++ ++static const struct ethtool_ops 
mlxsw_m_port_ethtool_ops = { ++ .get_module_info = mlxsw_m_get_module_info, ++ .get_module_eeprom = mlxsw_m_get_module_eeprom, ++}; ++ ++static int ++mlxsw_m_port_module_info_get(struct mlxsw_m *mlxsw_m, u8 local_port, ++ u8 *p_module, u8 *p_width) ++{ ++ char pmlp_pl[MLXSW_REG_PMLP_LEN]; ++ int err; ++ ++ mlxsw_reg_pmlp_pack(pmlp_pl, local_port); ++ err = mlxsw_reg_query(mlxsw_m->core, MLXSW_REG(pmlp), pmlp_pl); ++ if (err) ++ return err; ++ *p_module = mlxsw_reg_pmlp_module_get(pmlp_pl, 0); ++ *p_width = mlxsw_reg_pmlp_width_get(pmlp_pl); ++ ++ return 0; ++} ++ ++static int ++mlxsw_m_port_dev_addr_get(struct mlxsw_m_port *mlxsw_m_port) ++{ ++ struct mlxsw_m *mlxsw_m = mlxsw_m_port->mlxsw_m; ++ struct net_device *dev = mlxsw_m_port->dev; ++ char ppad_pl[MLXSW_REG_PPAD_LEN]; ++ int err; ++ ++ mlxsw_reg_ppad_pack(ppad_pl, false, 0); ++ err = mlxsw_reg_query(mlxsw_m->core, MLXSW_REG(ppad), ppad_pl); ++ if (err) ++ return err; ++ mlxsw_reg_ppad_mac_memcpy_from(ppad_pl, dev->dev_addr); ++ /* The last byte value in base mac address is guaranteed ++ * to be such it does not overflow when adding local_port ++ * value. 
++ */ ++ dev->dev_addr[ETH_ALEN - 1] += mlxsw_m_port->module + 1; ++ return 0; ++} ++ ++static int ++mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) ++{ ++ struct mlxsw_m_port *mlxsw_m_port; ++ struct net_device *dev; ++ int err; ++ ++ dev = alloc_etherdev(sizeof(struct mlxsw_m_port)); ++ if (!dev) { ++ err = -ENOMEM; ++ goto err_alloc_etherdev; ++ } ++ ++ SET_NETDEV_DEV(dev, mlxsw_m->bus_info->dev); ++ mlxsw_m_port = netdev_priv(dev); ++ mlxsw_m_port->dev = dev; ++ mlxsw_m_port->mlxsw_m = mlxsw_m; ++ mlxsw_m_port->local_port = local_port; ++ mlxsw_m_port->module = module; ++ ++ mlxsw_m->modules[local_port] = mlxsw_m_port; ++ ++ dev->netdev_ops = &mlxsw_m_port_netdev_ops; ++ dev->ethtool_ops = &mlxsw_m_port_ethtool_ops; ++ ++ err = mlxsw_core_port_init(mlxsw_m->core, ++ &mlxsw_m_port->core_port, local_port, ++ dev, false, module); ++ if (err) { ++ dev_err(mlxsw_m->bus_info->dev, "Port %d: Failed to init core port\n", ++ local_port); ++ goto err_alloc_etherdev; ++ } ++ ++ err = mlxsw_m_port_dev_addr_get(mlxsw_m_port); ++ if (err) { ++ dev_err(mlxsw_m->bus_info->dev, "Port %d: Unable to get port mac address\n", ++ mlxsw_m_port->local_port); ++ goto err_dev_addr_get; ++ } ++ ++ netif_carrier_off(dev); ++ ++ err = register_netdev(dev); ++ if (err) { ++ dev_err(mlxsw_m->bus_info->dev, "Port %d: Failed to register netdev\n", ++ mlxsw_m_port->local_port); ++ goto err_register_netdev; ++ } ++ ++ return 0; ++ ++err_register_netdev: ++ free_netdev(dev); ++err_dev_addr_get: ++err_alloc_etherdev: ++ mlxsw_m->modules[local_port] = NULL; ++ return err; ++} ++ ++static void mlxsw_m_port_remove(struct mlxsw_m *mlxsw_m, u8 local_port) ++{ ++ struct mlxsw_m_port *mlxsw_m_port = mlxsw_m->modules[local_port]; ++ ++ unregister_netdev(mlxsw_m_port->dev); /* This calls ndo_stop */ ++ free_netdev(mlxsw_m_port->dev); ++ mlxsw_core_port_fini(&mlxsw_m_port->core_port); ++} ++ ++static void mlxsw_m_ports_remove(struct mlxsw_m *mlxsw_m) ++{ ++ int i; ++ ++ for (i = 0; i 
< mlxsw_m->max_modules; i++) { ++ if (mlxsw_m->module_to_port[i] > 0) ++ mlxsw_m_port_remove(mlxsw_m, ++ mlxsw_m->module_to_port[i]); ++ } ++ ++ kfree(mlxsw_m->module_to_port); ++ kfree(mlxsw_m->modules); ++} ++ ++static int mlxsw_m_port_mapping_create(struct mlxsw_m *mlxsw_m, u8 local_port, ++ u8 *last_module) ++{ ++ u8 module, width; ++ int err; ++ ++ /* Fill out to local port mapping array */ ++ err = mlxsw_m_port_module_info_get(mlxsw_m, local_port, &module, ++ &width); ++ if (err) ++ return err; ++ ++ if (!width) ++ return 0; ++ /* Skip, if port belongs to the cluster */ ++ if (module == *last_module) ++ return 0; ++ *last_module = module; ++ mlxsw_m->module_to_port[module] = ++mlxsw_m->max_modules; ++ ++ return 0; ++} ++ ++static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m) ++{ ++ unsigned int max_port = mlxsw_core_max_ports(mlxsw_m->core); ++ u8 last_module = max_port; ++ int i; ++ int err; ++ ++ mlxsw_m->modules = kcalloc(max_port, sizeof(*mlxsw_m->modules), ++ GFP_KERNEL); ++ if (!mlxsw_m->modules) ++ return -ENOMEM; ++ ++ mlxsw_m->module_to_port = kmalloc_array(max_port, sizeof(int), ++ GFP_KERNEL); ++ if (!mlxsw_m->module_to_port) { ++ err = -ENOMEM; ++ goto err_port_create; ++ } ++ ++ /* Invalidate the entries of module to local port mapping array */ ++ for (i = 0; i < max_port; i++) ++ mlxsw_m->module_to_port[i] = -1; ++ ++ /* Fill out module to local port mapping array */ ++ for (i = 1; i <= max_port; i++) { ++ err = mlxsw_m_port_mapping_create(mlxsw_m, i, &last_module); ++ if (err) ++ goto err_port_create; ++ } ++ ++ /* Create port objects for each valid entry */ ++ for (i = 0; i < mlxsw_m->max_modules; i++) { ++ if (mlxsw_m->module_to_port[i] > 0) { ++ err = mlxsw_m_port_create(mlxsw_m, ++ mlxsw_m->module_to_port[i], ++ i); ++ if (err) ++ goto err_port_create; ++ } ++ } ++ ++ return 0; ++ ++err_port_create: ++ mlxsw_m_ports_remove(mlxsw_m); ++ return err; ++} ++ ++static int mlxsw_m_init(struct mlxsw_core *mlxsw_core, ++ const struct 
mlxsw_bus_info *mlxsw_bus_info) ++{ ++ struct mlxsw_m *mlxsw_m = mlxsw_core_driver_priv(mlxsw_core); ++ int err; ++ ++ mlxsw_m->core = mlxsw_core; ++ mlxsw_m->bus_info = mlxsw_bus_info; ++ ++ err = mlxsw_m_ports_create(mlxsw_m); ++ if (err) { ++ dev_err(mlxsw_m->bus_info->dev, "Failed to create modules\n"); ++ return err; ++ } ++ ++ return 0; ++} ++ ++static void mlxsw_m_fini(struct mlxsw_core *mlxsw_core) ++{ ++ struct mlxsw_m *mlxsw_m = mlxsw_core_driver_priv(mlxsw_core); ++ ++ mlxsw_m_ports_remove(mlxsw_m); ++} ++ ++static const struct mlxsw_config_profile mlxsw_m_config_profile; ++ ++static struct mlxsw_driver mlxsw_m_driver = { ++ .kind = mlxsw_m_driver_name, ++ .priv_size = sizeof(struct mlxsw_m), ++ .init = mlxsw_m_init, ++ .fini = mlxsw_m_fini, ++ .profile = &mlxsw_m_config_profile, ++}; ++ ++static const struct i2c_device_id mlxsw_m_i2c_id[] = { + { "mlxsw_minimal", 0}, + { }, + }; + +-static struct i2c_driver mlxsw_minimal_i2c_driver = { ++static struct i2c_driver mlxsw_m_i2c_driver = { + .driver.name = "mlxsw_minimal", + .class = I2C_CLASS_HWMON, +- .id_table = mlxsw_minimal_i2c_id, ++ .id_table = mlxsw_m_i2c_id, + }; + +-static int __init mlxsw_minimal_module_init(void) ++static int __init mlxsw_m_module_init(void) + { + int err; + +- err = mlxsw_core_driver_register(&mlxsw_minimal_driver); ++ err = mlxsw_core_driver_register(&mlxsw_m_driver); + if (err) + return err; + +- err = mlxsw_i2c_driver_register(&mlxsw_minimal_i2c_driver); ++ err = mlxsw_i2c_driver_register(&mlxsw_m_i2c_driver); + if (err) + goto err_i2c_driver_register; + + return 0; + + err_i2c_driver_register: +- mlxsw_core_driver_unregister(&mlxsw_minimal_driver); ++ mlxsw_core_driver_unregister(&mlxsw_m_driver); + + return err; + } + +-static void __exit mlxsw_minimal_module_exit(void) ++static void __exit mlxsw_m_module_exit(void) + { +- mlxsw_i2c_driver_unregister(&mlxsw_minimal_i2c_driver); +- mlxsw_core_driver_unregister(&mlxsw_minimal_driver); ++ 
mlxsw_i2c_driver_unregister(&mlxsw_m_i2c_driver); ++ mlxsw_core_driver_unregister(&mlxsw_m_driver); + } + +-module_init(mlxsw_minimal_module_init); +-module_exit(mlxsw_minimal_module_exit); ++module_init(mlxsw_m_module_init); ++module_exit(mlxsw_m_module_exit); + + MODULE_LICENSE("Dual BSD/GPL"); + MODULE_AUTHOR("Vadim Pasternak "); + MODULE_DESCRIPTION("Mellanox minimal driver"); +-MODULE_DEVICE_TABLE(i2c, mlxsw_minimal_i2c_id); ++MODULE_DEVICE_TABLE(i2c, mlxsw_m_i2c_id); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h +index 2c0c331..20f01bb 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h +@@ -4618,6 +4618,35 @@ static inline void mlxsw_reg_mfsl_unpack(char *payload, u8 tacho, + *p_tach_max = mlxsw_reg_mfsl_tach_max_get(payload); + } + ++/* FORE - Fan Out of Range Event Register ++ * -------------------------------------- ++ * This register reports the status of the controlled fans compared to the ++ * range defined by the MFSL register. ++ */ ++#define MLXSW_REG_FORE_ID 0x9007 ++#define MLXSW_REG_FORE_LEN 0x0C ++ ++MLXSW_REG_DEFINE(fore, MLXSW_REG_FORE_ID, MLXSW_REG_FORE_LEN); ++ ++/* fan_under_limit ++ * Fan speed is below the low limit defined in MFSL register. Each bit relates ++ * to a single tachometer and indicates the specific tachometer reading is ++ * below the threshold. 
++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, fore, fan_under_limit, 0x00, 16, 10); ++ ++static inline void mlxsw_reg_fore_unpack(char *payload, u8 tacho, ++ bool *fan_under_limit) ++{ ++ u16 limit; ++ ++ if (fan_under_limit) { ++ limit = mlxsw_reg_fore_fan_under_limit_get(payload); ++ *fan_under_limit = !!(limit & BIT(tacho)); ++ } ++} ++ + /* MTCAP - Management Temperature Capabilities + * ------------------------------------------- + * This register exposes the capabilities of the device and +@@ -4750,6 +4779,80 @@ static inline void mlxsw_reg_mtmp_unpack(char *payload, unsigned int *p_temp, + mlxsw_reg_mtmp_sensor_name_memcpy_from(payload, sensor_name); + } + ++/* MTBR - Management Temperature Bulk Register ++ * ------------------------------------------- ++ * This register is used for bulk temperature reading. ++ */ ++#define MLXSW_REG_MTBR_ID 0x900F ++#define MLXSW_REG_MTBR_BASE_LEN 0x10 /* base length, without records */ ++#define MLXSW_REG_MTBR_REC_LEN 0x04 /* record length */ ++#define MLXSW_REG_MTBR_REC_MAX_COUNT 47 /* firmware limitation */ ++#define MLXSW_REG_MTBR_LEN (MLXSW_REG_MTBR_BASE_LEN + \ ++ MLXSW_REG_MTBR_REC_LEN * \ ++ MLXSW_REG_MTBR_REC_MAX_COUNT) ++ ++MLXSW_REG_DEFINE(mtbr, MLXSW_REG_MTBR_ID, MLXSW_REG_MTBR_LEN); ++ ++/* reg_mtbr_base_sensor_index ++ * Base sensors index to access (0 - ASIC sensor, 1-63 - ambient sensors, ++ * 64-127 are mapped to the SFP+/QSFP modules sequentially). ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mtbr, base_sensor_index, 0x00, 0, 7); ++ ++/* reg_mtbr_num_rec ++ * Request: Number of records to read ++ * Response: Number of records read ++ * See above description for more details. ++ * Range 1..255 ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mtbr, num_rec, 0x04, 0, 8); ++ ++/* reg_mtbr_rec_max_temp ++ * The highest measured temperature from the sensor. ++ * When the bit mte is cleared, the field max_temperature is reserved. 
++ * Access: RO
++ */
++MLXSW_ITEM32_INDEXED(reg, mtbr, rec_max_temp, MLXSW_REG_MTBR_BASE_LEN, 16,
++		     16, MLXSW_REG_MTBR_REC_LEN, 0x00, false);
++
++/* reg_mtbr_rec_temp
++ * Temperature reading from the sensor. Reading is in 0..125 Celsius
++ * degrees units.
++ * Access: RO
++ */
++MLXSW_ITEM32_INDEXED(reg, mtbr, rec_temp, MLXSW_REG_MTBR_BASE_LEN, 0, 16,
++		     MLXSW_REG_MTBR_REC_LEN, 0x00, false);
++
++static inline void mlxsw_reg_mtbr_pack(char *payload, u8 base_sensor_index,
++				       u8 num_rec)
++{
++	MLXSW_REG_ZERO(mtbr, payload);
++	mlxsw_reg_mtbr_base_sensor_index_set(payload, base_sensor_index);
++	mlxsw_reg_mtbr_num_rec_set(payload, num_rec);
++}
++
++/* Error codes from temperature reading */
++enum mlxsw_reg_mtbr_temp_status {
++	MLXSW_REG_MTBR_NO_CONN		= 0x8000,
++	MLXSW_REG_MTBR_NO_TEMP_SENS	= 0x8001,
++	MLXSW_REG_MTBR_INDEX_NA		= 0x8002,
++	MLXSW_REG_MTBR_BAD_SENS_INFO	= 0x8003,
++};
++
++/* Base index for reading modules temperature */
++#define MLXSW_REG_MTBR_BASE_MODULE_INDEX 64
++
++static inline void mlxsw_reg_mtbr_temp_unpack(char *payload, int rec_ind,
++					      u16 *p_temp, u16 *p_max_temp)
++{
++	if (p_temp)
++		*p_temp = mlxsw_reg_mtbr_rec_temp_get(payload, rec_ind);
++	if (p_max_temp)
++		*p_max_temp = mlxsw_reg_mtbr_rec_max_temp_get(payload, rec_ind);
++}
++
+ /* MCIA - Management Cable Info Access
+ * -----------------------------------
+ * MCIA register is used to access the SFP+ and QSFP connector's EPROM.
+@@ -4804,13 +4907,41 @@ MLXSW_ITEM32(reg, mcia, device_address, 0x04, 0, 16); + */ + MLXSW_ITEM32(reg, mcia, size, 0x08, 0, 16); + +-#define MLXSW_SP_REG_MCIA_EEPROM_SIZE 48 ++#define MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH 256 ++#define MLXSW_REG_MCIA_EEPROM_SIZE 48 ++#define MLXSW_REG_MCIA_I2C_ADDR_LOW 0x50 ++#define MLXSW_REG_MCIA_I2C_ADDR_HIGH 0x51 ++#define MLXSW_REG_MCIA_PAGE0_LO_OFF 0xa0 ++#define MLXSW_REG_MCIA_TH_ITEM_SIZE 2 ++#define MLXSW_REG_MCIA_TH_PAGE_NUM 3 ++#define MLXSW_REG_MCIA_PAGE0_LO 0 ++#define MLXSW_REG_MCIA_TH_PAGE_OFF 0x80 ++ ++enum mlxsw_reg_mcia_eeprom_module_info_rev_id { ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_UNSPC = 0x00, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_8436 = 0x01, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_8636 = 0x03, ++}; ++ ++enum mlxsw_reg_mcia_eeprom_module_info_id { ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_SFP = 0x03, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP = 0x0C, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_PLUS = 0x0D, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP28 = 0x11, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_DD = 0x18, ++}; ++ ++enum mlxsw_reg_mcia_eeprom_module_info { ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID, ++ MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE, ++}; + + /* reg_mcia_eeprom + * Bytes to read/write. 
+ * Access: RW + */ +-MLXSW_ITEM_BUF(reg, mcia, eeprom, 0x10, MLXSW_SP_REG_MCIA_EEPROM_SIZE); ++MLXSW_ITEM_BUF(reg, mcia, eeprom, 0x10, MLXSW_REG_MCIA_EEPROM_SIZE); + + static inline void mlxsw_reg_mcia_pack(char *payload, u8 module, u8 lock, + u8 page_number, u16 device_addr, +@@ -5548,6 +5679,8 @@ static inline const char *mlxsw_reg_id_str(u16 reg_id) + return "MFSC"; + case MLXSW_REG_MFSM_ID: + return "MFSM"; ++ case MLXSW_REG_FORE_ID: ++ return "FORE"; + case MLXSW_REG_MTCAP_ID: + return "MTCAP"; + case MLXSW_REG_MPAT_ID: +@@ -5556,6 +5689,8 @@ static inline const char *mlxsw_reg_id_str(u16 reg_id) + return "MPAR"; + case MLXSW_REG_MTMP_ID: + return "MTMP"; ++ case MLXSW_REG_MTBR_ID: ++ return "MTBR"; + case MLXSW_REG_MLCR_ID: + return "MLCR"; + case MLXSW_REG_SBPR_ID: +diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c +index b6d4455..52314a1 100644 +--- a/drivers/platform/mellanox/mlxreg-hotplug.c ++++ b/drivers/platform/mellanox/mlxreg-hotplug.c +@@ -495,7 +495,9 @@ static int mlxreg_hotplug_set_irq(struct mlxreg_hotplug_priv_data *priv) + { + struct mlxreg_core_hotplug_platform_data *pdata; + struct mlxreg_core_item *item; +- int i, ret; ++ struct mlxreg_core_data *data; ++ u32 regval; ++ int i, j, ret; + + pdata = dev_get_platdata(&priv->pdev->dev); + item = pdata->items; +@@ -507,6 +509,25 @@ static int mlxreg_hotplug_set_irq(struct mlxreg_hotplug_priv_data *priv) + if (ret) + goto out; + ++ /* ++ * Verify if hardware configuration requires to disable ++ * interrupt capability for some of components. ++ */ ++ data = item->data; ++ for (j = 0; j < item->count; j++, data++) { ++ /* Verify if the attribute has capability register. */ ++ if (data->capability) { ++ /* Read capability register. 
*/ ++ ret = regmap_read(priv->regmap, ++ data->capability, ®val); ++ if (ret) ++ goto out; ++ ++ if (!(regval & data->bit)) ++ item->mask &= ~BIT(j); ++ } ++ } ++ + /* Set group initial status as mask and unmask group event. */ + if (item->inversed) { + item->cache = item->mask; +diff --git a/drivers/platform/mellanox/mlxreg-io.c b/drivers/platform/mellanox/mlxreg-io.c +index c192dfe..acfaf64 100644 +--- a/drivers/platform/mellanox/mlxreg-io.c ++++ b/drivers/platform/mellanox/mlxreg-io.c +@@ -152,8 +152,8 @@ static int mlxreg_io_attr_init(struct mlxreg_io_priv_data *priv) + { + int i; + +- priv->group.attrs = devm_kzalloc(&priv->pdev->dev, +- priv->pdata->counter * ++ priv->group.attrs = devm_kcalloc(&priv->pdev->dev, ++ priv->pdata->counter, + sizeof(struct attribute *), + GFP_KERNEL); + if (!priv->group.attrs) +diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c +index e8782f5..6e150b4 100644 +--- a/drivers/platform/x86/mlx-platform.c ++++ b/drivers/platform/x86/mlx-platform.c +@@ -1,34 +1,9 @@ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 + /* +- * Copyright (c) 2016 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2016 Vadim Pasternak ++ * Mellanox platform driver + * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. 
+- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. ++ * Copyright (C) 2016-2018 Mellanox Technologies ++ * Copyright (C) 2016-2018 Vadim Pasternak + */ + + #include +@@ -49,7 +24,7 @@ + #define MLXPLAT_CPLD_LPC_REG_BASE_ADRR 0x2500 + #define MLXPLAT_CPLD_LPC_REG_CPLD1_VER_OFFSET 0x00 + #define MLXPLAT_CPLD_LPC_REG_CPLD2_VER_OFFSET 0x01 +-#define MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET 0x02 ++#define MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET 0x02 + #define MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET 0x1d + #define MLXPLAT_CPLD_LPC_REG_RST_CAUSE1_OFFSET 0x1e + #define MLXPLAT_CPLD_LPC_REG_RST_CAUSE2_OFFSET 0x1f +@@ -58,6 +33,7 @@ + #define MLXPLAT_CPLD_LPC_REG_LED3_OFFSET 0x22 + #define MLXPLAT_CPLD_LPC_REG_LED4_OFFSET 0x23 + #define MLXPLAT_CPLD_LPC_REG_LED5_OFFSET 0x24 ++#define MLXPLAT_CPLD_LPC_REG_FAN_DIRECTION 0x2a + #define MLXPLAT_CPLD_LPC_REG_GP1_OFFSET 0x30 + #define MLXPLAT_CPLD_LPC_REG_WP1_OFFSET 0x31 + #define MLXPLAT_CPLD_LPC_REG_GP2_OFFSET 0x32 +@@ -92,6 +68,10 @@ + #define MLXPLAT_CPLD_LPC_REG_TACHO10_OFFSET 0xee + #define 
MLXPLAT_CPLD_LPC_REG_TACHO11_OFFSET 0xef + #define MLXPLAT_CPLD_LPC_REG_TACHO12_OFFSET 0xf0 ++#define MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET 0xf5 ++#define MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET 0xf6 ++#define MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET 0xf7 ++#define MLXPLAT_CPLD_LPC_REG_TACHO_SPEED_OFFSET 0xf8 + #define MLXPLAT_CPLD_LPC_IO_RANGE 0x100 + #define MLXPLAT_CPLD_LPC_I2C_CH1_OFF 0xdb + #define MLXPLAT_CPLD_LPC_I2C_CH2_OFF 0xda +@@ -578,7 +558,7 @@ static struct mlxreg_core_item mlxplat_mlxcpld_msn201x_items[] = { + + static + struct mlxreg_core_hotplug_platform_data mlxplat_mlxcpld_msn201x_data = { +- .items = mlxplat_mlxcpld_msn21xx_items, ++ .items = mlxplat_mlxcpld_msn201x_items, + .counter = ARRAY_SIZE(mlxplat_mlxcpld_msn201x_items), + .cell = MLXPLAT_CPLD_LPC_REG_AGGR_OFFSET, + .mask = MLXPLAT_CPLD_AGGR_MASK_DEF, +@@ -609,36 +589,48 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_fan_items_data[] = { + .label = "fan1", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + .mask = BIT(0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(0), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + { + .label = "fan2", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + .mask = BIT(1), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(1), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + { + .label = "fan3", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + .mask = BIT(2), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(2), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + { + .label = "fan4", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + .mask = BIT(3), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(3), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + { + .label = "fan5", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + .mask = BIT(4), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(4), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + { + .label = "fan6", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_OFFSET, + 
.mask = BIT(5), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(5), + .hpdev.nr = MLXPLAT_CPLD_NR_NONE, + }, + }; +@@ -841,61 +833,85 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_led_data[] = { + .label = "fan1:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED2_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(0), + }, + { + .label = "fan1:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED2_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(0), + }, + { + .label = "fan2:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED2_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(1), + }, + { + .label = "fan2:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED2_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(1), + }, + { + .label = "fan3:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED3_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(2), + }, + { + .label = "fan3:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED3_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(2), + }, + { + .label = "fan4:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED3_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(3), + }, + { + .label = "fan4:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED3_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(3), + }, + { + .label = "fan5:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED4_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(4), + }, + 
{ + .label = "fan5:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED4_OFFSET, + .mask = MLXPLAT_CPLD_LED_LO_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(4), + }, + { + .label = "fan6:green", + .reg = MLXPLAT_CPLD_LPC_REG_LED4_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(5), + }, + { + .label = "fan6:orange", + .reg = MLXPLAT_CPLD_LPC_REG_LED4_OFFSET, + .mask = MLXPLAT_CPLD_LED_HI_NIBBLE_MASK, ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET, ++ .bit = BIT(5), + }, + }; + +@@ -1123,7 +1139,7 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_regs_io_data[] = { + .reg = MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET, + .bit = GENMASK(7, 0), + .mode = 0444, +- }, ++ }, + { + .label = "reset_long_pb", + .reg = MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET, +@@ -1209,6 +1225,12 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_regs_io_data[] = { + .bit = 1, + .mode = 0444, + }, ++ { ++ .label = "fan_dir", ++ .reg = MLXPLAT_CPLD_LPC_REG_FAN_DIRECTION, ++ .bit = GENMASK(7, 0), ++ .mode = 0200, ++ }, + }; + + static struct mlxreg_core_platform_data mlxplat_default_ng_regs_io_data = { +@@ -1226,61 +1248,90 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_fan_data[] = { + .label = "tacho1", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO1_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(0), + }, + { + .label = "tacho2", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO2_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(1), + }, + { + .label = "tacho3", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO3_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(2), + }, + { + .label = "tacho4", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO4_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(3), + 
}, + { + .label = "tacho5", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO5_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(4), + }, + { + .label = "tacho6", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO6_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(5), + }, + { + .label = "tacho7", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO7_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(6), + }, + { + .label = "tacho8", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO8_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET, ++ .bit = BIT(7), + }, + { + .label = "tacho9", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO9_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET, ++ .bit = BIT(0), + }, + { + .label = "tacho10", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO10_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET, ++ .bit = BIT(1), + }, + { + .label = "tacho11", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO11_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET, ++ .bit = BIT(2), + }, + { + .label = "tacho12", + .reg = MLXPLAT_CPLD_LPC_REG_TACHO12_OFFSET, + .mask = GENMASK(7, 0), ++ .capability = MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET, ++ .bit = BIT(3), ++ }, ++ { ++ .label = "conf", ++ .reg = MLXPLAT_CPLD_LPC_REG_TACHO12_OFFSET, ++ .capability = MLXPLAT_CPLD_LPC_REG_TACHO_SPEED_OFFSET, + }, + }; + +@@ -1332,6 +1383,7 @@ static bool mlxplat_mlxcpld_readable_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_LED3_OFFSET: + case MLXPLAT_CPLD_LPC_REG_LED4_OFFSET: + case MLXPLAT_CPLD_LPC_REG_LED5_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_DIRECTION: + case MLXPLAT_CPLD_LPC_REG_GP1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_WP1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_GP2_OFFSET: +@@ -1366,6 +1418,10 @@ static bool mlxplat_mlxcpld_readable_reg(struct 
device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_TACHO11_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO12_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM_CONTROL_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_TACHO_SPEED_OFFSET: + return true; + } + return false; +@@ -1385,6 +1441,7 @@ static bool mlxplat_mlxcpld_volatile_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_LED3_OFFSET: + case MLXPLAT_CPLD_LPC_REG_LED4_OFFSET: + case MLXPLAT_CPLD_LPC_REG_LED5_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_DIRECTION: + case MLXPLAT_CPLD_LPC_REG_GP1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_GP2_OFFSET: + case MLXPLAT_CPLD_LPC_REG_AGGR_OFFSET: +@@ -1417,6 +1474,10 @@ static bool mlxplat_mlxcpld_volatile_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_TACHO11_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO12_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM_CONTROL_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_CAP1_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_CAP2_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_FAN_DRW_CAP_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_TACHO_SPEED_OFFSET: + return true; + } + return false; +@@ -1639,6 +1700,13 @@ static const struct dmi_system_id mlxplat_dmi_table[] __initconst = { + }, + }, + { ++ .callback = mlxplat_dmi_qmb7xx_matched, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "Mellanox Technologies"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "MSN38"), ++ }, ++ }, ++ { + .callback = mlxplat_dmi_default_matched, + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "VMOD0001"), +@@ -1668,6 +1736,12 @@ static const struct dmi_system_id mlxplat_dmi_table[] __initconst = { + DMI_MATCH(DMI_BOARD_NAME, "VMOD0005"), + }, + }, ++ { ++ .callback = mlxplat_dmi_qmb7xx_matched, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "VMOD0007"), ++ }, ++ }, + { } + }; + +diff --git a/include/linux/platform_data/mlxreg.h b/include/linux/platform_data/mlxreg.h +index 
19f5cb61..1b2f86f 100644 +--- a/include/linux/platform_data/mlxreg.h ++++ b/include/linux/platform_data/mlxreg.h +@@ -61,6 +61,7 @@ struct mlxreg_hotplug_device { + * @reg: attribute register; + * @mask: attribute access mask; + * @bit: attribute effective bit; ++ * @capability: attribute capability register; + * @mode: access mode; + * @np - pointer to node platform associated with attribute; + * @hpdev - hotplug device data; +@@ -72,6 +73,7 @@ struct mlxreg_core_data { + u32 reg; + u32 mask; + u32 bit; ++ u32 capability; + umode_t mode; + struct device_node *np; + struct mlxreg_hotplug_device hpdev; +@@ -107,9 +109,9 @@ struct mlxreg_core_item { + /** + * struct mlxreg_core_platform_data - platform data: + * +- * @led_data: led private data; ++ * @data: instance private data; + * @regmap: register map of parent device; +- * @counter: number of led instances; ++ * @counter: number of instances; + */ + struct mlxreg_core_platform_data { + struct mlxreg_core_data *data; +diff --git a/include/linux/sfp.h b/include/linux/sfp.h +new file mode 100644 +index 0000000..d37518e +--- /dev/null ++++ b/include/linux/sfp.h +@@ -0,0 +1,564 @@ ++#ifndef LINUX_SFP_H ++#define LINUX_SFP_H ++ ++#include ++ ++struct sfp_eeprom_base { ++ u8 phys_id; ++ u8 phys_ext_id; ++ u8 connector; ++#if defined __BIG_ENDIAN_BITFIELD ++ u8 e10g_base_er:1; ++ u8 e10g_base_lrm:1; ++ u8 e10g_base_lr:1; ++ u8 e10g_base_sr:1; ++ u8 if_1x_sx:1; ++ u8 if_1x_lx:1; ++ u8 if_1x_copper_active:1; ++ u8 if_1x_copper_passive:1; ++ ++ u8 escon_mmf_1310_led:1; ++ u8 escon_smf_1310_laser:1; ++ u8 sonet_oc192_short_reach:1; ++ u8 sonet_reach_bit1:1; ++ u8 sonet_reach_bit2:1; ++ u8 sonet_oc48_long_reach:1; ++ u8 sonet_oc48_intermediate_reach:1; ++ u8 sonet_oc48_short_reach:1; ++ ++ u8 unallocated_5_7:1; ++ u8 sonet_oc12_smf_long_reach:1; ++ u8 sonet_oc12_smf_intermediate_reach:1; ++ u8 sonet_oc12_short_reach:1; ++ u8 unallocated_5_3:1; ++ u8 sonet_oc3_smf_long_reach:1; ++ u8 sonet_oc3_smf_intermediate_reach:1; ++ u8 
sonet_oc3_short_reach:1; ++ ++ u8 e_base_px:1; ++ u8 e_base_bx10:1; ++ u8 e100_base_fx:1; ++ u8 e100_base_lx:1; ++ u8 e1000_base_t:1; ++ u8 e1000_base_cx:1; ++ u8 e1000_base_lx:1; ++ u8 e1000_base_sx:1; ++ ++ u8 fc_ll_v:1; ++ u8 fc_ll_s:1; ++ u8 fc_ll_i:1; ++ u8 fc_ll_l:1; ++ u8 fc_ll_m:1; ++ u8 fc_tech_sa:1; ++ u8 fc_tech_lc:1; ++ u8 fc_tech_electrical_inter_enclosure:1; ++ ++ u8 fc_tech_electrical_intra_enclosure:1; ++ u8 fc_tech_sn:1; ++ u8 fc_tech_sl:1; ++ u8 fc_tech_ll:1; ++ u8 sfp_ct_active:1; ++ u8 sfp_ct_passive:1; ++ u8 unallocated_8_1:1; ++ u8 unallocated_8_0:1; ++ ++ u8 fc_media_tw:1; ++ u8 fc_media_tp:1; ++ u8 fc_media_mi:1; ++ u8 fc_media_tv:1; ++ u8 fc_media_m6:1; ++ u8 fc_media_m5:1; ++ u8 unallocated_9_1:1; ++ u8 fc_media_sm:1; ++ ++ u8 fc_speed_1200:1; ++ u8 fc_speed_800:1; ++ u8 fc_speed_1600:1; ++ u8 fc_speed_400:1; ++ u8 fc_speed_3200:1; ++ u8 fc_speed_200:1; ++ u8 unallocated_10_1:1; ++ u8 fc_speed_100:1; ++#elif defined __LITTLE_ENDIAN_BITFIELD ++ u8 if_1x_copper_passive:1; ++ u8 if_1x_copper_active:1; ++ u8 if_1x_lx:1; ++ u8 if_1x_sx:1; ++ u8 e10g_base_sr:1; ++ u8 e10g_base_lr:1; ++ u8 e10g_base_lrm:1; ++ u8 e10g_base_er:1; ++ ++ u8 sonet_oc3_short_reach:1; ++ u8 sonet_oc3_smf_intermediate_reach:1; ++ u8 sonet_oc3_smf_long_reach:1; ++ u8 unallocated_5_3:1; ++ u8 sonet_oc12_short_reach:1; ++ u8 sonet_oc12_smf_intermediate_reach:1; ++ u8 sonet_oc12_smf_long_reach:1; ++ u8 unallocated_5_7:1; ++ ++ u8 sonet_oc48_short_reach:1; ++ u8 sonet_oc48_intermediate_reach:1; ++ u8 sonet_oc48_long_reach:1; ++ u8 sonet_reach_bit2:1; ++ u8 sonet_reach_bit1:1; ++ u8 sonet_oc192_short_reach:1; ++ u8 escon_smf_1310_laser:1; ++ u8 escon_mmf_1310_led:1; ++ ++ u8 e1000_base_sx:1; ++ u8 e1000_base_lx:1; ++ u8 e1000_base_cx:1; ++ u8 e1000_base_t:1; ++ u8 e100_base_lx:1; ++ u8 e100_base_fx:1; ++ u8 e_base_bx10:1; ++ u8 e_base_px:1; ++ ++ u8 fc_tech_electrical_inter_enclosure:1; ++ u8 fc_tech_lc:1; ++ u8 fc_tech_sa:1; ++ u8 fc_ll_m:1; ++ u8 fc_ll_l:1; ++ u8 fc_ll_i:1; 
++ u8 fc_ll_s:1; ++ u8 fc_ll_v:1; ++ ++ u8 unallocated_8_0:1; ++ u8 unallocated_8_1:1; ++ u8 sfp_ct_passive:1; ++ u8 sfp_ct_active:1; ++ u8 fc_tech_ll:1; ++ u8 fc_tech_sl:1; ++ u8 fc_tech_sn:1; ++ u8 fc_tech_electrical_intra_enclosure:1; ++ ++ u8 fc_media_sm:1; ++ u8 unallocated_9_1:1; ++ u8 fc_media_m5:1; ++ u8 fc_media_m6:1; ++ u8 fc_media_tv:1; ++ u8 fc_media_mi:1; ++ u8 fc_media_tp:1; ++ u8 fc_media_tw:1; ++ ++ u8 fc_speed_100:1; ++ u8 unallocated_10_1:1; ++ u8 fc_speed_200:1; ++ u8 fc_speed_3200:1; ++ u8 fc_speed_400:1; ++ u8 fc_speed_1600:1; ++ u8 fc_speed_800:1; ++ u8 fc_speed_1200:1; ++#else ++#error Unknown Endian ++#endif ++ u8 encoding; ++ u8 br_nominal; ++ u8 rate_id; ++ u8 link_len[6]; ++ char vendor_name[16]; ++ u8 extended_cc; ++ char vendor_oui[3]; ++ char vendor_pn[16]; ++ char vendor_rev[4]; ++ union { ++ __be16 optical_wavelength; ++ __be16 cable_compliance; ++ struct { ++#if defined __BIG_ENDIAN_BITFIELD ++ u8 reserved60_2:6; ++ u8 fc_pi_4_app_h:1; ++ u8 sff8431_app_e:1; ++ u8 reserved61:8; ++#elif defined __LITTLE_ENDIAN_BITFIELD ++ u8 sff8431_app_e:1; ++ u8 fc_pi_4_app_h:1; ++ u8 reserved60_2:6; ++ u8 reserved61:8; ++#else ++#error Unknown Endian ++#endif ++ } __packed passive; ++ struct { ++#if defined __BIG_ENDIAN_BITFIELD ++ u8 reserved60_4:4; ++ u8 fc_pi_4_lim:1; ++ u8 sff8431_lim:1; ++ u8 fc_pi_4_app_h:1; ++ u8 sff8431_app_e:1; ++ u8 reserved61:8; ++#elif defined __LITTLE_ENDIAN_BITFIELD ++ u8 sff8431_app_e:1; ++ u8 fc_pi_4_app_h:1; ++ u8 sff8431_lim:1; ++ u8 fc_pi_4_lim:1; ++ u8 reserved60_4:4; ++ u8 reserved61:8; ++#else ++#error Unknown Endian ++#endif ++ } __packed active; ++ } __packed; ++ u8 reserved62; ++ u8 cc_base; ++} __packed; ++ ++struct sfp_eeprom_ext { ++ __be16 options; ++ u8 br_max; ++ u8 br_min; ++ char vendor_sn[16]; ++ char datecode[8]; ++ u8 diagmon; ++ u8 enhopts; ++ u8 sff8472_compliance; ++ u8 cc_ext; ++} __packed; ++ ++/** ++ * struct sfp_eeprom_id - raw SFP module identification information ++ * @base: base SFP 
module identification structure ++ * @ext: extended SFP module identification structure ++ * ++ * See the SFF-8472 specification and related documents for the definition ++ * of these structure members. This can be obtained from ++ * ftp://ftp.seagate.com/sff ++ */ ++struct sfp_eeprom_id { ++ struct sfp_eeprom_base base; ++ struct sfp_eeprom_ext ext; ++} __packed; ++ ++struct sfp_diag { ++ __be16 temp_high_alarm; ++ __be16 temp_low_alarm; ++ __be16 temp_high_warn; ++ __be16 temp_low_warn; ++ __be16 volt_high_alarm; ++ __be16 volt_low_alarm; ++ __be16 volt_high_warn; ++ __be16 volt_low_warn; ++ __be16 bias_high_alarm; ++ __be16 bias_low_alarm; ++ __be16 bias_high_warn; ++ __be16 bias_low_warn; ++ __be16 txpwr_high_alarm; ++ __be16 txpwr_low_alarm; ++ __be16 txpwr_high_warn; ++ __be16 txpwr_low_warn; ++ __be16 rxpwr_high_alarm; ++ __be16 rxpwr_low_alarm; ++ __be16 rxpwr_high_warn; ++ __be16 rxpwr_low_warn; ++ __be16 laser_temp_high_alarm; ++ __be16 laser_temp_low_alarm; ++ __be16 laser_temp_high_warn; ++ __be16 laser_temp_low_warn; ++ __be16 tec_cur_high_alarm; ++ __be16 tec_cur_low_alarm; ++ __be16 tec_cur_high_warn; ++ __be16 tec_cur_low_warn; ++ __be32 cal_rxpwr4; ++ __be32 cal_rxpwr3; ++ __be32 cal_rxpwr2; ++ __be32 cal_rxpwr1; ++ __be32 cal_rxpwr0; ++ __be16 cal_txi_slope; ++ __be16 cal_txi_offset; ++ __be16 cal_txpwr_slope; ++ __be16 cal_txpwr_offset; ++ __be16 cal_t_slope; ++ __be16 cal_t_offset; ++ __be16 cal_v_slope; ++ __be16 cal_v_offset; ++} __packed; ++ ++/* SFP EEPROM registers */ ++enum { ++ SFP_PHYS_ID = 0x00, ++ SFP_PHYS_EXT_ID = 0x01, ++ SFP_CONNECTOR = 0x02, ++ SFP_COMPLIANCE = 0x03, ++ SFP_ENCODING = 0x0b, ++ SFP_BR_NOMINAL = 0x0c, ++ SFP_RATE_ID = 0x0d, ++ SFP_LINK_LEN_SM_KM = 0x0e, ++ SFP_LINK_LEN_SM_100M = 0x0f, ++ SFP_LINK_LEN_50UM_OM2_10M = 0x10, ++ SFP_LINK_LEN_62_5UM_OM1_10M = 0x11, ++ SFP_LINK_LEN_COPPER_1M = 0x12, ++ SFP_LINK_LEN_50UM_OM4_10M = 0x12, ++ SFP_LINK_LEN_50UM_OM3_10M = 0x13, ++ SFP_VENDOR_NAME = 0x14, ++ SFP_VENDOR_OUI = 0x25, 
++ SFP_VENDOR_PN = 0x28, ++ SFP_VENDOR_REV = 0x38, ++ SFP_OPTICAL_WAVELENGTH_MSB = 0x3c, ++ SFP_OPTICAL_WAVELENGTH_LSB = 0x3d, ++ SFP_CABLE_SPEC = 0x3c, ++ SFP_CC_BASE = 0x3f, ++ SFP_OPTIONS = 0x40, /* 2 bytes, MSB, LSB */ ++ SFP_BR_MAX = 0x42, ++ SFP_BR_MIN = 0x43, ++ SFP_VENDOR_SN = 0x44, ++ SFP_DATECODE = 0x54, ++ SFP_DIAGMON = 0x5c, ++ SFP_ENHOPTS = 0x5d, ++ SFP_SFF8472_COMPLIANCE = 0x5e, ++ SFP_CC_EXT = 0x5f, ++ ++ SFP_PHYS_ID_SFF = 0x02, ++ SFP_PHYS_ID_SFP = 0x03, ++ SFP_PHYS_EXT_ID_SFP = 0x04, ++ SFP_CONNECTOR_UNSPEC = 0x00, ++ /* codes 01-05 not supportable on SFP, but some modules have single SC */ ++ SFP_CONNECTOR_SC = 0x01, ++ SFP_CONNECTOR_FIBERJACK = 0x06, ++ SFP_CONNECTOR_LC = 0x07, ++ SFP_CONNECTOR_MT_RJ = 0x08, ++ SFP_CONNECTOR_MU = 0x09, ++ SFP_CONNECTOR_SG = 0x0a, ++ SFP_CONNECTOR_OPTICAL_PIGTAIL = 0x0b, ++ SFP_CONNECTOR_MPO_1X12 = 0x0c, ++ SFP_CONNECTOR_MPO_2X16 = 0x0d, ++ SFP_CONNECTOR_HSSDC_II = 0x20, ++ SFP_CONNECTOR_COPPER_PIGTAIL = 0x21, ++ SFP_CONNECTOR_RJ45 = 0x22, ++ SFP_CONNECTOR_NOSEPARATE = 0x23, ++ SFP_CONNECTOR_MXC_2X16 = 0x24, ++ SFP_ENCODING_UNSPEC = 0x00, ++ SFP_ENCODING_8B10B = 0x01, ++ SFP_ENCODING_4B5B = 0x02, ++ SFP_ENCODING_NRZ = 0x03, ++ SFP_ENCODING_8472_MANCHESTER = 0x04, ++ SFP_ENCODING_8472_SONET = 0x05, ++ SFP_ENCODING_8472_64B66B = 0x06, ++ SFP_ENCODING_256B257B = 0x07, ++ SFP_ENCODING_PAM4 = 0x08, ++ SFP_OPTIONS_HIGH_POWER_LEVEL = BIT(13), ++ SFP_OPTIONS_PAGING_A2 = BIT(12), ++ SFP_OPTIONS_RETIMER = BIT(11), ++ SFP_OPTIONS_COOLED_XCVR = BIT(10), ++ SFP_OPTIONS_POWER_DECL = BIT(9), ++ SFP_OPTIONS_RX_LINEAR_OUT = BIT(8), ++ SFP_OPTIONS_RX_DECISION_THRESH = BIT(7), ++ SFP_OPTIONS_TUNABLE_TX = BIT(6), ++ SFP_OPTIONS_RATE_SELECT = BIT(5), ++ SFP_OPTIONS_TX_DISABLE = BIT(4), ++ SFP_OPTIONS_TX_FAULT = BIT(3), ++ SFP_OPTIONS_LOS_INVERTED = BIT(2), ++ SFP_OPTIONS_LOS_NORMAL = BIT(1), ++ SFP_DIAGMON_DDM = BIT(6), ++ SFP_DIAGMON_INT_CAL = BIT(5), ++ SFP_DIAGMON_EXT_CAL = BIT(4), ++ SFP_DIAGMON_RXPWR_AVG = BIT(3), ++ 
SFP_DIAGMON_ADDRMODE = BIT(2), ++ SFP_ENHOPTS_ALARMWARN = BIT(7), ++ SFP_ENHOPTS_SOFT_TX_DISABLE = BIT(6), ++ SFP_ENHOPTS_SOFT_TX_FAULT = BIT(5), ++ SFP_ENHOPTS_SOFT_RX_LOS = BIT(4), ++ SFP_ENHOPTS_SOFT_RATE_SELECT = BIT(3), ++ SFP_ENHOPTS_APP_SELECT_SFF8079 = BIT(2), ++ SFP_ENHOPTS_SOFT_RATE_SFF8431 = BIT(1), ++ SFP_SFF8472_COMPLIANCE_NONE = 0x00, ++ SFP_SFF8472_COMPLIANCE_REV9_3 = 0x01, ++ SFP_SFF8472_COMPLIANCE_REV9_5 = 0x02, ++ SFP_SFF8472_COMPLIANCE_REV10_2 = 0x03, ++ SFP_SFF8472_COMPLIANCE_REV10_4 = 0x04, ++ SFP_SFF8472_COMPLIANCE_REV11_0 = 0x05, ++ SFP_SFF8472_COMPLIANCE_REV11_3 = 0x06, ++ SFP_SFF8472_COMPLIANCE_REV11_4 = 0x07, ++ SFP_SFF8472_COMPLIANCE_REV12_0 = 0x08, ++}; ++ ++/* SFP Diagnostics */ ++enum { ++ /* Alarm and warnings stored MSB at lower address then LSB */ ++ SFP_TEMP_HIGH_ALARM = 0x00, ++ SFP_TEMP_LOW_ALARM = 0x02, ++ SFP_TEMP_HIGH_WARN = 0x04, ++ SFP_TEMP_LOW_WARN = 0x06, ++ SFP_VOLT_HIGH_ALARM = 0x08, ++ SFP_VOLT_LOW_ALARM = 0x0a, ++ SFP_VOLT_HIGH_WARN = 0x0c, ++ SFP_VOLT_LOW_WARN = 0x0e, ++ SFP_BIAS_HIGH_ALARM = 0x10, ++ SFP_BIAS_LOW_ALARM = 0x12, ++ SFP_BIAS_HIGH_WARN = 0x14, ++ SFP_BIAS_LOW_WARN = 0x16, ++ SFP_TXPWR_HIGH_ALARM = 0x18, ++ SFP_TXPWR_LOW_ALARM = 0x1a, ++ SFP_TXPWR_HIGH_WARN = 0x1c, ++ SFP_TXPWR_LOW_WARN = 0x1e, ++ SFP_RXPWR_HIGH_ALARM = 0x20, ++ SFP_RXPWR_LOW_ALARM = 0x22, ++ SFP_RXPWR_HIGH_WARN = 0x24, ++ SFP_RXPWR_LOW_WARN = 0x26, ++ SFP_LASER_TEMP_HIGH_ALARM = 0x28, ++ SFP_LASER_TEMP_LOW_ALARM = 0x2a, ++ SFP_LASER_TEMP_HIGH_WARN = 0x2c, ++ SFP_LASER_TEMP_LOW_WARN = 0x2e, ++ SFP_TEC_CUR_HIGH_ALARM = 0x30, ++ SFP_TEC_CUR_LOW_ALARM = 0x32, ++ SFP_TEC_CUR_HIGH_WARN = 0x34, ++ SFP_TEC_CUR_LOW_WARN = 0x36, ++ SFP_CAL_RXPWR4 = 0x38, ++ SFP_CAL_RXPWR3 = 0x3c, ++ SFP_CAL_RXPWR2 = 0x40, ++ SFP_CAL_RXPWR1 = 0x44, ++ SFP_CAL_RXPWR0 = 0x48, ++ SFP_CAL_TXI_SLOPE = 0x4c, ++ SFP_CAL_TXI_OFFSET = 0x4e, ++ SFP_CAL_TXPWR_SLOPE = 0x50, ++ SFP_CAL_TXPWR_OFFSET = 0x52, ++ SFP_CAL_T_SLOPE = 0x54, ++ SFP_CAL_T_OFFSET = 0x56, ++ 
SFP_CAL_V_SLOPE = 0x58, ++ SFP_CAL_V_OFFSET = 0x5a, ++ SFP_CHKSUM = 0x5f, ++ ++ SFP_TEMP = 0x60, ++ SFP_VCC = 0x62, ++ SFP_TX_BIAS = 0x64, ++ SFP_TX_POWER = 0x66, ++ SFP_RX_POWER = 0x68, ++ SFP_LASER_TEMP = 0x6a, ++ SFP_TEC_CUR = 0x6c, ++ ++ SFP_STATUS = 0x6e, ++ SFP_ALARM0 = 0x70, ++ SFP_ALARM0_TEMP_HIGH = BIT(7), ++ SFP_ALARM0_TEMP_LOW = BIT(6), ++ SFP_ALARM0_VCC_HIGH = BIT(5), ++ SFP_ALARM0_VCC_LOW = BIT(4), ++ SFP_ALARM0_TX_BIAS_HIGH = BIT(3), ++ SFP_ALARM0_TX_BIAS_LOW = BIT(2), ++ SFP_ALARM0_TXPWR_HIGH = BIT(1), ++ SFP_ALARM0_TXPWR_LOW = BIT(0), ++ ++ SFP_ALARM1 = 0x71, ++ SFP_ALARM1_RXPWR_HIGH = BIT(7), ++ SFP_ALARM1_RXPWR_LOW = BIT(6), ++ ++ SFP_WARN0 = 0x74, ++ SFP_WARN0_TEMP_HIGH = BIT(7), ++ SFP_WARN0_TEMP_LOW = BIT(6), ++ SFP_WARN0_VCC_HIGH = BIT(5), ++ SFP_WARN0_VCC_LOW = BIT(4), ++ SFP_WARN0_TX_BIAS_HIGH = BIT(3), ++ SFP_WARN0_TX_BIAS_LOW = BIT(2), ++ SFP_WARN0_TXPWR_HIGH = BIT(1), ++ SFP_WARN0_TXPWR_LOW = BIT(0), ++ ++ SFP_WARN1 = 0x75, ++ SFP_WARN1_RXPWR_HIGH = BIT(7), ++ SFP_WARN1_RXPWR_LOW = BIT(6), ++ ++ SFP_EXT_STATUS = 0x76, ++ SFP_VSL = 0x78, ++ SFP_PAGE = 0x7f, ++}; ++ ++struct fwnode_handle; ++struct ethtool_eeprom; ++struct ethtool_modinfo; ++struct net_device; ++struct sfp_bus; ++ ++/** ++ * struct sfp_upstream_ops - upstream operations structure ++ * @module_insert: called after a module has been detected to determine ++ * whether the module is supported for the upstream device. ++ * @module_remove: called after the module has been removed. ++ * @link_down: called when the link is non-operational for whatever ++ * reason. ++ * @link_up: called when the link is operational. ++ * @connect_phy: called when an I2C accessible PHY has been detected ++ * on the module. ++ * @disconnect_phy: called when a module with an I2C accessible PHY has ++ * been removed. 
++ */ ++struct sfp_upstream_ops { ++ int (*module_insert)(void *priv, const struct sfp_eeprom_id *id); ++ void (*module_remove)(void *priv); ++ void (*link_down)(void *priv); ++ void (*link_up)(void *priv); ++ int (*connect_phy)(void *priv, struct phy_device *); ++ void (*disconnect_phy)(void *priv); ++}; ++ ++#if IS_ENABLED(CONFIG_SFP) ++int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, ++ unsigned long *support); ++void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, ++ unsigned long *support); ++phy_interface_t sfp_select_interface(struct sfp_bus *bus, ++ const struct sfp_eeprom_id *id, ++ unsigned long *link_modes); ++ ++int sfp_get_module_info(struct sfp_bus *bus, struct ethtool_modinfo *modinfo); ++int sfp_get_module_eeprom(struct sfp_bus *bus, struct ethtool_eeprom *ee, ++ u8 *data); ++void sfp_upstream_start(struct sfp_bus *bus); ++void sfp_upstream_stop(struct sfp_bus *bus); ++struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode, ++ struct net_device *ndev, void *upstream, ++ const struct sfp_upstream_ops *ops); ++void sfp_unregister_upstream(struct sfp_bus *bus); ++#else ++static inline int sfp_parse_port(struct sfp_bus *bus, ++ const struct sfp_eeprom_id *id, ++ unsigned long *support) ++{ ++ return PORT_OTHER; ++} ++ ++static inline void sfp_parse_support(struct sfp_bus *bus, ++ const struct sfp_eeprom_id *id, ++ unsigned long *support) ++{ ++} ++ ++static inline phy_interface_t sfp_select_interface(struct sfp_bus *bus, ++ const struct sfp_eeprom_id *id, ++ unsigned long *link_modes) ++{ ++ return PHY_INTERFACE_MODE_NA; ++} ++ ++static inline int sfp_get_module_info(struct sfp_bus *bus, ++ struct ethtool_modinfo *modinfo) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int sfp_get_module_eeprom(struct sfp_bus *bus, ++ struct ethtool_eeprom *ee, u8 *data) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline void sfp_upstream_start(struct sfp_bus *bus) ++{ ++} ++ ++static inline void 
sfp_upstream_stop(struct sfp_bus *bus) ++{ ++} ++ ++static inline struct sfp_bus *sfp_register_upstream( ++ struct fwnode_handle *fwnode, ++ struct net_device *ndev, void *upstream, ++ const struct sfp_upstream_ops *ops) ++{ ++ return (struct sfp_bus *)-1; ++} ++ ++static inline void sfp_unregister_upstream(struct sfp_bus *bus) ++{ ++} ++#endif ++ ++#endif +-- +2.1.4 + diff --git a/patch/0028-watchdog-mlx-wdt-introduce-watchdog-driver-for-Mella.patch b/patch/0028-watchdog-mlx-wdt-introduce-watchdog-driver-for-Mella.patch new file mode 100644 index 000000000000..d29b5c5cc918 --- /dev/null +++ b/patch/0028-watchdog-mlx-wdt-introduce-watchdog-driver-for-Mella.patch @@ -0,0 +1,839 @@ +From a648f2856518f8884a7798469faf755d0f5dcd50 Mon Sep 17 00:00:00 2001 +From: Michael Shych +Date: Mon, 17 Dec 2018 12:32:45 +0000 +Subject: [PATCH v1 mlx-wdt 1/1] watchdog: mlx-wdt: introduce watchdog driver + for Mellanox systems + +Watchdog driver for Mellanox watchdog devices, implemented in +programmable logic device. + +Main and auxiliary watchdog devices can exist on the same system. +There are several actions that can be defined in the watchdog: +system reset, start fans on full speed and increase counter. +The last 2 actions are performed without system reset. +Actions without reset are provided for auxiliary watchdog devices, +which is optional. +Access to CPLD registers is performed through generic +regmap interface. + +There are 2 types of HW CPLD watchdog implementations. +Type 1: actual HW timeout can be defined as power of 2 msec. +e.g. timeout 20 sec will be rounded up to 32768 msec.; +maximum timeout period is 32 sec (32768 msec.); +get time-left isn't supported +Type 2: actual HW timeout is defined in sec. and it's a same as +user defined timeout; +maximum timeout is 255 sec; +get time-left is supported; + +Watchdog driver is probed from common mlx_platform driver. 
+ +Signed-off-by: Michael Shych +--- + drivers/platform/x86/mlx-platform.c | 202 +++++++++++++++++- + drivers/watchdog/Kconfig | 15 ++ + drivers/watchdog/Makefile | 1 + + drivers/watchdog/mlx_wdt.c | 391 +++++++++++++++++++++++++++++++++++ + include/linux/platform_data/mlxreg.h | 6 + + 5 files changed, 612 insertions(+), 3 deletions(-) + create mode 100644 drivers/watchdog/mlx_wdt.c + +diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c +index 6e150b4..fc8d655 100644 +--- a/drivers/platform/x86/mlx-platform.c ++++ b/drivers/platform/x86/mlx-platform.c +@@ -55,6 +55,14 @@ + #define MLXPLAT_CPLD_LPC_REG_FAN_OFFSET 0x88 + #define MLXPLAT_CPLD_LPC_REG_FAN_EVENT_OFFSET 0x89 + #define MLXPLAT_CPLD_LPC_REG_FAN_MASK_OFFSET 0x8a ++#define MLXPLAT_CPLD_LPC_REG_WD_CLEAR_OFFSET 0xc7 ++#define MLXPLAT_CPLD_LPC_REG_WD_CLEAR_WP_OFFSET 0xc8 ++#define MLXPLAT_CPLD_LPC_REG_WD1_TMR_OFFSET 0xc9 ++#define MLXPLAT_CPLD_LPC_REG_WD1_ACT_OFFSET 0xcb ++#define MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET 0xcd ++#define MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET 0xcf ++#define MLXPLAT_CPLD_LPC_REG_WD3_TMR_OFFSET 0xd1 ++#define MLXPLAT_CPLD_LPC_REG_WD3_ACT_OFFSET 0xd2 + #define MLXPLAT_CPLD_LPC_REG_PWM1_OFFSET 0xe3 + #define MLXPLAT_CPLD_LPC_REG_TACHO1_OFFSET 0xe4 + #define MLXPLAT_CPLD_LPC_REG_TACHO2_OFFSET 0xe5 +@@ -128,6 +136,18 @@ + #define MLXPLAT_CPLD_FAN3_DEFAULT_NR 13 + #define MLXPLAT_CPLD_FAN4_DEFAULT_NR 14 + ++/* Masks and default values for watchdogs */ ++#define MLXPLAT_CPLD_WD1_CLEAR_MASK GENMASK(7, 1) ++#define MLXPLAT_CPLD_WD2_CLEAR_MASK (GENMASK(7, 0) & ~BIT(1)) ++ ++#define MLXPLAT_CPLD_WD_TYPE1_TO_MASK GENMASK(7, 4) ++#define MLXPLAT_CPLD_WD_TYPE2_TO_MASK 0 ++#define MLXPLAT_CPLD_WD_RESET_ACT_MASK GENMASK(7, 1) ++#define MLXPLAT_CPLD_WD_FAN_ACT_MASK (GENMASK(7, 0) & ~BIT(4)) ++#define MLXPLAT_CPLD_WD_COUNT_ACT_MASK (GENMASK(7, 0) & ~BIT(7)) ++#define MLXPLAT_CPLD_WD_DFLT_TIMEOUT 30 ++#define MLXPLAT_CPLD_WD_MAX_DEVS 2 ++ + /* mlxplat_priv - platform 
private data + * @pdev_i2c - i2c controller platform device + * @pdev_mux - array of mux platform devices +@@ -135,6 +155,7 @@ + * @pdev_led - led platform devices + * @pdev_io_regs - register access platform devices + * @pdev_fan - FAN platform devices ++ * @pdev_wd - array of watchdog platform devices + */ + struct mlxplat_priv { + struct platform_device *pdev_i2c; +@@ -143,6 +164,7 @@ struct mlxplat_priv { + struct platform_device *pdev_led; + struct platform_device *pdev_io_regs; + struct platform_device *pdev_fan; ++ struct platform_device *pdev_wd[MLXPLAT_CPLD_WD_MAX_DEVS]; + }; + + /* Regions for LPC I2C controller and LPC base register space */ +@@ -1340,6 +1362,132 @@ static struct mlxreg_core_platform_data mlxplat_default_fan_data = { + .counter = ARRAY_SIZE(mlxplat_mlxcpld_default_fan_data), + }; + ++/* Type1 watchdog implementation on MSN2700, MSN2100 and MSN2140 systems */ ++static struct mlxreg_core_data mlxplat_mlxcpld_wd_main_regs_type1[] = { ++ { ++ .label = "action", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD1_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_RESET_ACT_MASK, ++ .bit = 0, ++ }, ++ { ++ .label = "timeout", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD1_TMR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_TYPE1_TO_MASK, ++ .health_cntr = MLXPLAT_CPLD_WD_DFLT_TIMEOUT, ++ }, ++ { ++ .label = "ping", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD_CLEAR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD1_CLEAR_MASK, ++ .bit = 0, ++ }, ++ { ++ .label = "reset", ++ .reg = MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET, ++ .mask = GENMASK(7, 0) & ~BIT(6), ++ .bit = 6, ++ }, ++}; ++ ++static struct mlxreg_core_data mlxplat_mlxcpld_wd_aux_regs_type1[] = { ++ { ++ .label = "action", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_FAN_ACT_MASK, ++ .bit = 4, ++ }, ++ { ++ .label = "timeout", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_TYPE1_TO_MASK, ++ .health_cntr = MLXPLAT_CPLD_WD_DFLT_TIMEOUT, ++ }, ++ { ++ .label = "ping", ++ .reg = 
MLXPLAT_CPLD_LPC_REG_WD_CLEAR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD1_CLEAR_MASK, ++ .bit = 1, ++ }, ++}; ++ ++static struct mlxreg_core_platform_data mlxplat_mlxcpld_wd_set_type1[] = { ++ { ++ .data = mlxplat_mlxcpld_wd_main_regs_type1, ++ .counter = ARRAY_SIZE(mlxplat_mlxcpld_wd_main_regs_type1), ++ .identity = "mlx-wdt-main", ++ }, ++ { ++ .data = mlxplat_mlxcpld_wd_aux_regs_type1, ++ .counter = ARRAY_SIZE(mlxplat_mlxcpld_wd_aux_regs_type1), ++ .identity = "mlx-wdt-aux", ++ }, ++}; ++ ++/* Type2 watchdog implementation on MSB8700 and up systems ++ * To differentiate: ping reg == action reg ++ */ ++static struct mlxreg_core_data mlxplat_mlxcpld_wd_main_regs_type2[] = { ++ { ++ .label = "action", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_RESET_ACT_MASK, ++ .bit = 0, ++ }, ++ { ++ .label = "timeout", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_TYPE2_TO_MASK, ++ .health_cntr = MLXPLAT_CPLD_WD_DFLT_TIMEOUT, ++ }, ++ { ++ .label = "ping", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_RESET_ACT_MASK, ++ .bit = 0, ++ }, ++ { ++ .label = "reset", ++ .reg = MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET, ++ .mask = GENMASK(7, 0) & ~BIT(6), ++ .bit = 6, ++ }, ++}; ++ ++static struct mlxreg_core_data mlxplat_mlxcpld_wd_aux_regs_type2[] = { ++ { ++ .label = "action", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD3_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_FAN_ACT_MASK, ++ .bit = 4, ++ }, ++ { ++ .label = "timeout", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD3_TMR_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_TYPE2_TO_MASK, ++ .health_cntr = MLXPLAT_CPLD_WD_DFLT_TIMEOUT, ++ }, ++ { ++ .label = "ping", ++ .reg = MLXPLAT_CPLD_LPC_REG_WD3_ACT_OFFSET, ++ .mask = MLXPLAT_CPLD_WD_FAN_ACT_MASK, ++ .bit = 4, ++ }, ++}; ++ ++static struct mlxreg_core_platform_data mlxplat_mlxcpld_wd_set_type2[] = { ++ { ++ .data = mlxplat_mlxcpld_wd_main_regs_type2, ++ .counter = ARRAY_SIZE(mlxplat_mlxcpld_wd_main_regs_type2), ++ .identity = "mlx-wdt-main", ++ 
}, ++ { ++ .data = mlxplat_mlxcpld_wd_aux_regs_type2, ++ .counter = ARRAY_SIZE(mlxplat_mlxcpld_wd_aux_regs_type2), ++ .identity = "mlx-wdt-aux", ++ }, ++}; ++ + static bool mlxplat_mlxcpld_writeable_reg(struct device *dev, unsigned int reg) + { + switch (reg) { +@@ -1362,6 +1510,14 @@ static bool mlxplat_mlxcpld_writeable_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_PWR_MASK_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_EVENT_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_MASK_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD_CLEAR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD_CLEAR_WP_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD1_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD1_ACT_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD3_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD3_ACT_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM_CONTROL_OFFSET: + return true; +@@ -1404,6 +1560,14 @@ static bool mlxplat_mlxcpld_readable_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_FAN_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_EVENT_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_MASK_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD_CLEAR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD_CLEAR_WP_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD1_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD1_ACT_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD2_ACT_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD3_TMR_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD3_ACT_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO2_OFFSET: +@@ -1460,6 +1624,8 @@ static bool mlxplat_mlxcpld_volatile_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_FAN_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_EVENT_OFFSET: + case MLXPLAT_CPLD_LPC_REG_FAN_MASK_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_WD2_TMR_OFFSET: ++ case 
MLXPLAT_CPLD_LPC_REG_WD3_TMR_OFFSET: + case MLXPLAT_CPLD_LPC_REG_PWM1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_TACHO2_OFFSET: +@@ -1487,6 +1653,7 @@ static const struct reg_default mlxplat_mlxcpld_regmap_default[] = { + { MLXPLAT_CPLD_LPC_REG_WP1_OFFSET, 0x00 }, + { MLXPLAT_CPLD_LPC_REG_WP2_OFFSET, 0x00 }, + { MLXPLAT_CPLD_LPC_REG_PWM_CONTROL_OFFSET, 0x00 }, ++ { MLXPLAT_CPLD_LPC_REG_WD_CLEAR_WP_OFFSET, 0x00 }, + }; + + struct mlxplat_mlxcpld_regmap_context { +@@ -1536,6 +1703,8 @@ static struct mlxreg_core_hotplug_platform_data *mlxplat_hotplug; + static struct mlxreg_core_platform_data *mlxplat_led; + static struct mlxreg_core_platform_data *mlxplat_regs_io; + static struct mlxreg_core_platform_data *mlxplat_fan; ++static struct mlxreg_core_platform_data ++ *mlxplat_wd_data[MLXPLAT_CPLD_WD_MAX_DEVS]; + + static int __init mlxplat_dmi_default_matched(const struct dmi_system_id *dmi) + { +@@ -1551,6 +1720,7 @@ static int __init mlxplat_dmi_default_matched(const struct dmi_system_id *dmi) + mlxplat_default_channels[i - 1][MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; + mlxplat_led = &mlxplat_default_led_data; + mlxplat_regs_io = &mlxplat_default_regs_io_data; ++ mlxplat_wd_data[0] = &mlxplat_mlxcpld_wd_set_type1[0]; + + return 1; + }; +@@ -1569,6 +1739,7 @@ static int __init mlxplat_dmi_msn21xx_matched(const struct dmi_system_id *dmi) + mlxplat_msn21xx_channels[MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; + mlxplat_led = &mlxplat_msn21xx_led_data; + mlxplat_regs_io = &mlxplat_msn21xx_regs_io_data; ++ mlxplat_wd_data[0] = &mlxplat_mlxcpld_wd_set_type1[0]; + + return 1; + }; +@@ -1587,6 +1758,7 @@ static int __init mlxplat_dmi_msn274x_matched(const struct dmi_system_id *dmi) + mlxplat_msn21xx_channels[MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; + mlxplat_led = &mlxplat_default_led_data; + mlxplat_regs_io = &mlxplat_msn21xx_regs_io_data; ++ mlxplat_wd_data[0] = &mlxplat_mlxcpld_wd_set_type1[0]; + + return 1; + }; +@@ -1605,6 +1777,7 @@ static int __init 
mlxplat_dmi_msn201x_matched(const struct dmi_system_id *dmi) + mlxplat_default_channels[i - 1][MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; + mlxplat_led = &mlxplat_msn21xx_led_data; + mlxplat_regs_io = &mlxplat_msn21xx_regs_io_data; ++ mlxplat_wd_data[0] = &mlxplat_mlxcpld_wd_set_type1[0]; + + return 1; + }; +@@ -1624,6 +1797,8 @@ static int __init mlxplat_dmi_qmb7xx_matched(const struct dmi_system_id *dmi) + mlxplat_led = &mlxplat_default_ng_led_data; + mlxplat_regs_io = &mlxplat_default_ng_regs_io_data; + mlxplat_fan = &mlxplat_default_fan_data; ++ for (i = 0; i < ARRAY_SIZE(mlxplat_mlxcpld_wd_set_type2); i++) ++ mlxplat_wd_data[i] = &mlxplat_mlxcpld_wd_set_type2[i]; + + return 1; + }; +@@ -1906,15 +2081,33 @@ static int __init mlxplat_init(void) + } + } + ++ /* Add WD drivers. */ ++ for (j = 0; j < MLXPLAT_CPLD_WD_MAX_DEVS; j++) { ++ if (mlxplat_wd_data[j]) { ++ mlxplat_wd_data[j]->regmap = mlxplat_hotplug->regmap; ++ priv->pdev_wd[j] = platform_device_register_resndata( ++ &mlxplat_dev->dev, ++ "mlx-wdt", j, NULL, 0, ++ mlxplat_wd_data[j], ++ sizeof(*mlxplat_wd_data[j])); ++ if (IS_ERR(priv->pdev_wd[j])) { ++ err = PTR_ERR(priv->pdev_wd[j]); ++ goto fail_platform_wd_register; ++ } ++ } ++ } ++ + /* Sync registers with hardware. 
*/ + regcache_mark_dirty(mlxplat_hotplug->regmap); + err = regcache_sync(mlxplat_hotplug->regmap); + if (err) +- goto fail_platform_fan_register; ++ goto fail_platform_wd_register; + + return 0; + +-fail_platform_fan_register: ++fail_platform_wd_register: ++ while (--j >= 0) ++ platform_device_unregister(priv->pdev_wd[j]); + if (mlxplat_fan) + platform_device_unregister(priv->pdev_fan); + fail_platform_io_regs_register: +@@ -1946,7 +2139,10 @@ static void __exit mlxplat_exit(void) + platform_device_unregister(priv->pdev_io_regs); + platform_device_unregister(priv->pdev_led); + platform_device_unregister(priv->pdev_hotplug); +- ++ for (i = MLXPLAT_CPLD_WD_MAX_DEVS - 1; i >= 0 ; i--) { ++ if (mlxplat_wd_data[i]) ++ platform_device_unregister(priv->pdev_wd[i]); ++ } + for (i = ARRAY_SIZE(mlxplat_mux_data) - 1; i >= 0 ; i--) + platform_device_unregister(priv->pdev_mux[i]); + +diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig +index 3eb58cb..ada5a33 100644 +--- a/drivers/watchdog/Kconfig ++++ b/drivers/watchdog/Kconfig +@@ -141,6 +141,21 @@ config MENF21BMC_WATCHDOG + This driver can also be built as a module. If so the module + will be called menf21bmc_wdt. + ++config MLX_WDT ++ tristate "Mellanox Watchdog" ++ select WATCHDOG_CORE ++ select REGMAP ++ ---help--- ++ This is the driver for the hardware watchdog on Mellanox systems. ++ If you are going to use it, say Y here, otherwise N. ++ This driver can be used together with the watchdog daemon. ++ It can also watch your kernel to make sure it doesn't freeze, ++ and if it does, it reboots your system after a certain amount of ++ time. ++ ++ To compile this driver as a module, choose M here: the ++ module will be called mlx-wdt. 
++
+ config TANGOX_WATCHDOG
+ tristate "Sigma Designs SMP86xx/SMP87xx watchdog"
+ select WATCHDOG_CORE
+diff --git a/drivers/watchdog/Makefile b/drivers/watchdog/Makefile
+index caa9f4a..50494df 100644
+--- a/drivers/watchdog/Makefile
++++ b/drivers/watchdog/Makefile
+@@ -139,6 +139,7 @@ obj-$(CONFIG_INTEL_SCU_WATCHDOG) += intel_scu_watchdog.o
+ obj-$(CONFIG_INTEL_MID_WATCHDOG) += intel-mid_wdt.o
+ obj-$(CONFIG_INTEL_MEI_WDT) += mei_wdt.o
+ obj-$(CONFIG_NI903X_WDT) += ni903x_wdt.o
++obj-$(CONFIG_MLX_WDT) += mlx_wdt.o
+
+ # M32R Architecture
+
+diff --git a/drivers/watchdog/mlx_wdt.c b/drivers/watchdog/mlx_wdt.c
+new file mode 100644
+index 0000000..7effe8c
+--- /dev/null
++++ b/drivers/watchdog/mlx_wdt.c
+@@ -0,0 +1,391 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * Mellanox watchdog driver
++ *
++ * Copyright (C) 2018 Mellanox Technologies
++ * Copyright (C) 2018 Michael Shych
++ */
++
++#include <linux/bitops.h>
++#include <linux/device.h>
++#include <linux/errno.h>
++#include <linux/log2.h>
++#include <linux/module.h>
++#include <linux/platform_device.h>
++#include <linux/platform_data/mlxreg.h>
++#include <linux/regmap.h>
++#include <linux/spinlock.h>
++#include <linux/types.h>
++#include <linux/watchdog.h>
++
++#define MLXREG_WDT_CLOCK_SCALE		1000
++#define MLXREG_WDT_MAX_TIMEOUT_TYPE1	32
++#define MLXREG_WDT_MAX_TIMEOUT_TYPE2	255
++#define MLXREG_WDT_MIN_TIMEOUT		1
++#define MLXREG_WDT_HW_TIMEOUT_CONVERT(hw_timeout) ((1 << (hw_timeout)) \
++						   / MLXREG_WDT_CLOCK_SCALE)
++
++/**
++ * enum mlxreg_wdt_type - type of HW watchdog
++ *
++ * TYPE1 can be differentiated from TYPE2 by its use of a different
++ * register/mask for WD action set and ping.
++ */
++enum mlxreg_wdt_type {
++	MLX_WDT_TYPE1,
++	MLX_WDT_TYPE2,
++};
++
++/**
++ * struct mlxreg_wdt - watchdog private data:
++ *
++ * @wdd: watchdog device;
++ * @pdata: data received from platform driver;
++ * @regmap: register map of parent device;
++ * @action_idx: index for direct access to action register;
++ * @timeout_idx: index for direct access to TO register;
++ * @ping_idx: index for direct access to ping register;
++ * @reset_idx: index for direct access to reset cause register;
++ * @wdt_type: watchdog HW type;
++ * @hw_timeout: actual timeout set in HW;
++ *	it is rounded up to a power of two for WD type 1,
++ *	and is the same number of sec. as the timeout for WD type 2;
++ * @io_lock: spinlock for io access;
++ */
++struct mlxreg_wdt {
++	struct watchdog_device wdd;
++	struct mlxreg_core_platform_data *pdata;
++	void *regmap;
++	int action_idx;
++	int timeout_idx;
++	int ping_idx;
++	int reset_idx;
++	enum mlxreg_wdt_type wdt_type;
++	u8 hw_timeout;
++	spinlock_t io_lock; /* the lock for io operations */
++};
++
++static int mlxreg_wdt_roundup_to_base_2(struct mlxreg_wdt *wdt, int timeout)
++{
++	timeout *= MLXREG_WDT_CLOCK_SCALE;
++
++	wdt->hw_timeout = order_base_2(timeout);
++	dev_info(wdt->wdd.parent,
++		 "watchdog %s timeout %d was rounded up to %lu (msec)\n",
++		 wdt->wdd.info->identity, timeout, roundup_pow_of_two(timeout));
++
++	return 0;
++}
++
++static enum mlxreg_wdt_type
++mlxreg_wdt_check_watchdog_type(struct mlxreg_wdt *wdt,
++			       struct mlxreg_core_platform_data *pdata)
++{
++	if ((pdata->data[wdt->action_idx].reg ==
++	     pdata->data[wdt->ping_idx].reg) &&
++	    (pdata->data[wdt->action_idx].mask ==
++	     pdata->data[wdt->ping_idx].mask))
++		return MLX_WDT_TYPE2;
++	else
++		return MLX_WDT_TYPE1;
++}
++
++static int mlxreg_wdt_check_card_reset(struct mlxreg_wdt *wdt)
++{
++	struct mlxreg_core_data *reg_data;
++	u32 regval;
++	int rc;
++
++	if (wdt->reset_idx ==
-EINVAL)
++		return -EINVAL;
++
++	if (!(wdt->wdd.info->options & WDIOF_CARDRESET))
++		return 0;
++
++	spin_lock(&wdt->io_lock);
++	reg_data = &wdt->pdata->data[wdt->reset_idx];
++	rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++	spin_unlock(&wdt->io_lock);
++	if (rc)
++		goto read_error;
++
++	if (regval & ~reg_data->mask) {
++		wdt->wdd.bootstatus = WDIOF_CARDRESET;
++		dev_info(wdt->wdd.parent,
++			 "watchdog previously reset the CPU\n");
++	}
++
++read_error:
++	return rc;
++}
++
++static int mlxreg_wdt_start(struct watchdog_device *wdd)
++{
++	struct mlxreg_wdt *wdt = watchdog_get_drvdata(wdd);
++	struct mlxreg_core_data *reg_data = &wdt->pdata->data[wdt->action_idx];
++	u32 regval;
++	int rc;
++
++	spin_lock(&wdt->io_lock);
++	rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++	if (rc) {
++		spin_unlock(&wdt->io_lock);
++		goto read_error;
++	}
++
++	regval = (regval & reg_data->mask) | BIT(reg_data->bit);
++	rc = regmap_write(wdt->regmap, reg_data->reg, regval);
++	spin_unlock(&wdt->io_lock);
++	if (!rc) {
++		set_bit(WDOG_HW_RUNNING, &wdt->wdd.status);
++		dev_info(wdt->wdd.parent, "watchdog %s started\n",
++			 wdd->info->identity);
++	}
++
++read_error:
++	return rc;
++}
++
++static int mlxreg_wdt_stop(struct watchdog_device *wdd)
++{
++	struct mlxreg_wdt *wdt = watchdog_get_drvdata(wdd);
++	struct mlxreg_core_data *reg_data = &wdt->pdata->data[wdt->action_idx];
++	u32 regval;
++	int rc;
++
++	spin_lock(&wdt->io_lock);
++	rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++	if (rc) {
++		spin_unlock(&wdt->io_lock);
++		goto read_error;
++	}
++
++	regval = (regval & reg_data->mask) & ~BIT(reg_data->bit);
++	rc = regmap_write(wdt->regmap, reg_data->reg, regval);
++	spin_unlock(&wdt->io_lock);
++	if (!rc)
++		dev_info(wdt->wdd.parent, "watchdog %s stopped\n",
++			 wdd->info->identity);
++
++read_error:
++	return rc;
++}
++
++static int mlxreg_wdt_ping(struct watchdog_device *wdd)
++{
++	struct mlxreg_wdt *wdt = watchdog_get_drvdata(wdd);
++	struct mlxreg_core_data
*reg_data = &wdt->pdata->data[wdt->ping_idx];
++	u32 regval;
++	int rc;
++
++	spin_lock(&wdt->io_lock);
++	rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++	if (rc)
++		goto read_error;
++
++	regval = (regval & reg_data->mask) | BIT(reg_data->bit);
++	rc = regmap_write(wdt->regmap, reg_data->reg, regval);
++
++read_error:
++	spin_unlock(&wdt->io_lock);
++
++	return rc;
++}
++
++static int mlxreg_wdt_set_timeout(struct watchdog_device *wdd,
++				  unsigned int timeout)
++{
++	struct mlxreg_wdt *wdt = watchdog_get_drvdata(wdd);
++	struct mlxreg_core_data *reg_data = &wdt->pdata->data[wdt->timeout_idx];
++	u32 regval;
++	int rc;
++
++	spin_lock(&wdt->io_lock);
++
++	if (wdt->wdt_type == MLX_WDT_TYPE1) {
++		rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++		if (rc)
++			goto read_error;
++		regval = (regval & reg_data->mask) | wdt->hw_timeout;
++	} else {
++		wdt->hw_timeout = timeout;
++		regval = timeout;
++	}
++
++	rc = regmap_write(wdt->regmap, reg_data->reg, regval);
++
++read_error:
++	spin_unlock(&wdt->io_lock);
++
++	return rc;
++}
++
++static unsigned int mlxreg_wdt_get_timeleft(struct watchdog_device *wdd)
++{
++	struct mlxreg_wdt *wdt = watchdog_get_drvdata(wdd);
++	struct mlxreg_core_data *reg_data = &wdt->pdata->data[wdt->timeout_idx];
++	u32 regval;
++	int rc;
++
++	if (wdt->wdt_type == MLX_WDT_TYPE1)
++		return 0;
++
++	spin_lock(&wdt->io_lock);
++	rc = regmap_read(wdt->regmap, reg_data->reg, &regval);
++	if (rc)
++		rc = 0;
++	else
++		rc = regval;
++
++	spin_unlock(&wdt->io_lock);
++
++	return rc;
++}
++
++static const struct watchdog_ops mlxreg_wdt_ops_type1 = {
++	.start = mlxreg_wdt_start,
++	.stop = mlxreg_wdt_stop,
++	.ping = mlxreg_wdt_ping,
++	.set_timeout = mlxreg_wdt_set_timeout,
++	.owner = THIS_MODULE,
++};
++
++static const struct watchdog_ops mlxreg_wdt_ops_type2 = {
++	.start = mlxreg_wdt_start,
++	.stop = mlxreg_wdt_stop,
++	.ping = mlxreg_wdt_ping,
++	.set_timeout = mlxreg_wdt_set_timeout,
++	.get_timeleft = mlxreg_wdt_get_timeleft,
++	.owner
= THIS_MODULE,
++};
++
++static const struct watchdog_info mlxreg_wdt_main_info = {
++	.options = WDIOF_KEEPALIVEPING
++		   | WDIOF_MAGICCLOSE
++		   | WDIOF_SETTIMEOUT
++		   | WDIOF_CARDRESET,
++	.identity = "mlx-wdt-main",
++};
++
++static const struct watchdog_info mlxreg_wdt_aux_info = {
++	.options = WDIOF_KEEPALIVEPING
++		   | WDIOF_MAGICCLOSE
++		   | WDIOF_SETTIMEOUT
++		   | WDIOF_ALARMONLY,
++	.identity = "mlx-wdt-aux",
++};
++
++static int mlxreg_wdt_config(struct mlxreg_wdt *wdt,
++			     struct mlxreg_core_platform_data *pdata)
++{
++	struct mlxreg_core_data *data = pdata->data;
++	int i, timeout;
++
++	wdt->reset_idx = -EINVAL;
++	for (i = 0; i < pdata->counter; i++, data++) {
++		if (strnstr(data->label, "action", sizeof(data->label)))
++			wdt->action_idx = i;
++		else if (strnstr(data->label, "timeout", sizeof(data->label)))
++			wdt->timeout_idx = i;
++		else if (strnstr(data->label, "ping", sizeof(data->label)))
++			wdt->ping_idx = i;
++		else if (strnstr(data->label, "reset", sizeof(data->label)))
++			wdt->reset_idx = i;
++	}
++
++	wdt->pdata = pdata;
++	if (strnstr(pdata->identity, mlxreg_wdt_main_info.identity,
++		    sizeof(mlxreg_wdt_main_info.identity)))
++		wdt->wdd.info = &mlxreg_wdt_main_info;
++	else
++		wdt->wdd.info = &mlxreg_wdt_aux_info;
++
++	timeout = pdata->data[wdt->timeout_idx].health_cntr;
++	wdt->wdt_type = mlxreg_wdt_check_watchdog_type(wdt, pdata);
++	if (wdt->wdt_type == MLX_WDT_TYPE2) {
++		wdt->hw_timeout = timeout;
++		wdt->wdd.ops = &mlxreg_wdt_ops_type2;
++		wdt->wdd.timeout = wdt->hw_timeout;
++		wdt->wdd.max_timeout = MLXREG_WDT_MAX_TIMEOUT_TYPE2;
++	} else {
++		mlxreg_wdt_roundup_to_base_2(wdt, timeout);
++		wdt->wdd.ops = &mlxreg_wdt_ops_type1;
++		/* Round down to the closest actual number of sec. */
++		wdt->wdd.timeout =
++			MLXREG_WDT_HW_TIMEOUT_CONVERT(wdt->hw_timeout);
++		wdt->wdd.max_timeout = MLXREG_WDT_MAX_TIMEOUT_TYPE1;
++	}
++	wdt->wdd.min_timeout = MLXREG_WDT_MIN_TIMEOUT;
++
++	return 0;
++}
++
++static int mlxreg_wdt_probe(struct platform_device *pdev)
++{
++	struct mlxreg_core_platform_data *pdata;
++	struct mlxreg_wdt *wdt;
++	int rc;
++
++	pdata = dev_get_platdata(&pdev->dev);
++	if (!pdata) {
++		dev_err(&pdev->dev, "Failed to get platform data.\n");
++		return -EINVAL;
++	}
++	wdt = devm_kzalloc(&pdev->dev, sizeof(*wdt), GFP_KERNEL);
++	if (!wdt)
++		return -ENOMEM;
++
++	spin_lock_init(&wdt->io_lock);
++
++	wdt->wdd.parent = &pdev->dev;
++	wdt->regmap = pdata->regmap;
++	mlxreg_wdt_config(wdt, pdata);
++	platform_set_drvdata(pdev, wdt);
++
++	if ((pdata->features & MLXREG_CORE_WD_FEATURE_NOSTOP_AFTER_START))
++		watchdog_set_nowayout(&wdt->wdd, WATCHDOG_NOWAYOUT);
++	watchdog_stop_on_reboot(&wdt->wdd);
++	watchdog_init_timeout(&wdt->wdd, 0, &pdev->dev);
++	watchdog_set_drvdata(&wdt->wdd, wdt);
++
++	mlxreg_wdt_check_card_reset(wdt);
++	rc = devm_watchdog_register_device(&pdev->dev, &wdt->wdd);
++	if (rc) {
++		dev_err(&pdev->dev,
++			"Cannot register watchdog device (err=%d)\n", rc);
++		return rc;
++	}
++
++	mlxreg_wdt_set_timeout(&wdt->wdd, wdt->wdd.timeout);
++	if ((pdata->features & MLXREG_CORE_WD_FEATURE_START_AT_BOOT))
++		mlxreg_wdt_start(&wdt->wdd);
++
++	return rc;
++}
++
++static int mlxreg_wdt_remove(struct platform_device *pdev)
++{
++	struct mlxreg_wdt *wdt = platform_get_drvdata(pdev);
++
++	mlxreg_wdt_stop(&wdt->wdd);
++
++	return 0;
++}
++
++static struct platform_driver mlxreg_wdt_driver = {
++	.probe = mlxreg_wdt_probe,
++	.remove = mlxreg_wdt_remove,
++	.driver = {
++		.name = "mlx-wdt",
++	},
++};
++
++module_platform_driver(mlxreg_wdt_driver);
++
++MODULE_AUTHOR("Michael Shych ");
++MODULE_DESCRIPTION("Mellanox watchdog driver");
++MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:mlx-wdt");
+diff --git
a/include/linux/platform_data/mlxreg.h b/include/linux/platform_data/mlxreg.h +index 1b2f86f..4d70c00 100644 +--- a/include/linux/platform_data/mlxreg.h ++++ b/include/linux/platform_data/mlxreg.h +@@ -35,6 +35,8 @@ + #define __LINUX_PLATFORM_DATA_MLXREG_H + + #define MLXREG_CORE_LABEL_MAX_SIZE 32 ++#define MLXREG_CORE_WD_FEATURE_NOSTOP_AFTER_START BIT(0) ++#define MLXREG_CORE_WD_FEATURE_START_AT_BOOT BIT(1) + + /** + * struct mlxreg_hotplug_device - I2C device data: +@@ -112,11 +114,15 @@ struct mlxreg_core_item { + * @data: instance private data; + * @regmap: register map of parent device; + * @counter: number of instances; ++ * @features: supported features of device; ++ * @identity: device identity name; + */ + struct mlxreg_core_platform_data { + struct mlxreg_core_data *data; + void *regmap; + int counter; ++ u32 features; ++ char identity[MLXREG_CORE_LABEL_MAX_SIZE]; + }; + + /** +-- +2.1.4 + diff --git a/patch/0029-mlxsw-qsfp_sysfs-Support-port-numbers-initialization.patch b/patch/0029-mlxsw-qsfp_sysfs-Support-port-numbers-initialization.patch new file mode 100644 index 000000000000..ae982fd44740 --- /dev/null +++ b/patch/0029-mlxsw-qsfp_sysfs-Support-port-numbers-initialization.patch @@ -0,0 +1,86 @@ +From d83d9b8a4813c6a626db151f9b9269d8c69a032a Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Sun, 6 Jan 2019 12:25:46 +0000 +Subject: [PATCH v1] mlxsw: qsfp_sysfs: Support port numbers initialization + +Support port numbers initialization based on system type. 
+ +Signed-off-by: Vadim Pasternak +--- + drivers/net/ethernet/mellanox/mlxsw/core.c | 20 +++++++++++++++++++- + drivers/net/ethernet/mellanox/mlxsw/core.h | 10 +++------- + drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c | 1 + + 3 files changed, 23 insertions(+), 8 deletions(-) + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c +index 10863d6..01987f0 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c +@@ -114,12 +114,30 @@ struct mlxsw_core { + struct mlxsw_resources resources; + struct mlxsw_hwmon *hwmon; + struct mlxsw_thermal *thermal; +-struct mlxsw_qsfp *qsfp; ++ struct mlxsw_qsfp *qsfp; + struct mlxsw_core_port ports[MLXSW_PORT_MAX_PORTS]; ++ unsigned int max_ports; + unsigned long driver_priv[0]; + /* driver_priv has to be always the last item */ + }; + ++#define MLXSW_PORT_MAX_PORTS_DEFAULT 0x40 ++unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core) ++{ ++ if (mlxsw_core->max_ports) ++ return mlxsw_core->max_ports; ++ else ++ return MLXSW_PORT_MAX_PORTS_DEFAULT; ++} ++EXPORT_SYMBOL(mlxsw_core_max_ports); ++ ++void mlxsw_core_max_ports_set(struct mlxsw_core *mlxsw_core, ++ unsigned int max_ports) ++{ ++ mlxsw_core->max_ports = max_ports; ++} ++EXPORT_SYMBOL(mlxsw_core_max_ports_set); ++ + void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core) + { + return mlxsw_core->driver_priv; +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h +index 4fb104e..db27dd0 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/core.h +@@ -63,13 +63,9 @@ struct mlxsw_driver; + struct mlxsw_bus; + struct mlxsw_bus_info; + +-#define MLXSW_PORT_MAX_PORTS_DEFAULT 0x40 +-static inline unsigned int +-mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core) +-{ +- return MLXSW_PORT_MAX_PORTS_DEFAULT; +-} +- ++unsigned int mlxsw_core_max_ports(const struct 
mlxsw_core *mlxsw_core); ++void mlxsw_core_max_ports_set(struct mlxsw_core *mlxsw_core, ++ unsigned int max_ports); + void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core); + + int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c +index bee2a08..0781f16 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c +@@ -298,6 +298,7 @@ int mlxsw_qsfp_init(struct mlxsw_core *mlxsw_core, + mlxsw_qsfp->bus_info = mlxsw_bus_info; + mlxsw_bus_info->dev->platform_data = mlxsw_qsfp; + ++ mlxsw_core_max_ports_set(mlxsw_core, mlxsw_qsfp_num); + for (i = 1; i <= mlxsw_qsfp_num; i++) { + mlxsw_reg_pmlp_pack(pmlp_pl, i); + err = mlxsw_reg_query(mlxsw_qsfp->core, MLXSW_REG(pmlp), +-- +2.1.4 + diff --git a/patch/0030-update-kernel-config.patch b/patch/0030-update-kernel-config.patch new file mode 100644 index 000000000000..d72fdbcc6a64 --- /dev/null +++ b/patch/0030-update-kernel-config.patch @@ -0,0 +1,25 @@ +From 90c84c4963764d537535d950ea3233db518a6287 Mon Sep 17 00:00:00 2001 +From: Mykola Kostenok +Date: Mon, 17 Dec 2018 20:54:49 +0200 +Subject: [PATCH] update kernel config add CONFIG_MLX_WDT + +Signed-off-by: Mykola Kostenok +--- + debian/build/build_amd64_none_amd64/.config | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/debian/build/build_amd64_none_amd64/.config b/debian/build/build_amd64_none_amd64/.config +index b7bcf15..21d0d6f 100644 +--- a/debian/build/build_amd64_none_amd64/.config ++++ b/debian/build/build_amd64_none_amd64/.config +@@ -4093,6 +4093,7 @@ CONFIG_SBC_EPX_C3_WATCHDOG=m + # CONFIG_NI903X_WDT is not set + # CONFIG_MEN_A21_WDT is not set + CONFIG_XEN_WDT=m ++CONFIG_MLX_WDT=y + + # + # PCI-based Watchdog Cards +-- +1.9.1 + diff --git a/patch/0031-mlxsw-Align-code-with-kernel-v-5.0.patch b/patch/0031-mlxsw-Align-code-with-kernel-v-5.0.patch new file mode 100644 
index 000000000000..96ba7ac220e9 --- /dev/null +++ b/patch/0031-mlxsw-Align-code-with-kernel-v-5.0.patch @@ -0,0 +1,13374 @@ +From bef153de6e232204874c3d2916ccb639f09d284e Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Tue, 15 Jan 2019 08:25:23 +0000 +Subject: [PATCH] mlxsw: Align code with kernel v 5.0 + +Signed-off-by: Vadim Pasternak +--- + drivers/net/ethernet/mellanox/mlxsw/cmd.h | 108 +- + drivers/net/ethernet/mellanox/mlxsw/core.c | 732 +- + drivers/net/ethernet/mellanox/mlxsw/core.h | 219 +- + drivers/net/ethernet/mellanox/mlxsw/core_thermal.c | 9 +- + drivers/net/ethernet/mellanox/mlxsw/i2c.c | 149 +- + drivers/net/ethernet/mellanox/mlxsw/item.h | 182 +- + drivers/net/ethernet/mellanox/mlxsw/minimal.c | 114 +- + drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c | 1 - + drivers/net/ethernet/mellanox/mlxsw/reg.h | 9380 ++++++++++++++------ + drivers/net/ethernet/mellanox/mlxsw/resources.h | 152 + + 10 files changed, 7657 insertions(+), 3389 deletions(-) + create mode 100644 drivers/net/ethernet/mellanox/mlxsw/resources.h + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h +index 28271be..0772e43 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h +@@ -1,37 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/cmd.h +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2015 Jiri Pirko +- * Copyright (c) 2015 Ido Schimmel +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. 
Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2015-2018 Mellanox Technologies. 
All rights reserved */ + + #ifndef _MLXSW_CMD_H + #define _MLXSW_CMD_H +@@ -58,7 +26,7 @@ static inline void mlxsw_cmd_mbox_zero(char *mbox) + struct mlxsw_core; + + int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod, +- u32 in_mod, bool out_mbox_direct, ++ u32 in_mod, bool out_mbox_direct, bool reset_ok, + char *in_mbox, size_t in_mbox_size, + char *out_mbox, size_t out_mbox_size); + +@@ -67,7 +35,7 @@ static inline int mlxsw_cmd_exec_in(struct mlxsw_core *mlxsw_core, u16 opcode, + size_t in_mbox_size) + { + return mlxsw_cmd_exec(mlxsw_core, opcode, opcode_mod, in_mod, false, +- in_mbox, in_mbox_size, NULL, 0); ++ false, in_mbox, in_mbox_size, NULL, 0); + } + + static inline int mlxsw_cmd_exec_out(struct mlxsw_core *mlxsw_core, u16 opcode, +@@ -76,7 +44,7 @@ static inline int mlxsw_cmd_exec_out(struct mlxsw_core *mlxsw_core, u16 opcode, + char *out_mbox, size_t out_mbox_size) + { + return mlxsw_cmd_exec(mlxsw_core, opcode, opcode_mod, in_mod, +- out_mbox_direct, NULL, 0, ++ out_mbox_direct, false, NULL, 0, + out_mbox, out_mbox_size); + } + +@@ -84,7 +52,7 @@ static inline int mlxsw_cmd_exec_none(struct mlxsw_core *mlxsw_core, u16 opcode, + u8 opcode_mod, u32 in_mod) + { + return mlxsw_cmd_exec(mlxsw_core, opcode, opcode_mod, in_mod, false, +- NULL, 0, NULL, 0); ++ false, NULL, 0, NULL, 0); + } + + enum mlxsw_cmd_opcode { +@@ -179,6 +147,8 @@ enum mlxsw_cmd_status { + MLXSW_CMD_STATUS_BAD_INDEX = 0x0A, + /* NVMEM checksum/CRC failed. */ + MLXSW_CMD_STATUS_BAD_NVMEM = 0x0B, ++ /* Device is currently running reset */ ++ MLXSW_CMD_STATUS_RUNNING_RESET = 0x26, + /* Bad management packet (silently discarded). 
*/ + MLXSW_CMD_STATUS_BAD_PKT = 0x30, + }; +@@ -208,6 +178,8 @@ static inline const char *mlxsw_cmd_status_str(u8 status) + return "BAD_INDEX"; + case MLXSW_CMD_STATUS_BAD_NVMEM: + return "BAD_NVMEM"; ++ case MLXSW_CMD_STATUS_RUNNING_RESET: ++ return "RUNNING_RESET"; + case MLXSW_CMD_STATUS_BAD_PKT: + return "BAD_PKT"; + default: +@@ -424,10 +396,15 @@ MLXSW_ITEM32(cmd_mbox, query_aq_cap, log_max_rdq_sz, 0x04, 24, 8); + MLXSW_ITEM32(cmd_mbox, query_aq_cap, max_num_rdqs, 0x04, 0, 8); + + /* cmd_mbox_query_aq_cap_log_max_cq_sz +- * Log (base 2) of max CQEs allowed on CQ. ++ * Log (base 2) of the Maximum CQEs allowed in a CQ for CQEv0 and CQEv1. + */ + MLXSW_ITEM32(cmd_mbox, query_aq_cap, log_max_cq_sz, 0x08, 24, 8); + ++/* cmd_mbox_query_aq_cap_log_max_cqv2_sz ++ * Log (base 2) of the Maximum CQEs allowed in a CQ for CQEv2. ++ */ ++MLXSW_ITEM32(cmd_mbox, query_aq_cap, log_max_cqv2_sz, 0x08, 16, 8); ++ + /* cmd_mbox_query_aq_cap_max_num_cqs + * Maximum number of CQs. + */ +@@ -513,6 +490,11 @@ static inline int mlxsw_cmd_unmap_fa(struct mlxsw_core *mlxsw_core) + * are no more sources in the table, will return resource id 0xFFF to indicate + * it. + */ ++ ++#define MLXSW_CMD_QUERY_RESOURCES_TABLE_END_ID 0xffff ++#define MLXSW_CMD_QUERY_RESOURCES_MAX_QUERIES 100 ++#define MLXSW_CMD_QUERY_RESOURCES_PER_QUERY 32 ++ + static inline int mlxsw_cmd_query_resources(struct mlxsw_core *mlxsw_core, + char *out_mbox, int index) + { +@@ -657,6 +639,12 @@ MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_hash_single_size, 0x0C, 25, 1); + */ + MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_hash_double_size, 0x0C, 26, 1); + ++/* cmd_mbox_config_set_cqe_version ++ * Capability bit. Setting a bit to 1 configures the profile ++ * according to the mailbox contents. 
++ */ ++MLXSW_ITEM32(cmd_mbox, config_profile, set_cqe_version, 0x08, 0, 1); ++ + /* cmd_mbox_config_profile_max_vepa_channels + * Maximum number of VEPA channels per port (0 through 16) + * 0 - multi-channel VEPA is disabled +@@ -836,6 +824,14 @@ MLXSW_ITEM32_INDEXED(cmd_mbox, config_profile, swid_config_type, + MLXSW_ITEM32_INDEXED(cmd_mbox, config_profile, swid_config_properties, + 0x60, 0, 8, 0x08, 0x00, false); + ++/* cmd_mbox_config_profile_cqe_version ++ * CQE version: ++ * 0: CQE version is 0 ++ * 1: CQE version is either 1 or 2 ++ * CQE ver 1 or 2 is configured by Completion Queue Context field cqe_ver. ++ */ ++MLXSW_ITEM32(cmd_mbox, config_profile, cqe_version, 0xB0, 0, 8); ++ + /* ACCESS_REG - Access EMAD Supported Register + * ---------------------------------- + * OpMod == 0 (N/A), INMmod == 0 (N/A) +@@ -845,10 +841,12 @@ MLXSW_ITEM32_INDEXED(cmd_mbox, config_profile, swid_config_properties, + */ + + static inline int mlxsw_cmd_access_reg(struct mlxsw_core *mlxsw_core, ++ bool reset_ok, + char *in_mbox, char *out_mbox) + { + return mlxsw_cmd_exec(mlxsw_core, MLXSW_CMD_OPCODE_ACCESS_REG, +- 0, 0, false, in_mbox, MLXSW_CMD_MBOX_SIZE, ++ 0, 0, false, reset_ok, ++ in_mbox, MLXSW_CMD_MBOX_SIZE, + out_mbox, MLXSW_CMD_MBOX_SIZE); + } + +@@ -1027,24 +1025,21 @@ static inline int mlxsw_cmd_sw2hw_cq(struct mlxsw_core *mlxsw_core, + 0, cq_number, in_mbox, MLXSW_CMD_MBOX_SIZE); + } + +-/* cmd_mbox_sw2hw_cq_cv ++enum mlxsw_cmd_mbox_sw2hw_cq_cqe_ver { ++ MLXSW_CMD_MBOX_SW2HW_CQ_CQE_VER_1, ++ MLXSW_CMD_MBOX_SW2HW_CQ_CQE_VER_2, ++}; ++ ++/* cmd_mbox_sw2hw_cq_cqe_ver + * CQE Version. +- * 0 - CQE Version 0, 1 - CQE Version 1 + */ +-MLXSW_ITEM32(cmd_mbox, sw2hw_cq, cv, 0x00, 28, 4); ++MLXSW_ITEM32(cmd_mbox, sw2hw_cq, cqe_ver, 0x00, 28, 4); + + /* cmd_mbox_sw2hw_cq_c_eqn + * Event Queue this CQ reports completion events to. + */ + MLXSW_ITEM32(cmd_mbox, sw2hw_cq, c_eqn, 0x00, 24, 1); + +-/* cmd_mbox_sw2hw_cq_oi +- * When set, overrun ignore is enabled. 
When set, updates of +- * CQ consumer counter (poll for completion) or Request completion +- * notifications (Arm CQ) DoorBells should not be rung on that CQ. +- */ +-MLXSW_ITEM32(cmd_mbox, sw2hw_cq, oi, 0x00, 12, 1); +- + /* cmd_mbox_sw2hw_cq_st + * Event delivery state machine + * 0x0 - FIRED +@@ -1127,12 +1122,7 @@ static inline int mlxsw_cmd_sw2hw_eq(struct mlxsw_core *mlxsw_core, + */ + MLXSW_ITEM32(cmd_mbox, sw2hw_eq, int_msix, 0x00, 24, 1); + +-/* cmd_mbox_sw2hw_eq_int_oi +- * When set, overrun ignore is enabled. +- */ +-MLXSW_ITEM32(cmd_mbox, sw2hw_eq, oi, 0x00, 12, 1); +- +-/* cmd_mbox_sw2hw_eq_int_st ++/* cmd_mbox_sw2hw_eq_st + * Event delivery state machine + * 0x0 - FIRED + * 0x1 - ARMED (Request for Notification) +@@ -1141,19 +1131,19 @@ MLXSW_ITEM32(cmd_mbox, sw2hw_eq, oi, 0x00, 12, 1); + */ + MLXSW_ITEM32(cmd_mbox, sw2hw_eq, st, 0x00, 8, 2); + +-/* cmd_mbox_sw2hw_eq_int_log_eq_size ++/* cmd_mbox_sw2hw_eq_log_eq_size + * Log (base 2) of the EQ size (in entries). + */ + MLXSW_ITEM32(cmd_mbox, sw2hw_eq, log_eq_size, 0x00, 0, 4); + +-/* cmd_mbox_sw2hw_eq_int_producer_counter ++/* cmd_mbox_sw2hw_eq_producer_counter + * Producer Counter. The counter is incremented for each EQE that is written + * by the HW to the EQ. + * Maintained by HW (valid for the QUERY_EQ command only) + */ + MLXSW_ITEM32(cmd_mbox, sw2hw_eq, producer_counter, 0x04, 0, 16); + +-/* cmd_mbox_sw2hw_eq_int_pa ++/* cmd_mbox_sw2hw_eq_pa + * Physical Address. + */ + MLXSW_ITEM64_INDEXED(cmd_mbox, sw2hw_eq, pa, 0x10, 11, 53, 0x08, 0x00, true); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c +index 01987f0..e420451 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c +@@ -1,38 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/core.c +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. 
+- * Copyright (c) 2015 Jiri Pirko +- * Copyright (c) 2015 Ido Schimmel +- * Copyright (c) 2015 Elad Raz +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 ++/* Copyright (c) 2015-2018 Mellanox Technologies. 
All rights reserved */ + + #include + #include +@@ -40,9 +7,6 @@ + #include + #include + #include +-#include +-#include +-#include + #include + #include + #include +@@ -67,33 +31,39 @@ + #include "trap.h" + #include "emad.h" + #include "reg.h" ++#include "resources.h" + + static LIST_HEAD(mlxsw_core_driver_list); + static DEFINE_SPINLOCK(mlxsw_core_driver_list_lock); + + static const char mlxsw_core_driver_name[] = "mlxsw_core"; + +-static struct dentry *mlxsw_core_dbg_root; +- + static struct workqueue_struct *mlxsw_wq; ++static struct workqueue_struct *mlxsw_owq; + +-struct mlxsw_core_pcpu_stats { +- u64 trap_rx_packets[MLXSW_TRAP_ID_MAX]; +- u64 trap_rx_bytes[MLXSW_TRAP_ID_MAX]; +- u64 port_rx_packets[MLXSW_PORT_MAX_PORTS]; +- u64 port_rx_bytes[MLXSW_PORT_MAX_PORTS]; +- struct u64_stats_sync syncp; +- u32 trap_rx_dropped[MLXSW_TRAP_ID_MAX]; +- u32 port_rx_dropped[MLXSW_PORT_MAX_PORTS]; +- u32 trap_rx_invalid; +- u32 port_rx_invalid; ++struct mlxsw_core_port { ++ struct devlink_port devlink_port; ++ void *port_driver_priv; ++ u8 local_port; + }; + ++void *mlxsw_core_port_driver_priv(struct mlxsw_core_port *mlxsw_core_port) ++{ ++ return mlxsw_core_port->port_driver_priv; ++} ++EXPORT_SYMBOL(mlxsw_core_port_driver_priv); ++ ++static bool mlxsw_core_port_check(struct mlxsw_core_port *mlxsw_core_port) ++{ ++ return mlxsw_core_port->port_driver_priv != NULL; ++} ++ + struct mlxsw_core { + struct mlxsw_driver *driver; + const struct mlxsw_bus *bus; + void *bus_priv; + const struct mlxsw_bus_info *bus_info; ++ struct workqueue_struct *emad_wq; + struct list_head rx_listener_list; + struct list_head event_listener_list; + struct { +@@ -102,41 +72,49 @@ struct mlxsw_core { + spinlock_t trans_list_lock; /* protects trans_list writes */ + bool use_emad; + } emad; +- struct mlxsw_core_pcpu_stats __percpu *pcpu_stats; +- struct dentry *dbg_dir; +- struct { +- struct debugfs_blob_wrapper vsd_blob; +- struct debugfs_blob_wrapper psid_blob; +- } dbg; + struct { + u8 *mapping; 
/* lag_id+port_index to local_port mapping */ + } lag; +- struct mlxsw_resources resources; ++ struct mlxsw_res res; + struct mlxsw_hwmon *hwmon; + struct mlxsw_thermal *thermal; + struct mlxsw_qsfp *qsfp; +- struct mlxsw_core_port ports[MLXSW_PORT_MAX_PORTS]; ++ struct mlxsw_core_port *ports; + unsigned int max_ports; ++ bool reload_fail; + unsigned long driver_priv[0]; + /* driver_priv has to be always the last item */ + }; + + #define MLXSW_PORT_MAX_PORTS_DEFAULT 0x40 +-unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core) ++ ++static int mlxsw_ports_init(struct mlxsw_core *mlxsw_core) + { +- if (mlxsw_core->max_ports) +- return mlxsw_core->max_ports; ++ /* Switch ports are numbered from 1 to queried value */ ++ if (MLXSW_CORE_RES_VALID(mlxsw_core, MAX_SYSTEM_PORT)) ++ mlxsw_core->max_ports = MLXSW_CORE_RES_GET(mlxsw_core, ++ MAX_SYSTEM_PORT) + 1; + else +- return MLXSW_PORT_MAX_PORTS_DEFAULT; ++ mlxsw_core->max_ports = MLXSW_PORT_MAX_PORTS_DEFAULT + 1; ++ ++ mlxsw_core->ports = kcalloc(mlxsw_core->max_ports, ++ sizeof(struct mlxsw_core_port), GFP_KERNEL); ++ if (!mlxsw_core->ports) ++ return -ENOMEM; ++ ++ return 0; + } +-EXPORT_SYMBOL(mlxsw_core_max_ports); + +-void mlxsw_core_max_ports_set(struct mlxsw_core *mlxsw_core, +- unsigned int max_ports) ++static void mlxsw_ports_fini(struct mlxsw_core *mlxsw_core) + { +- mlxsw_core->max_ports = max_ports; ++ kfree(mlxsw_core->ports); + } +-EXPORT_SYMBOL(mlxsw_core_max_ports_set); ++ ++unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core) ++{ ++ return mlxsw_core->max_ports; ++} ++EXPORT_SYMBOL(mlxsw_core_max_ports); + + void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core) + { +@@ -457,7 +435,7 @@ static void mlxsw_emad_trans_timeout_schedule(struct mlxsw_reg_trans *trans) + { + unsigned long timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_MS); + +- mlxsw_core_schedule_dw(&trans->timeout_dw, timeout); ++ queue_delayed_work(trans->core->emad_wq, &trans->timeout_dw, timeout); + } 
+ + static int mlxsw_emad_transmit(struct mlxsw_core *mlxsw_core, +@@ -573,36 +551,24 @@ static void mlxsw_emad_rx_listener_func(struct sk_buff *skb, u8 local_port, + dev_kfree_skb(skb); + } + +-static const struct mlxsw_rx_listener mlxsw_emad_rx_listener = { +- .func = mlxsw_emad_rx_listener_func, +- .local_port = MLXSW_PORT_DONT_CARE, +- .trap_id = MLXSW_TRAP_ID_ETHEMAD, +-}; +- +-static int mlxsw_emad_traps_set(struct mlxsw_core *mlxsw_core) +-{ +- char htgt_pl[MLXSW_REG_HTGT_LEN]; +- char hpkt_pl[MLXSW_REG_HPKT_LEN]; +- int err; +- +- mlxsw_reg_htgt_pack(htgt_pl, MLXSW_REG_HTGT_TRAP_GROUP_EMAD); +- err = mlxsw_reg_write(mlxsw_core, MLXSW_REG(htgt), htgt_pl); +- if (err) +- return err; +- +- mlxsw_reg_hpkt_pack(hpkt_pl, MLXSW_REG_HPKT_ACTION_TRAP_TO_CPU, +- MLXSW_TRAP_ID_ETHEMAD); +- return mlxsw_reg_write(mlxsw_core, MLXSW_REG(hpkt), hpkt_pl); +-} ++static const struct mlxsw_listener mlxsw_emad_rx_listener = ++ MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false, ++ EMAD, DISCARD); + + static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core) + { ++ struct workqueue_struct *emad_wq; + u64 tid; + int err; + + if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX)) + return 0; + ++ emad_wq = alloc_workqueue("mlxsw_core_emad", WQ_MEM_RECLAIM, 0); ++ if (!emad_wq) ++ return -ENOMEM; ++ mlxsw_core->emad_wq = emad_wq; ++ + /* Set the upper 32 bits of the transaction ID field to a random + * number. This allows us to discard EMADs addressed to other + * devices. 
+@@ -614,42 +580,35 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core) + INIT_LIST_HEAD(&mlxsw_core->emad.trans_list); + spin_lock_init(&mlxsw_core->emad.trans_list_lock); + +- err = mlxsw_core_rx_listener_register(mlxsw_core, +- &mlxsw_emad_rx_listener, +- mlxsw_core); ++ err = mlxsw_core_trap_register(mlxsw_core, &mlxsw_emad_rx_listener, ++ mlxsw_core); + if (err) + return err; + +- err = mlxsw_emad_traps_set(mlxsw_core); ++ err = mlxsw_core->driver->basic_trap_groups_set(mlxsw_core); + if (err) + goto err_emad_trap_set; +- + mlxsw_core->emad.use_emad = true; + + return 0; + + err_emad_trap_set: +- mlxsw_core_rx_listener_unregister(mlxsw_core, +- &mlxsw_emad_rx_listener, +- mlxsw_core); ++ mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener, ++ mlxsw_core); ++ destroy_workqueue(mlxsw_core->emad_wq); + return err; + } + + static void mlxsw_emad_fini(struct mlxsw_core *mlxsw_core) + { +- char hpkt_pl[MLXSW_REG_HPKT_LEN]; + + if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX)) + return; + + mlxsw_core->emad.use_emad = false; +- mlxsw_reg_hpkt_pack(hpkt_pl, MLXSW_REG_HPKT_ACTION_DISCARD, +- MLXSW_TRAP_ID_ETHEMAD); +- mlxsw_reg_write(mlxsw_core, MLXSW_REG(hpkt), hpkt_pl); +- +- mlxsw_core_rx_listener_unregister(mlxsw_core, +- &mlxsw_emad_rx_listener, +- mlxsw_core); ++ mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener, ++ mlxsw_core); ++ destroy_workqueue(mlxsw_core->emad_wq); + } + + static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core, +@@ -686,7 +645,7 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core, + int err; + + dev_dbg(mlxsw_core->bus_info->dev, "EMAD reg access (tid=%llx,reg_id=%x(%s),type=%s)\n", +- trans->tid, reg->id, mlxsw_reg_id_str(reg->id), ++ tid, reg->id, mlxsw_reg_id_str(reg->id), + mlxsw_core_reg_access_type_str(type)); + + skb = mlxsw_emad_alloc(mlxsw_core, reg->len); +@@ -730,91 +689,6 @@ static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core, + * Core functions + 
*****************/ + +-static int mlxsw_core_rx_stats_dbg_read(struct seq_file *file, void *data) +-{ +- struct mlxsw_core *mlxsw_core = file->private; +- struct mlxsw_core_pcpu_stats *p; +- u64 rx_packets, rx_bytes; +- u64 tmp_rx_packets, tmp_rx_bytes; +- u32 rx_dropped, rx_invalid; +- unsigned int start; +- int i; +- int j; +- static const char hdr[] = +- " NUM RX_PACKETS RX_BYTES RX_DROPPED\n"; +- +- seq_printf(file, hdr); +- for (i = 0; i < MLXSW_TRAP_ID_MAX; i++) { +- rx_packets = 0; +- rx_bytes = 0; +- rx_dropped = 0; +- for_each_possible_cpu(j) { +- p = per_cpu_ptr(mlxsw_core->pcpu_stats, j); +- do { +- start = u64_stats_fetch_begin(&p->syncp); +- tmp_rx_packets = p->trap_rx_packets[i]; +- tmp_rx_bytes = p->trap_rx_bytes[i]; +- } while (u64_stats_fetch_retry(&p->syncp, start)); +- +- rx_packets += tmp_rx_packets; +- rx_bytes += tmp_rx_bytes; +- rx_dropped += p->trap_rx_dropped[i]; +- } +- seq_printf(file, "trap %3d %12llu %12llu %10u\n", +- i, rx_packets, rx_bytes, rx_dropped); +- } +- rx_invalid = 0; +- for_each_possible_cpu(j) { +- p = per_cpu_ptr(mlxsw_core->pcpu_stats, j); +- rx_invalid += p->trap_rx_invalid; +- } +- seq_printf(file, "trap INV %10u\n", +- rx_invalid); +- +- for (i = 0; i < MLXSW_PORT_MAX_PORTS; i++) { +- rx_packets = 0; +- rx_bytes = 0; +- rx_dropped = 0; +- for_each_possible_cpu(j) { +- p = per_cpu_ptr(mlxsw_core->pcpu_stats, j); +- do { +- start = u64_stats_fetch_begin(&p->syncp); +- tmp_rx_packets = p->port_rx_packets[i]; +- tmp_rx_bytes = p->port_rx_bytes[i]; +- } while (u64_stats_fetch_retry(&p->syncp, start)); +- +- rx_packets += tmp_rx_packets; +- rx_bytes += tmp_rx_bytes; +- rx_dropped += p->port_rx_dropped[i]; +- } +- seq_printf(file, "port %3d %12llu %12llu %10u\n", +- i, rx_packets, rx_bytes, rx_dropped); +- } +- rx_invalid = 0; +- for_each_possible_cpu(j) { +- p = per_cpu_ptr(mlxsw_core->pcpu_stats, j); +- rx_invalid += p->port_rx_invalid; +- } +- seq_printf(file, "port INV %10u\n", +- rx_invalid); +- return 0; +-} +- 
+-static int mlxsw_core_rx_stats_dbg_open(struct inode *inode, struct file *f) +-{ +- struct mlxsw_core *mlxsw_core = inode->i_private; +- +- return single_open(f, mlxsw_core_rx_stats_dbg_read, mlxsw_core); +-} +- +-static const struct file_operations mlxsw_core_rx_stats_dbg_ops = { +- .owner = THIS_MODULE, +- .open = mlxsw_core_rx_stats_dbg_open, +- .release = single_release, +- .read = seq_read, +- .llseek = seq_lseek +-}; +- + int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver) + { + spin_lock(&mlxsw_core_driver_list_lock); +@@ -849,84 +723,10 @@ static struct mlxsw_driver *mlxsw_core_driver_get(const char *kind) + + spin_lock(&mlxsw_core_driver_list_lock); + mlxsw_driver = __driver_find(kind); +- if (!mlxsw_driver) { +- spin_unlock(&mlxsw_core_driver_list_lock); +- request_module(MLXSW_MODULE_ALIAS_PREFIX "%s", kind); +- spin_lock(&mlxsw_core_driver_list_lock); +- mlxsw_driver = __driver_find(kind); +- } +- if (mlxsw_driver) { +- if (!try_module_get(mlxsw_driver->owner)) +- mlxsw_driver = NULL; +- } +- + spin_unlock(&mlxsw_core_driver_list_lock); + return mlxsw_driver; + } + +-static void mlxsw_core_driver_put(const char *kind) +-{ +- struct mlxsw_driver *mlxsw_driver; +- +- spin_lock(&mlxsw_core_driver_list_lock); +- mlxsw_driver = __driver_find(kind); +- spin_unlock(&mlxsw_core_driver_list_lock); +- if (!mlxsw_driver) +- return; +- module_put(mlxsw_driver->owner); +-} +- +-static int mlxsw_core_debugfs_init(struct mlxsw_core *mlxsw_core) +-{ +- const struct mlxsw_bus_info *bus_info = mlxsw_core->bus_info; +- +- mlxsw_core->dbg_dir = debugfs_create_dir(bus_info->device_name, +- mlxsw_core_dbg_root); +- if (!mlxsw_core->dbg_dir) +- return -ENOMEM; +- debugfs_create_file("rx_stats", S_IRUGO, mlxsw_core->dbg_dir, +- mlxsw_core, &mlxsw_core_rx_stats_dbg_ops); +- mlxsw_core->dbg.vsd_blob.data = (void *) &bus_info->vsd; +- mlxsw_core->dbg.vsd_blob.size = sizeof(bus_info->vsd); +- debugfs_create_blob("vsd", S_IRUGO, mlxsw_core->dbg_dir, +- 
&mlxsw_core->dbg.vsd_blob); +- mlxsw_core->dbg.psid_blob.data = (void *) &bus_info->psid; +- mlxsw_core->dbg.psid_blob.size = sizeof(bus_info->psid); +- debugfs_create_blob("psid", S_IRUGO, mlxsw_core->dbg_dir, +- &mlxsw_core->dbg.psid_blob); +- return 0; +-} +- +-static void mlxsw_core_debugfs_fini(struct mlxsw_core *mlxsw_core) +-{ +- debugfs_remove_recursive(mlxsw_core->dbg_dir); +-} +- +-static int mlxsw_devlink_port_split(struct devlink *devlink, +- unsigned int port_index, +- unsigned int count) +-{ +- struct mlxsw_core *mlxsw_core = devlink_priv(devlink); +- +- if (port_index >= MLXSW_PORT_MAX_PORTS) +- return -EINVAL; +- if (!mlxsw_core->driver->port_split) +- return -EOPNOTSUPP; +- return mlxsw_core->driver->port_split(mlxsw_core, port_index, count); +-} +- +-static int mlxsw_devlink_port_unsplit(struct devlink *devlink, +- unsigned int port_index) +-{ +- struct mlxsw_core *mlxsw_core = devlink_priv(devlink); +- +- if (port_index >= MLXSW_PORT_MAX_PORTS) +- return -EINVAL; +- if (!mlxsw_core->driver->port_unsplit) +- return -EOPNOTSUPP; +- return mlxsw_core->driver->port_unsplit(mlxsw_core, port_index); +-} +- + static int + mlxsw_devlink_sb_pool_get(struct devlink *devlink, + unsigned int sb_index, u16 pool_index, +@@ -960,6 +760,21 @@ static void *__dl_port(struct devlink_port *devlink_port) + return container_of(devlink_port, struct mlxsw_core_port, devlink_port); + } + ++static int mlxsw_devlink_port_type_set(struct devlink_port *devlink_port, ++ enum devlink_port_type port_type) ++{ ++ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink); ++ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; ++ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); ++ ++ if (!mlxsw_driver->port_type_set) ++ return -EOPNOTSUPP; ++ ++ return mlxsw_driver->port_type_set(mlxsw_core, ++ mlxsw_core_port->local_port, ++ port_type); ++} ++ + static int mlxsw_devlink_sb_port_pool_get(struct devlink_port *devlink_port, + unsigned int sb_index, 
u16 pool_index, + u32 *p_threshold) +@@ -968,7 +783,8 @@ static int mlxsw_devlink_sb_port_pool_get(struct devlink_port *devlink_port, + struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_port_pool_get) ++ if (!mlxsw_driver->sb_port_pool_get || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_port_pool_get(mlxsw_core_port, sb_index, + pool_index, p_threshold); +@@ -982,7 +798,8 @@ static int mlxsw_devlink_sb_port_pool_set(struct devlink_port *devlink_port, + struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_port_pool_set) ++ if (!mlxsw_driver->sb_port_pool_set || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_port_pool_set(mlxsw_core_port, sb_index, + pool_index, threshold); +@@ -998,7 +815,8 @@ mlxsw_devlink_sb_tc_pool_bind_get(struct devlink_port *devlink_port, + struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_tc_pool_bind_get) ++ if (!mlxsw_driver->sb_tc_pool_bind_get || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_tc_pool_bind_get(mlxsw_core_port, sb_index, + tc_index, pool_type, +@@ -1015,7 +833,8 @@ mlxsw_devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port, + struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_tc_pool_bind_set) ++ if (!mlxsw_driver->sb_tc_pool_bind_set || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_tc_pool_bind_set(mlxsw_core_port, sb_index, + tc_index, pool_type, +@@ -1053,7 +872,8 @@ mlxsw_devlink_sb_occ_port_pool_get(struct devlink_port *devlink_port, 
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_occ_port_pool_get) ++ if (!mlxsw_driver->sb_occ_port_pool_get || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_occ_port_pool_get(mlxsw_core_port, sb_index, + pool_index, p_cur, p_max); +@@ -1069,7 +889,8 @@ mlxsw_devlink_sb_occ_tc_port_bind_get(struct devlink_port *devlink_port, + struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver; + struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port); + +- if (!mlxsw_driver->sb_occ_tc_port_bind_get) ++ if (!mlxsw_driver->sb_occ_tc_port_bind_get || ++ !mlxsw_core_port_check(mlxsw_core_port)) + return -EOPNOTSUPP; + return mlxsw_driver->sb_occ_tc_port_bind_get(mlxsw_core_port, + sb_index, tc_index, +@@ -1077,8 +898,7 @@ mlxsw_devlink_sb_occ_tc_port_bind_get(struct devlink_port *devlink_port, + } + + static const struct devlink_ops mlxsw_devlink_ops = { +- .port_split = mlxsw_devlink_port_split, +- .port_unsplit = mlxsw_devlink_port_unsplit, ++ .port_type_set = mlxsw_devlink_port_type_set, + .sb_pool_get = mlxsw_devlink_sb_pool_get, + .sb_pool_set = mlxsw_devlink_sb_pool_set, + .sb_port_pool_get = mlxsw_devlink_sb_port_pool_get, +@@ -1093,23 +913,27 @@ static const struct devlink_ops mlxsw_devlink_ops = { + + int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + const struct mlxsw_bus *mlxsw_bus, +- void *bus_priv) ++ void *bus_priv, bool reload, ++ struct devlink *devlink) + { + const char *device_kind = mlxsw_bus_info->device_kind; + struct mlxsw_core *mlxsw_core; + struct mlxsw_driver *mlxsw_driver; +- struct devlink *devlink; ++ struct mlxsw_res *res; + size_t alloc_size; + int err; + + mlxsw_driver = mlxsw_core_driver_get(device_kind); + if (!mlxsw_driver) + return -EINVAL; +- alloc_size = sizeof(*mlxsw_core) + mlxsw_driver->priv_size; +- devlink = devlink_alloc(&mlxsw_devlink_ops, 
alloc_size); +- if (!devlink) { +- err = -ENOMEM; +- goto err_devlink_alloc; ++ ++ if (!reload) { ++ alloc_size = sizeof(*mlxsw_core) + mlxsw_driver->priv_size; ++ devlink = devlink_alloc(&mlxsw_devlink_ops, alloc_size); ++ if (!devlink) { ++ err = -ENOMEM; ++ goto err_devlink_alloc; ++ } + } + + mlxsw_core = devlink_priv(devlink); +@@ -1120,22 +944,26 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + mlxsw_core->bus_priv = bus_priv; + mlxsw_core->bus_info = mlxsw_bus_info; + +- mlxsw_core->pcpu_stats = +- netdev_alloc_pcpu_stats(struct mlxsw_core_pcpu_stats); +- if (!mlxsw_core->pcpu_stats) { +- err = -ENOMEM; +- goto err_alloc_stats; ++ res = mlxsw_driver->res_query_enabled ? &mlxsw_core->res : NULL; ++ err = mlxsw_bus->init(bus_priv, mlxsw_core, mlxsw_driver->profile, res); ++ if (err) ++ goto err_bus_init; ++ ++ if (mlxsw_driver->resources_register && !reload) { ++ err = mlxsw_driver->resources_register(mlxsw_core); ++ if (err) ++ goto err_register_resources; + } + +- err = mlxsw_bus->init(bus_priv, mlxsw_core, mlxsw_driver->profile, +- &mlxsw_core->resources); ++ err = mlxsw_ports_init(mlxsw_core); + if (err) +- goto err_bus_init; ++ goto err_ports_init; + +- if (mlxsw_core->resources.max_lag_valid && +- mlxsw_core->resources.max_ports_in_lag_valid) { +- alloc_size = sizeof(u8) * mlxsw_core->resources.max_lag * +- mlxsw_core->resources.max_ports_in_lag; ++ if (MLXSW_CORE_RES_VALID(mlxsw_core, MAX_LAG) && ++ MLXSW_CORE_RES_VALID(mlxsw_core, MAX_LAG_MEMBERS)) { ++ alloc_size = sizeof(u8) * ++ MLXSW_CORE_RES_GET(mlxsw_core, MAX_LAG) * ++ MLXSW_CORE_RES_GET(mlxsw_core, MAX_LAG_MEMBERS); + mlxsw_core->lag.mapping = kzalloc(alloc_size, GFP_KERNEL); + if (!mlxsw_core->lag.mapping) { + err = -ENOMEM; +@@ -1147,9 +975,11 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + if (err) + goto err_emad_init; + +- err = devlink_register(devlink, mlxsw_bus_info->dev); +- if (err) +- goto err_devlink_register; 
++ if (!reload) { ++ err = devlink_register(devlink, mlxsw_bus_info->dev); ++ if (err) ++ goto err_devlink_register; ++ } + + err = mlxsw_hwmon_init(mlxsw_core, mlxsw_bus_info, &mlxsw_core->hwmon); + if (err) +@@ -1171,54 +1001,66 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + goto err_driver_init; + } + +- err = mlxsw_core_debugfs_init(mlxsw_core); +- if (err) +- goto err_debugfs_init; +- + return 0; + +-err_debugfs_init: +- mlxsw_core->driver->fini(mlxsw_core); + err_driver_init: + mlxsw_qsfp_fini(mlxsw_core->qsfp); + err_qsfp_init: + mlxsw_thermal_fini(mlxsw_core->thermal); + err_thermal_init: ++ mlxsw_hwmon_fini(mlxsw_core->hwmon); + err_hwmon_init: +- devlink_unregister(devlink); ++ if (!reload) ++ devlink_unregister(devlink); + err_devlink_register: + mlxsw_emad_fini(mlxsw_core); + err_emad_init: + kfree(mlxsw_core->lag.mapping); + err_alloc_lag_mapping: ++ mlxsw_ports_fini(mlxsw_core); ++err_ports_init: ++err_register_resources: + mlxsw_bus->fini(bus_priv); + err_bus_init: +- free_percpu(mlxsw_core->pcpu_stats); +-err_alloc_stats: +- devlink_free(devlink); ++ if (!reload) ++ devlink_free(devlink); + err_devlink_alloc: +- mlxsw_core_driver_put(device_kind); + return err; + } + EXPORT_SYMBOL(mlxsw_core_bus_device_register); + +-void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core) ++void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, ++ bool reload) + { +- const char *device_kind = mlxsw_core->bus_info->device_kind; + struct devlink *devlink = priv_to_devlink(mlxsw_core); + +- mlxsw_core_debugfs_fini(mlxsw_core); ++ if (mlxsw_core->reload_fail) { ++ if (!reload) ++ /* Only the parts that were not de-initialized in the ++ * failed reload attempt need to be de-initialized. 
++ */ ++ goto reload_fail_deinit; ++ else ++ return; ++ } ++ + if (mlxsw_core->driver->fini) + mlxsw_core->driver->fini(mlxsw_core); + mlxsw_qsfp_fini(mlxsw_core->qsfp); + mlxsw_thermal_fini(mlxsw_core->thermal); +- devlink_unregister(devlink); ++ mlxsw_hwmon_fini(mlxsw_core->hwmon); ++ if (!reload) ++ devlink_unregister(devlink); + mlxsw_emad_fini(mlxsw_core); +- mlxsw_core->bus->fini(mlxsw_core->bus_priv); + kfree(mlxsw_core->lag.mapping); +- free_percpu(mlxsw_core->pcpu_stats); ++ mlxsw_ports_fini(mlxsw_core); ++ mlxsw_core->bus->fini(mlxsw_core->bus_priv); ++ ++ return; ++ ++reload_fail_deinit: ++ devlink_unregister(devlink); + devlink_free(devlink); +- mlxsw_core_driver_put(device_kind); + } + EXPORT_SYMBOL(mlxsw_core_bus_device_unregister); + +@@ -1392,6 +1234,75 @@ void mlxsw_core_event_listener_unregister(struct mlxsw_core *mlxsw_core, + } + EXPORT_SYMBOL(mlxsw_core_event_listener_unregister); + ++static int mlxsw_core_listener_register(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, ++ void *priv) ++{ ++ if (listener->is_event) ++ return mlxsw_core_event_listener_register(mlxsw_core, ++ &listener->u.event_listener, ++ priv); ++ else ++ return mlxsw_core_rx_listener_register(mlxsw_core, ++ &listener->u.rx_listener, ++ priv); ++} ++ ++static void mlxsw_core_listener_unregister(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, ++ void *priv) ++{ ++ if (listener->is_event) ++ mlxsw_core_event_listener_unregister(mlxsw_core, ++ &listener->u.event_listener, ++ priv); ++ else ++ mlxsw_core_rx_listener_unregister(mlxsw_core, ++ &listener->u.rx_listener, ++ priv); ++} ++ ++int mlxsw_core_trap_register(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, void *priv) ++{ ++ char hpkt_pl[MLXSW_REG_HPKT_LEN]; ++ int err; ++ ++ err = mlxsw_core_listener_register(mlxsw_core, listener, priv); ++ if (err) ++ return err; ++ ++ mlxsw_reg_hpkt_pack(hpkt_pl, listener->action, listener->trap_id, ++ 
listener->trap_group, listener->is_ctrl); ++ err = mlxsw_reg_write(mlxsw_core, MLXSW_REG(hpkt), hpkt_pl); ++ if (err) ++ goto err_trap_set; ++ ++ return 0; ++ ++err_trap_set: ++ mlxsw_core_listener_unregister(mlxsw_core, listener, priv); ++ return err; ++} ++EXPORT_SYMBOL(mlxsw_core_trap_register); ++ ++void mlxsw_core_trap_unregister(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, ++ void *priv) ++{ ++ char hpkt_pl[MLXSW_REG_HPKT_LEN]; ++ ++ if (!listener->is_event) { ++ mlxsw_reg_hpkt_pack(hpkt_pl, listener->unreg_action, ++ listener->trap_id, listener->trap_group, ++ listener->is_ctrl); ++ mlxsw_reg_write(mlxsw_core, MLXSW_REG(hpkt), hpkt_pl); ++ } ++ ++ mlxsw_core_listener_unregister(mlxsw_core, listener, priv); ++} ++EXPORT_SYMBOL(mlxsw_core_trap_unregister); ++ + static u64 mlxsw_core_tid_get(struct mlxsw_core *mlxsw_core) + { + return atomic64_inc_return(&mlxsw_core->emad.tid); +@@ -1492,6 +1403,7 @@ static int mlxsw_core_reg_access_cmd(struct mlxsw_core *mlxsw_core, + { + enum mlxsw_emad_op_tlv_status status; + int err, n_retry; ++ bool reset_ok; + char *in_mbox, *out_mbox, *tmp; + + dev_dbg(mlxsw_core->bus_info->dev, "Reg cmd access (reg_id=%x(%s),type=%s)\n", +@@ -1513,9 +1425,16 @@ static int mlxsw_core_reg_access_cmd(struct mlxsw_core *mlxsw_core, + tmp = in_mbox + MLXSW_EMAD_OP_TLV_LEN * sizeof(u32); + mlxsw_emad_pack_reg_tlv(tmp, reg, payload); + ++ /* There is a special treatment needed for MRSR (reset) register. ++ * The command interface will return error after the command ++ * is executed, so tell the lower layer to expect it ++ * and cope accordingly. 
++ */ ++ reset_ok = reg->id == MLXSW_REG_MRSR_ID; ++ + n_retry = 0; + retry: +- err = mlxsw_cmd_access_reg(mlxsw_core, in_mbox, out_mbox); ++ err = mlxsw_cmd_access_reg(mlxsw_core, reset_ok, in_mbox, out_mbox); + if (!err) { + err = mlxsw_emad_process_status(out_mbox, &status); + if (err) { +@@ -1595,7 +1514,6 @@ void mlxsw_core_skb_receive(struct mlxsw_core *mlxsw_core, struct sk_buff *skb, + { + struct mlxsw_rx_listener_item *rxl_item; + const struct mlxsw_rx_listener *rxl; +- struct mlxsw_core_pcpu_stats *pcpu_stats; + u8 local_port; + bool found = false; + +@@ -1617,7 +1535,7 @@ void mlxsw_core_skb_receive(struct mlxsw_core *mlxsw_core, struct sk_buff *skb, + __func__, local_port, rx_info->trap_id); + + if ((rx_info->trap_id >= MLXSW_TRAP_ID_MAX) || +- (local_port >= MLXSW_PORT_MAX_PORTS)) ++ (local_port >= mlxsw_core->max_ports)) + goto drop; + + rcu_read_lock(); +@@ -1634,26 +1552,10 @@ void mlxsw_core_skb_receive(struct mlxsw_core *mlxsw_core, struct sk_buff *skb, + if (!found) + goto drop; + +- pcpu_stats = this_cpu_ptr(mlxsw_core->pcpu_stats); +- u64_stats_update_begin(&pcpu_stats->syncp); +- pcpu_stats->port_rx_packets[local_port]++; +- pcpu_stats->port_rx_bytes[local_port] += skb->len; +- pcpu_stats->trap_rx_packets[rx_info->trap_id]++; +- pcpu_stats->trap_rx_bytes[rx_info->trap_id] += skb->len; +- u64_stats_update_end(&pcpu_stats->syncp); +- + rxl->func(skb, local_port, rxl_item->priv); + return; + + drop: +- if (rx_info->trap_id >= MLXSW_TRAP_ID_MAX) +- this_cpu_inc(mlxsw_core->pcpu_stats->trap_rx_invalid); +- else +- this_cpu_inc(mlxsw_core->pcpu_stats->trap_rx_dropped[rx_info->trap_id]); +- if (local_port >= MLXSW_PORT_MAX_PORTS) +- this_cpu_inc(mlxsw_core->pcpu_stats->port_rx_invalid); +- else +- this_cpu_inc(mlxsw_core->pcpu_stats->port_rx_dropped[local_port]); + dev_kfree_skb(skb); + } + EXPORT_SYMBOL(mlxsw_core_skb_receive); +@@ -1661,7 +1563,7 @@ EXPORT_SYMBOL(mlxsw_core_skb_receive); + static int mlxsw_core_lag_mapping_index(struct mlxsw_core 
*mlxsw_core, + u16 lag_id, u8 port_index) + { +- return mlxsw_core->resources.max_ports_in_lag * lag_id + ++ return MLXSW_CORE_RES_GET(mlxsw_core, MAX_LAG_MEMBERS) * lag_id + + port_index; + } + +@@ -1690,7 +1592,7 @@ void mlxsw_core_lag_mapping_clear(struct mlxsw_core *mlxsw_core, + { + int i; + +- for (i = 0; i < mlxsw_core->resources.max_ports_in_lag; i++) { ++ for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_core, MAX_LAG_MEMBERS); i++) { + int index = mlxsw_core_lag_mapping_index(mlxsw_core, + lag_id, i); + +@@ -1700,34 +1602,82 @@ void mlxsw_core_lag_mapping_clear(struct mlxsw_core *mlxsw_core, + } + EXPORT_SYMBOL(mlxsw_core_lag_mapping_clear); + +-struct mlxsw_resources *mlxsw_core_resources_get(struct mlxsw_core *mlxsw_core) ++bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core, ++ enum mlxsw_res_id res_id) ++{ ++ return mlxsw_res_valid(&mlxsw_core->res, res_id); ++} ++EXPORT_SYMBOL(mlxsw_core_res_valid); ++ ++u64 mlxsw_core_res_get(struct mlxsw_core *mlxsw_core, ++ enum mlxsw_res_id res_id) + { +- return &mlxsw_core->resources; ++ return mlxsw_res_get(&mlxsw_core->res, res_id); + } +-EXPORT_SYMBOL(mlxsw_core_resources_get); ++EXPORT_SYMBOL(mlxsw_core_res_get); + +-int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core, +- struct mlxsw_core_port *mlxsw_core_port, u8 local_port, +- struct net_device *dev, bool split, u32 split_group) ++int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core, u8 local_port) + { + struct devlink *devlink = priv_to_devlink(mlxsw_core); ++ struct mlxsw_core_port *mlxsw_core_port = ++ &mlxsw_core->ports[local_port]; + struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port; ++ int err; + +- if (split) +- devlink_port_split_set(devlink_port, split_group); +- devlink_port_type_eth_set(devlink_port, dev); +- return devlink_port_register(devlink, devlink_port, local_port); ++ mlxsw_core_port->local_port = local_port; ++ err = devlink_port_register(devlink, devlink_port, local_port); ++ if (err) ++ memset(mlxsw_core_port, 0, 
sizeof(*mlxsw_core_port)); ++ return err; + } + EXPORT_SYMBOL(mlxsw_core_port_init); + +-void mlxsw_core_port_fini(struct mlxsw_core_port *mlxsw_core_port) ++void mlxsw_core_port_fini(struct mlxsw_core *mlxsw_core, u8 local_port) + { ++ struct mlxsw_core_port *mlxsw_core_port = ++ &mlxsw_core->ports[local_port]; + struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port; + + devlink_port_unregister(devlink_port); ++ memset(mlxsw_core_port, 0, sizeof(*mlxsw_core_port)); + } + EXPORT_SYMBOL(mlxsw_core_port_fini); + ++void mlxsw_core_port_ib_set(struct mlxsw_core *mlxsw_core, u8 local_port, ++ void *port_driver_priv) ++{ ++ struct mlxsw_core_port *mlxsw_core_port = ++ &mlxsw_core->ports[local_port]; ++ struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port; ++ ++ mlxsw_core_port->port_driver_priv = port_driver_priv; ++ devlink_port_type_ib_set(devlink_port, NULL); ++} ++EXPORT_SYMBOL(mlxsw_core_port_ib_set); ++ ++void mlxsw_core_port_clear(struct mlxsw_core *mlxsw_core, u8 local_port, ++ void *port_driver_priv) ++{ ++ struct mlxsw_core_port *mlxsw_core_port = ++ &mlxsw_core->ports[local_port]; ++ struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port; ++ ++ mlxsw_core_port->port_driver_priv = port_driver_priv; ++ devlink_port_type_clear(devlink_port); ++} ++EXPORT_SYMBOL(mlxsw_core_port_clear); ++ ++enum devlink_port_type mlxsw_core_port_type_get(struct mlxsw_core *mlxsw_core, ++ u8 local_port) ++{ ++ struct mlxsw_core_port *mlxsw_core_port = ++ &mlxsw_core->ports[local_port]; ++ struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port; ++ ++ return devlink_port->type; ++} ++EXPORT_SYMBOL(mlxsw_core_port_type_get); ++ + static void mlxsw_core_buf_dump_dbg(struct mlxsw_core *mlxsw_core, + const char *buf, size_t size) + { +@@ -1747,7 +1697,7 @@ static void mlxsw_core_buf_dump_dbg(struct mlxsw_core *mlxsw_core, + } + + int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod, +- u32 in_mod, bool 
out_mbox_direct, ++ u32 in_mod, bool out_mbox_direct, bool reset_ok, + char *in_mbox, size_t in_mbox_size, + char *out_mbox, size_t out_mbox_size) + { +@@ -1770,7 +1720,15 @@ int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod, + in_mbox, in_mbox_size, + out_mbox, out_mbox_size, &status); + +- if (err == -EIO && status != MLXSW_CMD_STATUS_OK) { ++ if (!err && out_mbox) { ++ dev_dbg(mlxsw_core->bus_info->dev, "Output mailbox:\n"); ++ mlxsw_core_buf_dump_dbg(mlxsw_core, out_mbox, out_mbox_size); ++ } ++ ++ if (reset_ok && err == -EIO && ++ status == MLXSW_CMD_STATUS_RUNNING_RESET) { ++ err = 0; ++ } else if (err == -EIO && status != MLXSW_CMD_STATUS_OK) { + dev_err(mlxsw_core->bus_info->dev, "Cmd exec failed (opcode=%x(%s),opcode_mod=%x,in_mod=%x,status=%x(%s))\n", + opcode, mlxsw_cmd_opcode_str(opcode), opcode_mod, + in_mod, status, mlxsw_cmd_status_str(status)); +@@ -1780,10 +1738,6 @@ int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod, + in_mod); + } + +- if (!err && out_mbox) { +- dev_dbg(mlxsw_core->bus_info->dev, "Output mailbox:\n"); +- mlxsw_core_buf_dump_dbg(mlxsw_core, out_mbox, out_mbox_size); +- } + return err; + } + EXPORT_SYMBOL(mlxsw_cmd_exec); +@@ -1794,6 +1748,71 @@ int mlxsw_core_schedule_dw(struct delayed_work *dwork, unsigned long delay) + } + EXPORT_SYMBOL(mlxsw_core_schedule_dw); + ++bool mlxsw_core_schedule_work(struct work_struct *work) ++{ ++ return queue_work(mlxsw_owq, work); ++} ++EXPORT_SYMBOL(mlxsw_core_schedule_work); ++ ++void mlxsw_core_flush_owq(void) ++{ ++ flush_workqueue(mlxsw_owq); ++} ++EXPORT_SYMBOL(mlxsw_core_flush_owq); ++ ++int mlxsw_core_kvd_sizes_get(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_config_profile *profile, ++ u64 *p_single_size, u64 *p_double_size, ++ u64 *p_linear_size) ++{ ++ struct mlxsw_driver *driver = mlxsw_core->driver; ++ ++ if (!driver->kvd_sizes_get) ++ return -EINVAL; ++ ++ return driver->kvd_sizes_get(mlxsw_core, profile, ++ p_single_size, 
p_double_size, ++ p_linear_size); ++} ++EXPORT_SYMBOL(mlxsw_core_kvd_sizes_get); ++ ++int mlxsw_core_resources_query(struct mlxsw_core *mlxsw_core, char *mbox, ++ struct mlxsw_res *res) ++{ ++ int index, i; ++ u64 data; ++ u16 id; ++ int err; ++ ++ if (!res) ++ return 0; ++ ++ mlxsw_cmd_mbox_zero(mbox); ++ ++ for (index = 0; index < MLXSW_CMD_QUERY_RESOURCES_MAX_QUERIES; ++ index++) { ++ err = mlxsw_cmd_query_resources(mlxsw_core, mbox, index); ++ if (err) ++ return err; ++ ++ for (i = 0; i < MLXSW_CMD_QUERY_RESOURCES_PER_QUERY; i++) { ++ id = mlxsw_cmd_mbox_query_resource_id_get(mbox, i); ++ data = mlxsw_cmd_mbox_query_resource_data_get(mbox, i); ++ ++ if (id == MLXSW_CMD_QUERY_RESOURCES_TABLE_END_ID) ++ return 0; ++ ++ mlxsw_res_parse(res, id, data); ++ } ++ } ++ ++ /* If after MLXSW_CMD_QUERY_RESOURCES_MAX_QUERIES we still didn't get ++ * MLXSW_CMD_QUERY_RESOURCES_TABLE_END_ID, something went wrong in the FW. ++ */ ++ return -EIO; ++} ++EXPORT_SYMBOL(mlxsw_core_resources_query); ++ + static int __init mlxsw_core_module_init(void) + { + int err; +@@ -1801,21 +1820,22 @@ static int __init mlxsw_core_module_init(void) + mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, WQ_MEM_RECLAIM, 0); + if (!mlxsw_wq) + return -ENOMEM; +- mlxsw_core_dbg_root = debugfs_create_dir(mlxsw_core_driver_name, NULL); +- if (!mlxsw_core_dbg_root) { ++ mlxsw_owq = alloc_ordered_workqueue("%s_ordered", WQ_MEM_RECLAIM, ++ mlxsw_core_driver_name); ++ if (!mlxsw_owq) { + err = -ENOMEM; +- goto err_debugfs_create_dir; ++ goto err_alloc_ordered_workqueue; + } + return 0; + +-err_debugfs_create_dir: ++err_alloc_ordered_workqueue: + destroy_workqueue(mlxsw_wq); + return err; + } + + static void __exit mlxsw_core_module_exit(void) + { +- debugfs_remove_recursive(mlxsw_core_dbg_root); ++ destroy_workqueue(mlxsw_owq); + destroy_workqueue(mlxsw_wq); + } + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h +index db27dd0..76e8fd7 100644 +---
a/drivers/net/ethernet/mellanox/mlxsw/core.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/core.h +@@ -1,38 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/core.h +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2015 Jiri Pirko +- * Copyright (c) 2015 Ido Schimmel +- * Copyright (c) 2015 Elad Raz +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2015-2018 Mellanox Technologies. All rights reserved */ + + #ifndef _MLXSW_CORE_H + #define _MLXSW_CORE_H +@@ -48,24 +15,17 @@ + + #include "trap.h" + #include "reg.h" +- + #include "cmd.h" +- +-#define MLXSW_MODULE_ALIAS_PREFIX "mlxsw-driver-" +-#define MODULE_MLXSW_DRIVER_ALIAS(kind) \ +- MODULE_ALIAS(MLXSW_MODULE_ALIAS_PREFIX kind) +- +-#define MLXSW_DEVICE_KIND_SWITCHX2 "switchx2" +-#define MLXSW_DEVICE_KIND_SPECTRUM "spectrum" ++#include "resources.h" + + struct mlxsw_core; ++struct mlxsw_core_port; + struct mlxsw_driver; + struct mlxsw_bus; + struct mlxsw_bus_info; + + unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core); +-void mlxsw_core_max_ports_set(struct mlxsw_core *mlxsw_core, +- unsigned int max_ports); ++ + void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core); + + int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver); +@@ -73,8 +33,9 @@ void mlxsw_core_driver_unregister(struct mlxsw_driver *mlxsw_driver); + + int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + const struct mlxsw_bus *mlxsw_bus, +- void *bus_priv); +-void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core); ++ void *bus_priv, bool reload, ++ struct devlink *devlink); ++void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, bool reload); + + struct mlxsw_tx_info { + u8 
local_port; +@@ -99,6 +60,50 @@ struct mlxsw_event_listener { + enum mlxsw_event_trap_id trap_id; + }; + ++struct mlxsw_listener { ++ u16 trap_id; ++ union { ++ struct mlxsw_rx_listener rx_listener; ++ struct mlxsw_event_listener event_listener; ++ } u; ++ enum mlxsw_reg_hpkt_action action; ++ enum mlxsw_reg_hpkt_action unreg_action; ++ u8 trap_group; ++ bool is_ctrl; /* should go via control buffer or not */ ++ bool is_event; ++}; ++ ++#define MLXSW_RXL(_func, _trap_id, _action, _is_ctrl, _trap_group, \ ++ _unreg_action) \ ++ { \ ++ .trap_id = MLXSW_TRAP_ID_##_trap_id, \ ++ .u.rx_listener = \ ++ { \ ++ .func = _func, \ ++ .local_port = MLXSW_PORT_DONT_CARE, \ ++ .trap_id = MLXSW_TRAP_ID_##_trap_id, \ ++ }, \ ++ .action = MLXSW_REG_HPKT_ACTION_##_action, \ ++ .unreg_action = MLXSW_REG_HPKT_ACTION_##_unreg_action, \ ++ .trap_group = MLXSW_REG_HTGT_TRAP_GROUP_##_trap_group, \ ++ .is_ctrl = _is_ctrl, \ ++ .is_event = false, \ ++ } ++ ++#define MLXSW_EVENTL(_func, _trap_id, _trap_group) \ ++ { \ ++ .trap_id = MLXSW_TRAP_ID_##_trap_id, \ ++ .u.event_listener = \ ++ { \ ++ .func = _func, \ ++ .trap_id = MLXSW_TRAP_ID_##_trap_id, \ ++ }, \ ++ .action = MLXSW_REG_HPKT_ACTION_TRAP_TO_CPU, \ ++ .trap_group = MLXSW_REG_HTGT_TRAP_GROUP_##_trap_group, \ ++ .is_ctrl = false, \ ++ .is_event = true, \ ++ } ++ + int mlxsw_core_rx_listener_register(struct mlxsw_core *mlxsw_core, + const struct mlxsw_rx_listener *rxl, + void *priv); +@@ -113,6 +118,13 @@ void mlxsw_core_event_listener_unregister(struct mlxsw_core *mlxsw_core, + const struct mlxsw_event_listener *el, + void *priv); + ++int mlxsw_core_trap_register(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, ++ void *priv); ++void mlxsw_core_trap_unregister(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_listener *listener, ++ void *priv); ++ + typedef void mlxsw_reg_trans_cb_t(struct mlxsw_core *mlxsw_core, char *payload, + size_t payload_len, unsigned long cb_priv); + +@@ -151,27 +163,27 @@ u8 
mlxsw_core_lag_mapping_get(struct mlxsw_core *mlxsw_core, + void mlxsw_core_lag_mapping_clear(struct mlxsw_core *mlxsw_core, + u16 lag_id, u8 local_port); + +-struct mlxsw_core_port { +- struct devlink_port devlink_port; +-}; +- +-static inline void * +-mlxsw_core_port_driver_priv(struct mlxsw_core_port *mlxsw_core_port) +-{ +- /* mlxsw_core_port is ensured to always be the first field in driver +- * port structure. +- */ +- return mlxsw_core_port; +-} +- ++void *mlxsw_core_port_driver_priv(struct mlxsw_core_port *mlxsw_core_port); ++int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core, u8 local_port); ++void mlxsw_core_port_fini(struct mlxsw_core *mlxsw_core, u8 local_port); ++void mlxsw_core_port_eth_set(struct mlxsw_core *mlxsw_core, u8 local_port, ++ void *port_driver_priv, struct net_device *dev, ++ u32 port_number, bool split, ++ u32 split_port_subnumber); ++void mlxsw_core_port_ib_set(struct mlxsw_core *mlxsw_core, u8 local_port, ++ void *port_driver_priv); ++void mlxsw_core_port_clear(struct mlxsw_core *mlxsw_core, u8 local_port, ++ void *port_driver_priv); ++enum devlink_port_type mlxsw_core_port_type_get(struct mlxsw_core *mlxsw_core, ++ u8 local_port); + int mlxsw_core_port_get_phys_port_name(struct mlxsw_core *mlxsw_core, + u8 local_port, char *name, size_t len); +-int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core, +- struct mlxsw_core_port *mlxsw_core_port, u8 local_port, +- struct net_device *dev, bool split, u32 split_group); +-void mlxsw_core_port_fini(struct mlxsw_core_port *mlxsw_core_port); + + int mlxsw_core_schedule_dw(struct delayed_work *dwork, unsigned long delay); ++bool mlxsw_core_schedule_work(struct work_struct *work); ++void mlxsw_core_flush_owq(void); ++int mlxsw_core_resources_query(struct mlxsw_core *mlxsw_core, char *mbox, ++ struct mlxsw_res *res); + + #define MLXSW_CONFIG_PROFILE_SWID_COUNT 8 + +@@ -195,8 +207,7 @@ struct mlxsw_config_profile { + used_max_pkey:1, + used_ar_sec:1, + used_adaptive_routing_group_cap:1, +- 
used_kvd_split_data:1; /* indicate for the kvd's values */ +- ++ used_kvd_sizes:1; + u8 max_vepa_channels; + u16 max_mid; + u16 max_pgt; +@@ -216,24 +227,21 @@ struct mlxsw_config_profile { + u16 adaptive_routing_group_cap; + u8 arn; + u32 kvd_linear_size; +- u16 kvd_hash_granularity; + u8 kvd_hash_single_parts; + u8 kvd_hash_double_parts; +- u8 resource_query_enable; + struct mlxsw_swid_config swid_config[MLXSW_CONFIG_PROFILE_SWID_COUNT]; + }; + + struct mlxsw_driver { + struct list_head list; + const char *kind; +- struct module *owner; + size_t priv_size; + int (*init)(struct mlxsw_core *mlxsw_core, + const struct mlxsw_bus_info *mlxsw_bus_info); + void (*fini)(struct mlxsw_core *mlxsw_core); +- int (*port_split)(struct mlxsw_core *mlxsw_core, u8 local_port, +- unsigned int count); +- int (*port_unsplit)(struct mlxsw_core *mlxsw_core, u8 local_port); ++ int (*basic_trap_groups_set)(struct mlxsw_core *mlxsw_core); ++ int (*port_type_set)(struct mlxsw_core *mlxsw_core, u8 local_port, ++ enum devlink_port_type new_type); + int (*sb_pool_get)(struct mlxsw_core *mlxsw_core, + unsigned int sb_index, u16 pool_index, + struct devlink_sb_pool_info *pool_info); +@@ -267,51 +275,41 @@ struct mlxsw_driver { + u32 *p_cur, u32 *p_max); + void (*txhdr_construct)(struct sk_buff *skb, + const struct mlxsw_tx_info *tx_info); ++ int (*resources_register)(struct mlxsw_core *mlxsw_core); ++ int (*kvd_sizes_get)(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_config_profile *profile, ++ u64 *p_single_size, u64 *p_double_size, ++ u64 *p_linear_size); + u8 txhdr_len; + const struct mlxsw_config_profile *profile; ++ bool res_query_enabled; + }; + +-struct mlxsw_resources { +- u32 max_span_valid:1, +- max_lag_valid:1, +- max_ports_in_lag_valid:1, +- kvd_size_valid:1, +- kvd_single_min_size_valid:1, +- kvd_double_min_size_valid:1, +- max_virtual_routers_valid:1, +- max_system_ports_valid:1, +- max_vlan_groups_valid:1, +- max_regions_valid:1, +- max_rif_valid:1; +- u8 max_span; +- u8 
max_lag; +- u8 max_ports_in_lag; +- u32 kvd_size; +- u32 kvd_single_min_size; +- u32 kvd_double_min_size; +- u16 max_virtual_routers; +- u16 max_system_ports; +- u16 max_vlan_groups; +- u16 max_regions; +- u16 max_rif; ++int mlxsw_core_kvd_sizes_get(struct mlxsw_core *mlxsw_core, ++ const struct mlxsw_config_profile *profile, ++ u64 *p_single_size, u64 *p_double_size, ++ u64 *p_linear_size); + +- /* Internal resources. +- * Determined by the SW, not queried from the HW. +- */ +- u32 kvd_single_size; +- u32 kvd_double_size; +- u32 kvd_linear_size; +-}; ++bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core, ++ enum mlxsw_res_id res_id); ++ ++#define MLXSW_CORE_RES_VALID(mlxsw_core, short_res_id) \ ++ mlxsw_core_res_valid(mlxsw_core, MLXSW_RES_ID_##short_res_id) + +-struct mlxsw_resources *mlxsw_core_resources_get(struct mlxsw_core *mlxsw_core); ++u64 mlxsw_core_res_get(struct mlxsw_core *mlxsw_core, ++ enum mlxsw_res_id res_id); ++ ++#define MLXSW_CORE_RES_GET(mlxsw_core, short_res_id) \ ++ mlxsw_core_res_get(mlxsw_core, MLXSW_RES_ID_##short_res_id) + + #define MLXSW_BUS_F_TXRX BIT(0) ++#define MLXSW_BUS_F_RESET BIT(1) + + struct mlxsw_bus { + const char *kind; + int (*init)(void *bus_priv, struct mlxsw_core *mlxsw_core, + const struct mlxsw_config_profile *profile, +- struct mlxsw_resources *resources); ++ struct mlxsw_res *res); + void (*fini)(void *bus_priv); + bool (*skb_transmit_busy)(void *bus_priv, + const struct mlxsw_tx_info *tx_info); +@@ -325,15 +323,18 @@ struct mlxsw_bus { + u8 features; + }; + ++struct mlxsw_fw_rev { ++ u16 major; ++ u16 minor; ++ u16 subminor; ++ u16 can_reset_minor; ++}; ++ + struct mlxsw_bus_info { + const char *device_kind; + const char *device_name; + struct device *dev; +- struct { +- u16 major; +- u16 minor; +- u16 subminor; +- } fw_rev; ++ struct mlxsw_fw_rev fw_rev; + u8 vsd[MLXSW_CMD_BOARDINFO_VSD_LEN]; + u8 psid[MLXSW_CMD_BOARDINFO_PSID_LEN]; + u8 low_frequency; +diff --git 
a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +index c047b61..444455c 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +@@ -873,7 +873,7 @@ mlxsw_thermal_module_tz_init(struct mlxsw_thermal_module *module_tz) + &mlxsw_thermal_module_params, + 0, 0); + if (IS_ERR(module_tz->tzdev)) { +- err = PTR_ERR(module_tz); ++ err = PTR_ERR(module_tz->tzdev); + return err; + } + +@@ -942,7 +942,7 @@ mlxsw_thermal_modules_init(struct device *dev, struct mlxsw_core *core, + if (!thermal->tz_module_arr) + return -ENOMEM; + +- for (i = 1; i <= module_count; i++) { ++ for (i = 1; i < module_count; i++) { + err = mlxsw_thermal_module_init(dev, core, thermal, i); + if (err) + goto err_unreg_tz_module_arr; +@@ -957,7 +957,7 @@ mlxsw_thermal_modules_init(struct device *dev, struct mlxsw_core *core, + return 0; + + err_unreg_tz_module_arr: +- for (i = thermal->tz_module_num - 1; i >= 0; i--) ++ for (i = module_count - 1; i >= 0; i--) + mlxsw_thermal_module_fini(&thermal->tz_module_arr[i]); + kfree(thermal->tz_module_arr); + return err; +@@ -966,9 +966,10 @@ mlxsw_thermal_modules_init(struct device *dev, struct mlxsw_core *core, + static void + mlxsw_thermal_modules_fini(struct mlxsw_thermal *thermal) + { ++ unsigned int module_count = mlxsw_core_max_ports(thermal->core); + int i; + +- for (i = thermal->tz_module_num - 1; i >= 0; i--) ++ for (i = module_count - 1; i >= 0; i--) + mlxsw_thermal_module_fini(&thermal->tz_module_arr[i]); + kfree(thermal->tz_module_arr); + } +diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +index f1b95d5..b1471c2 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +@@ -14,14 +14,17 @@ + #include "cmd.h" + #include "core.h" + #include "i2c.h" ++#include "resources.h" + + #define MLXSW_I2C_CIR2_BASE 0x72000 + #define 
MLXSW_I2C_CIR_STATUS_OFF 0x18 + #define MLXSW_I2C_CIR2_OFF_STATUS (MLXSW_I2C_CIR2_BASE + \ + MLXSW_I2C_CIR_STATUS_OFF) + #define MLXSW_I2C_OPMOD_SHIFT 12 ++#define MLXSW_I2C_EVENT_BIT_SHIFT 22 + #define MLXSW_I2C_GO_BIT_SHIFT 23 + #define MLXSW_I2C_CIR_CTRL_STATUS_SHIFT 24 ++#define MLXSW_I2C_EVENT_BIT BIT(MLXSW_I2C_EVENT_BIT_SHIFT) + #define MLXSW_I2C_GO_BIT BIT(MLXSW_I2C_GO_BIT_SHIFT) + #define MLXSW_I2C_GO_OPMODE BIT(MLXSW_I2C_OPMOD_SHIFT) + #define MLXSW_I2C_SET_IMM_CMD (MLXSW_I2C_GO_OPMODE | \ +@@ -33,6 +36,9 @@ + #define MLXSW_I2C_TLV_HDR_SIZE 0x10 + #define MLXSW_I2C_ADDR_WIDTH 4 + #define MLXSW_I2C_PUSH_CMD_SIZE (MLXSW_I2C_ADDR_WIDTH + 4) ++#define MLXSW_I2C_SET_EVENT_CMD (MLXSW_I2C_EVENT_BIT) ++#define MLXSW_I2C_PUSH_EVENT_CMD (MLXSW_I2C_GO_BIT | \ ++ MLXSW_I2C_SET_EVENT_CMD) + #define MLXSW_I2C_READ_SEMA_SIZE 4 + #define MLXSW_I2C_PREP_SIZE (MLXSW_I2C_ADDR_WIDTH + 28) + #define MLXSW_I2C_MBOX_SIZE 20 +@@ -44,6 +50,7 @@ + #define MLXSW_I2C_BLK_MAX 32 + #define MLXSW_I2C_RETRY 5 + #define MLXSW_I2C_TIMEOUT_MSECS 5000 ++#define MLXSW_I2C_MAX_DATA_SIZE 256 + + /** + * struct mlxsw_i2c - device private data: +@@ -167,7 +174,7 @@ static int mlxsw_i2c_wait_go_bit(struct i2c_client *client, + return err > 0 ? 0 : err; + } + +-/* Routine posts a command to ASIC though mail box. */ ++/* Routine posts a command to ASIC through mail box. */ + static int mlxsw_i2c_write_cmd(struct i2c_client *client, + struct mlxsw_i2c *mlxsw_i2c, + int immediate) +@@ -213,6 +220,66 @@ static int mlxsw_i2c_write_cmd(struct i2c_client *client, + return 0; + } + ++/* Routine posts initialization command to ASIC through mail box. 
*/ ++static int ++mlxsw_i2c_write_init_cmd(struct i2c_client *client, ++ struct mlxsw_i2c *mlxsw_i2c, u16 opcode, u32 in_mod) ++{ ++ __be32 push_cmd_buf[MLXSW_I2C_PUSH_CMD_SIZE / 4] = { ++ 0, cpu_to_be32(MLXSW_I2C_PUSH_EVENT_CMD) ++ }; ++ __be32 prep_cmd_buf[MLXSW_I2C_PREP_SIZE / 4] = { ++ 0, 0, 0, 0, 0, 0, ++ cpu_to_be32(client->adapter->nr & 0xffff), ++ cpu_to_be32(MLXSW_I2C_SET_EVENT_CMD) ++ }; ++ struct i2c_msg push_cmd = ++ MLXSW_I2C_WRITE_MSG(client, push_cmd_buf, ++ MLXSW_I2C_PUSH_CMD_SIZE); ++ struct i2c_msg prep_cmd = ++ MLXSW_I2C_WRITE_MSG(client, prep_cmd_buf, MLXSW_I2C_PREP_SIZE); ++ u8 status; ++ int err; ++ ++ push_cmd_buf[1] = cpu_to_be32(MLXSW_I2C_PUSH_EVENT_CMD | opcode); ++ prep_cmd_buf[3] = cpu_to_be32(in_mod); ++ prep_cmd_buf[7] = cpu_to_be32(MLXSW_I2C_GO_BIT | opcode); ++ mlxsw_i2c_set_slave_addr((u8 *)prep_cmd_buf, ++ MLXSW_I2C_CIR2_BASE); ++ mlxsw_i2c_set_slave_addr((u8 *)push_cmd_buf, ++ MLXSW_I2C_CIR2_OFF_STATUS); ++ ++ /* Prepare Command Interface Register for transaction */ ++ err = i2c_transfer(client->adapter, &prep_cmd, 1); ++ if (err < 0) ++ return err; ++ else if (err != 1) ++ return -EIO; ++ ++ /* Write out Command Interface Register GO bit to push transaction */ ++ err = i2c_transfer(client->adapter, &push_cmd, 1); ++ if (err < 0) ++ return err; ++ else if (err != 1) ++ return -EIO; ++ ++ /* Wait until go bit is cleared. */ ++ err = mlxsw_i2c_wait_go_bit(client, mlxsw_i2c, &status); ++ if (err) { ++ dev_err(&client->dev, "HW semaphore is not released"); ++ return err; ++ } ++ ++ /* Validate transaction completion status. */ ++ if (status) { ++ dev_err(&client->dev, "Bad transaction completion status %x\n", ++ status); ++ return -EIO; ++ } ++ ++ return 0; ++} ++ + /* Routine obtains mail box offsets from ASIC register space. 
*/ + static int mlxsw_i2c_get_mbox(struct i2c_client *client, + struct mlxsw_i2c *mlxsw_i2c) +@@ -310,8 +377,8 @@ mlxsw_i2c_write(struct device *dev, size_t in_mbox_size, u8 *in_mbox, int num, + + /* Routine executes I2C command. */ + static int +-mlxsw_i2c_cmd(struct device *dev, size_t in_mbox_size, u8 *in_mbox, +- size_t out_mbox_size, u8 *out_mbox, u8 *status) ++mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size, ++ u8 *in_mbox, size_t out_mbox_size, u8 *out_mbox, u8 *status) + { + struct i2c_client *client = to_i2c_client(dev); + struct mlxsw_i2c *mlxsw_i2c = i2c_get_clientdata(client); +@@ -326,21 +393,34 @@ mlxsw_i2c_cmd(struct device *dev, size_t in_mbox_size, u8 *in_mbox, + + WARN_ON(in_mbox_size % sizeof(u32) || out_mbox_size % sizeof(u32)); + +- reg_size = mlxsw_i2c_get_reg_size(in_mbox); +- num = reg_size / MLXSW_I2C_BLK_MAX; +- if (reg_size % MLXSW_I2C_BLK_MAX) +- num++; ++ if (in_mbox) { ++ reg_size = mlxsw_i2c_get_reg_size(in_mbox); ++ num = reg_size / MLXSW_I2C_BLK_MAX; ++ if (reg_size % MLXSW_I2C_BLK_MAX) ++ num++; + +- mutex_lock(&mlxsw_i2c->cmd.lock); ++ mutex_lock(&mlxsw_i2c->cmd.lock); + +- err = mlxsw_i2c_write(dev, reg_size, in_mbox, num, status); +- if (err) +- goto cmd_fail; ++ err = mlxsw_i2c_write(dev, reg_size, in_mbox, num, status); ++ if (err) ++ goto cmd_fail; ++ ++ /* No out mailbox is case of write transaction. */ ++ if (!out_mbox) { ++ mutex_unlock(&mlxsw_i2c->cmd.lock); ++ return 0; ++ } ++ } else { ++ /* No input mailbox is case of initialization query command. */ ++ reg_size = MLXSW_I2C_MAX_DATA_SIZE; ++ num = reg_size / MLXSW_I2C_BLK_MAX; + +- /* No out mailbox is case of write transaction. */ +- if (!out_mbox) { +- mutex_unlock(&mlxsw_i2c->cmd.lock); +- return 0; ++ mutex_lock(&mlxsw_i2c->cmd.lock); ++ ++ err = mlxsw_i2c_write_init_cmd(client, mlxsw_i2c, opcode, ++ in_mod); ++ if (err) ++ goto cmd_fail; + } + + /* Send read transaction to get output mailbox content. 
*/ +@@ -392,8 +472,8 @@ static int mlxsw_i2c_cmd_exec(void *bus_priv, u16 opcode, u8 opcode_mod, + { + struct mlxsw_i2c *mlxsw_i2c = bus_priv; + +- return mlxsw_i2c_cmd(mlxsw_i2c->dev, in_mbox_size, in_mbox, +- out_mbox_size, out_mbox, status); ++ return mlxsw_i2c_cmd(mlxsw_i2c->dev, opcode, in_mod, in_mbox_size, ++ in_mbox, out_mbox_size, out_mbox, status); + } + + static bool mlxsw_i2c_skb_transmit_busy(void *bus_priv, +@@ -411,13 +491,34 @@ static int mlxsw_i2c_skb_transmit(void *bus_priv, struct sk_buff *skb, + static int + mlxsw_i2c_init(void *bus_priv, struct mlxsw_core *mlxsw_core, + const struct mlxsw_config_profile *profile, +- struct mlxsw_resources *resources) ++ struct mlxsw_res *res) + { + struct mlxsw_i2c *mlxsw_i2c = bus_priv; ++ char *mbox; ++ int err; + + mlxsw_i2c->core = mlxsw_core; + +- return 0; ++ mbox = mlxsw_cmd_mbox_alloc(); ++ if (!mbox) ++ return -ENOMEM; ++ ++ err = mlxsw_cmd_query_fw(mlxsw_core, mbox); ++ if (err) ++ goto mbox_put; ++ ++ mlxsw_i2c->bus_info.fw_rev.major = ++ mlxsw_cmd_mbox_query_fw_fw_rev_major_get(mbox); ++ mlxsw_i2c->bus_info.fw_rev.minor = ++ mlxsw_cmd_mbox_query_fw_fw_rev_minor_get(mbox); ++ mlxsw_i2c->bus_info.fw_rev.subminor = ++ mlxsw_cmd_mbox_query_fw_fw_rev_subminor_get(mbox); ++ ++ err = mlxsw_core_resources_query(mlxsw_core, mbox, res); ++ ++mbox_put: ++ mlxsw_cmd_mbox_free(mbox); ++ return err; + } + + static void mlxsw_i2c_fini(void *bus_priv) +@@ -504,12 +605,18 @@ static int mlxsw_i2c_probe(struct i2c_client *client, + mlxsw_i2c->dev = &client->dev; + + err = mlxsw_core_bus_device_register(&mlxsw_i2c->bus_info, +- &mlxsw_i2c_bus, mlxsw_i2c); ++ &mlxsw_i2c_bus, mlxsw_i2c, false, ++ NULL); + if (err) { + dev_err(&client->dev, "Fail to register core bus\n"); + return err; + } + ++ dev_info(&client->dev, "Firmware revision: %d.%d.%d\n", ++ mlxsw_i2c->bus_info.fw_rev.major, ++ mlxsw_i2c->bus_info.fw_rev.minor, ++ mlxsw_i2c->bus_info.fw_rev.subminor); ++ + return 0; + + errout: +@@ -522,7 +629,7 @@ static int 
mlxsw_i2c_remove(struct i2c_client *client) + { + struct mlxsw_i2c *mlxsw_i2c = i2c_get_clientdata(client); + +- mlxsw_core_bus_device_unregister(mlxsw_i2c->core); ++ mlxsw_core_bus_device_unregister(mlxsw_i2c->core, false); + mutex_destroy(&mlxsw_i2c->cmd.lock); + + return 0; +diff --git a/drivers/net/ethernet/mellanox/mlxsw/item.h b/drivers/net/ethernet/mellanox/mlxsw/item.h +index a94dbda6..e92cadc 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/item.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/item.h +@@ -1,37 +1,5 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/item.h +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2015 Jiri Pirko +- * Copyright (c) 2015 Ido Schimmel +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. +- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2015-2018 Mellanox Technologies. All rights reserved */ + + #ifndef _MLXSW_ITEM_H + #define _MLXSW_ITEM_H +@@ -42,7 +10,7 @@ + + struct mlxsw_item { + unsigned short offset; /* bytes in container */ +- unsigned short step; /* step in bytes for indexed items */ ++ short step; /* step in bytes for indexed items */ + unsigned short in_step_offset; /* offset within one step */ + unsigned char shift; /* shift in bits */ + unsigned char element_size; /* size of element in bit array */ +@@ -55,7 +23,7 @@ struct mlxsw_item { + }; + + static inline unsigned int +-__mlxsw_item_offset(struct mlxsw_item *item, unsigned short index, ++__mlxsw_item_offset(const struct mlxsw_item *item, unsigned short index, + size_t typesize) + { + BUG_ON(index && !item->step); +@@ -72,7 +40,42 @@ __mlxsw_item_offset(struct mlxsw_item *item, unsigned short index, + typesize); + } + +-static inline u16 __mlxsw_item_get16(char *buf, struct mlxsw_item *item, ++static inline u8 __mlxsw_item_get8(const char *buf, ++ const struct mlxsw_item *item, ++ unsigned short index) ++{ ++ unsigned int offset = __mlxsw_item_offset(item, index, sizeof(u8)); ++ u8 *b = (u8 *) buf; ++ u8 tmp; ++ ++ tmp = b[offset]; ++ tmp >>= item->shift; ++ tmp &= GENMASK(item->size.bits - 1, 0); ++ if (item->no_real_shift) ++ tmp <<= item->shift; ++ return tmp; ++} ++ ++static inline void __mlxsw_item_set8(char 
*buf, const struct mlxsw_item *item, ++ unsigned short index, u8 val) ++{ ++ unsigned int offset = __mlxsw_item_offset(item, index, ++ sizeof(u8)); ++ u8 *b = (u8 *) buf; ++ u8 mask = GENMASK(item->size.bits - 1, 0) << item->shift; ++ u8 tmp; ++ ++ if (!item->no_real_shift) ++ val <<= item->shift; ++ val &= mask; ++ tmp = b[offset]; ++ tmp &= ~mask; ++ tmp |= val; ++ b[offset] = tmp; ++} ++ ++static inline u16 __mlxsw_item_get16(const char *buf, ++ const struct mlxsw_item *item, + unsigned short index) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(u16)); +@@ -87,7 +90,7 @@ static inline u16 __mlxsw_item_get16(char *buf, struct mlxsw_item *item, + return tmp; + } + +-static inline void __mlxsw_item_set16(char *buf, struct mlxsw_item *item, ++static inline void __mlxsw_item_set16(char *buf, const struct mlxsw_item *item, + unsigned short index, u16 val) + { + unsigned int offset = __mlxsw_item_offset(item, index, +@@ -105,7 +108,8 @@ static inline void __mlxsw_item_set16(char *buf, struct mlxsw_item *item, + b[offset] = cpu_to_be16(tmp); + } + +-static inline u32 __mlxsw_item_get32(char *buf, struct mlxsw_item *item, ++static inline u32 __mlxsw_item_get32(const char *buf, ++ const struct mlxsw_item *item, + unsigned short index) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(u32)); +@@ -120,7 +124,7 @@ static inline u32 __mlxsw_item_get32(char *buf, struct mlxsw_item *item, + return tmp; + } + +-static inline void __mlxsw_item_set32(char *buf, struct mlxsw_item *item, ++static inline void __mlxsw_item_set32(char *buf, const struct mlxsw_item *item, + unsigned short index, u32 val) + { + unsigned int offset = __mlxsw_item_offset(item, index, +@@ -138,7 +142,8 @@ static inline void __mlxsw_item_set32(char *buf, struct mlxsw_item *item, + b[offset] = cpu_to_be32(tmp); + } + +-static inline u64 __mlxsw_item_get64(char *buf, struct mlxsw_item *item, ++static inline u64 __mlxsw_item_get64(const char *buf, ++ const struct 
mlxsw_item *item, + unsigned short index) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(u64)); +@@ -153,7 +158,7 @@ static inline u64 __mlxsw_item_get64(char *buf, struct mlxsw_item *item, + return tmp; + } + +-static inline void __mlxsw_item_set64(char *buf, struct mlxsw_item *item, ++static inline void __mlxsw_item_set64(char *buf, const struct mlxsw_item *item, + unsigned short index, u64 val) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(u64)); +@@ -170,8 +175,8 @@ static inline void __mlxsw_item_set64(char *buf, struct mlxsw_item *item, + b[offset] = cpu_to_be64(tmp); + } + +-static inline void __mlxsw_item_memcpy_from(char *buf, char *dst, +- struct mlxsw_item *item, ++static inline void __mlxsw_item_memcpy_from(const char *buf, char *dst, ++ const struct mlxsw_item *item, + unsigned short index) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(char)); +@@ -180,7 +185,7 @@ static inline void __mlxsw_item_memcpy_from(char *buf, char *dst, + } + + static inline void __mlxsw_item_memcpy_to(char *buf, const char *src, +- struct mlxsw_item *item, ++ const struct mlxsw_item *item, + unsigned short index) + { + unsigned int offset = __mlxsw_item_offset(item, index, sizeof(char)); +@@ -188,8 +193,17 @@ static inline void __mlxsw_item_memcpy_to(char *buf, const char *src, + memcpy(&buf[offset], src, item->size.bytes); + } + ++static inline char *__mlxsw_item_data(char *buf, const struct mlxsw_item *item, ++ unsigned short index) ++{ ++ unsigned int offset = __mlxsw_item_offset(item, index, sizeof(char)); ++ ++ return &buf[offset]; ++} ++ + static inline u16 +-__mlxsw_item_bit_array_offset(struct mlxsw_item *item, u16 index, u8 *shift) ++__mlxsw_item_bit_array_offset(const struct mlxsw_item *item, ++ u16 index, u8 *shift) + { + u16 max_index, be_index; + u16 offset; /* byte offset inside the array */ +@@ -212,7 +226,8 @@ __mlxsw_item_bit_array_offset(struct mlxsw_item *item, u16 index, u8 *shift) + return 
item->offset + offset; + } + +-static inline u8 __mlxsw_item_bit_array_get(char *buf, struct mlxsw_item *item, ++static inline u8 __mlxsw_item_bit_array_get(const char *buf, ++ const struct mlxsw_item *item, + u16 index) + { + u8 shift, tmp; +@@ -224,7 +239,8 @@ static inline u8 __mlxsw_item_bit_array_get(char *buf, struct mlxsw_item *item, + return tmp; + } + +-static inline void __mlxsw_item_bit_array_set(char *buf, struct mlxsw_item *item, ++static inline void __mlxsw_item_bit_array_set(char *buf, ++ const struct mlxsw_item *item, + u16 index, u8 val) + { + u8 shift, tmp; +@@ -247,6 +263,47 @@ static inline void __mlxsw_item_bit_array_set(char *buf, struct mlxsw_item *item + * _iname: item name within the container + */ + ++#define MLXSW_ITEM8(_type, _cname, _iname, _offset, _shift, _sizebits) \ ++static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ ++ .offset = _offset, \ ++ .shift = _shift, \ ++ .size = {.bits = _sizebits,}, \ ++ .name = #_type "_" #_cname "_" #_iname, \ ++}; \ ++static inline u8 mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf) \ ++{ \ ++ return __mlxsw_item_get8(buf, &__ITEM_NAME(_type, _cname, _iname), 0); \ ++} \ ++static inline void mlxsw_##_type##_##_cname##_##_iname##_set(char *buf, u8 val)\ ++{ \ ++ __mlxsw_item_set8(buf, &__ITEM_NAME(_type, _cname, _iname), 0, val); \ ++} ++ ++#define MLXSW_ITEM8_INDEXED(_type, _cname, _iname, _offset, _shift, _sizebits, \ ++ _step, _instepoffset, _norealshift) \ ++static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ ++ .offset = _offset, \ ++ .step = _step, \ ++ .in_step_offset = _instepoffset, \ ++ .shift = _shift, \ ++ .no_real_shift = _norealshift, \ ++ .size = {.bits = _sizebits,}, \ ++ .name = #_type "_" #_cname "_" #_iname, \ ++}; \ ++static inline u8 \ ++mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf, unsigned short index)\ ++{ \ ++ return __mlxsw_item_get8(buf, &__ITEM_NAME(_type, _cname, _iname), \ ++ index); \ ++} \ ++static inline void \ 
++mlxsw_##_type##_##_cname##_##_iname##_set(char *buf, unsigned short index, \ ++ u8 val) \ ++{ \ ++ __mlxsw_item_set8(buf, &__ITEM_NAME(_type, _cname, _iname), \ ++ index, val); \ ++} ++ + #define MLXSW_ITEM16(_type, _cname, _iname, _offset, _shift, _sizebits) \ + static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .offset = _offset, \ +@@ -254,7 +311,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .size = {.bits = _sizebits,}, \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ +-static inline u16 mlxsw_##_type##_##_cname##_##_iname##_get(char *buf) \ ++static inline u16 mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf) \ + { \ + return __mlxsw_item_get16(buf, &__ITEM_NAME(_type, _cname, _iname), 0); \ + } \ +@@ -275,7 +332,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline u16 \ +-mlxsw_##_type##_##_cname##_##_iname##_get(char *buf, unsigned short index) \ ++mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf, unsigned short index)\ + { \ + return __mlxsw_item_get16(buf, &__ITEM_NAME(_type, _cname, _iname), \ + index); \ +@@ -295,7 +352,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .size = {.bits = _sizebits,}, \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ +-static inline u32 mlxsw_##_type##_##_cname##_##_iname##_get(char *buf) \ ++static inline u32 mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf) \ + { \ + return __mlxsw_item_get32(buf, &__ITEM_NAME(_type, _cname, _iname), 0); \ + } \ +@@ -316,7 +373,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline u32 \ +-mlxsw_##_type##_##_cname##_##_iname##_get(char *buf, unsigned short index) \ ++mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf, unsigned short index)\ + { \ + return __mlxsw_item_get32(buf, &__ITEM_NAME(_type, _cname, _iname), 
\ + index); \ +@@ -336,7 +393,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .size = {.bits = _sizebits,}, \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ +-static inline u64 mlxsw_##_type##_##_cname##_##_iname##_get(char *buf) \ ++static inline u64 mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf) \ + { \ + return __mlxsw_item_get64(buf, &__ITEM_NAME(_type, _cname, _iname), 0); \ + } \ +@@ -357,7 +414,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline u64 \ +-mlxsw_##_type##_##_cname##_##_iname##_get(char *buf, unsigned short index) \ ++mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf, unsigned short index)\ + { \ + return __mlxsw_item_get64(buf, &__ITEM_NAME(_type, _cname, _iname), \ + index); \ +@@ -377,7 +434,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline void \ +-mlxsw_##_type##_##_cname##_##_iname##_memcpy_from(char *buf, char *dst) \ ++mlxsw_##_type##_##_cname##_##_iname##_memcpy_from(const char *buf, char *dst) \ + { \ + __mlxsw_item_memcpy_from(buf, dst, \ + &__ITEM_NAME(_type, _cname, _iname), 0); \ +@@ -387,6 +444,11 @@ mlxsw_##_type##_##_cname##_##_iname##_memcpy_to(char *buf, const char *src) \ + { \ + __mlxsw_item_memcpy_to(buf, src, \ + &__ITEM_NAME(_type, _cname, _iname), 0); \ ++} \ ++static inline char * \ ++mlxsw_##_type##_##_cname##_##_iname##_data(char *buf) \ ++{ \ ++ return __mlxsw_item_data(buf, &__ITEM_NAME(_type, _cname, _iname), 0); \ + } + + #define MLXSW_ITEM_BUF_INDEXED(_type, _cname, _iname, _offset, _sizebytes, \ +@@ -399,7 +461,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline void \ +-mlxsw_##_type##_##_cname##_##_iname##_memcpy_from(char *buf, \ ++mlxsw_##_type##_##_cname##_##_iname##_memcpy_from(const char *buf, \ + 
unsigned short index, \ + char *dst) \ + { \ +@@ -413,6 +475,12 @@ mlxsw_##_type##_##_cname##_##_iname##_memcpy_to(char *buf, \ + { \ + __mlxsw_item_memcpy_to(buf, src, \ + &__ITEM_NAME(_type, _cname, _iname), index); \ ++} \ ++static inline char * \ ++mlxsw_##_type##_##_cname##_##_iname##_data(char *buf, unsigned short index) \ ++{ \ ++ return __mlxsw_item_data(buf, \ ++ &__ITEM_NAME(_type, _cname, _iname), index); \ + } + + #define MLXSW_ITEM_BIT_ARRAY(_type, _cname, _iname, _offset, _sizebytes, \ +@@ -424,7 +492,7 @@ static struct mlxsw_item __ITEM_NAME(_type, _cname, _iname) = { \ + .name = #_type "_" #_cname "_" #_iname, \ + }; \ + static inline u8 \ +-mlxsw_##_type##_##_cname##_##_iname##_get(char *buf, u16 index) \ ++mlxsw_##_type##_##_cname##_##_iname##_get(const char *buf, u16 index) \ + { \ + return __mlxsw_item_bit_array_get(buf, \ + &__ITEM_NAME(_type, _cname, _iname), \ +diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c +index c47949c..9312f42 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + #include + + #include "core.h" +@@ -19,16 +20,15 @@ static const char mlxsw_m_driver_name[] = "mlxsw_minimal"; + struct mlxsw_m_port; + + struct mlxsw_m { +- struct mlxsw_m_port **modules; ++ struct mlxsw_m_port **ports; + int *module_to_port; + struct mlxsw_core *core; + const struct mlxsw_bus_info *bus_info; + u8 base_mac[ETH_ALEN]; +- u8 max_modules; ++ u8 max_ports; + }; + + struct mlxsw_m_port { +- struct mlxsw_core_port core_port; /* must be first */ + struct net_device *dev; + struct mlxsw_m *mlxsw_m; + u8 local_port; +@@ -114,6 +114,14 @@ mlxsw_m_port_dev_addr_get(struct mlxsw_m_port *mlxsw_m_port) + return 0; + } + ++static void mlxsw_m_port_switchdev_init(struct mlxsw_m_port *mlxsw_m_port) ++{ ++} ++ ++static void mlxsw_m_port_switchdev_fini(struct mlxsw_m_port 
*mlxsw_m_port) ++{ ++} ++ + static int + mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) + { +@@ -121,6 +129,13 @@ mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) + struct net_device *dev; + int err; + ++ err = mlxsw_core_port_init(mlxsw_m->core, local_port); ++ if (err) { ++ dev_err(mlxsw_m->bus_info->dev, "Port %d: Failed to init core port\n", ++ local_port); ++ return err; ++ } ++ + dev = alloc_etherdev(sizeof(struct mlxsw_m_port)); + if (!dev) { + err = -ENOMEM; +@@ -134,20 +149,9 @@ mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) + mlxsw_m_port->local_port = local_port; + mlxsw_m_port->module = module; + +- mlxsw_m->modules[local_port] = mlxsw_m_port; +- + dev->netdev_ops = &mlxsw_m_port_netdev_ops; + dev->ethtool_ops = &mlxsw_m_port_ethtool_ops; + +- err = mlxsw_core_port_init(mlxsw_m->core, +- &mlxsw_m_port->core_port, local_port, +- dev, false, module); +- if (err) { +- dev_err(mlxsw_m->bus_info->dev, "Port %d: Failed to init core port\n", +- local_port); +- goto err_alloc_etherdev; +- } +- + err = mlxsw_m_port_dev_addr_get(mlxsw_m_port); + if (err) { + dev_err(mlxsw_m->bus_info->dev, "Port %d: Unable to get port mac address\n", +@@ -156,7 +160,8 @@ mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) + } + + netif_carrier_off(dev); +- ++ mlxsw_m_port_switchdev_init(mlxsw_m_port); ++ mlxsw_m->ports[local_port] = mlxsw_m_port; + err = register_netdev(dev); + if (err) { + dev_err(mlxsw_m->bus_info->dev, "Port %d: Failed to register netdev\n", +@@ -167,38 +172,29 @@ mlxsw_m_port_create(struct mlxsw_m *mlxsw_m, u8 local_port, u8 module) + return 0; + + err_register_netdev: ++ mlxsw_m->ports[local_port] = NULL; ++ mlxsw_m_port_switchdev_fini(mlxsw_m_port); + free_netdev(dev); + err_dev_addr_get: + err_alloc_etherdev: +- mlxsw_m->modules[local_port] = NULL; ++ mlxsw_core_port_fini(mlxsw_m->core, local_port); + return err; + } + + static void mlxsw_m_port_remove(struct mlxsw_m 
*mlxsw_m, u8 local_port) + { +- struct mlxsw_m_port *mlxsw_m_port = mlxsw_m->modules[local_port]; ++ struct mlxsw_m_port *mlxsw_m_port = mlxsw_m->ports[local_port]; + ++ mlxsw_core_port_clear(mlxsw_m->core, local_port, mlxsw_m); + unregister_netdev(mlxsw_m_port->dev); /* This calls ndo_stop */ ++ mlxsw_m->ports[local_port] = NULL; ++ mlxsw_m_port_switchdev_fini(mlxsw_m_port); + free_netdev(mlxsw_m_port->dev); +- mlxsw_core_port_fini(&mlxsw_m_port->core_port); ++ mlxsw_core_port_fini(mlxsw_m->core, local_port); + } + +-static void mlxsw_m_ports_remove(struct mlxsw_m *mlxsw_m) +-{ +- int i; +- +- for (i = 0; i < mlxsw_m->max_modules; i++) { +- if (mlxsw_m->module_to_port[i] > 0) +- mlxsw_m_port_remove(mlxsw_m, +- mlxsw_m->module_to_port[i]); +- } +- +- kfree(mlxsw_m->module_to_port); +- kfree(mlxsw_m->modules); +-} +- +-static int mlxsw_m_port_mapping_create(struct mlxsw_m *mlxsw_m, u8 local_port, +- u8 *last_module) ++static int mlxsw_m_port_module_map(struct mlxsw_m *mlxsw_m, u8 local_port, ++ u8 *last_module) + { + u8 module, width; + int err; +@@ -215,11 +211,34 @@ static int mlxsw_m_port_mapping_create(struct mlxsw_m *mlxsw_m, u8 local_port, + if (module == *last_module) + return 0; + *last_module = module; +- mlxsw_m->module_to_port[module] = ++mlxsw_m->max_modules; ++ mlxsw_m->module_to_port[module] = ++mlxsw_m->max_ports; + + return 0; + } + ++static int mlxsw_m_port_module_unmap(struct mlxsw_m *mlxsw_m, u8 module) ++{ ++ mlxsw_m->module_to_port[module] = -1; ++ ++ return 0; ++} ++ ++static void mlxsw_m_ports_remove(struct mlxsw_m *mlxsw_m) ++{ ++ int i; ++ ++ for (i = 0; i < mlxsw_m->max_ports; i++) { ++ if (mlxsw_m->module_to_port[i] > 0) { ++ mlxsw_m_port_remove(mlxsw_m, ++ mlxsw_m->module_to_port[i]); ++ mlxsw_m_port_module_unmap(mlxsw_m, i); ++ } ++ } ++ ++ kfree(mlxsw_m->module_to_port); ++ kfree(mlxsw_m->ports); ++} ++ + static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m) + { + unsigned int max_port = mlxsw_core_max_ports(mlxsw_m->core); +@@ 
-227,9 +246,9 @@ static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m) + int i; + int err; + +- mlxsw_m->modules = kcalloc(max_port, sizeof(*mlxsw_m->modules), +- GFP_KERNEL); +- if (!mlxsw_m->modules) ++ mlxsw_m->ports = kcalloc(max_port, sizeof(*mlxsw_m->ports), ++ GFP_KERNEL); ++ if (!mlxsw_m->ports) + return -ENOMEM; + + mlxsw_m->module_to_port = kmalloc_array(max_port, sizeof(int), +@@ -244,14 +263,14 @@ static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m) + mlxsw_m->module_to_port[i] = -1; + + /* Fill out module to local port mapping array */ +- for (i = 1; i <= max_port; i++) { +- err = mlxsw_m_port_mapping_create(mlxsw_m, i, &last_module); ++ for (i = 1; i < max_port; i++) { ++ err = mlxsw_m_port_module_map(mlxsw_m, i, &last_module); + if (err) + goto err_port_create; + } + + /* Create port objects for each valid entry */ +- for (i = 0; i < mlxsw_m->max_modules; i++) { ++ for (i = 0; i < mlxsw_m->max_ports; i++) { + if (mlxsw_m->module_to_port[i] > 0) { + err = mlxsw_m_port_create(mlxsw_m, + mlxsw_m->module_to_port[i], +@@ -279,7 +298,7 @@ static int mlxsw_m_init(struct mlxsw_core *mlxsw_core, + + err = mlxsw_m_ports_create(mlxsw_m); + if (err) { +- dev_err(mlxsw_m->bus_info->dev, "Failed to create modules\n"); ++ dev_err(mlxsw_m->bus_info->dev, "Failed to create ports\n"); + return err; + } + +@@ -296,11 +315,12 @@ static void mlxsw_m_fini(struct mlxsw_core *mlxsw_core) + static const struct mlxsw_config_profile mlxsw_m_config_profile; + + static struct mlxsw_driver mlxsw_m_driver = { +- .kind = mlxsw_m_driver_name, +- .priv_size = sizeof(struct mlxsw_m), +- .init = mlxsw_m_init, +- .fini = mlxsw_m_fini, +- .profile = &mlxsw_m_config_profile, ++ .kind = mlxsw_m_driver_name, ++ .priv_size = sizeof(struct mlxsw_m), ++ .init = mlxsw_m_init, ++ .fini = mlxsw_m_fini, ++ .profile = &mlxsw_m_config_profile, ++ .res_query_enabled = true, + }; + + static const struct i2c_device_id mlxsw_m_i2c_id[] = { +diff --git 
a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c +index 0781f16..bee2a08 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/qsfp_sysfs.c +@@ -298,7 +298,6 @@ int mlxsw_qsfp_init(struct mlxsw_core *mlxsw_core, + mlxsw_qsfp->bus_info = mlxsw_bus_info; + mlxsw_bus_info->dev->platform_data = mlxsw_qsfp; + +- mlxsw_core_max_ports_set(mlxsw_core, mlxsw_qsfp_num); + for (i = 1; i <= mlxsw_qsfp_num; i++) { + mlxsw_reg_pmlp_pack(pmlp_pl, i); + err = mlxsw_reg_query(mlxsw_qsfp->core, MLXSW_REG(pmlp), +diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h +index 20f01bb..4a12be2 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h +@@ -1,43 +1,10 @@ +-/* +- * drivers/net/ethernet/mellanox/mlxsw/reg.h +- * Copyright (c) 2015 Mellanox Technologies. All rights reserved. +- * Copyright (c) 2015-2016 Ido Schimmel +- * Copyright (c) 2015 Elad Raz +- * Copyright (c) 2015-2016 Jiri Pirko +- * Copyright (c) 2016 Yotam Gigi +- * +- * Redistribution and use in source and binary forms, with or without +- * modification, are permitted provided that the following conditions are met: +- * +- * 1. Redistributions of source code must retain the above copyright +- * notice, this list of conditions and the following disclaimer. +- * 2. Redistributions in binary form must reproduce the above copyright +- * notice, this list of conditions and the following disclaimer in the +- * documentation and/or other materials provided with the distribution. +- * 3. Neither the names of the copyright holders nor the names of its +- * contributors may be used to endorse or promote products derived from +- * this software without specific prior written permission. 
+- * +- * Alternatively, this software may be distributed under the terms of the +- * GNU General Public License ("GPL") version 2 as published by the Free +- * Software Foundation. +- * +- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +- * POSSIBILITY OF SUCH DAMAGE. +- */ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2015-2018 Mellanox Technologies. 
All rights reserved */ + + #ifndef _MLXSW_REG_H + #define _MLXSW_REG_H + ++#include + #include + #include + #include +@@ -48,12 +15,14 @@ + struct mlxsw_reg_info { + u16 id; + u16 len; /* In u8 */ ++ const char *name; + }; + + #define MLXSW_REG_DEFINE(_name, _id, _len) \ + static const struct mlxsw_reg_info mlxsw_reg_##_name = { \ + .id = _id, \ + .len = _len, \ ++ .name = #_name, \ + } + + #define MLXSW_REG(type) (&mlxsw_reg_##type) +@@ -67,10 +36,7 @@ static const struct mlxsw_reg_info mlxsw_reg_##_name = { \ + #define MLXSW_REG_SGCR_ID 0x2000 + #define MLXSW_REG_SGCR_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_sgcr = { +- .id = MLXSW_REG_SGCR_ID, +- .len = MLXSW_REG_SGCR_LEN, +-}; ++MLXSW_REG_DEFINE(sgcr, MLXSW_REG_SGCR_ID, MLXSW_REG_SGCR_LEN); + + /* reg_sgcr_llb + * Link Local Broadcast (Default=0) +@@ -93,10 +59,7 @@ static inline void mlxsw_reg_sgcr_pack(char *payload, bool llb) + #define MLXSW_REG_SPAD_ID 0x2002 + #define MLXSW_REG_SPAD_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_spad = { +- .id = MLXSW_REG_SPAD_ID, +- .len = MLXSW_REG_SPAD_LEN, +-}; ++MLXSW_REG_DEFINE(spad, MLXSW_REG_SPAD_ID, MLXSW_REG_SPAD_LEN); + + /* reg_spad_base_mac + * Base MAC address for the switch partitions. +@@ -115,10 +78,7 @@ MLXSW_ITEM_BUF(reg, spad, base_mac, 0x02, 6); + #define MLXSW_REG_SMID_ID 0x2007 + #define MLXSW_REG_SMID_LEN 0x240 + +-static const struct mlxsw_reg_info mlxsw_reg_smid = { +- .id = MLXSW_REG_SMID_ID, +- .len = MLXSW_REG_SMID_LEN, +-}; ++MLXSW_REG_DEFINE(smid, MLXSW_REG_SMID_ID, MLXSW_REG_SMID_LEN); + + /* reg_smid_swid + * Switch partition ID. 
+@@ -162,10 +122,7 @@ static inline void mlxsw_reg_smid_pack(char *payload, u16 mid, + #define MLXSW_REG_SSPR_ID 0x2008 + #define MLXSW_REG_SSPR_LEN 0x8 + +-static const struct mlxsw_reg_info mlxsw_reg_sspr = { +- .id = MLXSW_REG_SSPR_ID, +- .len = MLXSW_REG_SSPR_LEN, +-}; ++MLXSW_REG_DEFINE(sspr, MLXSW_REG_SSPR_ID, MLXSW_REG_SSPR_LEN); + + /* reg_sspr_m + * Master - if set, then the record describes the master system port. +@@ -221,10 +178,7 @@ static inline void mlxsw_reg_sspr_pack(char *payload, u8 local_port) + #define MLXSW_REG_SFDAT_ID 0x2009 + #define MLXSW_REG_SFDAT_LEN 0x8 + +-static const struct mlxsw_reg_info mlxsw_reg_sfdat = { +- .id = MLXSW_REG_SFDAT_ID, +- .len = MLXSW_REG_SFDAT_LEN, +-}; ++MLXSW_REG_DEFINE(sfdat, MLXSW_REG_SFDAT_ID, MLXSW_REG_SFDAT_LEN); + + /* reg_sfdat_swid + * Switch partition ID. +@@ -262,10 +216,7 @@ static inline void mlxsw_reg_sfdat_pack(char *payload, u32 age_time) + #define MLXSW_REG_SFD_LEN (MLXSW_REG_SFD_BASE_LEN + \ + MLXSW_REG_SFD_REC_LEN * MLXSW_REG_SFD_REC_MAX_COUNT) + +-static const struct mlxsw_reg_info mlxsw_reg_sfd = { +- .id = MLXSW_REG_SFD_ID, +- .len = MLXSW_REG_SFD_LEN, +-}; ++MLXSW_REG_DEFINE(sfd, MLXSW_REG_SFD_ID, MLXSW_REG_SFD_LEN); + + /* reg_sfd_swid + * Switch partition ID for queries. Reserved on Write. +@@ -344,6 +295,7 @@ enum mlxsw_reg_sfd_rec_type { + MLXSW_REG_SFD_REC_TYPE_UNICAST = 0x0, + MLXSW_REG_SFD_REC_TYPE_UNICAST_LAG = 0x1, + MLXSW_REG_SFD_REC_TYPE_MULTICAST = 0x2, ++ MLXSW_REG_SFD_REC_TYPE_UNICAST_TUNNEL = 0xC, + }; + + /* reg_sfd_rec_type +@@ -574,6 +526,61 @@ mlxsw_reg_sfd_mc_pack(char *payload, int rec_index, + mlxsw_reg_sfd_mc_mid_set(payload, rec_index, mid); + } + ++/* reg_sfd_uc_tunnel_uip_msb ++ * When protocol is IPv4, the most significant byte of the underlay IPv4 ++ * destination IP. ++ * When protocol is IPv6, reserved. 
++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, sfd, uc_tunnel_uip_msb, MLXSW_REG_SFD_BASE_LEN, 24, ++ 8, MLXSW_REG_SFD_REC_LEN, 0x08, false); ++ ++/* reg_sfd_uc_tunnel_fid ++ * Filtering ID. ++ * Access: Index ++ */ ++MLXSW_ITEM32_INDEXED(reg, sfd, uc_tunnel_fid, MLXSW_REG_SFD_BASE_LEN, 0, 16, ++ MLXSW_REG_SFD_REC_LEN, 0x08, false); ++ ++enum mlxsw_reg_sfd_uc_tunnel_protocol { ++ MLXSW_REG_SFD_UC_TUNNEL_PROTOCOL_IPV4, ++ MLXSW_REG_SFD_UC_TUNNEL_PROTOCOL_IPV6, ++}; ++ ++/* reg_sfd_uc_tunnel_protocol ++ * IP protocol. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, sfd, uc_tunnel_protocol, MLXSW_REG_SFD_BASE_LEN, 27, ++ 1, MLXSW_REG_SFD_REC_LEN, 0x0C, false); ++ ++/* reg_sfd_uc_tunnel_uip_lsb ++ * When protocol is IPv4, the least significant bytes of the underlay ++ * IPv4 destination IP. ++ * When protocol is IPv6, pointer to the underlay IPv6 destination IP ++ * which is configured by RIPS. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, sfd, uc_tunnel_uip_lsb, MLXSW_REG_SFD_BASE_LEN, 0, ++ 24, MLXSW_REG_SFD_REC_LEN, 0x0C, false); ++ ++static inline void ++mlxsw_reg_sfd_uc_tunnel_pack(char *payload, int rec_index, ++ enum mlxsw_reg_sfd_rec_policy policy, ++ const char *mac, u16 fid, ++ enum mlxsw_reg_sfd_rec_action action, u32 uip, ++ enum mlxsw_reg_sfd_uc_tunnel_protocol proto) ++{ ++ mlxsw_reg_sfd_rec_pack(payload, rec_index, ++ MLXSW_REG_SFD_REC_TYPE_UNICAST_TUNNEL, mac, ++ action); ++ mlxsw_reg_sfd_rec_policy_set(payload, rec_index, policy); ++ mlxsw_reg_sfd_uc_tunnel_uip_msb_set(payload, rec_index, uip >> 24); ++ mlxsw_reg_sfd_uc_tunnel_uip_lsb_set(payload, rec_index, uip); ++ mlxsw_reg_sfd_uc_tunnel_fid_set(payload, rec_index, fid); ++ mlxsw_reg_sfd_uc_tunnel_protocol_set(payload, rec_index, proto); ++} ++ + /* SFN - Switch FDB Notification Register + * ------------------------------------------- + * The switch provides notifications on newly learned FDB entries and +@@ -586,10 +593,7 @@ mlxsw_reg_sfd_mc_pack(char *payload, int rec_index, + #define 
MLXSW_REG_SFN_LEN (MLXSW_REG_SFN_BASE_LEN + \ + MLXSW_REG_SFN_REC_LEN * MLXSW_REG_SFN_REC_MAX_COUNT) + +-static const struct mlxsw_reg_info mlxsw_reg_sfn = { +- .id = MLXSW_REG_SFN_ID, +- .len = MLXSW_REG_SFN_LEN, +-}; ++MLXSW_REG_DEFINE(sfn, MLXSW_REG_SFN_ID, MLXSW_REG_SFN_LEN); + + /* reg_sfn_swid + * Switch partition ID. +@@ -707,10 +711,7 @@ static inline void mlxsw_reg_sfn_mac_lag_unpack(char *payload, int rec_index, + #define MLXSW_REG_SPMS_ID 0x200D + #define MLXSW_REG_SPMS_LEN 0x404 + +-static const struct mlxsw_reg_info mlxsw_reg_spms = { +- .id = MLXSW_REG_SPMS_ID, +- .len = MLXSW_REG_SPMS_LEN, +-}; ++MLXSW_REG_DEFINE(spms, MLXSW_REG_SPMS_ID, MLXSW_REG_SPMS_LEN); + + /* reg_spms_local_port + * Local port number. +@@ -754,10 +755,7 @@ static inline void mlxsw_reg_spms_vid_pack(char *payload, u16 vid, + #define MLXSW_REG_SPVID_ID 0x200E + #define MLXSW_REG_SPVID_LEN 0x08 + +-static const struct mlxsw_reg_info mlxsw_reg_spvid = { +- .id = MLXSW_REG_SPVID_ID, +- .len = MLXSW_REG_SPVID_LEN, +-}; ++MLXSW_REG_DEFINE(spvid, MLXSW_REG_SPVID_ID, MLXSW_REG_SPVID_LEN); + + /* reg_spvid_local_port + * Local port number. +@@ -798,10 +796,7 @@ static inline void mlxsw_reg_spvid_pack(char *payload, u8 local_port, u16 pvid) + #define MLXSW_REG_SPVM_LEN (MLXSW_REG_SPVM_BASE_LEN + \ + MLXSW_REG_SPVM_REC_LEN * MLXSW_REG_SPVM_REC_MAX_COUNT) + +-static const struct mlxsw_reg_info mlxsw_reg_spvm = { +- .id = MLXSW_REG_SPVM_ID, +- .len = MLXSW_REG_SPVM_LEN, +-}; ++MLXSW_REG_DEFINE(spvm, MLXSW_REG_SPVM_ID, MLXSW_REG_SPVM_LEN); + + /* reg_spvm_pt + * Priority tagged. 
If this bit is set, packets forwarded to the port with +@@ -897,10 +892,7 @@ static inline void mlxsw_reg_spvm_pack(char *payload, u8 local_port, + #define MLXSW_REG_SPAFT_ID 0x2010 + #define MLXSW_REG_SPAFT_LEN 0x08 + +-static const struct mlxsw_reg_info mlxsw_reg_spaft = { +- .id = MLXSW_REG_SPAFT_ID, +- .len = MLXSW_REG_SPAFT_LEN, +-}; ++MLXSW_REG_DEFINE(spaft, MLXSW_REG_SPAFT_ID, MLXSW_REG_SPAFT_LEN); + + /* reg_spaft_local_port + * Local port number. +@@ -953,10 +945,7 @@ static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port, + #define MLXSW_REG_SFGC_ID 0x2011 + #define MLXSW_REG_SFGC_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_sfgc = { +- .id = MLXSW_REG_SFGC_ID, +- .len = MLXSW_REG_SFGC_LEN, +-}; ++MLXSW_REG_DEFINE(sfgc, MLXSW_REG_SFGC_ID, MLXSW_REG_SFGC_LEN); + + enum mlxsw_reg_sfgc_type { + MLXSW_REG_SFGC_TYPE_BROADCAST, +@@ -992,7 +981,7 @@ enum mlxsw_flood_table_type { + MLXSW_REG_SFGC_TABLE_TYPE_VID = 1, + MLXSW_REG_SFGC_TABLE_TYPE_SINGLE = 2, + MLXSW_REG_SFGC_TABLE_TYPE_ANY = 0, +- MLXSW_REG_SFGC_TABLE_TYPE_FID_OFFEST = 3, ++ MLXSW_REG_SFGC_TABLE_TYPE_FID_OFFSET = 3, + MLXSW_REG_SFGC_TABLE_TYPE_FID = 4, + }; + +@@ -1051,10 +1040,7 @@ mlxsw_reg_sfgc_pack(char *payload, enum mlxsw_reg_sfgc_type type, + #define MLXSW_REG_SFTR_ID 0x2012 + #define MLXSW_REG_SFTR_LEN 0x420 + +-static const struct mlxsw_reg_info mlxsw_reg_sftr = { +- .id = MLXSW_REG_SFTR_ID, +- .len = MLXSW_REG_SFTR_LEN, +-}; ++MLXSW_REG_DEFINE(sftr, MLXSW_REG_SFTR_ID, MLXSW_REG_SFTR_LEN); + + /* reg_sftr_swid + * Switch partition ID with which to associate the port. +@@ -1124,10 +1110,7 @@ static inline void mlxsw_reg_sftr_pack(char *payload, + #define MLXSW_REG_SFDF_ID 0x2013 + #define MLXSW_REG_SFDF_LEN 0x14 + +-static const struct mlxsw_reg_info mlxsw_reg_sfdf = { +- .id = MLXSW_REG_SFDF_ID, +- .len = MLXSW_REG_SFDF_LEN, +-}; ++MLXSW_REG_DEFINE(sfdf, MLXSW_REG_SFDF_ID, MLXSW_REG_SFDF_LEN); + + /* reg_sfdf_swid + * Switch partition ID. 
+@@ -1142,6 +1125,8 @@ enum mlxsw_reg_sfdf_flush_type { + MLXSW_REG_SFDF_FLUSH_PER_PORT_AND_FID, + MLXSW_REG_SFDF_FLUSH_PER_LAG, + MLXSW_REG_SFDF_FLUSH_PER_LAG_AND_FID, ++ MLXSW_REG_SFDF_FLUSH_PER_NVE, ++ MLXSW_REG_SFDF_FLUSH_PER_NVE_AND_FID, + }; + + /* reg_sfdf_flush_type +@@ -1152,6 +1137,10 @@ enum mlxsw_reg_sfdf_flush_type { + * 3 - All FID dynamic entries pointing to port are flushed. + * 4 - All dynamic entries pointing to LAG are flushed. + * 5 - All FID dynamic entries pointing to LAG are flushed. ++ * 6 - All entries of type "Unicast Tunnel" or "Multicast Tunnel" are ++ * flushed. ++ * 7 - All entries of type "Unicast Tunnel" or "Multicast Tunnel" are ++ * flushed, per FID. + * Access: RW + */ + MLXSW_ITEM32(reg, sfdf, flush_type, 0x04, 28, 4); +@@ -1211,10 +1200,7 @@ MLXSW_ITEM32(reg, sfdf, lag_fid_lag_id, 0x08, 0, 10); + #define MLXSW_REG_SLDR_ID 0x2014 + #define MLXSW_REG_SLDR_LEN 0x0C /* counting in only one port in list */ + +-static const struct mlxsw_reg_info mlxsw_reg_sldr = { +- .id = MLXSW_REG_SLDR_ID, +- .len = MLXSW_REG_SLDR_LEN, +-}; ++MLXSW_REG_DEFINE(sldr, MLXSW_REG_SLDR_ID, MLXSW_REG_SLDR_LEN); + + enum mlxsw_reg_sldr_op { + /* Indicates a creation of a new LAG-ID, lag_id must be valid */ +@@ -1294,10 +1280,7 @@ static inline void mlxsw_reg_sldr_lag_remove_port_pack(char *payload, u8 lag_id, + #define MLXSW_REG_SLCR_ID 0x2015 + #define MLXSW_REG_SLCR_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_slcr = { +- .id = MLXSW_REG_SLCR_ID, +- .len = MLXSW_REG_SLCR_LEN, +-}; ++MLXSW_REG_DEFINE(slcr, MLXSW_REG_SLCR_ID, MLXSW_REG_SLCR_LEN); + + enum mlxsw_reg_slcr_pp { + /* Global Configuration (for all ports) */ +@@ -1394,12 +1377,19 @@ MLXSW_ITEM32(reg, slcr, type, 0x00, 0, 4); + */ + MLXSW_ITEM32(reg, slcr, lag_hash, 0x04, 0, 20); + +-static inline void mlxsw_reg_slcr_pack(char *payload, u16 lag_hash) ++/* reg_slcr_seed ++ * LAG seed value. The seed is the same for all ports. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, slcr, seed, 0x08, 0, 32); ++ ++static inline void mlxsw_reg_slcr_pack(char *payload, u16 lag_hash, u32 seed) + { + MLXSW_REG_ZERO(slcr, payload); + mlxsw_reg_slcr_pp_set(payload, MLXSW_REG_SLCR_PP_GLOBAL); + mlxsw_reg_slcr_type_set(payload, MLXSW_REG_SLCR_TYPE_CRC); + mlxsw_reg_slcr_lag_hash_set(payload, lag_hash); ++ mlxsw_reg_slcr_seed_set(payload, seed); + } + + /* SLCOR - Switch LAG Collector Register +@@ -1410,10 +1400,7 @@ static inline void mlxsw_reg_slcr_pack(char *payload, u16 lag_hash) + #define MLXSW_REG_SLCOR_ID 0x2016 + #define MLXSW_REG_SLCOR_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_slcor = { +- .id = MLXSW_REG_SLCOR_ID, +- .len = MLXSW_REG_SLCOR_LEN, +-}; ++MLXSW_REG_DEFINE(slcor, MLXSW_REG_SLCOR_ID, MLXSW_REG_SLCOR_LEN); + + enum mlxsw_reg_slcor_col { + /* Port is added with collector disabled */ +@@ -1496,10 +1483,7 @@ static inline void mlxsw_reg_slcor_col_disable_pack(char *payload, + #define MLXSW_REG_SPMLR_ID 0x2018 + #define MLXSW_REG_SPMLR_LEN 0x8 + +-static const struct mlxsw_reg_info mlxsw_reg_spmlr = { +- .id = MLXSW_REG_SPMLR_ID, +- .len = MLXSW_REG_SPMLR_LEN, +-}; ++MLXSW_REG_DEFINE(spmlr, MLXSW_REG_SPMLR_ID, MLXSW_REG_SPMLR_LEN); + + /* reg_spmlr_local_port + * Local port number. +@@ -1550,10 +1534,7 @@ static inline void mlxsw_reg_spmlr_pack(char *payload, u8 local_port, + #define MLXSW_REG_SVFA_ID 0x201C + #define MLXSW_REG_SVFA_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_svfa = { +- .id = MLXSW_REG_SVFA_ID, +- .len = MLXSW_REG_SVFA_LEN, +-}; ++MLXSW_REG_DEFINE(svfa, MLXSW_REG_SVFA_ID, MLXSW_REG_SVFA_LEN); + + /* reg_svfa_swid + * Switch partition ID. 
+@@ -1642,10 +1623,7 @@ static inline void mlxsw_reg_svfa_pack(char *payload, u8 local_port, + #define MLXSW_REG_SVPE_ID 0x201E + #define MLXSW_REG_SVPE_LEN 0x4 + +-static const struct mlxsw_reg_info mlxsw_reg_svpe = { +- .id = MLXSW_REG_SVPE_ID, +- .len = MLXSW_REG_SVPE_LEN, +-}; ++MLXSW_REG_DEFINE(svpe, MLXSW_REG_SVPE_ID, MLXSW_REG_SVPE_LEN); + + /* reg_svpe_local_port + * Local port number +@@ -1678,10 +1656,7 @@ static inline void mlxsw_reg_svpe_pack(char *payload, u8 local_port, + #define MLXSW_REG_SFMR_ID 0x201F + #define MLXSW_REG_SFMR_LEN 0x18 + +-static const struct mlxsw_reg_info mlxsw_reg_sfmr = { +- .id = MLXSW_REG_SFMR_ID, +- .len = MLXSW_REG_SFMR_LEN, +-}; ++MLXSW_REG_DEFINE(sfmr, MLXSW_REG_SFMR_ID, MLXSW_REG_SFMR_LEN); + + enum mlxsw_reg_sfmr_op { + MLXSW_REG_SFMR_OP_CREATE_FID, +@@ -1768,10 +1743,7 @@ static inline void mlxsw_reg_sfmr_pack(char *payload, + MLXSW_REG_SPVMLR_REC_LEN * \ + MLXSW_REG_SPVMLR_REC_MAX_COUNT) + +-static const struct mlxsw_reg_info mlxsw_reg_spvmlr = { +- .id = MLXSW_REG_SPVMLR_ID, +- .len = MLXSW_REG_SPVMLR_LEN, +-}; ++MLXSW_REG_DEFINE(spvmlr, MLXSW_REG_SPVMLR_ID, MLXSW_REG_SPVMLR_LEN); + + /* reg_spvmlr_local_port + * Local ingress port. +@@ -1821,3337 +1793,7279 @@ static inline void mlxsw_reg_spvmlr_pack(char *payload, u8 local_port, + } + } + +-/* QTCT - QoS Switch Traffic Class Table +- * ------------------------------------- +- * Configures the mapping between the packet switch priority and the +- * traffic class on the transmit port. 
++/* CWTP - Congestion WRED ECN TClass Profile ++ * ----------------------------------------- ++ * Configures the profiles for queues of egress port and traffic class + */ +-#define MLXSW_REG_QTCT_ID 0x400A +-#define MLXSW_REG_QTCT_LEN 0x08 ++#define MLXSW_REG_CWTP_ID 0x2802 ++#define MLXSW_REG_CWTP_BASE_LEN 0x28 ++#define MLXSW_REG_CWTP_PROFILE_DATA_REC_LEN 0x08 ++#define MLXSW_REG_CWTP_LEN 0x40 + +-static const struct mlxsw_reg_info mlxsw_reg_qtct = { +- .id = MLXSW_REG_QTCT_ID, +- .len = MLXSW_REG_QTCT_LEN, +-}; ++MLXSW_REG_DEFINE(cwtp, MLXSW_REG_CWTP_ID, MLXSW_REG_CWTP_LEN); + +-/* reg_qtct_local_port +- * Local port number. ++/* reg_cwtp_local_port ++ * Local port number ++ * Not supported for CPU port + * Access: Index +- * +- * Note: CPU port is not supported. + */ +-MLXSW_ITEM32(reg, qtct, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, cwtp, local_port, 0, 16, 8); + +-/* reg_qtct_sub_port +- * Virtual port within the physical port. +- * Should be set to 0 when virtual ports are not enabled on the port. ++/* reg_cwtp_traffic_class ++ * Traffic Class to configure + * Access: Index + */ +-MLXSW_ITEM32(reg, qtct, sub_port, 0x00, 8, 8); ++MLXSW_ITEM32(reg, cwtp, traffic_class, 32, 0, 8); + +-/* reg_qtct_switch_prio +- * Switch priority. +- * Access: Index ++/* reg_cwtp_profile_min ++ * Minimum Average Queue Size of the profile in cells. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, qtct, switch_prio, 0x00, 0, 4); ++MLXSW_ITEM32_INDEXED(reg, cwtp, profile_min, MLXSW_REG_CWTP_BASE_LEN, ++ 0, 20, MLXSW_REG_CWTP_PROFILE_DATA_REC_LEN, 0, false); + +-/* reg_qtct_tclass +- * Traffic class.
+- * Default values: +- * switch_prio 0 : tclass 1 +- * switch_prio 1 : tclass 0 +- * switch_prio i : tclass i, for i > 1 ++/* reg_cwtp_profile_percent ++ * Percentage of WRED and ECN marking for maximum Average Queue size ++ * Range is 0 to 100, units of integer percentage + * Access: RW + */ +-MLXSW_ITEM32(reg, qtct, tclass, 0x04, 0, 4); ++MLXSW_ITEM32_INDEXED(reg, cwtp, profile_percent, MLXSW_REG_CWTP_BASE_LEN, ++ 24, 7, MLXSW_REG_CWTP_PROFILE_DATA_REC_LEN, 4, false); + +-static inline void mlxsw_reg_qtct_pack(char *payload, u8 local_port, +- u8 switch_prio, u8 tclass) ++/* reg_cwtp_profile_max ++ * Maximum Average Queue size of the profile in cells ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, cwtp, profile_max, MLXSW_REG_CWTP_BASE_LEN, ++ 0, 20, MLXSW_REG_CWTP_PROFILE_DATA_REC_LEN, 4, false); ++ ++#define MLXSW_REG_CWTP_MIN_VALUE 64 ++#define MLXSW_REG_CWTP_MAX_PROFILE 2 ++#define MLXSW_REG_CWTP_DEFAULT_PROFILE 1 ++ ++static inline void mlxsw_reg_cwtp_pack(char *payload, u8 local_port, ++ u8 traffic_class) + { +- MLXSW_REG_ZERO(qtct, payload); +- mlxsw_reg_qtct_local_port_set(payload, local_port); +- mlxsw_reg_qtct_switch_prio_set(payload, switch_prio); +- mlxsw_reg_qtct_tclass_set(payload, tclass); ++ int i; ++ ++ MLXSW_REG_ZERO(cwtp, payload); ++ mlxsw_reg_cwtp_local_port_set(payload, local_port); ++ mlxsw_reg_cwtp_traffic_class_set(payload, traffic_class); ++ ++ for (i = 0; i <= MLXSW_REG_CWTP_MAX_PROFILE; i++) { ++ mlxsw_reg_cwtp_profile_min_set(payload, i, ++ MLXSW_REG_CWTP_MIN_VALUE); ++ mlxsw_reg_cwtp_profile_max_set(payload, i, ++ MLXSW_REG_CWTP_MIN_VALUE); ++ } + } + +-/* QEEC - QoS ETS Element Configuration Register +- * --------------------------------------------- +- * Configures the ETS elements. 
+- */ +-#define MLXSW_REG_QEEC_ID 0x400D +-#define MLXSW_REG_QEEC_LEN 0x1C ++#define MLXSW_REG_CWTP_PROFILE_TO_INDEX(profile) (profile - 1) + +-static const struct mlxsw_reg_info mlxsw_reg_qeec = { +- .id = MLXSW_REG_QEEC_ID, +- .len = MLXSW_REG_QEEC_LEN, +-}; ++static inline void ++mlxsw_reg_cwtp_profile_pack(char *payload, u8 profile, u32 min, u32 max, ++ u32 probability) ++{ ++ u8 index = MLXSW_REG_CWTP_PROFILE_TO_INDEX(profile); + +-/* reg_qeec_local_port +- * Local port number. +- * Access: Index +- * +- * Note: CPU port is supported. ++ mlxsw_reg_cwtp_profile_min_set(payload, index, min); ++ mlxsw_reg_cwtp_profile_max_set(payload, index, max); ++ mlxsw_reg_cwtp_profile_percent_set(payload, index, probability); ++} ++ ++/* CWTPM - Congestion WRED ECN TClass and Pool Mapping ++ * --------------------------------------------------- ++ * The CWTPM register maps each egress port and traffic class to profile num. + */ +-MLXSW_ITEM32(reg, qeec, local_port, 0x00, 16, 8); ++#define MLXSW_REG_CWTPM_ID 0x2803 ++#define MLXSW_REG_CWTPM_LEN 0x44 + +-enum mlxsw_reg_qeec_hr { +- MLXSW_REG_QEEC_HIERARCY_PORT, +- MLXSW_REG_QEEC_HIERARCY_GROUP, +- MLXSW_REG_QEEC_HIERARCY_SUBGROUP, +- MLXSW_REG_QEEC_HIERARCY_TC, +-}; ++MLXSW_REG_DEFINE(cwtpm, MLXSW_REG_CWTPM_ID, MLXSW_REG_CWTPM_LEN); + +-/* reg_qeec_element_hierarchy +- * 0 - Port +- * 1 - Group +- * 2 - Subgroup +- * 3 - Traffic Class ++/* reg_cwtpm_local_port ++ * Local port number ++ * Not supported for CPU port + * Access: Index + */ +-MLXSW_ITEM32(reg, qeec, element_hierarchy, 0x04, 16, 4); ++MLXSW_ITEM32(reg, cwtpm, local_port, 0, 16, 8); + +-/* reg_qeec_element_index +- * The index of the element in the hierarchy. ++/* reg_cwtpm_traffic_class ++ * Traffic Class to configure + * Access: Index + */ +-MLXSW_ITEM32(reg, qeec, element_index, 0x04, 0, 8); ++MLXSW_ITEM32(reg, cwtpm, traffic_class, 32, 0, 8); + +-/* reg_qeec_next_element_index +- * The index of the next (lower) element in the hierarchy. 
++/* reg_cwtpm_ew ++ * Control enablement of WRED for traffic class: ++ * 0 - Disable ++ * 1 - Enable + * Access: RW +- * +- * Note: Reserved for element_hierarchy 0. + */ +-MLXSW_ITEM32(reg, qeec, next_element_index, 0x08, 0, 8); +- +-enum { +- MLXSW_REG_QEEC_BYTES_MODE, +- MLXSW_REG_QEEC_PACKETS_MODE, +-}; ++MLXSW_ITEM32(reg, cwtpm, ew, 36, 1, 1); + +-/* reg_qeec_pb +- * Packets or bytes mode. +- * 0 - Bytes mode +- * 1 - Packets mode ++/* reg_cwtpm_ee ++ * Control enablement of ECN for traffic class: ++ * 0 - Disable ++ * 1 - Enable + * Access: RW +- * +- * Note: Used for max shaper configuration. For Spectrum, packets mode +- * is supported only for traffic classes of CPU port. + */ +-MLXSW_ITEM32(reg, qeec, pb, 0x0C, 28, 1); ++MLXSW_ITEM32(reg, cwtpm, ee, 36, 0, 1); + +-/* reg_qeec_mase +- * Max shaper configuration enable. Enables configuration of the max +- * shaper on this ETS element. +- * 0 - Disable +- * 1 - Enable ++/* reg_cwtpm_tcp_g ++ * TCP Green Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. + * Access: RW + */ +-MLXSW_ITEM32(reg, qeec, mase, 0x10, 31, 1); ++MLXSW_ITEM32(reg, cwtpm, tcp_g, 52, 0, 2); + +-/* A large max rate will disable the max shaper. */ +-#define MLXSW_REG_QEEC_MAS_DIS 200000000 /* Kbps */ ++/* reg_cwtpm_tcp_y ++ * TCP Yellow Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, cwtpm, tcp_y, 56, 16, 2); + +-/* reg_qeec_max_shaper_rate +- * Max shaper information rate. +- * For CPU port, can only be configured for port hierarchy. +- * When in bytes mode, value is specified in units of 1000bps. ++/* reg_cwtpm_tcp_r ++ * TCP Red Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. 
+ * Access: RW + */ +-MLXSW_ITEM32(reg, qeec, max_shaper_rate, 0x10, 0, 28); ++MLXSW_ITEM32(reg, cwtpm, tcp_r, 56, 0, 2); + +-/* reg_qeec_de +- * DWRR configuration enable. Enables configuration of the dwrr and +- * dwrr_weight. +- * 0 - Disable +- * 1 - Enable ++/* reg_cwtpm_ntcp_g ++ * Non-TCP Green Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. + * Access: RW + */ +-MLXSW_ITEM32(reg, qeec, de, 0x18, 31, 1); ++MLXSW_ITEM32(reg, cwtpm, ntcp_g, 60, 0, 2); + +-/* reg_qeec_dwrr +- * Transmission selection algorithm to use on the link going down from +- * the ETS element. +- * 0 - Strict priority +- * 1 - DWRR ++/* reg_cwtpm_ntcp_y ++ * Non-TCP Yellow Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. + * Access: RW + */ +-MLXSW_ITEM32(reg, qeec, dwrr, 0x18, 15, 1); ++MLXSW_ITEM32(reg, cwtpm, ntcp_y, 64, 16, 2); + +-/* reg_qeec_dwrr_weight +- * DWRR weight on the link going down from the ETS element. The +- * percentage of bandwidth guaranteed to an ETS element within +- * its hierarchy. The sum of all weights across all ETS elements +- * within one hierarchy should be equal to 100. Reserved when +- * transmission selection algorithm is strict priority. ++/* reg_cwtpm_ntcp_r ++ * Non-TCP Red Profile. ++ * Index of the profile within {port, traffic class} to use. ++ * 0 for disabling both WRED and ECN for this type of traffic. 
+ * Access: RW + */ +-MLXSW_ITEM32(reg, qeec, dwrr_weight, 0x18, 0, 8); ++MLXSW_ITEM32(reg, cwtpm, ntcp_r, 64, 0, 2); + +-static inline void mlxsw_reg_qeec_pack(char *payload, u8 local_port, +- enum mlxsw_reg_qeec_hr hr, u8 index, +- u8 next_index) ++#define MLXSW_REG_CWTPM_RESET_PROFILE 0 ++ ++static inline void mlxsw_reg_cwtpm_pack(char *payload, u8 local_port, ++ u8 traffic_class, u8 profile, ++ bool wred, bool ecn) + { +- MLXSW_REG_ZERO(qeec, payload); +- mlxsw_reg_qeec_local_port_set(payload, local_port); +- mlxsw_reg_qeec_element_hierarchy_set(payload, hr); +- mlxsw_reg_qeec_element_index_set(payload, index); +- mlxsw_reg_qeec_next_element_index_set(payload, next_index); ++ MLXSW_REG_ZERO(cwtpm, payload); ++ mlxsw_reg_cwtpm_local_port_set(payload, local_port); ++ mlxsw_reg_cwtpm_traffic_class_set(payload, traffic_class); ++ mlxsw_reg_cwtpm_ew_set(payload, wred); ++ mlxsw_reg_cwtpm_ee_set(payload, ecn); ++ mlxsw_reg_cwtpm_tcp_g_set(payload, profile); ++ mlxsw_reg_cwtpm_tcp_y_set(payload, profile); ++ mlxsw_reg_cwtpm_tcp_r_set(payload, profile); ++ mlxsw_reg_cwtpm_ntcp_g_set(payload, profile); ++ mlxsw_reg_cwtpm_ntcp_y_set(payload, profile); ++ mlxsw_reg_cwtpm_ntcp_r_set(payload, profile); + } + +-/* PMLP - Ports Module to Local Port Register +- * ------------------------------------------ +- * Configures the assignment of modules to local ports. ++/* PGCR - Policy-Engine General Configuration Register ++ * --------------------------------------------------- ++ * This register configures general Policy-Engine settings. + */ +-#define MLXSW_REG_PMLP_ID 0x5002 +-#define MLXSW_REG_PMLP_LEN 0x40 ++#define MLXSW_REG_PGCR_ID 0x3001 ++#define MLXSW_REG_PGCR_LEN 0x20 + +-static const struct mlxsw_reg_info mlxsw_reg_pmlp = { +- .id = MLXSW_REG_PMLP_ID, +- .len = MLXSW_REG_PMLP_LEN, +-}; ++MLXSW_REG_DEFINE(pgcr, MLXSW_REG_PGCR_ID, MLXSW_REG_PGCR_LEN); + +-/* reg_pmlp_rxtx +- * 0 - Tx value is used for both Tx and Rx. +- * 1 - Rx value is taken from a separte field. 
++/* reg_pgcr_default_action_pointer_base ++ * Default action pointer base. Each region has a default action pointer ++ * which is equal to default_action_pointer_base + region_id. + * Access: RW + */ +-MLXSW_ITEM32(reg, pmlp, rxtx, 0x00, 31, 1); ++MLXSW_ITEM32(reg, pgcr, default_action_pointer_base, 0x1C, 0, 24); + +-/* reg_pmlp_local_port +- * Local port number. ++static inline void mlxsw_reg_pgcr_pack(char *payload, u32 pointer_base) ++{ ++ MLXSW_REG_ZERO(pgcr, payload); ++ mlxsw_reg_pgcr_default_action_pointer_base_set(payload, pointer_base); ++} ++ ++/* PPBT - Policy-Engine Port Binding Table ++ * --------------------------------------- ++ * This register is used for configuration of the Port Binding Table. ++ */ ++#define MLXSW_REG_PPBT_ID 0x3002 ++#define MLXSW_REG_PPBT_LEN 0x14 ++ ++MLXSW_REG_DEFINE(ppbt, MLXSW_REG_PPBT_ID, MLXSW_REG_PPBT_LEN); ++ ++enum mlxsw_reg_pxbt_e { ++ MLXSW_REG_PXBT_E_IACL, ++ MLXSW_REG_PXBT_E_EACL, ++}; ++ ++/* reg_ppbt_e + * Access: Index + */ +-MLXSW_ITEM32(reg, pmlp, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, ppbt, e, 0x00, 31, 1); + +-/* reg_pmlp_width +- * 0 - Unmap local port. +- * 1 - Lane 0 is used. +- * 2 - Lanes 0 and 1 are used. +- * 4 - Lanes 0, 1, 2 and 3 are used. ++enum mlxsw_reg_pxbt_op { ++ MLXSW_REG_PXBT_OP_BIND, ++ MLXSW_REG_PXBT_OP_UNBIND, ++}; ++ ++/* reg_ppbt_op + * Access: RW + */ +-MLXSW_ITEM32(reg, pmlp, width, 0x00, 0, 8); ++MLXSW_ITEM32(reg, ppbt, op, 0x00, 28, 3); + +-/* reg_pmlp_module +- * Module number. +- * Access: RW ++/* reg_ppbt_local_port ++ * Local port. Not including CPU port. ++ * Access: Index + */ +-MLXSW_ITEM32_INDEXED(reg, pmlp, module, 0x04, 0, 8, 0x04, 0x00, false); ++MLXSW_ITEM32(reg, ppbt, local_port, 0x00, 16, 8); + +-/* reg_pmlp_tx_lane +- * Tx Lane. When rxtx field is cleared, this field is used for Rx as well. ++/* reg_ppbt_g ++ * group - When set, the binding is of an ACL group. When cleared, ++ * the binding is of an ACL. ++ * Must be set to 1 for Spectrum. 
+ * Access: RW + */ +-MLXSW_ITEM32_INDEXED(reg, pmlp, tx_lane, 0x04, 16, 2, 0x04, 0x00, false); ++MLXSW_ITEM32(reg, ppbt, g, 0x10, 31, 1); + +-/* reg_pmlp_rx_lane +- * Rx Lane. When rxtx field is cleared, this field is ignored and Rx lane is +- * equal to Tx lane. ++/* reg_ppbt_acl_info ++ * ACL/ACL group identifier. If the g bit is set, this field should hold ++ * the acl_group_id, else it should hold the acl_id. + * Access: RW + */ +-MLXSW_ITEM32_INDEXED(reg, pmlp, rx_lane, 0x04, 24, 2, 0x04, 0x00, false); ++MLXSW_ITEM32(reg, ppbt, acl_info, 0x10, 0, 16); + +-static inline void mlxsw_reg_pmlp_pack(char *payload, u8 local_port) ++static inline void mlxsw_reg_ppbt_pack(char *payload, enum mlxsw_reg_pxbt_e e, ++ enum mlxsw_reg_pxbt_op op, ++ u8 local_port, u16 acl_info) + { +- MLXSW_REG_ZERO(pmlp, payload); +- mlxsw_reg_pmlp_local_port_set(payload, local_port); ++ MLXSW_REG_ZERO(ppbt, payload); ++ mlxsw_reg_ppbt_e_set(payload, e); ++ mlxsw_reg_ppbt_op_set(payload, op); ++ mlxsw_reg_ppbt_local_port_set(payload, local_port); ++ mlxsw_reg_ppbt_g_set(payload, true); ++ mlxsw_reg_ppbt_acl_info_set(payload, acl_info); + } + +-/* PMTU - Port MTU Register +- * ------------------------ +- * Configures and reports the port MTU. ++/* PACL - Policy-Engine ACL Register ++ * --------------------------------- ++ * This register is used for configuration of the ACL. + */ +-#define MLXSW_REG_PMTU_ID 0x5003 +-#define MLXSW_REG_PMTU_LEN 0x10 ++#define MLXSW_REG_PACL_ID 0x3004 ++#define MLXSW_REG_PACL_LEN 0x70 + +-static const struct mlxsw_reg_info mlxsw_reg_pmtu = { +- .id = MLXSW_REG_PMTU_ID, +- .len = MLXSW_REG_PMTU_LEN, +-}; ++MLXSW_REG_DEFINE(pacl, MLXSW_REG_PACL_ID, MLXSW_REG_PACL_LEN); + +-/* reg_pmtu_local_port +- * Local port number. ++/* reg_pacl_v ++ * Valid. Setting the v bit makes the ACL valid. It should not be cleared ++ * while the ACL is bound to either a port, VLAN or ACL rule.
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pacl, v, 0x00, 24, 1); ++ ++/* reg_pacl_acl_id ++ * An identifier representing the ACL (managed by software) ++ * Range 0 .. cap_max_acl_regions - 1 + * Access: Index + */ +-MLXSW_ITEM32(reg, pmtu, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, pacl, acl_id, 0x08, 0, 16); + +-/* reg_pmtu_max_mtu +- * Maximum MTU. +- * When port type (e.g. Ethernet) is configured, the relevant MTU is +- * reported, otherwise the minimum between the max_mtu of the different +- * types is reported. +- * Access: RO +- */ +-MLXSW_ITEM32(reg, pmtu, max_mtu, 0x04, 16, 16); ++#define MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN 16 + +-/* reg_pmtu_admin_mtu +- * MTU value to set port to. Must be smaller or equal to max_mtu. +- * Note: If port type is Infiniband, then port must be disabled, when its +- * MTU is set. ++/* reg_pacl_tcam_region_info ++ * Opaque object that represents a TCAM region. ++ * Obtained through PTAR register. + * Access: RW + */ +-MLXSW_ITEM32(reg, pmtu, admin_mtu, 0x08, 16, 16); ++MLXSW_ITEM_BUF(reg, pacl, tcam_region_info, 0x30, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-/* reg_pmtu_oper_mtu +- * The actual MTU configured on the port. Packets exceeding this size +- * will be dropped. +- * Note: In Ethernet and FC oper_mtu == admin_mtu, however, in Infiniband +- * oper_mtu might be smaller than admin_mtu. +- * Access: RO ++static inline void mlxsw_reg_pacl_pack(char *payload, u16 acl_id, ++ bool valid, const char *tcam_region_info) ++{ ++ MLXSW_REG_ZERO(pacl, payload); ++ mlxsw_reg_pacl_acl_id_set(payload, acl_id); ++ mlxsw_reg_pacl_v_set(payload, valid); ++ mlxsw_reg_pacl_tcam_region_info_memcpy_to(payload, tcam_region_info); ++} ++ ++/* PAGT - Policy-Engine ACL Group Table ++ * ------------------------------------ ++ * This register is used for configuration of the ACL Group Table. 
++ */ ++#define MLXSW_REG_PAGT_ID 0x3005 ++#define MLXSW_REG_PAGT_BASE_LEN 0x30 ++#define MLXSW_REG_PAGT_ACL_LEN 4 ++#define MLXSW_REG_PAGT_ACL_MAX_NUM 16 ++#define MLXSW_REG_PAGT_LEN (MLXSW_REG_PAGT_BASE_LEN + \ ++ MLXSW_REG_PAGT_ACL_MAX_NUM * MLXSW_REG_PAGT_ACL_LEN) ++ ++MLXSW_REG_DEFINE(pagt, MLXSW_REG_PAGT_ID, MLXSW_REG_PAGT_LEN); ++ ++/* reg_pagt_size ++ * Number of ACLs in the group. ++ * Size 0 invalidates a group. ++ * Range 0 .. cap_max_acl_group_size (hard coded to 16 for now) ++ * Total number of ACLs in all groups must be lower or equal ++ * to cap_max_acl_tot_groups ++ * Note: a group which is bound must not be invalidated ++ * Access: Index + */ +-MLXSW_ITEM32(reg, pmtu, oper_mtu, 0x0C, 16, 16); ++MLXSW_ITEM32(reg, pagt, size, 0x00, 0, 8); + +-static inline void mlxsw_reg_pmtu_pack(char *payload, u8 local_port, +- u16 new_mtu) ++/* reg_pagt_acl_group_id ++ * An identifier (numbered from 0..cap_max_acl_groups-1) representing ++ * the ACL Group identifier (managed by software). ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pagt, acl_group_id, 0x08, 0, 16); ++ ++/* reg_pagt_acl_id ++ * ACL identifier ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pagt, acl_id, 0x30, 0, 16, 0x04, 0x00, false); ++ ++static inline void mlxsw_reg_pagt_pack(char *payload, u16 acl_group_id) + { +- MLXSW_REG_ZERO(pmtu, payload); +- mlxsw_reg_pmtu_local_port_set(payload, local_port); +- mlxsw_reg_pmtu_max_mtu_set(payload, 0); +- mlxsw_reg_pmtu_admin_mtu_set(payload, new_mtu); +- mlxsw_reg_pmtu_oper_mtu_set(payload, 0); ++ MLXSW_REG_ZERO(pagt, payload); ++ mlxsw_reg_pagt_acl_group_id_set(payload, acl_group_id); + } + +-/* PTYS - Port Type and Speed Register +- * ----------------------------------- +- * Configures and reports the port speed type. +- * +- * Note: When set while the link is up, the changes will not take effect +- * until the port transitions from down to up state.
+- */ +-#define MLXSW_REG_PTYS_ID 0x5004 +-#define MLXSW_REG_PTYS_LEN 0x40 ++static inline void mlxsw_reg_pagt_acl_id_pack(char *payload, int index, ++ u16 acl_id) ++{ ++ u8 size = mlxsw_reg_pagt_size_get(payload); ++ ++ if (index >= size) ++ mlxsw_reg_pagt_size_set(payload, index + 1); ++ mlxsw_reg_pagt_acl_id_set(payload, index, acl_id); ++} + +-static const struct mlxsw_reg_info mlxsw_reg_ptys = { +- .id = MLXSW_REG_PTYS_ID, +- .len = MLXSW_REG_PTYS_LEN, ++/* PTAR - Policy-Engine TCAM Allocation Register ++ * --------------------------------------------- ++ * This register is used for allocation of regions in the TCAM. ++ * Note: Query method is not supported on this register. ++ */ ++#define MLXSW_REG_PTAR_ID 0x3006 ++#define MLXSW_REG_PTAR_BASE_LEN 0x20 ++#define MLXSW_REG_PTAR_KEY_ID_LEN 1 ++#define MLXSW_REG_PTAR_KEY_ID_MAX_NUM 16 ++#define MLXSW_REG_PTAR_LEN (MLXSW_REG_PTAR_BASE_LEN + \ ++ MLXSW_REG_PTAR_KEY_ID_MAX_NUM * MLXSW_REG_PTAR_KEY_ID_LEN) ++ ++MLXSW_REG_DEFINE(ptar, MLXSW_REG_PTAR_ID, MLXSW_REG_PTAR_LEN); ++ ++enum mlxsw_reg_ptar_op { ++ /* allocate a TCAM region */ ++ MLXSW_REG_PTAR_OP_ALLOC, ++ /* resize a TCAM region */ ++ MLXSW_REG_PTAR_OP_RESIZE, ++ /* deallocate TCAM region */ ++ MLXSW_REG_PTAR_OP_FREE, ++ /* test allocation */ ++ MLXSW_REG_PTAR_OP_TEST, + }; + +-/* reg_ptys_local_port +- * Local port number. +- * Access: Index ++/* reg_ptar_op ++ * Access: OP + */ +-MLXSW_ITEM32(reg, ptys, local_port, 0x00, 16, 8); +- +-#define MLXSW_REG_PTYS_PROTO_MASK_ETH BIT(2) ++MLXSW_ITEM32(reg, ptar, op, 0x00, 28, 4); + +-/* reg_ptys_proto_mask +- * Protocol mask. Indicates which protocol is used. +- * 0 - Infiniband. +- * 1 - Fibre Channel. +- * 2 - Ethernet. +- * Access: Index ++/* reg_ptar_action_set_type ++ * Type of action set to be used on this region. 
++ * For Spectrum and Spectrum-2, this is always type 2 - "flexible" ++ * Access: WO + */ +-MLXSW_ITEM32(reg, ptys, proto_mask, 0x00, 0, 3); ++MLXSW_ITEM32(reg, ptar, action_set_type, 0x00, 16, 8); + +-enum { +- MLXSW_REG_PTYS_AN_STATUS_NA, +- MLXSW_REG_PTYS_AN_STATUS_OK, +- MLXSW_REG_PTYS_AN_STATUS_FAIL, ++enum mlxsw_reg_ptar_key_type { ++ MLXSW_REG_PTAR_KEY_TYPE_FLEX = 0x50, /* Spectrum */ ++ MLXSW_REG_PTAR_KEY_TYPE_FLEX2 = 0x51, /* Spectrum-2 */ + }; + +-/* reg_ptys_an_status +- * Autonegotiation status. +- * Access: RO ++/* reg_ptar_key_type ++ * TCAM key type for the region. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, ptys, an_status, 0x04, 28, 4); ++MLXSW_ITEM32(reg, ptar, key_type, 0x00, 0, 8); + +-#define MLXSW_REG_PTYS_ETH_SPEED_SGMII BIT(0) +-#define MLXSW_REG_PTYS_ETH_SPEED_1000BASE_KX BIT(1) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_CX4 BIT(2) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_KX4 BIT(3) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_KR BIT(4) +-#define MLXSW_REG_PTYS_ETH_SPEED_20GBASE_KR2 BIT(5) +-#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_CR4 BIT(6) +-#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_KR4 BIT(7) +-#define MLXSW_REG_PTYS_ETH_SPEED_56GBASE_R4 BIT(8) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_CR BIT(12) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_SR BIT(13) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_ER_LR BIT(14) +-#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_SR4 BIT(15) +-#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_LR4_ER4 BIT(16) +-#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_SR2 BIT(18) +-#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_KR4 BIT(19) +-#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 BIT(20) +-#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 BIT(21) +-#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 BIT(22) +-#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4 BIT(23) +-#define MLXSW_REG_PTYS_ETH_SPEED_100BASE_TX BIT(24) +-#define MLXSW_REG_PTYS_ETH_SPEED_100BASE_T BIT(25) +-#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_T BIT(26)
+-#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR BIT(27) +-#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR BIT(28) +-#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR BIT(29) +-#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_CR2 BIT(30) +-#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_KR2 BIT(31) ++/* reg_ptar_region_size ++ * TCAM region size. When allocating/resizing this is the requested size, ++ * the response is the actual size. Note that actual size may be ++ * larger than requested. ++ * Allowed range 1 .. cap_max_rules-1 ++ * Reserved during op deallocate. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, ptar, region_size, 0x04, 0, 16); + +-/* reg_ptys_eth_proto_cap +- * Ethernet port supported speeds and protocols. +- * Access: RO ++/* reg_ptar_region_id ++ * Region identifier ++ * Range 0 .. cap_max_regions-1 ++ * Access: Index + */ +-MLXSW_ITEM32(reg, ptys, eth_proto_cap, 0x0C, 0, 32); ++MLXSW_ITEM32(reg, ptar, region_id, 0x08, 0, 16); + +-/* reg_ptys_eth_proto_admin +- * Speed and protocol to set port to. ++/* reg_ptar_tcam_region_info ++ * Opaque object that represents the TCAM region. ++ * Returned when allocating a region. ++ * Provided by software for ACL generation and region deallocation and resize. + * Access: RW + */ +-MLXSW_ITEM32(reg, ptys, eth_proto_admin, 0x18, 0, 32); ++MLXSW_ITEM_BUF(reg, ptar, tcam_region_info, 0x10, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-/* reg_ptys_eth_proto_oper +- * The current speed and protocol configured for the port. +- * Access: RO ++/* reg_ptar_flexible_key_id ++ * Identifier of the Flexible Key. ++ * Only valid if key_type == "FLEX_KEY" ++ * The key size will be rounded up to one of the following values: ++ * 9B, 18B, 36B, 54B. ++ * This field is reserved during a resize operation.
++ * Access: WO + */ +-MLXSW_ITEM32(reg, ptys, eth_proto_oper, 0x24, 0, 32); ++MLXSW_ITEM8_INDEXED(reg, ptar, flexible_key_id, 0x20, 0, 8, ++ MLXSW_REG_PTAR_KEY_ID_LEN, 0x00, false); + +-/* reg_ptys_eth_proto_lp_advertise +- * The protocols that were advertised by the link partner during +- * autonegotiation. +- * Access: RO +- */ +-MLXSW_ITEM32(reg, ptys, eth_proto_lp_advertise, 0x30, 0, 32); ++static inline void mlxsw_reg_ptar_pack(char *payload, enum mlxsw_reg_ptar_op op, ++ enum mlxsw_reg_ptar_key_type key_type, ++ u16 region_size, u16 region_id, ++ const char *tcam_region_info) ++{ ++ MLXSW_REG_ZERO(ptar, payload); ++ mlxsw_reg_ptar_op_set(payload, op); ++ mlxsw_reg_ptar_action_set_type_set(payload, 2); /* "flexible" */ ++ mlxsw_reg_ptar_key_type_set(payload, key_type); ++ mlxsw_reg_ptar_region_size_set(payload, region_size); ++ mlxsw_reg_ptar_region_id_set(payload, region_id); ++ mlxsw_reg_ptar_tcam_region_info_memcpy_to(payload, tcam_region_info); ++} + +-static inline void mlxsw_reg_ptys_pack(char *payload, u8 local_port, +- u32 proto_admin) ++static inline void mlxsw_reg_ptar_key_id_pack(char *payload, int index, ++ u16 key_id) + { +- MLXSW_REG_ZERO(ptys, payload); +- mlxsw_reg_ptys_local_port_set(payload, local_port); +- mlxsw_reg_ptys_proto_mask_set(payload, MLXSW_REG_PTYS_PROTO_MASK_ETH); +- mlxsw_reg_ptys_eth_proto_admin_set(payload, proto_admin); ++ mlxsw_reg_ptar_flexible_key_id_set(payload, index, key_id); + } + +-static inline void mlxsw_reg_ptys_unpack(char *payload, u32 *p_eth_proto_cap, +- u32 *p_eth_proto_adm, +- u32 *p_eth_proto_oper) ++static inline void mlxsw_reg_ptar_unpack(char *payload, char *tcam_region_info) + { +- if (p_eth_proto_cap) +- *p_eth_proto_cap = mlxsw_reg_ptys_eth_proto_cap_get(payload); +- if (p_eth_proto_adm) +- *p_eth_proto_adm = mlxsw_reg_ptys_eth_proto_admin_get(payload); +- if (p_eth_proto_oper) +- *p_eth_proto_oper = mlxsw_reg_ptys_eth_proto_oper_get(payload); ++ mlxsw_reg_ptar_tcam_region_info_memcpy_from(payload, 
tcam_region_info); + } + +-/* PPAD - Port Physical Address Register +- * ------------------------------------- +- * The PPAD register configures the per port physical MAC address. ++/* PPBS - Policy-Engine Policy Based Switching Register ++ * ---------------------------------------------------- ++ * This register retrieves and sets Policy Based Switching Table entries. + */ +-#define MLXSW_REG_PPAD_ID 0x5005 +-#define MLXSW_REG_PPAD_LEN 0x10 ++#define MLXSW_REG_PPBS_ID 0x300C ++#define MLXSW_REG_PPBS_LEN 0x14 + +-static const struct mlxsw_reg_info mlxsw_reg_ppad = { +- .id = MLXSW_REG_PPAD_ID, +- .len = MLXSW_REG_PPAD_LEN, +-}; +- +-/* reg_ppad_single_base_mac +- * 0: base_mac, local port should be 0 and mac[7:0] is +- * reserved. HW will set incremental +- * 1: single_mac - mac of the local_port +- * Access: RW +- */ +-MLXSW_ITEM32(reg, ppad, single_base_mac, 0x00, 28, 1); ++MLXSW_REG_DEFINE(ppbs, MLXSW_REG_PPBS_ID, MLXSW_REG_PPBS_LEN); + +-/* reg_ppad_local_port +- * port number, if single_base_mac = 0 then local_port is reserved +- * Access: RW ++/* reg_ppbs_pbs_ptr ++ * Index into the PBS table. ++ * For Spectrum, the index points to the KVD Linear. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, ppad, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, ppbs, pbs_ptr, 0x08, 0, 24); + +-/* reg_ppad_mac +- * If single_base_mac = 0 - base MAC address, mac[7:0] is reserved. +- * If single_base_mac = 1 - the per port MAC address ++/* reg_ppbs_system_port ++ * Unique port identifier for the final destination of the packet. 
+ * Access: RW + */ +-MLXSW_ITEM_BUF(reg, ppad, mac, 0x02, 6); ++MLXSW_ITEM32(reg, ppbs, system_port, 0x10, 0, 16); + +-static inline void mlxsw_reg_ppad_pack(char *payload, bool single_base_mac, +- u8 local_port) ++static inline void mlxsw_reg_ppbs_pack(char *payload, u32 pbs_ptr, ++ u16 system_port) + { +- MLXSW_REG_ZERO(ppad, payload); +- mlxsw_reg_ppad_single_base_mac_set(payload, !!single_base_mac); +- mlxsw_reg_ppad_local_port_set(payload, local_port); ++ MLXSW_REG_ZERO(ppbs, payload); ++ mlxsw_reg_ppbs_pbs_ptr_set(payload, pbs_ptr); ++ mlxsw_reg_ppbs_system_port_set(payload, system_port); + } + +-/* PAOS - Ports Administrative and Operational Status Register +- * ----------------------------------------------------------- +- * Configures and retrieves per port administrative and operational status. ++/* PRCR - Policy-Engine Rules Copy Register ++ * ---------------------------------------- ++ * This register is used for accessing rules within a TCAM region. + */ +-#define MLXSW_REG_PAOS_ID 0x5006 +-#define MLXSW_REG_PAOS_LEN 0x10 ++#define MLXSW_REG_PRCR_ID 0x300D ++#define MLXSW_REG_PRCR_LEN 0x40 + +-static const struct mlxsw_reg_info mlxsw_reg_paos = { +- .id = MLXSW_REG_PAOS_ID, +- .len = MLXSW_REG_PAOS_LEN, ++MLXSW_REG_DEFINE(prcr, MLXSW_REG_PRCR_ID, MLXSW_REG_PRCR_LEN); ++ ++enum mlxsw_reg_prcr_op { ++ /* Move rules. Moves the rules from "tcam_region_info" starting ++ * at offset "offset" to "dest_tcam_region_info" ++ * at offset "dest_offset." ++ */ ++ MLXSW_REG_PRCR_OP_MOVE, ++ /* Copy rules. Copies the rules from "tcam_region_info" starting ++ * at offset "offset" to "dest_tcam_region_info" ++ * at offset "dest_offset." ++ */ ++ MLXSW_REG_PRCR_OP_COPY, + }; + +-/* reg_paos_swid +- * Switch partition ID with which to associate the port. +- * Note: while external ports uses unique local port numbers (and thus swid is +- * redundant), router ports use the same local port number where swid is the +- * only indication for the relevant port. 
+- * Access: Index ++/* reg_prcr_op ++ * Access: OP + */ +-MLXSW_ITEM32(reg, paos, swid, 0x00, 24, 8); ++MLXSW_ITEM32(reg, prcr, op, 0x00, 28, 4); + +-/* reg_paos_local_port +- * Local port number. ++/* reg_prcr_offset ++ * Offset within the source region to copy/move from. + * Access: Index + */ +-MLXSW_ITEM32(reg, paos, local_port, 0x00, 16, 8); +- +-/* reg_paos_admin_status +- * Port administrative state (the desired state of the port): +- * 1 - Up. +- * 2 - Down. +- * 3 - Up once. This means that in case of link failure, the port won't go +- * into polling mode, but will wait to be re-enabled by software. +- * 4 - Disabled by system. Can only be set by hardware. +- * Access: RW +- */ +-MLXSW_ITEM32(reg, paos, admin_status, 0x00, 8, 4); ++MLXSW_ITEM32(reg, prcr, offset, 0x00, 0, 16); + +-/* reg_paos_oper_status +- * Port operational state (the current state): +- * 1 - Up. +- * 2 - Down. +- * 3 - Down by port failure. This means that the device will not let the +- * port up again until explicitly specified by software. +- * Access: RO ++/* reg_prcr_size ++ * The number of rules to copy/move. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, paos, oper_status, 0x00, 0, 4); ++MLXSW_ITEM32(reg, prcr, size, 0x04, 0, 16); + +-/* reg_paos_ase +- * Admin state update enabled. +- * Access: WO ++/* reg_prcr_tcam_region_info ++ * Opaque object that represents the source TCAM region. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, paos, ase, 0x04, 31, 1); ++MLXSW_ITEM_BUF(reg, prcr, tcam_region_info, 0x10, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-/* reg_paos_ee +- * Event update enable. If this bit is set, event generation will be +- * updated based on the e field. +- * Access: WO ++/* reg_prcr_dest_offset ++ * Offset within the destination region to copy/move to. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, paos, ee, 0x04, 30, 1); ++MLXSW_ITEM32(reg, prcr, dest_offset, 0x20, 0, 16); + +-/* reg_paos_e +- * Event generation on operational state change: +- * 0 - Do not generate event.
+- * 1 - Generate Event. +- * 2 - Generate Single Event. +- * Access: RW ++/* reg_prcr_dest_tcam_region_info ++ * Opaque object that represents the destination TCAM region. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, paos, e, 0x04, 0, 2); ++MLXSW_ITEM_BUF(reg, prcr, dest_tcam_region_info, 0x30, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-static inline void mlxsw_reg_paos_pack(char *payload, u8 local_port, +- enum mlxsw_port_admin_status status) ++static inline void mlxsw_reg_prcr_pack(char *payload, enum mlxsw_reg_prcr_op op, ++ const char *src_tcam_region_info, ++ u16 src_offset, ++ const char *dest_tcam_region_info, ++ u16 dest_offset, u16 size) + { +- MLXSW_REG_ZERO(paos, payload); +- mlxsw_reg_paos_swid_set(payload, 0); +- mlxsw_reg_paos_local_port_set(payload, local_port); +- mlxsw_reg_paos_admin_status_set(payload, status); +- mlxsw_reg_paos_oper_status_set(payload, 0); +- mlxsw_reg_paos_ase_set(payload, 1); +- mlxsw_reg_paos_ee_set(payload, 1); +- mlxsw_reg_paos_e_set(payload, 1); ++ MLXSW_REG_ZERO(prcr, payload); ++ mlxsw_reg_prcr_op_set(payload, op); ++ mlxsw_reg_prcr_offset_set(payload, src_offset); ++ mlxsw_reg_prcr_size_set(payload, size); ++ mlxsw_reg_prcr_tcam_region_info_memcpy_to(payload, ++ src_tcam_region_info); ++ mlxsw_reg_prcr_dest_offset_set(payload, dest_offset); ++ mlxsw_reg_prcr_dest_tcam_region_info_memcpy_to(payload, ++ dest_tcam_region_info); + } + +-/* PFCC - Ports Flow Control Configuration Register +- * ------------------------------------------------ +- * Configures and retrieves the per port flow control configuration. ++/* PEFA - Policy-Engine Extended Flexible Action Register ++ * ------------------------------------------------------ ++ * This register is used for accessing an extended flexible action entry ++ * in the central KVD Linear Database. 
+ */
+-#define MLXSW_REG_PFCC_ID 0x5007
+-#define MLXSW_REG_PFCC_LEN 0x20
+-
+-static const struct mlxsw_reg_info mlxsw_reg_pfcc = {
+- .id = MLXSW_REG_PFCC_ID,
+- .len = MLXSW_REG_PFCC_LEN,
+-};
++#define MLXSW_REG_PEFA_ID 0x300F
++#define MLXSW_REG_PEFA_LEN 0xB0
+
+-/* reg_pfcc_local_port
+- * Local port number.
+- * Access: Index
+- */
+-MLXSW_ITEM32(reg, pfcc, local_port, 0x00, 16, 8);
++MLXSW_REG_DEFINE(pefa, MLXSW_REG_PEFA_ID, MLXSW_REG_PEFA_LEN);
+
+-/* reg_pfcc_pnat
+- * Port number access type. Determines the way local_port is interpreted:
+- * 0 - Local port number.
+- * 1 - IB / label port number.
++/* reg_pefa_index
++ * Index in the KVD Linear Centralized Database.
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, pfcc, pnat, 0x00, 14, 2);
++MLXSW_ITEM32(reg, pefa, index, 0x00, 0, 24);
+
+-/* reg_pfcc_shl_cap
+- * Send to higher layers capabilities:
+- * 0 - No capability of sending Pause and PFC frames to higher layers.
+- * 1 - Device has capability of sending Pause and PFC frames to higher
+- * layers.
++/* reg_pefa_a
++ * Activity
++ * For a new entry: set if ca=0, clear if ca=1
++ * Set if a packet lookup has hit on the specific entry
+ * Access: RO
+ */
+-MLXSW_ITEM32(reg, pfcc, shl_cap, 0x00, 1, 1);
++MLXSW_ITEM32(reg, pefa, a, 0x04, 29, 1);
+
+-/* reg_pfcc_shl_opr
+- * Send to higher layers operation:
+- * 0 - Pause and PFC frames are handled by the port (default).
+- * 1 - Pause and PFC frames are handled by the port and also sent to
+- * higher layers. Only valid if shl_cap = 1.
+- * Access: RW
++/* reg_pefa_ca
++ * Clear activity
++ * When write: activity is according to this field
++ * When read: after reading the activity is cleared according to ca
++ * Access: OP
+ */
+-MLXSW_ITEM32(reg, pfcc, shl_opr, 0x00, 0, 1);
++MLXSW_ITEM32(reg, pefa, ca, 0x04, 24, 1);
+
+-/* reg_pfcc_ppan
+- * Pause policy auto negotiation.
+- * 0 - Disabled. Generate / ignore Pause frames based on pptx / pprtx.
+- * 1 - Enabled. When auto-negotiation is performed, set the Pause policy +- * based on the auto-negotiation resolution. ++#define MLXSW_REG_FLEX_ACTION_SET_LEN 0xA8 ++ ++/* reg_pefa_flex_action_set ++ * Action-set to perform when rule is matched. ++ * Must be zero padded if action set is shorter. + * Access: RW +- * +- * Note: The auto-negotiation advertisement is set according to pptx and +- * pprtx. When PFC is set on Tx / Rx, ppan must be set to 0. + */ +-MLXSW_ITEM32(reg, pfcc, ppan, 0x04, 28, 4); ++MLXSW_ITEM_BUF(reg, pefa, flex_action_set, 0x08, MLXSW_REG_FLEX_ACTION_SET_LEN); + +-/* reg_pfcc_prio_mask_tx +- * Bit per priority indicating if Tx flow control policy should be +- * updated based on bit pfctx. +- * Access: WO +- */ +-MLXSW_ITEM32(reg, pfcc, prio_mask_tx, 0x04, 16, 8); ++static inline void mlxsw_reg_pefa_pack(char *payload, u32 index, bool ca, ++ const char *flex_action_set) ++{ ++ MLXSW_REG_ZERO(pefa, payload); ++ mlxsw_reg_pefa_index_set(payload, index); ++ mlxsw_reg_pefa_ca_set(payload, ca); ++ if (flex_action_set) ++ mlxsw_reg_pefa_flex_action_set_memcpy_to(payload, ++ flex_action_set); ++} + +-/* reg_pfcc_prio_mask_rx +- * Bit per priority indicating if Rx flow control policy should be +- * updated based on bit pfcrx. +- * Access: WO ++static inline void mlxsw_reg_pefa_unpack(char *payload, bool *p_a) ++{ ++ *p_a = mlxsw_reg_pefa_a_get(payload); ++} ++ ++/* PTCE-V2 - Policy-Engine TCAM Entry Register Version 2 ++ * ----------------------------------------------------- ++ * This register is used for accessing rules within a TCAM region. ++ * It is a new version of PTCE in order to support wider key, ++ * mask and action within a TCAM region. This register is not supported ++ * by SwitchX and SwitchX-2. + */ +-MLXSW_ITEM32(reg, pfcc, prio_mask_rx, 0x04, 0, 8); ++#define MLXSW_REG_PTCE2_ID 0x3017 ++#define MLXSW_REG_PTCE2_LEN 0x1D8 + +-/* reg_pfcc_pptx +- * Admin Pause policy on Tx. +- * 0 - Never generate Pause frames (default). 
+- * 1 - Generate Pause frames according to Rx buffer threshold. ++MLXSW_REG_DEFINE(ptce2, MLXSW_REG_PTCE2_ID, MLXSW_REG_PTCE2_LEN); ++ ++/* reg_ptce2_v ++ * Valid. + * Access: RW + */ +-MLXSW_ITEM32(reg, pfcc, pptx, 0x08, 31, 1); ++MLXSW_ITEM32(reg, ptce2, v, 0x00, 31, 1); + +-/* reg_pfcc_aptx +- * Active (operational) Pause policy on Tx. +- * 0 - Never generate Pause frames. +- * 1 - Generate Pause frames according to Rx buffer threshold. ++/* reg_ptce2_a ++ * Activity. Set if a packet lookup has hit on the specific entry. ++ * To clear the "a" bit, use "clear activity" op or "clear on read" op. + * Access: RO + */ +-MLXSW_ITEM32(reg, pfcc, aptx, 0x08, 30, 1); ++MLXSW_ITEM32(reg, ptce2, a, 0x00, 30, 1); + +-/* reg_pfcc_pfctx +- * Priority based flow control policy on Tx[7:0]. Per-priority bit mask: +- * 0 - Never generate priority Pause frames on the specified priority +- * (default). +- * 1 - Generate priority Pause frames according to Rx buffer threshold on +- * the specified priority. +- * Access: RW +- * +- * Note: pfctx and pptx must be mutually exclusive. ++enum mlxsw_reg_ptce2_op { ++ /* Read operation. */ ++ MLXSW_REG_PTCE2_OP_QUERY_READ = 0, ++ /* clear on read operation. Used to read entry ++ * and clear Activity bit. ++ */ ++ MLXSW_REG_PTCE2_OP_QUERY_CLEAR_ON_READ = 1, ++ /* Write operation. Used to write a new entry to the table. ++ * All R/W fields are relevant for new entry. Activity bit is set ++ * for new entries - Note write with v = 0 will delete the entry. ++ */ ++ MLXSW_REG_PTCE2_OP_WRITE_WRITE = 0, ++ /* Update action. Only action set will be updated. */ ++ MLXSW_REG_PTCE2_OP_WRITE_UPDATE = 1, ++ /* Clear activity. A bit is cleared for the entry. */ ++ MLXSW_REG_PTCE2_OP_WRITE_CLEAR_ACTIVITY = 2, ++}; ++ ++/* reg_ptce2_op ++ * Access: OP + */ +-MLXSW_ITEM32(reg, pfcc, pfctx, 0x08, 16, 8); ++MLXSW_ITEM32(reg, ptce2, op, 0x00, 20, 3); + +-/* reg_pfcc_pprx +- * Admin Pause policy on Rx. +- * 0 - Ignore received Pause frames (default). 
+- * 1 - Respect received Pause frames. ++/* reg_ptce2_offset ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ptce2, offset, 0x00, 0, 16); ++ ++/* reg_ptce2_priority ++ * Priority of the rule, higher values win. The range is 1..cap_kvd_size-1. ++ * Note: priority does not have to be unique per rule. ++ * Within a region, higher priority should have lower offset (no limitation ++ * between regions in a multi-region). + * Access: RW + */ +-MLXSW_ITEM32(reg, pfcc, pprx, 0x0C, 31, 1); ++MLXSW_ITEM32(reg, ptce2, priority, 0x04, 0, 24); + +-/* reg_pfcc_aprx +- * Active (operational) Pause policy on Rx. +- * 0 - Ignore received Pause frames. +- * 1 - Respect received Pause frames. +- * Access: RO ++/* reg_ptce2_tcam_region_info ++ * Opaque object that represents the TCAM region. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, pfcc, aprx, 0x0C, 30, 1); ++MLXSW_ITEM_BUF(reg, ptce2, tcam_region_info, 0x10, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-/* reg_pfcc_pfcrx +- * Priority based flow control policy on Rx[7:0]. Per-priority bit mask: +- * 0 - Ignore incoming priority Pause frames on the specified priority +- * (default). +- * 1 - Respect incoming priority Pause frames on the specified priority. ++#define MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN 96 ++ ++/* reg_ptce2_flex_key_blocks ++ * ACL Key. + * Access: RW + */ +-MLXSW_ITEM32(reg, pfcc, pfcrx, 0x0C, 16, 8); ++MLXSW_ITEM_BUF(reg, ptce2, flex_key_blocks, 0x20, ++ MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN); + +-#define MLXSW_REG_PFCC_ALL_PRIO 0xFF ++/* reg_ptce2_mask ++ * mask- in the same size as key. A bit that is set directs the TCAM ++ * to compare the corresponding bit in key. A bit that is clear directs ++ * the TCAM to ignore the corresponding bit in key. 
++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, ptce2, mask, 0x80, ++ MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN); + +-static inline void mlxsw_reg_pfcc_prio_pack(char *payload, u8 pfc_en) +-{ +- mlxsw_reg_pfcc_prio_mask_tx_set(payload, MLXSW_REG_PFCC_ALL_PRIO); +- mlxsw_reg_pfcc_prio_mask_rx_set(payload, MLXSW_REG_PFCC_ALL_PRIO); +- mlxsw_reg_pfcc_pfctx_set(payload, pfc_en); +- mlxsw_reg_pfcc_pfcrx_set(payload, pfc_en); +-} ++/* reg_ptce2_flex_action_set ++ * ACL action set. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, ptce2, flex_action_set, 0xE0, ++ MLXSW_REG_FLEX_ACTION_SET_LEN); + +-static inline void mlxsw_reg_pfcc_pack(char *payload, u8 local_port) ++static inline void mlxsw_reg_ptce2_pack(char *payload, bool valid, ++ enum mlxsw_reg_ptce2_op op, ++ const char *tcam_region_info, ++ u16 offset, u32 priority) + { +- MLXSW_REG_ZERO(pfcc, payload); +- mlxsw_reg_pfcc_local_port_set(payload, local_port); ++ MLXSW_REG_ZERO(ptce2, payload); ++ mlxsw_reg_ptce2_v_set(payload, valid); ++ mlxsw_reg_ptce2_op_set(payload, op); ++ mlxsw_reg_ptce2_offset_set(payload, offset); ++ mlxsw_reg_ptce2_priority_set(payload, priority); ++ mlxsw_reg_ptce2_tcam_region_info_memcpy_to(payload, tcam_region_info); + } + +-/* PPCNT - Ports Performance Counters Register +- * ------------------------------------------- +- * The PPCNT register retrieves per port performance counters. ++/* PERPT - Policy-Engine ERP Table Register ++ * ---------------------------------------- ++ * This register adds and removes eRPs from the eRP table. + */ +-#define MLXSW_REG_PPCNT_ID 0x5008 +-#define MLXSW_REG_PPCNT_LEN 0x100 ++#define MLXSW_REG_PERPT_ID 0x3021 ++#define MLXSW_REG_PERPT_LEN 0x80 + +-static const struct mlxsw_reg_info mlxsw_reg_ppcnt = { +- .id = MLXSW_REG_PPCNT_ID, +- .len = MLXSW_REG_PPCNT_LEN, +-}; ++MLXSW_REG_DEFINE(perpt, MLXSW_REG_PERPT_ID, MLXSW_REG_PERPT_LEN); + +-/* reg_ppcnt_swid +- * For HCA: must be always 0. +- * Switch partition ID to associate port with. 
+- * Switch partitions are numbered from 0 to 7 inclusively. +- * Switch partition 254 indicates stacking ports. +- * Switch partition 255 indicates all switch partitions. +- * Only valid on Set() operation with local_port=255. ++/* reg_perpt_erpt_bank ++ * eRP table bank. ++ * Range 0 .. cap_max_erp_table_banks - 1 + * Access: Index + */ +-MLXSW_ITEM32(reg, ppcnt, swid, 0x00, 24, 8); ++MLXSW_ITEM32(reg, perpt, erpt_bank, 0x00, 16, 4); + +-/* reg_ppcnt_local_port +- * Local port number. +- * 255 indicates all ports on the device, and is only allowed +- * for Set() operation. ++/* reg_perpt_erpt_index ++ * Index to eRP table within the eRP bank. ++ * Range is 0 .. cap_max_erp_table_bank_size - 1 + * Access: Index + */ +-MLXSW_ITEM32(reg, ppcnt, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, perpt, erpt_index, 0x00, 0, 8); + +-/* reg_ppcnt_pnat +- * Port number access type: +- * 0 - Local port number +- * 1 - IB port number +- * Access: Index ++enum mlxsw_reg_perpt_key_size { ++ MLXSW_REG_PERPT_KEY_SIZE_2KB, ++ MLXSW_REG_PERPT_KEY_SIZE_4KB, ++ MLXSW_REG_PERPT_KEY_SIZE_8KB, ++ MLXSW_REG_PERPT_KEY_SIZE_12KB, ++}; ++ ++/* reg_perpt_key_size ++ * Access: OP + */ +-MLXSW_ITEM32(reg, ppcnt, pnat, 0x00, 14, 2); ++MLXSW_ITEM32(reg, perpt, key_size, 0x04, 0, 4); + +-enum mlxsw_reg_ppcnt_grp { +- MLXSW_REG_PPCNT_IEEE_8023_CNT = 0x0, +- MLXSW_REG_PPCNT_PRIO_CNT = 0x10, +- MLXSW_REG_PPCNT_TC_CNT = 0x11, +-}; ++/* reg_perpt_bf_bypass ++ * 0 - The eRP is used only if bloom filter state is set for the given ++ * rule. ++ * 1 - The eRP is used regardless of bloom filter state. ++ * The bypass is an OR condition of region_id or eRP. See PERCR.bf_bypass ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, perpt, bf_bypass, 0x08, 8, 1); + +-/* reg_ppcnt_grp +- * Performance counter group. +- * Group 63 indicates all groups. Only valid on Set() operation with +- * clr bit set. 
+- * 0x0: IEEE 802.3 Counters +- * 0x1: RFC 2863 Counters +- * 0x2: RFC 2819 Counters +- * 0x3: RFC 3635 Counters +- * 0x5: Ethernet Extended Counters +- * 0x8: Link Level Retransmission Counters +- * 0x10: Per Priority Counters +- * 0x11: Per Traffic Class Counters +- * 0x12: Physical Layer Counters +- * Access: Index ++/* reg_perpt_erp_id ++ * eRP ID for use by the rules. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, ppcnt, grp, 0x00, 0, 6); ++MLXSW_ITEM32(reg, perpt, erp_id, 0x08, 0, 4); + +-/* reg_ppcnt_clr +- * Clear counters. Setting the clr bit will reset the counter value +- * for all counters in the counter group. This bit can be set +- * for both Set() and Get() operation. ++/* reg_perpt_erpt_base_bank ++ * Base eRP table bank, points to head of erp_vector ++ * Range is 0 .. cap_max_erp_table_banks - 1 + * Access: OP + */ +-MLXSW_ITEM32(reg, ppcnt, clr, 0x04, 31, 1); ++MLXSW_ITEM32(reg, perpt, erpt_base_bank, 0x0C, 16, 4); + +-/* reg_ppcnt_prio_tc +- * Priority for counter set that support per priority, valid values: 0-7. +- * Traffic class for counter set that support per traffic class, +- * valid values: 0- cap_max_tclass-1 . +- * For HCA: cap_max_tclass is always 8. +- * Otherwise must be 0. +- * Access: Index ++/* reg_perpt_erpt_base_index ++ * Base index to eRP table within the eRP bank ++ * Range is 0 .. cap_max_erp_table_bank_size - 1 ++ * Access: OP + */ +-MLXSW_ITEM32(reg, ppcnt, prio_tc, 0x04, 0, 5); +- +-/* Ethernet IEEE 802.3 Counter Group */ ++MLXSW_ITEM32(reg, perpt, erpt_base_index, 0x0C, 0, 8); + +-/* reg_ppcnt_a_frames_transmitted_ok +- * Access: RO ++/* reg_perpt_erp_index_in_vector ++ * eRP index in the vector. ++ * Access: OP + */ +-MLXSW_ITEM64(reg, ppcnt, a_frames_transmitted_ok, +- 0x08 + 0x00, 0, 64); ++MLXSW_ITEM32(reg, perpt, erp_index_in_vector, 0x10, 0, 4); + +-/* reg_ppcnt_a_frames_received_ok +- * Access: RO ++/* reg_perpt_erp_vector ++ * eRP vector. 
++ * Access: OP + */ +-MLXSW_ITEM64(reg, ppcnt, a_frames_received_ok, +- 0x08 + 0x08, 0, 64); ++MLXSW_ITEM_BIT_ARRAY(reg, perpt, erp_vector, 0x14, 4, 1); + +-/* reg_ppcnt_a_frame_check_sequence_errors +- * Access: RO ++/* reg_perpt_mask ++ * Mask ++ * 0 - A-TCAM will ignore the bit in key ++ * 1 - A-TCAM will compare the bit in key ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, a_frame_check_sequence_errors, +- 0x08 + 0x10, 0, 64); ++MLXSW_ITEM_BUF(reg, perpt, mask, 0x20, MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN); + +-/* reg_ppcnt_a_alignment_errors +- * Access: RO ++static inline void mlxsw_reg_perpt_erp_vector_pack(char *payload, ++ unsigned long *erp_vector, ++ unsigned long size) ++{ ++ unsigned long bit; ++ ++ for_each_set_bit(bit, erp_vector, size) ++ mlxsw_reg_perpt_erp_vector_set(payload, bit, true); ++} ++ ++static inline void ++mlxsw_reg_perpt_pack(char *payload, u8 erpt_bank, u8 erpt_index, ++ enum mlxsw_reg_perpt_key_size key_size, u8 erp_id, ++ u8 erpt_base_bank, u8 erpt_base_index, u8 erp_index, ++ char *mask) ++{ ++ MLXSW_REG_ZERO(perpt, payload); ++ mlxsw_reg_perpt_erpt_bank_set(payload, erpt_bank); ++ mlxsw_reg_perpt_erpt_index_set(payload, erpt_index); ++ mlxsw_reg_perpt_key_size_set(payload, key_size); ++ mlxsw_reg_perpt_bf_bypass_set(payload, true); ++ mlxsw_reg_perpt_erp_id_set(payload, erp_id); ++ mlxsw_reg_perpt_erpt_base_bank_set(payload, erpt_base_bank); ++ mlxsw_reg_perpt_erpt_base_index_set(payload, erpt_base_index); ++ mlxsw_reg_perpt_erp_index_in_vector_set(payload, erp_index); ++ mlxsw_reg_perpt_mask_memcpy_to(payload, mask); ++} ++ ++/* PERAR - Policy-Engine Region Association Register ++ * ------------------------------------------------- ++ * This register associates a hw region for region_id's. Changing on the fly ++ * is supported by the device. 
+ */ +-MLXSW_ITEM64(reg, ppcnt, a_alignment_errors, +- 0x08 + 0x18, 0, 64); ++#define MLXSW_REG_PERAR_ID 0x3026 ++#define MLXSW_REG_PERAR_LEN 0x08 + +-/* reg_ppcnt_a_octets_transmitted_ok +- * Access: RO ++MLXSW_REG_DEFINE(perar, MLXSW_REG_PERAR_ID, MLXSW_REG_PERAR_LEN); ++ ++/* reg_perar_region_id ++ * Region identifier ++ * Range 0 .. cap_max_regions-1 ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_octets_transmitted_ok, +- 0x08 + 0x20, 0, 64); ++MLXSW_ITEM32(reg, perar, region_id, 0x00, 0, 16); + +-/* reg_ppcnt_a_octets_received_ok +- * Access: RO ++static inline unsigned int ++mlxsw_reg_perar_hw_regions_needed(unsigned int block_num) ++{ ++ return DIV_ROUND_UP(block_num, 4); ++} ++ ++/* reg_perar_hw_region ++ * HW Region ++ * Range 0 .. cap_max_regions-1 ++ * Default: hw_region = region_id ++ * For a 8 key block region, 2 consecutive regions are used ++ * For a 12 key block region, 3 consecutive regions are used ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, a_octets_received_ok, +- 0x08 + 0x28, 0, 64); ++MLXSW_ITEM32(reg, perar, hw_region, 0x04, 0, 16); + +-/* reg_ppcnt_a_multicast_frames_xmitted_ok +- * Access: RO ++static inline void mlxsw_reg_perar_pack(char *payload, u16 region_id, ++ u16 hw_region) ++{ ++ MLXSW_REG_ZERO(perar, payload); ++ mlxsw_reg_perar_region_id_set(payload, region_id); ++ mlxsw_reg_perar_hw_region_set(payload, hw_region); ++} ++ ++/* PTCE-V3 - Policy-Engine TCAM Entry Register Version 3 ++ * ----------------------------------------------------- ++ * This register is a new version of PTCE-V2 in order to support the ++ * A-TCAM. This register is not supported by SwitchX/-2 and Spectrum. + */ +-MLXSW_ITEM64(reg, ppcnt, a_multicast_frames_xmitted_ok, +- 0x08 + 0x30, 0, 64); ++#define MLXSW_REG_PTCE3_ID 0x3027 ++#define MLXSW_REG_PTCE3_LEN 0xF0 + +-/* reg_ppcnt_a_broadcast_frames_xmitted_ok +- * Access: RO ++MLXSW_REG_DEFINE(ptce3, MLXSW_REG_PTCE3_ID, MLXSW_REG_PTCE3_LEN); ++ ++/* reg_ptce3_v ++ * Valid. 
++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, a_broadcast_frames_xmitted_ok, +- 0x08 + 0x38, 0, 64); ++MLXSW_ITEM32(reg, ptce3, v, 0x00, 31, 1); + +-/* reg_ppcnt_a_multicast_frames_received_ok +- * Access: RO ++enum mlxsw_reg_ptce3_op { ++ /* Write operation. Used to write a new entry to the table. ++ * All R/W fields are relevant for new entry. Activity bit is set ++ * for new entries. Write with v = 0 will delete the entry. Must ++ * not be used if an entry exists. ++ */ ++ MLXSW_REG_PTCE3_OP_WRITE_WRITE = 0, ++ /* Update operation */ ++ MLXSW_REG_PTCE3_OP_WRITE_UPDATE = 1, ++ /* Read operation */ ++ MLXSW_REG_PTCE3_OP_QUERY_READ = 0, ++}; ++ ++/* reg_ptce3_op ++ * Access: OP + */ +-MLXSW_ITEM64(reg, ppcnt, a_multicast_frames_received_ok, +- 0x08 + 0x40, 0, 64); ++MLXSW_ITEM32(reg, ptce3, op, 0x00, 20, 3); + +-/* reg_ppcnt_a_broadcast_frames_received_ok +- * Access: RO ++/* reg_ptce3_priority ++ * Priority of the rule. Higher values win. ++ * For Spectrum-2 range is 1..cap_kvd_size - 1 ++ * Note: Priority does not have to be unique per rule. ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, a_broadcast_frames_received_ok, +- 0x08 + 0x48, 0, 64); ++MLXSW_ITEM32(reg, ptce3, priority, 0x04, 0, 24); + +-/* reg_ppcnt_a_in_range_length_errors +- * Access: RO ++/* reg_ptce3_tcam_region_info ++ * Opaque object that represents the TCAM region. ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_in_range_length_errors, +- 0x08 + 0x50, 0, 64); ++MLXSW_ITEM_BUF(reg, ptce3, tcam_region_info, 0x10, ++ MLXSW_REG_PXXX_TCAM_REGION_INFO_LEN); + +-/* reg_ppcnt_a_out_of_range_length_field +- * Access: RO ++/* reg_ptce3_flex2_key_blocks ++ * ACL key. The key must be masked according to eRP (if exists) or ++ * according to master mask. 
++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_out_of_range_length_field, +- 0x08 + 0x58, 0, 64); ++MLXSW_ITEM_BUF(reg, ptce3, flex2_key_blocks, 0x20, ++ MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN); + +-/* reg_ppcnt_a_frame_too_long_errors +- * Access: RO ++/* reg_ptce3_erp_id ++ * eRP ID. ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_frame_too_long_errors, +- 0x08 + 0x60, 0, 64); ++MLXSW_ITEM32(reg, ptce3, erp_id, 0x80, 0, 4); + +-/* reg_ppcnt_a_symbol_error_during_carrier +- * Access: RO ++/* reg_ptce3_delta_start ++ * Start point of delta_value and delta_mask, in bits. Must not exceed ++ * num_key_blocks * 36 - 8. Reserved when delta_mask = 0. ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_symbol_error_during_carrier, +- 0x08 + 0x68, 0, 64); ++MLXSW_ITEM32(reg, ptce3, delta_start, 0x84, 0, 10); + +-/* reg_ppcnt_a_mac_control_frames_transmitted +- * Access: RO ++/* reg_ptce3_delta_mask ++ * Delta mask. ++ * 0 - Ignore relevant bit in delta_value ++ * 1 - Compare relevant bit in delta_value ++ * Delta mask must not be set for reserved fields in the key blocks. ++ * Note: No delta when no eRPs. Thus, for regions with ++ * PERERP.erpt_pointer_valid = 0 the delta mask must be 0. ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_mac_control_frames_transmitted, +- 0x08 + 0x70, 0, 64); ++MLXSW_ITEM32(reg, ptce3, delta_mask, 0x88, 16, 8); + +-/* reg_ppcnt_a_mac_control_frames_received +- * Access: RO ++/* reg_ptce3_delta_value ++ * Delta value. ++ * Bits which are masked by delta_mask must be 0. ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, a_mac_control_frames_received, +- 0x08 + 0x78, 0, 64); ++MLXSW_ITEM32(reg, ptce3, delta_value, 0x88, 0, 8); + +-/* reg_ppcnt_a_unsupported_opcodes_received +- * Access: RO ++/* reg_ptce3_prune_vector ++ * Pruning vector relative to the PERPT.erp_id. ++ * Used for reducing lookups. ++ * 0 - NEED: Do a lookup using the eRP. ++ * 1 - PRUNE: Do not perform a lookup using the eRP. 
++ * May be modified by PEAPBL and PEAPBM.
++ * Note: In Spectrum-2, a region of 8 key blocks must be set to either
++ * all 1's or all 0's.
++ * Access: RW
+ */
+-MLXSW_ITEM64(reg, ppcnt, a_unsupported_opcodes_received,
+- 0x08 + 0x80, 0, 64);
++MLXSW_ITEM_BIT_ARRAY(reg, ptce3, prune_vector, 0x90, 4, 1);
+
+-/* reg_ppcnt_a_pause_mac_ctrl_frames_received
+- * Access: RO
+- */
+-MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_received,
+- 0x08 + 0x88, 0, 64);
+-
+-/* reg_ppcnt_a_pause_mac_ctrl_frames_transmitted
+- * Access: RO
++/* reg_ptce3_prune_ctcam
++ * Pruning on C-TCAM. Used for reducing lookups.
++ * 0 - NEED: Do a lookup in the C-TCAM.
++ * 1 - PRUNE: Do not perform a lookup in the C-TCAM.
++ * Access: RW
+ */
+-MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_transmitted,
+- 0x08 + 0x90, 0, 64);
+-
+-/* Ethernet Per Priority Group Counters */
++MLXSW_ITEM32(reg, ptce3, prune_ctcam, 0x94, 31, 1);
+
+-/* reg_ppcnt_rx_octets
+- * Access: RO
++/* reg_ptce3_large_exists
++ * Large entry key ID exists.
++ * Within the region:
++ * 0 - SINGLE: The large_entry_key_id is not currently in use.
++ * For rule insert: The MSB of the key (blocks 6..11) will be added.
++ * For rule delete: The MSB of the key will be removed.
++ * 1 - NON_SINGLE: The large_entry_key_id is currently in use.
++ * For rule insert: The MSB of the key (blocks 6..11) will not be added.
++ * For rule delete: The MSB of the key will not be removed.
++ * Access: WO
+ */
+-MLXSW_ITEM64(reg, ppcnt, rx_octets, 0x08 + 0x00, 0, 64);
++MLXSW_ITEM32(reg, ptce3, large_exists, 0x98, 31, 1);
+
+-/* reg_ppcnt_rx_frames
+- * Access: RO
++/* reg_ptce3_large_entry_key_id
++ * Large entry key ID.
++ * A key for 12 key blocks rules. Reserved when region has less than 12 key
++ * blocks. Must be different for different keys which have the same common
++ * 6 key blocks (MSB, blocks 6..11) key within a region.
++ * Range is 0..cap_max_pe_large_key_id - 1 ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, rx_frames, 0x08 + 0x20, 0, 64); ++MLXSW_ITEM32(reg, ptce3, large_entry_key_id, 0x98, 0, 24); + +-/* reg_ppcnt_tx_octets +- * Access: RO ++/* reg_ptce3_action_pointer ++ * Pointer to action. ++ * Range is 0..cap_max_kvd_action_sets - 1 ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, tx_octets, 0x08 + 0x28, 0, 64); ++MLXSW_ITEM32(reg, ptce3, action_pointer, 0xA0, 0, 24); + +-/* reg_ppcnt_tx_frames +- * Access: RO +- */ +-MLXSW_ITEM64(reg, ppcnt, tx_frames, 0x08 + 0x48, 0, 64); ++static inline void mlxsw_reg_ptce3_pack(char *payload, bool valid, ++ enum mlxsw_reg_ptce3_op op, ++ u32 priority, ++ const char *tcam_region_info, ++ const char *key, u8 erp_id, ++ bool large_exists, u32 lkey_id, ++ u32 action_pointer) ++{ ++ MLXSW_REG_ZERO(ptce3, payload); ++ mlxsw_reg_ptce3_v_set(payload, valid); ++ mlxsw_reg_ptce3_op_set(payload, op); ++ mlxsw_reg_ptce3_priority_set(payload, priority); ++ mlxsw_reg_ptce3_tcam_region_info_memcpy_to(payload, tcam_region_info); ++ mlxsw_reg_ptce3_flex2_key_blocks_memcpy_to(payload, key); ++ mlxsw_reg_ptce3_erp_id_set(payload, erp_id); ++ mlxsw_reg_ptce3_large_exists_set(payload, large_exists); ++ mlxsw_reg_ptce3_large_entry_key_id_set(payload, lkey_id); ++ mlxsw_reg_ptce3_action_pointer_set(payload, action_pointer); ++} + +-/* reg_ppcnt_rx_pause +- * Access: RO ++/* PERCR - Policy-Engine Region Configuration Register ++ * --------------------------------------------------- ++ * This register configures the region parameters. The region_id must be ++ * allocated. 
+ */ +-MLXSW_ITEM64(reg, ppcnt, rx_pause, 0x08 + 0x50, 0, 64); ++#define MLXSW_REG_PERCR_ID 0x302A ++#define MLXSW_REG_PERCR_LEN 0x80 + +-/* reg_ppcnt_rx_pause_duration +- * Access: RO +- */ +-MLXSW_ITEM64(reg, ppcnt, rx_pause_duration, 0x08 + 0x58, 0, 64); ++MLXSW_REG_DEFINE(percr, MLXSW_REG_PERCR_ID, MLXSW_REG_PERCR_LEN); + +-/* reg_ppcnt_tx_pause +- * Access: RO ++/* reg_percr_region_id ++ * Region identifier. ++ * Range 0..cap_max_regions-1 ++ * Access: Index + */ +-MLXSW_ITEM64(reg, ppcnt, tx_pause, 0x08 + 0x60, 0, 64); ++MLXSW_ITEM32(reg, percr, region_id, 0x00, 0, 16); + +-/* reg_ppcnt_tx_pause_duration +- * Access: RO ++/* reg_percr_atcam_ignore_prune ++ * Ignore prune_vector by other A-TCAM rules. Used e.g., for a new rule. ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, tx_pause_duration, 0x08 + 0x68, 0, 64); ++MLXSW_ITEM32(reg, percr, atcam_ignore_prune, 0x04, 25, 1); + +-/* reg_ppcnt_rx_pause_transition +- * Access: RO ++/* reg_percr_ctcam_ignore_prune ++ * Ignore prune_ctcam by other A-TCAM rules. Used e.g., for a new rule. ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, tx_pause_transition, 0x08 + 0x70, 0, 64); ++MLXSW_ITEM32(reg, percr, ctcam_ignore_prune, 0x04, 24, 1); + +-/* Ethernet Per Traffic Group Counters */ +- +-/* reg_ppcnt_tc_transmit_queue +- * Contains the transmit queue depth in cells of traffic class +- * selected by prio_tc and the port selected by local_port. +- * The field cannot be cleared. +- * Access: RO ++/* reg_percr_bf_bypass ++ * Bloom filter bypass. ++ * 0 - Bloom filter is used (default) ++ * 1 - Bloom filter is bypassed. The bypass is an OR condition of ++ * region_id or eRP. See PERPT.bf_bypass ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, tc_transmit_queue, 0x08 + 0x00, 0, 64); ++MLXSW_ITEM32(reg, percr, bf_bypass, 0x04, 16, 1); + +-/* reg_ppcnt_tc_no_buffer_discard_uc +- * The number of unicast packets dropped due to lack of shared +- * buffer resources. +- * Access: RO ++/* reg_percr_master_mask ++ * Master mask. 
Logical OR mask of all masks of all rules of a region ++ * (both A-TCAM and C-TCAM). When there are no eRPs ++ * (erpt_pointer_valid = 0), then this provides the mask. ++ * Access: RW + */ +-MLXSW_ITEM64(reg, ppcnt, tc_no_buffer_discard_uc, 0x08 + 0x08, 0, 64); ++MLXSW_ITEM_BUF(reg, percr, master_mask, 0x20, 96); + +-static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port, +- enum mlxsw_reg_ppcnt_grp grp, +- u8 prio_tc) ++static inline void mlxsw_reg_percr_pack(char *payload, u16 region_id) + { +- MLXSW_REG_ZERO(ppcnt, payload); +- mlxsw_reg_ppcnt_swid_set(payload, 0); +- mlxsw_reg_ppcnt_local_port_set(payload, local_port); +- mlxsw_reg_ppcnt_pnat_set(payload, 0); +- mlxsw_reg_ppcnt_grp_set(payload, grp); +- mlxsw_reg_ppcnt_clr_set(payload, 0); +- mlxsw_reg_ppcnt_prio_tc_set(payload, prio_tc); ++ MLXSW_REG_ZERO(percr, payload); ++ mlxsw_reg_percr_region_id_set(payload, region_id); ++ mlxsw_reg_percr_atcam_ignore_prune_set(payload, false); ++ mlxsw_reg_percr_ctcam_ignore_prune_set(payload, false); ++ mlxsw_reg_percr_bf_bypass_set(payload, true); + } + +-/* PPTB - Port Prio To Buffer Register +- * ----------------------------------- +- * Configures the switch priority to buffer table. ++/* PERERP - Policy-Engine Region eRP Register ++ * ------------------------------------------ ++ * This register configures the region eRP. The region_id must be ++ * allocated. + */ +-#define MLXSW_REG_PPTB_ID 0x500B +-#define MLXSW_REG_PPTB_LEN 0x10 +- +-static const struct mlxsw_reg_info mlxsw_reg_pptb = { +- .id = MLXSW_REG_PPTB_ID, +- .len = MLXSW_REG_PPTB_LEN, +-}; +- +-enum { +- MLXSW_REG_PPTB_MM_UM, +- MLXSW_REG_PPTB_MM_UNICAST, +- MLXSW_REG_PPTB_MM_MULTICAST, +-}; ++#define MLXSW_REG_PERERP_ID 0x302B ++#define MLXSW_REG_PERERP_LEN 0x1C + +-/* reg_pptb_mm +- * Mapping mode. +- * 0 - Map both unicast and multicast packets to the same buffer. +- * 1 - Map only unicast packets. +- * 2 - Map only multicast packets. 
+- * Access: Index
+- *
+- * Note: SwitchX-2 only supports the first option.
+- */
+-MLXSW_ITEM32(reg, pptb, mm, 0x00, 28, 2);
++MLXSW_REG_DEFINE(pererp, MLXSW_REG_PERERP_ID, MLXSW_REG_PERERP_LEN);
+
+-/* reg_pptb_local_port
+- * Local port number.
++/* reg_pererp_region_id
++ * Region identifier.
++ * Range 0..cap_max_regions-1
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, pptb, local_port, 0x00, 16, 8);
++MLXSW_ITEM32(reg, pererp, region_id, 0x00, 0, 16);
+
+-/* reg_pptb_um
+- * Enables the update of the untagged_buf field.
++/* reg_pererp_ctcam_le
++ * C-TCAM lookup enable. Reserved when erpt_pointer_valid = 0.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, pptb, um, 0x00, 8, 1);
++MLXSW_ITEM32(reg, pererp, ctcam_le, 0x04, 28, 1);
+
+-/* reg_pptb_pm
+- * Enables the update of the prio_to_buff field.
+- * Bit is a flag for updating the mapping for switch priority .
++/* reg_pererp_erpt_pointer_valid
++ * erpt_pointer is valid.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, pptb, pm, 0x00, 0, 8);
++MLXSW_ITEM32(reg, pererp, erpt_pointer_valid, 0x10, 31, 1);
+
+-/* reg_pptb_prio_to_buff
+- * Mapping of switch priority to one of the allocated receive port
+- * buffers.
+- * Access: RW
++/* reg_pererp_erpt_bank_pointer
++ * Pointer to eRP table bank. May be modified at any time.
++ * Range 0..cap_max_erp_table_banks-1
++ * Reserved when erpt_pointer_valid = 0
++ * Access: RW
+ */
+-MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff, 0x04, 0x04, 4);
++MLXSW_ITEM32(reg, pererp, erpt_bank_pointer, 0x10, 16, 4);
+
+-/* reg_pptb_pm_msb
+- * Enables the update of the prio_to_buff field.
+- * Bit is a flag for updating the mapping for switch priority .
++/* reg_pererp_erpt_pointer
++ * Pointer to eRP table within the eRP bank. Can be changed for an
++ * existing region.
++ * Range 0..cap_max_erp_table_size-1 ++ * Reserved when erpt_pointer_valid = 0 + * Access: RW + */ +-MLXSW_ITEM32(reg, pptb, pm_msb, 0x08, 24, 8); ++MLXSW_ITEM32(reg, pererp, erpt_pointer, 0x10, 0, 8); + +-/* reg_pptb_untagged_buff +- * Mapping of untagged frames to one of the allocated receive port buffers. ++/* reg_pererp_erpt_vector ++ * Vector of allowed eRP indexes starting from erpt_pointer within the ++ * erpt_bank_pointer. Next entries will be in next bank. ++ * Note that eRP index is used and not eRP ID. ++ * Reserved when erpt_pointer_valid = 0 + * Access: RW +- * +- * Note: In SwitchX-2 this field must be mapped to buffer 8. Reserved for +- * Spectrum, as it maps untagged packets based on the default switch priority. + */ +-MLXSW_ITEM32(reg, pptb, untagged_buff, 0x08, 0, 4); ++MLXSW_ITEM_BIT_ARRAY(reg, pererp, erpt_vector, 0x14, 4, 1); + +-/* reg_pptb_prio_to_buff_msb +- * Mapping of switch priority to one of the allocated receive port +- * buffers. ++/* reg_pererp_master_rp_id ++ * Master RP ID. When there are no eRPs, then this provides the eRP ID ++ * for the lookup. Can be changed for an existing region. 
++ * Reserved when erpt_pointer_valid = 1 + * Access: RW + */ +-MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff_msb, 0x0C, 0x04, 4); ++MLXSW_ITEM32(reg, pererp, master_rp_id, 0x18, 0, 4); + +-#define MLXSW_REG_PPTB_ALL_PRIO 0xFF +- +-static inline void mlxsw_reg_pptb_pack(char *payload, u8 local_port) ++static inline void mlxsw_reg_pererp_erp_vector_pack(char *payload, ++ unsigned long *erp_vector, ++ unsigned long size) + { +- MLXSW_REG_ZERO(pptb, payload); +- mlxsw_reg_pptb_mm_set(payload, MLXSW_REG_PPTB_MM_UM); +- mlxsw_reg_pptb_local_port_set(payload, local_port); +- mlxsw_reg_pptb_pm_set(payload, MLXSW_REG_PPTB_ALL_PRIO); +- mlxsw_reg_pptb_pm_msb_set(payload, MLXSW_REG_PPTB_ALL_PRIO); ++ unsigned long bit; ++ ++ for_each_set_bit(bit, erp_vector, size) ++ mlxsw_reg_pererp_erpt_vector_set(payload, bit, true); + } + +-static inline void mlxsw_reg_pptb_prio_to_buff_pack(char *payload, u8 prio, +- u8 buff) ++static inline void mlxsw_reg_pererp_pack(char *payload, u16 region_id, ++ bool ctcam_le, bool erpt_pointer_valid, ++ u8 erpt_bank_pointer, u8 erpt_pointer, ++ u8 master_rp_id) + { +- mlxsw_reg_pptb_prio_to_buff_set(payload, prio, buff); +- mlxsw_reg_pptb_prio_to_buff_msb_set(payload, prio, buff); ++ MLXSW_REG_ZERO(pererp, payload); ++ mlxsw_reg_pererp_region_id_set(payload, region_id); ++ mlxsw_reg_pererp_ctcam_le_set(payload, ctcam_le); ++ mlxsw_reg_pererp_erpt_pointer_valid_set(payload, erpt_pointer_valid); ++ mlxsw_reg_pererp_erpt_bank_pointer_set(payload, erpt_bank_pointer); ++ mlxsw_reg_pererp_erpt_pointer_set(payload, erpt_pointer); ++ mlxsw_reg_pererp_master_rp_id_set(payload, master_rp_id); + } + +-/* PBMC - Port Buffer Management Control Register +- * ---------------------------------------------- +- * The PBMC register configures and retrieves the port packet buffer +- * allocation for different Prios, and the Pause threshold management. 
++/* IEDR - Infrastructure Entry Delete Register
++ * ----------------------------------------------------
++ * This register is used for deleting entries from the entry tables.
++ * It is legitimate to attempt to delete a nonexistent entry (the device will
++ * respond as a good flow).
++ */
++#define MLXSW_REG_IEDR_ID 0x3804
++#define MLXSW_REG_IEDR_BASE_LEN 0x10 /* base length, without records */
++#define MLXSW_REG_IEDR_REC_LEN 0x8 /* record length */
++#define MLXSW_REG_IEDR_REC_MAX_COUNT 64
++#define MLXSW_REG_IEDR_LEN (MLXSW_REG_IEDR_BASE_LEN + \
++ MLXSW_REG_IEDR_REC_LEN * \
++ MLXSW_REG_IEDR_REC_MAX_COUNT)
++
++MLXSW_REG_DEFINE(iedr, MLXSW_REG_IEDR_ID, MLXSW_REG_IEDR_LEN);
++
++/* reg_iedr_num_rec
++ * Number of records.
++ * Access: OP
+ */
+-#define MLXSW_REG_PBMC_ID 0x500C
+-#define MLXSW_REG_PBMC_LEN 0x6C
+-
+-static const struct mlxsw_reg_info mlxsw_reg_pbmc = {
+- .id = MLXSW_REG_PBMC_ID,
+- .len = MLXSW_REG_PBMC_LEN,
+-};
++MLXSW_ITEM32(reg, iedr, num_rec, 0x00, 0, 8);
+
+-/* reg_pbmc_local_port
+- * Local port number.
+- * Access: Index
++/* reg_iedr_rec_type
++ * Resource type.
++ * Access: OP
+ */
+-MLXSW_ITEM32(reg, pbmc, local_port, 0x00, 16, 8);
++MLXSW_ITEM32_INDEXED(reg, iedr, rec_type, MLXSW_REG_IEDR_BASE_LEN, 24, 8,
++ MLXSW_REG_IEDR_REC_LEN, 0x00, false);
+
+-/* reg_pbmc_xoff_timer_value
+- * When device generates a pause frame, it uses this value as the pause
+- * timer (time for the peer port to pause in quota-512 bit time).
+- * Access: RW
++/* reg_iedr_rec_size
++ * Size of entries to be deleted. The unit is 1 entry, regardless of entry type.
++ * Access: OP
+ */
+-MLXSW_ITEM32(reg, pbmc, xoff_timer_value, 0x04, 16, 16);
++MLXSW_ITEM32_INDEXED(reg, iedr, rec_size, MLXSW_REG_IEDR_BASE_LEN, 0, 11,
++ MLXSW_REG_IEDR_REC_LEN, 0x00, false);
+
+-/* reg_pbmc_xoff_refresh
+- * The time before a new pause frame should be sent to refresh the pause RW
+- * state. Using the same units as xoff_timer_value above (in quota-512 bit
+- * time).
+- * Access: RW ++/* reg_iedr_rec_index_start ++ * Resource index start. ++ * Access: OP + */ +-MLXSW_ITEM32(reg, pbmc, xoff_refresh, 0x04, 0, 16); ++MLXSW_ITEM32_INDEXED(reg, iedr, rec_index_start, MLXSW_REG_IEDR_BASE_LEN, 0, 24, ++ MLXSW_REG_IEDR_REC_LEN, 0x04, false); + +-#define MLXSW_REG_PBMC_PORT_SHARED_BUF_IDX 11 ++static inline void mlxsw_reg_iedr_pack(char *payload) ++{ ++ MLXSW_REG_ZERO(iedr, payload); ++} + +-/* reg_pbmc_buf_lossy +- * The field indicates if the buffer is lossy. +- * 0 - Lossless +- * 1 - Lossy +- * Access: RW +- */ +-MLXSW_ITEM32_INDEXED(reg, pbmc, buf_lossy, 0x0C, 25, 1, 0x08, 0x00, false); ++static inline void mlxsw_reg_iedr_rec_pack(char *payload, int rec_index, ++ u8 rec_type, u16 rec_size, ++ u32 rec_index_start) ++{ ++ u8 num_rec = mlxsw_reg_iedr_num_rec_get(payload); + +-/* reg_pbmc_buf_epsb +- * Eligible for Port Shared buffer. +- * If epsb is set, packets assigned to buffer are allowed to insert the port +- * shared buffer. +- * When buf_lossy is MLXSW_REG_PBMC_LOSSY_LOSSY this field is reserved. +- * Access: RW +- */ +-MLXSW_ITEM32_INDEXED(reg, pbmc, buf_epsb, 0x0C, 24, 1, 0x08, 0x00, false); ++ if (rec_index >= num_rec) ++ mlxsw_reg_iedr_num_rec_set(payload, rec_index + 1); ++ mlxsw_reg_iedr_rec_type_set(payload, rec_index, rec_type); ++ mlxsw_reg_iedr_rec_size_set(payload, rec_index, rec_size); ++ mlxsw_reg_iedr_rec_index_start_set(payload, rec_index, rec_index_start); ++} + +-/* reg_pbmc_buf_size +- * The part of the packet buffer array is allocated for the specific buffer. +- * Units are represented in cells. +- * Access: RW ++/* QPTS - QoS Priority Trust State Register ++ * ---------------------------------------- ++ * This register controls the port policy to calculate the switch priority and ++ * packet color based on incoming packet fields. 
+ */ +-MLXSW_ITEM32_INDEXED(reg, pbmc, buf_size, 0x0C, 0, 16, 0x08, 0x00, false); ++#define MLXSW_REG_QPTS_ID 0x4002 ++#define MLXSW_REG_QPTS_LEN 0x8 + +-/* reg_pbmc_buf_xoff_threshold +- * Once the amount of data in the buffer goes above this value, device +- * starts sending PFC frames for all priorities associated with the +- * buffer. Units are represented in cells. Reserved in case of lossy +- * buffer. +- * Access: RW ++MLXSW_REG_DEFINE(qpts, MLXSW_REG_QPTS_ID, MLXSW_REG_QPTS_LEN); ++ ++/* reg_qpts_local_port ++ * Local port number. ++ * Access: Index + * +- * Note: In Spectrum, reserved for buffer[9]. ++ * Note: CPU port is supported. + */ +-MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xoff_threshold, 0x0C, 16, 16, +- 0x08, 0x04, false); ++MLXSW_ITEM32(reg, qpts, local_port, 0x00, 16, 8); + +-/* reg_pbmc_buf_xon_threshold +- * When the amount of data in the buffer goes below this value, device +- * stops sending PFC frames for the priorities associated with the +- * buffer. Units are represented in cells. Reserved in case of lossy +- * buffer. ++enum mlxsw_reg_qpts_trust_state { ++ MLXSW_REG_QPTS_TRUST_STATE_PCP = 1, ++ MLXSW_REG_QPTS_TRUST_STATE_DSCP = 2, /* For MPLS, trust EXP. */ ++}; ++ ++/* reg_qpts_trust_state ++ * Trust state for a given port. + * Access: RW +- * +- * Note: In Spectrum, reserved for buffer[9]. 
+ */
+-MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xon_threshold, 0x0C, 0, 16,
+- 0x08, 0x04, false);
++MLXSW_ITEM32(reg, qpts, trust_state, 0x04, 0, 3);
+
+-static inline void mlxsw_reg_pbmc_pack(char *payload, u8 local_port,
+- u16 xoff_timer_value, u16 xoff_refresh)
+-{
+- MLXSW_REG_ZERO(pbmc, payload);
+- mlxsw_reg_pbmc_local_port_set(payload, local_port);
+- mlxsw_reg_pbmc_xoff_timer_value_set(payload, xoff_timer_value);
+- mlxsw_reg_pbmc_xoff_refresh_set(payload, xoff_refresh);
+-}
+-
+-static inline void mlxsw_reg_pbmc_lossy_buffer_pack(char *payload,
+- int buf_index,
+- u16 size)
++static inline void mlxsw_reg_qpts_pack(char *payload, u8 local_port,
++ enum mlxsw_reg_qpts_trust_state ts)
+ {
+- mlxsw_reg_pbmc_buf_lossy_set(payload, buf_index, 1);
+- mlxsw_reg_pbmc_buf_epsb_set(payload, buf_index, 0);
+- mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size);
+-}
++ MLXSW_REG_ZERO(qpts, payload);
+
+-static inline void mlxsw_reg_pbmc_lossless_buffer_pack(char *payload,
+- int buf_index, u16 size,
+- u16 threshold)
+-{
+- mlxsw_reg_pbmc_buf_lossy_set(payload, buf_index, 0);
+- mlxsw_reg_pbmc_buf_epsb_set(payload, buf_index, 0);
+- mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size);
+- mlxsw_reg_pbmc_buf_xoff_threshold_set(payload, buf_index, threshold);
+- mlxsw_reg_pbmc_buf_xon_threshold_set(payload, buf_index, threshold);
++ mlxsw_reg_qpts_local_port_set(payload, local_port);
++ mlxsw_reg_qpts_trust_state_set(payload, ts);
+ }
+
+-/* PSPA - Port Switch Partition Allocation
+- * ---------------------------------------
+- * Controls the association of a port with a switch partition and enables
+- * configuring ports as stacking ports.
++/* QPCR - QoS Policer Configuration Register
++ * -----------------------------------------
++ * The QPCR register is used to create policers that limit
++ * the rate of bytes or packets via some trap group.
+ */
+-#define MLXSW_REG_PSPA_ID 0x500D
+-#define MLXSW_REG_PSPA_LEN 0x8
++#define MLXSW_REG_QPCR_ID 0x4004
++#define MLXSW_REG_QPCR_LEN 0x28
+
+-static const struct mlxsw_reg_info mlxsw_reg_pspa = {
+- .id = MLXSW_REG_PSPA_ID,
+- .len = MLXSW_REG_PSPA_LEN,
++MLXSW_REG_DEFINE(qpcr, MLXSW_REG_QPCR_ID, MLXSW_REG_QPCR_LEN);
++
++enum mlxsw_reg_qpcr_g {
++ MLXSW_REG_QPCR_G_GLOBAL = 2,
++ MLXSW_REG_QPCR_G_STORM_CONTROL = 3,
+ };
+
+-/* reg_pspa_swid
+- * Switch partition ID.
+- * Access: RW
++/* reg_qpcr_g
++ * The policer type.
++ * Access: Index
+ */
+-MLXSW_ITEM32(reg, pspa, swid, 0x00, 24, 8);
++MLXSW_ITEM32(reg, qpcr, g, 0x00, 14, 2);
+
+-/* reg_pspa_local_port
+- * Local port number.
++/* reg_qpcr_pid
++ * Policer ID.
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, pspa, local_port, 0x00, 16, 8);
++MLXSW_ITEM32(reg, qpcr, pid, 0x00, 0, 14);
+
+-/* reg_pspa_sub_port
+- * Virtual port within the local port. Set to 0 when virtual ports are
+- * disabled on the local port.
+- * Access: Index
++/* reg_qpcr_color_aware
++ * Is the policer aware of colors.
++ * Must be 0 (unaware) for cpu port.
++ * Access: RW for unbounded policer. RO for bounded policer.
+ */
+-MLXSW_ITEM32(reg, pspa, sub_port, 0x00, 8, 8);
++MLXSW_ITEM32(reg, qpcr, color_aware, 0x04, 15, 1);
+
+-static inline void mlxsw_reg_pspa_pack(char *payload, u8 swid, u8 local_port)
+-{
+- MLXSW_REG_ZERO(pspa, payload);
+- mlxsw_reg_pspa_swid_set(payload, swid);
+- mlxsw_reg_pspa_local_port_set(payload, local_port);
+- mlxsw_reg_pspa_sub_port_set(payload, 0);
+-}
+-
+-/* HTGT - Host Trap Group Table
+- * ----------------------------
+- * Configures the properties for forwarding to CPU.
++/* reg_qpcr_bytes
++ * Is the policer limit for bytes per sec or packets per sec.
++ * 0 - packets
++ * 1 - bytes
++ * Access: RW for unbounded policer. RO for bounded policer.
+ */
+-#define MLXSW_REG_HTGT_ID 0x7002
+-#define MLXSW_REG_HTGT_LEN 0x100
++MLXSW_ITEM32(reg, qpcr, bytes, 0x04, 14, 1);
+
+-static const struct mlxsw_reg_info mlxsw_reg_htgt = {
+- .id = MLXSW_REG_HTGT_ID,
+- .len = MLXSW_REG_HTGT_LEN,
++enum mlxsw_reg_qpcr_ir_units {
++ MLXSW_REG_QPCR_IR_UNITS_M,
++ MLXSW_REG_QPCR_IR_UNITS_K,
+ };
+
+-/* reg_htgt_swid
+- * Switch partition ID.
+- * Access: Index
+- */
+-MLXSW_ITEM32(reg, htgt, swid, 0x00, 24, 8);
+-
+-#define MLXSW_REG_HTGT_PATH_TYPE_LOCAL 0x0 /* For locally attached CPU */
+-
+-/* reg_htgt_type
+- * CPU path type.
+- * Access: RW
++/* reg_qpcr_ir_units
++ * Policer's units for cir and eir fields (for bytes limits only)
++ * 1 - 10^3
++ * 0 - 10^6
++ * Access: OP
+ */
+-MLXSW_ITEM32(reg, htgt, type, 0x00, 8, 4);
++MLXSW_ITEM32(reg, qpcr, ir_units, 0x04, 12, 1);
+
+-enum mlxsw_reg_htgt_trap_group {
+- MLXSW_REG_HTGT_TRAP_GROUP_EMAD,
+- MLXSW_REG_HTGT_TRAP_GROUP_RX,
+- MLXSW_REG_HTGT_TRAP_GROUP_CTRL,
++enum mlxsw_reg_qpcr_rate_type {
++ MLXSW_REG_QPCR_RATE_TYPE_SINGLE = 1,
++ MLXSW_REG_QPCR_RATE_TYPE_DOUBLE = 2,
+ };
+
+-/* reg_htgt_trap_group
+- * Trap group number. User defined number specifying which trap groups
+- * should be forwarded to the CPU. The mapping between trap IDs and trap
+- * groups is configured using HPKT register.
+- * Access: Index
++/* reg_qpcr_rate_type
++ * Policer can have one limit (single rate) or 2 limits with specific operation
++ * for packets that exceed the lower rate but not the upper one.
++ * (For cpu port must be single rate)
++ * Access: RW for unbounded policer. RO for bounded policer.
+ */
+-MLXSW_ITEM32(reg, htgt, trap_group, 0x00, 0, 8);
++MLXSW_ITEM32(reg, qpcr, rate_type, 0x04, 8, 2);
+
+-enum {
+- MLXSW_REG_HTGT_POLICER_DISABLE,
+- MLXSW_REG_HTGT_POLICER_ENABLE,
+-};
++/* reg_qpcr_cbs
++ * Policer's committed burst size.
++ * The policer is working with time slices of 50 nano sec. By default every
++ * slice is granted the proportionate share of the committed rate. If we want to
++ * allow a slice to exceed that share (while still keeping the rate per sec) we
++ * can allow burst. The burst size is between the default proportionate share
++ * (and no lower than 8) to 32Gb. (Even though giving a number higher than the
++ * committed rate will result in exceeding the rate). The burst size must be a
++ * power of 2 and is determined by 2^cbs.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, qpcr, cbs, 0x08, 24, 6);
+
+-/* reg_htgt_pide
+- * Enable policer ID specified using 'pid' field.
++/* reg_qpcr_cir
++ * Policer's committed rate.
++ * The rate used for single rate, the lower rate for double rate.
++ * For bytes limits, the rate will be this value * the unit from ir_units.
++ * (Resolution error is up to 1%).
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, htgt, pide, 0x04, 15, 1);
++MLXSW_ITEM32(reg, qpcr, cir, 0x0C, 0, 32);
+
+-/* reg_htgt_pid
+- * Policer ID for the trap group.
++/* reg_qpcr_eir
++ * Policer's exceed rate.
++ * The higher rate for double rate, reserved for single rate.
++ * (The lower rate of a double rate policer is given by cir.)
++ * For bytes limits, the rate will be this value * the unit from ir_units.
++ * (Resolution error is up to 1%).
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, htgt, pid, 0x04, 0, 8);
++MLXSW_ITEM32(reg, qpcr, eir, 0x10, 0, 32);
+
+-#define MLXSW_REG_HTGT_TRAP_TO_CPU 0x0
++#define MLXSW_REG_QPCR_DOUBLE_RATE_ACTION 2
+
+-/* reg_htgt_mirror_action
+- * Mirror action to use.
+- * 0 - Trap to CPU.
+- * 1 - Trap to CPU and mirror to a mirroring agent.
+- * 2 - Mirror to a mirroring agent and do not trap to CPU.
+- * Access: RW
+- *
+- * Note: Mirroring to a mirroring agent is only supported in Spectrum.
++/* reg_qpcr_exceed_action
++ * What to do with packets between the 2 limits for double rate.
++ * Access: RW for unbounded policer. RO for bounded policer.
+ */
+-MLXSW_ITEM32(reg, htgt, mirror_action, 0x08, 8, 2);
++MLXSW_ITEM32(reg, qpcr, exceed_action, 0x14, 0, 4);
+
+-/* reg_htgt_mirroring_agent
+- * Mirroring agent.
+- * Access: RW ++enum mlxsw_reg_qpcr_action { ++ /* Discard */ ++ MLXSW_REG_QPCR_ACTION_DISCARD = 1, ++ /* Forward and set color to red. ++ * If the packet is intended to cpu port, it will be dropped. ++ */ ++ MLXSW_REG_QPCR_ACTION_FORWARD = 2, ++}; ++ ++/* reg_qpcr_violate_action ++ * What to do with packets that cross the cir limit (for single rate) or the eir ++ * limit (for double rate). ++ * Access: RW for unbounded policer. RO for bounded policer. + */ +-MLXSW_ITEM32(reg, htgt, mirroring_agent, 0x08, 0, 3); ++MLXSW_ITEM32(reg, qpcr, violate_action, 0x18, 0, 4); + +-/* reg_htgt_priority +- * Trap group priority. +- * In case a packet matches multiple classification rules, the packet will +- * only be trapped once, based on the trap ID associated with the group (via +- * register HPKT) with the highest priority. +- * Supported values are 0-7, with 7 represnting the highest priority. +- * Access: RW ++static inline void mlxsw_reg_qpcr_pack(char *payload, u16 pid, ++ enum mlxsw_reg_qpcr_ir_units ir_units, ++ bool bytes, u32 cir, u16 cbs) ++{ ++ MLXSW_REG_ZERO(qpcr, payload); ++ mlxsw_reg_qpcr_pid_set(payload, pid); ++ mlxsw_reg_qpcr_g_set(payload, MLXSW_REG_QPCR_G_GLOBAL); ++ mlxsw_reg_qpcr_rate_type_set(payload, MLXSW_REG_QPCR_RATE_TYPE_SINGLE); ++ mlxsw_reg_qpcr_violate_action_set(payload, ++ MLXSW_REG_QPCR_ACTION_DISCARD); ++ mlxsw_reg_qpcr_cir_set(payload, cir); ++ mlxsw_reg_qpcr_ir_units_set(payload, ir_units); ++ mlxsw_reg_qpcr_bytes_set(payload, bytes); ++ mlxsw_reg_qpcr_cbs_set(payload, cbs); ++} ++ ++/* QTCT - QoS Switch Traffic Class Table ++ * ------------------------------------- ++ * Configures the mapping between the packet switch priority and the ++ * traffic class on the transmit port. ++ */ ++#define MLXSW_REG_QTCT_ID 0x400A ++#define MLXSW_REG_QTCT_LEN 0x08 ++ ++MLXSW_REG_DEFINE(qtct, MLXSW_REG_QTCT_ID, MLXSW_REG_QTCT_LEN); ++ ++/* reg_qtct_local_port ++ * Local port number. 
++ * Access: Index + * +- * Note: In SwitchX-2 this field is ignored and the priority value is replaced +- * by the 'trap_group' field. ++ * Note: CPU port is not supported. + */ +-MLXSW_ITEM32(reg, htgt, priority, 0x0C, 0, 4); ++MLXSW_ITEM32(reg, qtct, local_port, 0x00, 16, 8); + +-/* reg_htgt_local_path_cpu_tclass +- * CPU ingress traffic class for the trap group. +- * Access: RW ++/* reg_qtct_sub_port ++ * Virtual port within the physical port. ++ * Should be set to 0 when virtual ports are not enabled on the port. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, htgt, local_path_cpu_tclass, 0x10, 16, 6); ++MLXSW_ITEM32(reg, qtct, sub_port, 0x00, 8, 8); + +-#define MLXSW_REG_HTGT_LOCAL_PATH_RDQ_EMAD 0x15 +-#define MLXSW_REG_HTGT_LOCAL_PATH_RDQ_RX 0x14 +-#define MLXSW_REG_HTGT_LOCAL_PATH_RDQ_CTRL 0x13 ++/* reg_qtct_switch_prio ++ * Switch priority. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, qtct, switch_prio, 0x00, 0, 4); + +-/* reg_htgt_local_path_rdq +- * Receive descriptor queue (RDQ) to use for the trap group. ++/* reg_qtct_tclass ++ * Traffic class. 
++ * Default values: ++ * switch_prio 0 : tclass 1 ++ * switch_prio 1 : tclass 0 ++ * switch_prio i : tclass i, for i > 1 + * Access: RW + */ +-MLXSW_ITEM32(reg, htgt, local_path_rdq, 0x10, 0, 6); ++MLXSW_ITEM32(reg, qtct, tclass, 0x04, 0, 4); + +-static inline void mlxsw_reg_htgt_pack(char *payload, +- enum mlxsw_reg_htgt_trap_group group) ++static inline void mlxsw_reg_qtct_pack(char *payload, u8 local_port, ++ u8 switch_prio, u8 tclass) + { +- u8 swid, rdq; +- +- MLXSW_REG_ZERO(htgt, payload); +- switch (group) { +- case MLXSW_REG_HTGT_TRAP_GROUP_EMAD: +- swid = MLXSW_PORT_SWID_ALL_SWIDS; +- rdq = MLXSW_REG_HTGT_LOCAL_PATH_RDQ_EMAD; +- break; +- case MLXSW_REG_HTGT_TRAP_GROUP_RX: +- swid = 0; +- rdq = MLXSW_REG_HTGT_LOCAL_PATH_RDQ_RX; +- break; +- case MLXSW_REG_HTGT_TRAP_GROUP_CTRL: +- swid = 0; +- rdq = MLXSW_REG_HTGT_LOCAL_PATH_RDQ_CTRL; +- break; +- } +- mlxsw_reg_htgt_swid_set(payload, swid); +- mlxsw_reg_htgt_type_set(payload, MLXSW_REG_HTGT_PATH_TYPE_LOCAL); +- mlxsw_reg_htgt_trap_group_set(payload, group); +- mlxsw_reg_htgt_pide_set(payload, MLXSW_REG_HTGT_POLICER_DISABLE); +- mlxsw_reg_htgt_pid_set(payload, 0); +- mlxsw_reg_htgt_mirror_action_set(payload, MLXSW_REG_HTGT_TRAP_TO_CPU); +- mlxsw_reg_htgt_mirroring_agent_set(payload, 0); +- mlxsw_reg_htgt_priority_set(payload, 0); +- mlxsw_reg_htgt_local_path_cpu_tclass_set(payload, 7); +- mlxsw_reg_htgt_local_path_rdq_set(payload, rdq); ++ MLXSW_REG_ZERO(qtct, payload); ++ mlxsw_reg_qtct_local_port_set(payload, local_port); ++ mlxsw_reg_qtct_switch_prio_set(payload, switch_prio); ++ mlxsw_reg_qtct_tclass_set(payload, tclass); + } + +-/* HPKT - Host Packet Trap +- * ----------------------- +- * Configures trap IDs inside trap groups. ++/* QEEC - QoS ETS Element Configuration Register ++ * --------------------------------------------- ++ * Configures the ETS elements. 
+ */ +-#define MLXSW_REG_HPKT_ID 0x7003 +-#define MLXSW_REG_HPKT_LEN 0x10 +- +-static const struct mlxsw_reg_info mlxsw_reg_hpkt = { +- .id = MLXSW_REG_HPKT_ID, +- .len = MLXSW_REG_HPKT_LEN, +-}; ++#define MLXSW_REG_QEEC_ID 0x400D ++#define MLXSW_REG_QEEC_LEN 0x20 + +-enum { +- MLXSW_REG_HPKT_ACK_NOT_REQUIRED, +- MLXSW_REG_HPKT_ACK_REQUIRED, +-}; ++MLXSW_REG_DEFINE(qeec, MLXSW_REG_QEEC_ID, MLXSW_REG_QEEC_LEN); + +-/* reg_hpkt_ack +- * Require acknowledgements from the host for events. +- * If set, then the device will wait for the event it sent to be acknowledged +- * by the host. This option is only relevant for event trap IDs. +- * Access: RW ++/* reg_qeec_local_port ++ * Local port number. ++ * Access: Index + * +- * Note: Currently not supported by firmware. ++ * Note: CPU port is supported. + */ +-MLXSW_ITEM32(reg, hpkt, ack, 0x00, 24, 1); ++MLXSW_ITEM32(reg, qeec, local_port, 0x00, 16, 8); + +-enum mlxsw_reg_hpkt_action { +- MLXSW_REG_HPKT_ACTION_FORWARD, +- MLXSW_REG_HPKT_ACTION_TRAP_TO_CPU, +- MLXSW_REG_HPKT_ACTION_MIRROR_TO_CPU, +- MLXSW_REG_HPKT_ACTION_DISCARD, +- MLXSW_REG_HPKT_ACTION_SOFT_DISCARD, +- MLXSW_REG_HPKT_ACTION_TRAP_AND_SOFT_DISCARD, ++enum mlxsw_reg_qeec_hr { ++ MLXSW_REG_QEEC_HIERARCY_PORT, ++ MLXSW_REG_QEEC_HIERARCY_GROUP, ++ MLXSW_REG_QEEC_HIERARCY_SUBGROUP, ++ MLXSW_REG_QEEC_HIERARCY_TC, + }; + +-/* reg_hpkt_action +- * Action to perform on packet when trapped. +- * 0 - No action. Forward to CPU based on switching rules. +- * 1 - Trap to CPU (CPU receives sole copy). +- * 2 - Mirror to CPU (CPU receives a replica of the packet). +- * 3 - Discard. +- * 4 - Soft discard (allow other traps to act on the packet). +- * 5 - Trap and soft discard (allow other traps to overwrite this trap). +- * Access: RW +- * +- * Note: Must be set to 0 (forward) for event trap IDs, as they are already +- * addressed to the CPU. 
++/* reg_qeec_element_hierarchy ++ * 0 - Port ++ * 1 - Group ++ * 2 - Subgroup ++ * 3 - Traffic Class ++ * Access: Index + */ +-MLXSW_ITEM32(reg, hpkt, action, 0x00, 20, 3); ++MLXSW_ITEM32(reg, qeec, element_hierarchy, 0x04, 16, 4); + +-/* reg_hpkt_trap_group +- * Trap group to associate the trap with. +- * Access: RW ++/* reg_qeec_element_index ++ * The index of the element in the hierarchy. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, hpkt, trap_group, 0x00, 12, 6); ++MLXSW_ITEM32(reg, qeec, element_index, 0x04, 0, 8); + +-/* reg_hpkt_trap_id +- * Trap ID. +- * Access: Index ++/* reg_qeec_next_element_index ++ * The index of the next (lower) element in the hierarchy. ++ * Access: RW + * +- * Note: A trap ID can only be associated with a single trap group. The device +- * will associate the trap ID with the last trap group configured. ++ * Note: Reserved for element_hierarchy 0. + */ +-MLXSW_ITEM32(reg, hpkt, trap_id, 0x00, 0, 9); +- +-enum { +- MLXSW_REG_HPKT_CTRL_PACKET_DEFAULT, +- MLXSW_REG_HPKT_CTRL_PACKET_NO_BUFFER, +- MLXSW_REG_HPKT_CTRL_PACKET_USE_BUFFER, +-}; ++MLXSW_ITEM32(reg, qeec, next_element_index, 0x08, 0, 8); + +-/* reg_hpkt_ctrl +- * Configure dedicated buffer resources for control packets. +- * 0 - Keep factory defaults. +- * 1 - Do not use control buffer for this trap ID. +- * 2 - Use control buffer for this trap ID. ++/* reg_qeec_mise ++ * Min shaper configuration enable. 
Enables configuration of the min ++ * shaper on this ETS element ++ * 0 - Disable ++ * 1 - Enable + * Access: RW + */ +-MLXSW_ITEM32(reg, hpkt, ctrl, 0x04, 16, 2); ++MLXSW_ITEM32(reg, qeec, mise, 0x0C, 31, 1); + +-static inline void mlxsw_reg_hpkt_pack(char *payload, u8 action, u16 trap_id) +-{ +- enum mlxsw_reg_htgt_trap_group trap_group; +- +- MLXSW_REG_ZERO(hpkt, payload); +- mlxsw_reg_hpkt_ack_set(payload, MLXSW_REG_HPKT_ACK_NOT_REQUIRED); +- mlxsw_reg_hpkt_action_set(payload, action); +- switch (trap_id) { +- case MLXSW_TRAP_ID_ETHEMAD: +- case MLXSW_TRAP_ID_PUDE: +- trap_group = MLXSW_REG_HTGT_TRAP_GROUP_EMAD; +- break; +- default: +- trap_group = MLXSW_REG_HTGT_TRAP_GROUP_RX; +- break; +- } +- mlxsw_reg_hpkt_trap_group_set(payload, trap_group); +- mlxsw_reg_hpkt_trap_id_set(payload, trap_id); +- mlxsw_reg_hpkt_ctrl_set(payload, MLXSW_REG_HPKT_CTRL_PACKET_DEFAULT); +-} ++enum { ++ MLXSW_REG_QEEC_BYTES_MODE, ++ MLXSW_REG_QEEC_PACKETS_MODE, ++}; + +-/* RGCR - Router General Configuration Register +- * -------------------------------------------- +- * The register is used for setting up the router configuration. ++/* reg_qeec_pb ++ * Packets or bytes mode. ++ * 0 - Bytes mode ++ * 1 - Packets mode ++ * Access: RW ++ * ++ * Note: Used for max shaper configuration. For Spectrum, packets mode ++ * is supported only for traffic classes of CPU port. + */ +-#define MLXSW_REG_RGCR_ID 0x8001 +-#define MLXSW_REG_RGCR_LEN 0x28 ++MLXSW_ITEM32(reg, qeec, pb, 0x0C, 28, 1); + +-static const struct mlxsw_reg_info mlxsw_reg_rgcr = { +- .id = MLXSW_REG_RGCR_ID, +- .len = MLXSW_REG_RGCR_LEN, +-}; ++/* The smallest permitted min shaper rate. */ ++#define MLXSW_REG_QEEC_MIS_MIN 200000 /* Kbps */ + +-/* reg_rgcr_ipv4_en +- * IPv4 router enable. ++/* reg_qeec_min_shaper_rate ++ * Min shaper information rate. ++ * For CPU port, can only be configured for port hierarchy. ++ * When in bytes mode, value is specified in units of 1000bps. 
+ * Access: RW + */ +-MLXSW_ITEM32(reg, rgcr, ipv4_en, 0x00, 31, 1); ++MLXSW_ITEM32(reg, qeec, min_shaper_rate, 0x0C, 0, 28); + +-/* reg_rgcr_ipv6_en +- * IPv6 router enable. ++/* reg_qeec_mase ++ * Max shaper configuration enable. Enables configuration of the max ++ * shaper on this ETS element. ++ * 0 - Disable ++ * 1 - Enable + * Access: RW + */ +-MLXSW_ITEM32(reg, rgcr, ipv6_en, 0x00, 30, 1); ++MLXSW_ITEM32(reg, qeec, mase, 0x10, 31, 1); + +-/* reg_rgcr_max_router_interfaces +- * Defines the maximum number of active router interfaces for all virtual +- * routers. ++/* A large max rate will disable the max shaper. */ ++#define MLXSW_REG_QEEC_MAS_DIS 200000000 /* Kbps */ ++ ++/* reg_qeec_max_shaper_rate ++ * Max shaper information rate. ++ * For CPU port, can only be configured for port hierarchy. ++ * When in bytes mode, value is specified in units of 1000bps. + * Access: RW + */ +-MLXSW_ITEM32(reg, rgcr, max_router_interfaces, 0x10, 0, 16); ++MLXSW_ITEM32(reg, qeec, max_shaper_rate, 0x10, 0, 28); + +-/* reg_rgcr_usp +- * Update switch priority and packet color. +- * 0 - Preserve the value of Switch Priority and packet color. +- * 1 - Recalculate the value of Switch Priority and packet color. ++/* reg_qeec_de ++ * DWRR configuration enable. Enables configuration of the dwrr and ++ * dwrr_weight. ++ * 0 - Disable ++ * 1 - Enable + * Access: RW +- * +- * Note: Not supported by SwitchX and SwitchX-2. + */ +-MLXSW_ITEM32(reg, rgcr, usp, 0x18, 20, 1); ++MLXSW_ITEM32(reg, qeec, de, 0x18, 31, 1); + +-/* reg_rgcr_pcp_rw +- * Indicates how to handle the pcp_rewrite_en value: +- * 0 - Preserve the value of pcp_rewrite_en. +- * 2 - Disable PCP rewrite. +- * 3 - Enable PCP rewrite. ++/* reg_qeec_dwrr ++ * Transmission selection algorithm to use on the link going down from ++ * the ETS element. ++ * 0 - Strict priority ++ * 1 - DWRR + * Access: RW +- * +- * Note: Not supported by SwitchX and SwitchX-2. 
+ */ +-MLXSW_ITEM32(reg, rgcr, pcp_rw, 0x18, 16, 2); ++MLXSW_ITEM32(reg, qeec, dwrr, 0x18, 15, 1); + +-/* reg_rgcr_activity_dis +- * Activity disable: +- * 0 - Activity will be set when an entry is hit (default). +- * 1 - Activity will not be set when an entry is hit. +- * +- * Bit 0 - Disable activity bit in Router Algorithmic LPM Unicast Entry +- * (RALUE). +- * Bit 1 - Disable activity bit in Router Algorithmic LPM Unicast Host +- * Entry (RAUHT). +- * Bits 2:7 are reserved. ++/* reg_qeec_dwrr_weight ++ * DWRR weight on the link going down from the ETS element. The ++ * percentage of bandwidth guaranteed to an ETS element within ++ * its hierarchy. The sum of all weights across all ETS elements ++ * within one hierarchy should be equal to 100. Reserved when ++ * transmission selection algorithm is strict priority. + * Access: RW +- * +- * Note: Not supported by SwitchX, SwitchX-2 and Switch-IB. + */ +-MLXSW_ITEM32(reg, rgcr, activity_dis, 0x20, 0, 8); ++MLXSW_ITEM32(reg, qeec, dwrr_weight, 0x18, 0, 8); + +-static inline void mlxsw_reg_rgcr_pack(char *payload, bool ipv4_en) ++static inline void mlxsw_reg_qeec_pack(char *payload, u8 local_port, ++ enum mlxsw_reg_qeec_hr hr, u8 index, ++ u8 next_index) + { +- MLXSW_REG_ZERO(rgcr, payload); +- mlxsw_reg_rgcr_ipv4_en_set(payload, ipv4_en); ++ MLXSW_REG_ZERO(qeec, payload); ++ mlxsw_reg_qeec_local_port_set(payload, local_port); ++ mlxsw_reg_qeec_element_hierarchy_set(payload, hr); ++ mlxsw_reg_qeec_element_index_set(payload, index); ++ mlxsw_reg_qeec_next_element_index_set(payload, next_index); + } + +-/* RITR - Router Interface Table Register +- * -------------------------------------- +- * The register is used to configure the router interface table. ++/* QRWE - QoS ReWrite Enable ++ * ------------------------- ++ * This register configures the rewrite enable per receive port. 
+ */ +-#define MLXSW_REG_RITR_ID 0x8002 +-#define MLXSW_REG_RITR_LEN 0x40 ++#define MLXSW_REG_QRWE_ID 0x400F ++#define MLXSW_REG_QRWE_LEN 0x08 + +-static const struct mlxsw_reg_info mlxsw_reg_ritr = { +- .id = MLXSW_REG_RITR_ID, +- .len = MLXSW_REG_RITR_LEN, +-}; ++MLXSW_REG_DEFINE(qrwe, MLXSW_REG_QRWE_ID, MLXSW_REG_QRWE_LEN); + +-/* reg_ritr_enable +- * Enables routing on the router interface. +- * Access: RW ++/* reg_qrwe_local_port ++ * Local port number. ++ * Access: Index ++ * ++ * Note: CPU port is supported. No support for router port. + */ +-MLXSW_ITEM32(reg, ritr, enable, 0x00, 31, 1); ++MLXSW_ITEM32(reg, qrwe, local_port, 0x00, 16, 8); + +-/* reg_ritr_ipv4 +- * IPv4 routing enable. Enables routing of IPv4 traffic on the router +- * interface. ++/* reg_qrwe_dscp ++ * Whether to enable DSCP rewrite (default is 0, don't rewrite). + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, ipv4, 0x00, 29, 1); ++MLXSW_ITEM32(reg, qrwe, dscp, 0x04, 1, 1); + +-/* reg_ritr_ipv6 +- * IPv6 routing enable. Enables routing of IPv6 traffic on the router +- * interface. ++/* reg_qrwe_pcp ++ * Whether to enable PCP and DEI rewrite (default is 0, don't rewrite). + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, ipv6, 0x00, 28, 1); ++MLXSW_ITEM32(reg, qrwe, pcp, 0x04, 0, 1); + +-enum mlxsw_reg_ritr_if_type { +- MLXSW_REG_RITR_VLAN_IF, +- MLXSW_REG_RITR_FID_IF, +- MLXSW_REG_RITR_SP_IF, +-}; ++static inline void mlxsw_reg_qrwe_pack(char *payload, u8 local_port, ++ bool rewrite_pcp, bool rewrite_dscp) ++{ ++ MLXSW_REG_ZERO(qrwe, payload); ++ mlxsw_reg_qrwe_local_port_set(payload, local_port); ++ mlxsw_reg_qrwe_pcp_set(payload, rewrite_pcp); ++ mlxsw_reg_qrwe_dscp_set(payload, rewrite_dscp); ++} + +-/* reg_ritr_type +- * Router interface type. +- * 0 - VLAN interface. +- * 1 - FID interface. +- * 2 - Sub-port interface. 
+- * Access: RW ++/* QPDSM - QoS Priority to DSCP Mapping ++ * ------------------------------------ ++ * QoS Priority to DSCP Mapping Register + */ +-MLXSW_ITEM32(reg, ritr, type, 0x00, 23, 3); +- +-enum { +- MLXSW_REG_RITR_RIF_CREATE, +- MLXSW_REG_RITR_RIF_DEL, +-}; ++#define MLXSW_REG_QPDSM_ID 0x4011 ++#define MLXSW_REG_QPDSM_BASE_LEN 0x04 /* base length, without records */ ++#define MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN 0x4 /* record length */ ++#define MLXSW_REG_QPDSM_PRIO_ENTRY_REC_MAX_COUNT 16 ++#define MLXSW_REG_QPDSM_LEN (MLXSW_REG_QPDSM_BASE_LEN + \ ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN * \ ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_MAX_COUNT) + +-/* reg_ritr_op +- * Opcode: +- * 0 - Create or edit RIF. +- * 1 - Delete RIF. +- * Reserved for SwitchX-2. For Spectrum, editing of interface properties +- * is not supported. An interface must be deleted and re-created in order +- * to update properties. +- * Access: WO +- */ +-MLXSW_ITEM32(reg, ritr, op, 0x00, 20, 2); ++MLXSW_REG_DEFINE(qpdsm, MLXSW_REG_QPDSM_ID, MLXSW_REG_QPDSM_LEN); + +-/* reg_ritr_rif +- * Router interface index. A pointer to the Router Interface Table. ++/* reg_qpdsm_local_port ++ * Local Port. Supported for data packets from CPU port. + * Access: Index + */ +-MLXSW_ITEM32(reg, ritr, rif, 0x00, 0, 16); ++MLXSW_ITEM32(reg, qpdsm, local_port, 0x00, 16, 8); + +-/* reg_ritr_ipv4_fe +- * IPv4 Forwarding Enable. +- * Enables routing of IPv4 traffic on the router interface. When disabled, +- * forwarding is blocked but local traffic (traps and IP2ME) will be enabled. +- * Not supported in SwitchX-2. +- * Access: RW ++/* reg_qpdsm_prio_entry_color0_e ++ * Enable update of the entry for color 0 and a given port. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, ritr, ipv4_fe, 0x04, 29, 1); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color0_e, ++ MLXSW_REG_QPDSM_BASE_LEN, 31, 1, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_ipv6_fe +- * IPv6 Forwarding Enable. 
+- * Enables routing of IPv6 traffic on the router interface. When disabled, +- * forwarding is blocked but local traffic (traps and IP2ME) will be enabled. +- * Not supported in SwitchX-2. ++/* reg_qpdsm_prio_entry_color0_dscp ++ * DSCP field in the outer label of the packet for color 0 and a given port. ++ * Reserved when e=0. + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, ipv6_fe, 0x04, 28, 1); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color0_dscp, ++ MLXSW_REG_QPDSM_BASE_LEN, 24, 6, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_lb_en +- * Loop-back filter enable for unicast packets. +- * If the flag is set then loop-back filter for unicast packets is +- * implemented on the RIF. Multicast packets are always subject to +- * loop-back filtering. +- * Access: RW ++/* reg_qpdsm_prio_entry_color1_e ++ * Enable update of the entry for color 1 and a given port. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, ritr, lb_en, 0x04, 24, 1); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color1_e, ++ MLXSW_REG_QPDSM_BASE_LEN, 23, 1, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_virtual_router +- * Virtual router ID associated with the router interface. ++/* reg_qpdsm_prio_entry_color1_dscp ++ * DSCP field in the outer label of the packet for color 1 and a given port. ++ * Reserved when e=0. + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, virtual_router, 0x04, 0, 16); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color1_dscp, ++ MLXSW_REG_QPDSM_BASE_LEN, 16, 6, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_mtu +- * Router interface MTU. +- * Access: RW ++/* reg_qpdsm_prio_entry_color2_e ++ * Enable update of the entry for color 2 and a given port. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, ritr, mtu, 0x34, 0, 16); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color2_e, ++ MLXSW_REG_QPDSM_BASE_LEN, 15, 1, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_if_swid +- * Switch partition ID. 
++/* reg_qpdsm_prio_entry_color2_dscp ++ * DSCP field in the outer label of the packet for color 2 and a given port. ++ * Reserved when e=0. + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, if_swid, 0x08, 24, 8); ++MLXSW_ITEM32_INDEXED(reg, qpdsm, prio_entry_color2_dscp, ++ MLXSW_REG_QPDSM_BASE_LEN, 8, 6, ++ MLXSW_REG_QPDSM_PRIO_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_if_mac +- * Router interface MAC address. +- * In Spectrum, all MAC addresses must have the same 38 MSBits. +- * Access: RW ++static inline void mlxsw_reg_qpdsm_pack(char *payload, u8 local_port) ++{ ++ MLXSW_REG_ZERO(qpdsm, payload); ++ mlxsw_reg_qpdsm_local_port_set(payload, local_port); ++} ++ ++static inline void ++mlxsw_reg_qpdsm_prio_pack(char *payload, unsigned short prio, u8 dscp) ++{ ++ mlxsw_reg_qpdsm_prio_entry_color0_e_set(payload, prio, 1); ++ mlxsw_reg_qpdsm_prio_entry_color0_dscp_set(payload, prio, dscp); ++ mlxsw_reg_qpdsm_prio_entry_color1_e_set(payload, prio, 1); ++ mlxsw_reg_qpdsm_prio_entry_color1_dscp_set(payload, prio, dscp); ++ mlxsw_reg_qpdsm_prio_entry_color2_e_set(payload, prio, 1); ++ mlxsw_reg_qpdsm_prio_entry_color2_dscp_set(payload, prio, dscp); ++} ++ ++/* QPDPM - QoS Port DSCP to Priority Mapping Register ++ * -------------------------------------------------- ++ * This register controls the mapping from DSCP field to ++ * Switch Priority for IP packets. + */ +-MLXSW_ITEM_BUF(reg, ritr, if_mac, 0x12, 6); ++#define MLXSW_REG_QPDPM_ID 0x4013 ++#define MLXSW_REG_QPDPM_BASE_LEN 0x4 /* base length, without records */ ++#define MLXSW_REG_QPDPM_DSCP_ENTRY_REC_LEN 0x2 /* record length */ ++#define MLXSW_REG_QPDPM_DSCP_ENTRY_REC_MAX_COUNT 64 ++#define MLXSW_REG_QPDPM_LEN (MLXSW_REG_QPDPM_BASE_LEN + \ ++ MLXSW_REG_QPDPM_DSCP_ENTRY_REC_LEN * \ ++ MLXSW_REG_QPDPM_DSCP_ENTRY_REC_MAX_COUNT) + +-/* VLAN Interface */ ++MLXSW_REG_DEFINE(qpdpm, MLXSW_REG_QPDPM_ID, MLXSW_REG_QPDPM_LEN); + +-/* reg_ritr_vlan_if_vid +- * VLAN ID. +- * Access: RW ++/* reg_qpdpm_local_port ++ * Local Port. 
Supported for data packets from CPU port. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, ritr, vlan_if_vid, 0x08, 0, 12); ++MLXSW_ITEM32(reg, qpdpm, local_port, 0x00, 16, 8); + +-/* FID Interface */ ++/* reg_qpdpm_dscp_e ++ * Enable update of the specific entry. When cleared, the switch_prio and color ++ * fields are ignored and the previous switch_prio and color values are ++ * preserved. ++ * Access: WO ++ */ ++MLXSW_ITEM16_INDEXED(reg, qpdpm, dscp_entry_e, MLXSW_REG_QPDPM_BASE_LEN, 15, 1, ++ MLXSW_REG_QPDPM_DSCP_ENTRY_REC_LEN, 0x00, false); + +-/* reg_ritr_fid_if_fid +- * Filtering ID. Used to connect a bridge to the router. Only FIDs from +- * the vFID range are supported. ++/* reg_qpdpm_dscp_prio ++ * The new Switch Priority value for the relevant DSCP value. + * Access: RW + */ +-MLXSW_ITEM32(reg, ritr, fid_if_fid, 0x08, 0, 16); ++MLXSW_ITEM16_INDEXED(reg, qpdpm, dscp_entry_prio, ++ MLXSW_REG_QPDPM_BASE_LEN, 0, 4, ++ MLXSW_REG_QPDPM_DSCP_ENTRY_REC_LEN, 0x00, false); + +-static inline void mlxsw_reg_ritr_fid_set(char *payload, +- enum mlxsw_reg_ritr_if_type rif_type, +- u16 fid) ++static inline void mlxsw_reg_qpdpm_pack(char *payload, u8 local_port) + { +- if (rif_type == MLXSW_REG_RITR_FID_IF) +- mlxsw_reg_ritr_fid_if_fid_set(payload, fid); +- else +- mlxsw_reg_ritr_vlan_if_vid_set(payload, fid); ++ MLXSW_REG_ZERO(qpdpm, payload); ++ mlxsw_reg_qpdpm_local_port_set(payload, local_port); + } + +-/* Sub-port Interface */ ++static inline void ++mlxsw_reg_qpdpm_dscp_pack(char *payload, unsigned short dscp, u8 prio) ++{ ++ mlxsw_reg_qpdpm_dscp_entry_e_set(payload, dscp, 1); ++ mlxsw_reg_qpdpm_dscp_entry_prio_set(payload, dscp, prio); ++} + +-/* reg_ritr_sp_if_lag +- * LAG indication. When this bit is set the system_port field holds the +- * LAG identifier. 
+- * Access: RW
++/* QTCTM - QoS Switch Traffic Class Table is Multicast-Aware Register
++ * ------------------------------------------------------------------
++ * This register configures if the Switch Priority to Traffic Class mapping is
++ * based on Multicast packet indication. If so, multicast packets will get
++ * a Traffic Class equal to the value configured by QTCT plus
++ * (cap_max_tclass_data/2).
++ * By default, Switch Priority to Traffic Class mapping is not based on
++ * Multicast packet indication.
+ */
+-MLXSW_ITEM32(reg, ritr, sp_if_lag, 0x08, 24, 1);
++#define MLXSW_REG_QTCTM_ID 0x401A
++#define MLXSW_REG_QTCTM_LEN 0x08
+
+-/* reg_ritr_sp_system_port
+- * Port unique indentifier. When lag bit is set, this field holds the
+- * lag_id in bits 0:9.
+- * Access: RW
++MLXSW_REG_DEFINE(qtctm, MLXSW_REG_QTCTM_ID, MLXSW_REG_QTCTM_LEN);
++
++/* reg_qtctm_local_port
++ * Local port number.
++ * No support for CPU port.
++ * Access: Index
+ */
+-MLXSW_ITEM32(reg, ritr, sp_if_system_port, 0x08, 0, 16);
++MLXSW_ITEM32(reg, qtctm, local_port, 0x00, 16, 8);
+
+-/* reg_ritr_sp_if_vid
+- * VLAN ID.
+- * Access: RW
++/* reg_qtctm_mc
++ * Multicast Mode
++ * Whether Switch Priority to Traffic Class mapping is based on Multicast packet
++ * indication (default is 0, not based on Multicast packet indication).
+ */
+-MLXSW_ITEM32(reg, ritr, sp_if_vid, 0x18, 0, 12);
++MLXSW_ITEM32(reg, qtctm, mc, 0x04, 0, 1);
+
+-static inline void mlxsw_reg_ritr_rif_pack(char *payload, u16 rif)
++static inline void
++mlxsw_reg_qtctm_pack(char *payload, u8 local_port, bool mc)
+ {
+- MLXSW_REG_ZERO(ritr, payload);
+- mlxsw_reg_ritr_rif_set(payload, rif);
++ MLXSW_REG_ZERO(qtctm, payload);
++ mlxsw_reg_qtctm_local_port_set(payload, local_port);
++ mlxsw_reg_qtctm_mc_set(payload, mc);
+ }
+
+-static inline void mlxsw_reg_ritr_sp_if_pack(char *payload, bool lag,
+- u16 system_port, u16 vid)
+-{
+- mlxsw_reg_ritr_sp_if_lag_set(payload, lag);
+- mlxsw_reg_ritr_sp_if_system_port_set(payload, system_port);
+- mlxsw_reg_ritr_sp_if_vid_set(payload, vid);
+-}
++/* PMLP - Ports Module to Local Port Register
++ * ------------------------------------------
++ * Configures the assignment of modules to local ports.
++ */
++#define MLXSW_REG_PMLP_ID 0x5002
++#define MLXSW_REG_PMLP_LEN 0x40
+
+-static inline void mlxsw_reg_ritr_pack(char *payload, bool enable,
+- enum mlxsw_reg_ritr_if_type type,
+- u16 rif, u16 mtu, const char *mac)
+-{
+- bool op = enable ? MLXSW_REG_RITR_RIF_CREATE : MLXSW_REG_RITR_RIF_DEL;
++MLXSW_REG_DEFINE(pmlp, MLXSW_REG_PMLP_ID, MLXSW_REG_PMLP_LEN);
+
+- MLXSW_REG_ZERO(ritr, payload);
+- mlxsw_reg_ritr_enable_set(payload, enable);
+- mlxsw_reg_ritr_ipv4_set(payload, 1);
+- mlxsw_reg_ritr_type_set(payload, type);
+- mlxsw_reg_ritr_op_set(payload, op);
+- mlxsw_reg_ritr_rif_set(payload, rif);
+- mlxsw_reg_ritr_ipv4_fe_set(payload, 1);
+- mlxsw_reg_ritr_lb_en_set(payload, 1);
+- mlxsw_reg_ritr_mtu_set(payload, mtu);
+- mlxsw_reg_ritr_if_mac_memcpy_to(payload, mac);
+-}
++/* reg_pmlp_rxtx
++ * 0 - Tx value is used for both Tx and Rx.
++ * 1 - Rx value is taken from a separate field.
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pmlp, rxtx, 0x00, 31, 1); + +-/* RATR - Router Adjacency Table Register +- * -------------------------------------- +- * The RATR register is used to configure the Router Adjacency (next-hop) +- * Table. ++/* reg_pmlp_local_port ++ * Local port number. ++ * Access: Index + */ +-#define MLXSW_REG_RATR_ID 0x8008 +-#define MLXSW_REG_RATR_LEN 0x2C ++MLXSW_ITEM32(reg, pmlp, local_port, 0x00, 16, 8); + +-static const struct mlxsw_reg_info mlxsw_reg_ratr = { +- .id = MLXSW_REG_RATR_ID, +- .len = MLXSW_REG_RATR_LEN, +-}; ++/* reg_pmlp_width ++ * 0 - Unmap local port. ++ * 1 - Lane 0 is used. ++ * 2 - Lanes 0 and 1 are used. ++ * 4 - Lanes 0, 1, 2 and 3 are used. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pmlp, width, 0x00, 0, 8); + +-enum mlxsw_reg_ratr_op { +- /* Read */ +- MLXSW_REG_RATR_OP_QUERY_READ = 0, +- /* Read and clear activity */ +- MLXSW_REG_RATR_OP_QUERY_READ_CLEAR = 2, +- /* Write Adjacency entry */ +- MLXSW_REG_RATR_OP_WRITE_WRITE_ENTRY = 1, +- /* Write Adjacency entry only if the activity is cleared. +- * The write may not succeed if the activity is set. There is not +- * direct feedback if the write has succeeded or not, however +- * the get will reveal the actual entry (SW can compare the get +- * response to the set command). +- */ +- MLXSW_REG_RATR_OP_WRITE_WRITE_ENTRY_ON_ACTIVITY = 3, ++/* reg_pmlp_module ++ * Module number. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pmlp, module, 0x04, 0, 8, 0x04, 0x00, false); ++ ++/* reg_pmlp_tx_lane ++ * Tx Lane. When rxtx field is cleared, this field is used for Rx as well. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pmlp, tx_lane, 0x04, 16, 2, 0x04, 0x00, false); ++ ++/* reg_pmlp_rx_lane ++ * Rx Lane. When rxtx field is cleared, this field is ignored and Rx lane is ++ * equal to Tx lane. 
++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pmlp, rx_lane, 0x04, 24, 2, 0x04, 0x00, false); ++ ++static inline void mlxsw_reg_pmlp_pack(char *payload, u8 local_port) ++{ ++ MLXSW_REG_ZERO(pmlp, payload); ++ mlxsw_reg_pmlp_local_port_set(payload, local_port); ++} ++ ++/* PMTU - Port MTU Register ++ * ------------------------ ++ * Configures and reports the port MTU. ++ */ ++#define MLXSW_REG_PMTU_ID 0x5003 ++#define MLXSW_REG_PMTU_LEN 0x10 ++ ++MLXSW_REG_DEFINE(pmtu, MLXSW_REG_PMTU_ID, MLXSW_REG_PMTU_LEN); ++ ++/* reg_pmtu_local_port ++ * Local port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pmtu, local_port, 0x00, 16, 8); ++ ++/* reg_pmtu_max_mtu ++ * Maximum MTU. ++ * When port type (e.g. Ethernet) is configured, the relevant MTU is ++ * reported, otherwise the minimum between the max_mtu of the different ++ * types is reported. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, pmtu, max_mtu, 0x04, 16, 16); ++ ++/* reg_pmtu_admin_mtu ++ * MTU value to set port to. Must be smaller or equal to max_mtu. ++ * Note: If port type is Infiniband, then port must be disabled, when its ++ * MTU is set. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pmtu, admin_mtu, 0x08, 16, 16); ++ ++/* reg_pmtu_oper_mtu ++ * The actual MTU configured on the port. Packets exceeding this size ++ * will be dropped. ++ * Note: In Ethernet and FC oper_mtu == admin_mtu, however, in Infiniband ++ * oper_mtu might be smaller than admin_mtu. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, pmtu, oper_mtu, 0x0C, 16, 16); ++ ++static inline void mlxsw_reg_pmtu_pack(char *payload, u8 local_port, ++ u16 new_mtu) ++{ ++ MLXSW_REG_ZERO(pmtu, payload); ++ mlxsw_reg_pmtu_local_port_set(payload, local_port); ++ mlxsw_reg_pmtu_max_mtu_set(payload, 0); ++ mlxsw_reg_pmtu_admin_mtu_set(payload, new_mtu); ++ mlxsw_reg_pmtu_oper_mtu_set(payload, 0); ++} ++ ++/* PTYS - Port Type and Speed Register ++ * ----------------------------------- ++ * Configures and reports the port speed type. 
++ * ++ * Note: When set while the link is up, the changes will not take effect ++ * until the port transitions from down to up state. ++ */ ++#define MLXSW_REG_PTYS_ID 0x5004 ++#define MLXSW_REG_PTYS_LEN 0x40 ++ ++MLXSW_REG_DEFINE(ptys, MLXSW_REG_PTYS_ID, MLXSW_REG_PTYS_LEN); ++ ++/* an_disable_admin ++ * Auto negotiation disable administrative configuration ++ * 0 - Device doesn't support AN disable. ++ * 1 - Device supports AN disable. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ptys, an_disable_admin, 0x00, 30, 1); ++ ++/* reg_ptys_local_port ++ * Local port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ptys, local_port, 0x00, 16, 8); ++ ++#define MLXSW_REG_PTYS_PROTO_MASK_IB BIT(0) ++#define MLXSW_REG_PTYS_PROTO_MASK_ETH BIT(2) ++ ++/* reg_ptys_proto_mask ++ * Protocol mask. Indicates which protocol is used. ++ * 0 - Infiniband. ++ * 1 - Fibre Channel. ++ * 2 - Ethernet. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ptys, proto_mask, 0x00, 0, 3); ++ ++enum { ++ MLXSW_REG_PTYS_AN_STATUS_NA, ++ MLXSW_REG_PTYS_AN_STATUS_OK, ++ MLXSW_REG_PTYS_AN_STATUS_FAIL, + }; + +-/* reg_ratr_op +- * Note that Write operation may also be used for updating +- * counter_set_type and counter_index. In this case all other +- * fields must not be updated. +- * Access: OP ++/* reg_ptys_an_status ++ * Autonegotiation status. ++ * Access: RO + */ +-MLXSW_ITEM32(reg, ratr, op, 0x00, 28, 4); ++MLXSW_ITEM32(reg, ptys, an_status, 0x04, 28, 4); + +-/* reg_ratr_v +- * Valid bit. Indicates if the adjacency entry is valid. +- * Note: the device may need some time before reusing an invalidated +- * entry. During this time the entry can not be reused. It is +- * recommended to use another entry before reusing an invalidated +- * entry (e.g. software can put it at the end of the list for +- * reusing). Trying to access an invalidated entry not yet cleared +- * by the device results with failure indicating "Try Again" status. 
+- * When valid is '0' then egress_router_interface,trap_action, +- * adjacency_parameters and counters are reserved ++#define MLXSW_REG_PTYS_ETH_SPEED_SGMII BIT(0) ++#define MLXSW_REG_PTYS_ETH_SPEED_1000BASE_KX BIT(1) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_CX4 BIT(2) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_KX4 BIT(3) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_KR BIT(4) ++#define MLXSW_REG_PTYS_ETH_SPEED_20GBASE_KR2 BIT(5) ++#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_CR4 BIT(6) ++#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_KR4 BIT(7) ++#define MLXSW_REG_PTYS_ETH_SPEED_56GBASE_R4 BIT(8) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_CR BIT(12) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_SR BIT(13) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_ER_LR BIT(14) ++#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_SR4 BIT(15) ++#define MLXSW_REG_PTYS_ETH_SPEED_40GBASE_LR4_ER4 BIT(16) ++#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_SR2 BIT(18) ++#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_KR4 BIT(19) ++#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 BIT(20) ++#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 BIT(21) ++#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 BIT(22) ++#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4 BIT(23) ++#define MLXSW_REG_PTYS_ETH_SPEED_100BASE_TX BIT(24) ++#define MLXSW_REG_PTYS_ETH_SPEED_100BASE_T BIT(25) ++#define MLXSW_REG_PTYS_ETH_SPEED_10GBASE_T BIT(26) ++#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR BIT(27) ++#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR BIT(28) ++#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR BIT(29) ++#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_CR2 BIT(30) ++#define MLXSW_REG_PTYS_ETH_SPEED_50GBASE_KR2 BIT(31) ++ ++/* reg_ptys_eth_proto_cap ++ * Ethernet port supported speeds and protocols. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, eth_proto_cap, 0x0C, 0, 32); ++ ++/* reg_ptys_ib_link_width_cap ++ * IB port supported widths. 
++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, ib_link_width_cap, 0x10, 16, 16); ++ ++#define MLXSW_REG_PTYS_IB_SPEED_SDR BIT(0) ++#define MLXSW_REG_PTYS_IB_SPEED_DDR BIT(1) ++#define MLXSW_REG_PTYS_IB_SPEED_QDR BIT(2) ++#define MLXSW_REG_PTYS_IB_SPEED_FDR10 BIT(3) ++#define MLXSW_REG_PTYS_IB_SPEED_FDR BIT(4) ++#define MLXSW_REG_PTYS_IB_SPEED_EDR BIT(5) ++ ++/* reg_ptys_ib_proto_cap ++ * IB port supported speeds and protocols. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, ib_proto_cap, 0x10, 0, 16); ++ ++/* reg_ptys_eth_proto_admin ++ * Speed and protocol to set port to. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ptys, eth_proto_admin, 0x18, 0, 32); ++ ++/* reg_ptys_ib_link_width_admin ++ * IB width to set port to. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ptys, ib_link_width_admin, 0x1C, 16, 16); ++ ++/* reg_ptys_ib_proto_admin ++ * IB speeds and protocols to set port to. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ptys, ib_proto_admin, 0x1C, 0, 16); ++ ++/* reg_ptys_eth_proto_oper ++ * The current speed and protocol configured for the port. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, eth_proto_oper, 0x24, 0, 32); ++ ++/* reg_ptys_ib_link_width_oper ++ * The current IB width to set port to. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, ib_link_width_oper, 0x28, 16, 16); ++ ++/* reg_ptys_ib_proto_oper ++ * The current IB speed and protocol. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, ib_proto_oper, 0x28, 0, 16); ++ ++/* reg_ptys_eth_proto_lp_advertise ++ * The protocols that were advertised by the link partner during ++ * autonegotiation. 
++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ptys, eth_proto_lp_advertise, 0x30, 0, 32); ++ ++static inline void mlxsw_reg_ptys_eth_pack(char *payload, u8 local_port, ++ u32 proto_admin, bool autoneg) ++{ ++ MLXSW_REG_ZERO(ptys, payload); ++ mlxsw_reg_ptys_local_port_set(payload, local_port); ++ mlxsw_reg_ptys_proto_mask_set(payload, MLXSW_REG_PTYS_PROTO_MASK_ETH); ++ mlxsw_reg_ptys_eth_proto_admin_set(payload, proto_admin); ++ mlxsw_reg_ptys_an_disable_admin_set(payload, !autoneg); ++} ++ ++static inline void mlxsw_reg_ptys_eth_unpack(char *payload, ++ u32 *p_eth_proto_cap, ++ u32 *p_eth_proto_adm, ++ u32 *p_eth_proto_oper) ++{ ++ if (p_eth_proto_cap) ++ *p_eth_proto_cap = mlxsw_reg_ptys_eth_proto_cap_get(payload); ++ if (p_eth_proto_adm) ++ *p_eth_proto_adm = mlxsw_reg_ptys_eth_proto_admin_get(payload); ++ if (p_eth_proto_oper) ++ *p_eth_proto_oper = mlxsw_reg_ptys_eth_proto_oper_get(payload); ++} ++ ++static inline void mlxsw_reg_ptys_ib_pack(char *payload, u8 local_port, ++ u16 proto_admin, u16 link_width) ++{ ++ MLXSW_REG_ZERO(ptys, payload); ++ mlxsw_reg_ptys_local_port_set(payload, local_port); ++ mlxsw_reg_ptys_proto_mask_set(payload, MLXSW_REG_PTYS_PROTO_MASK_IB); ++ mlxsw_reg_ptys_ib_proto_admin_set(payload, proto_admin); ++ mlxsw_reg_ptys_ib_link_width_admin_set(payload, link_width); ++} ++ ++static inline void mlxsw_reg_ptys_ib_unpack(char *payload, u16 *p_ib_proto_cap, ++ u16 *p_ib_link_width_cap, ++ u16 *p_ib_proto_oper, ++ u16 *p_ib_link_width_oper) ++{ ++ if (p_ib_proto_cap) ++ *p_ib_proto_cap = mlxsw_reg_ptys_ib_proto_cap_get(payload); ++ if (p_ib_link_width_cap) ++ *p_ib_link_width_cap = ++ mlxsw_reg_ptys_ib_link_width_cap_get(payload); ++ if (p_ib_proto_oper) ++ *p_ib_proto_oper = mlxsw_reg_ptys_ib_proto_oper_get(payload); ++ if (p_ib_link_width_oper) ++ *p_ib_link_width_oper = ++ mlxsw_reg_ptys_ib_link_width_oper_get(payload); ++} ++ ++/* PPAD - Port Physical Address Register ++ * ------------------------------------- ++ * The PPAD register 
configures the per port physical MAC address.
++ */
++#define MLXSW_REG_PPAD_ID 0x5005
++#define MLXSW_REG_PPAD_LEN 0x10
++
++MLXSW_REG_DEFINE(ppad, MLXSW_REG_PPAD_ID, MLXSW_REG_PPAD_LEN);
++
++/* reg_ppad_single_base_mac
++ * 0: base_mac, local port should be 0 and mac[7:0] is
++ * reserved. HW will set incremental
++ * 1: single_mac - mac of the local_port
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ppad, single_base_mac, 0x00, 28, 1);
++
++/* reg_ppad_local_port
++ * port number, if single_base_mac = 0 then local_port is reserved
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ppad, local_port, 0x00, 16, 8);
++
++/* reg_ppad_mac
++ * If single_base_mac = 0 - base MAC address, mac[7:0] is reserved.
++ * If single_base_mac = 1 - the per port MAC address
++ * Access: RW
++ */
++MLXSW_ITEM_BUF(reg, ppad, mac, 0x02, 6);
++
++static inline void mlxsw_reg_ppad_pack(char *payload, bool single_base_mac,
++ u8 local_port)
++{
++ MLXSW_REG_ZERO(ppad, payload);
++ mlxsw_reg_ppad_single_base_mac_set(payload, !!single_base_mac);
++ mlxsw_reg_ppad_local_port_set(payload, local_port);
++}
++
++/* PAOS - Ports Administrative and Operational Status Register
++ * -----------------------------------------------------------
++ * Configures and retrieves per port administrative and operational status.
++ */
++#define MLXSW_REG_PAOS_ID 0x5006
++#define MLXSW_REG_PAOS_LEN 0x10
++
++MLXSW_REG_DEFINE(paos, MLXSW_REG_PAOS_ID, MLXSW_REG_PAOS_LEN);
++
++/* reg_paos_swid
++ * Switch partition ID with which to associate the port.
++ * Note: while external ports use unique local port numbers (and thus swid is
++ * redundant), router ports use the same local port number where swid is the
++ * only indication for the relevant port.
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, paos, swid, 0x00, 24, 8);
++
++/* reg_paos_local_port
++ * Local port number.
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, paos, local_port, 0x00, 16, 8); ++ ++/* reg_paos_admin_status ++ * Port administrative state (the desired state of the port): ++ * 1 - Up. ++ * 2 - Down. ++ * 3 - Up once. This means that in case of link failure, the port won't go ++ * into polling mode, but will wait to be re-enabled by software. ++ * 4 - Disabled by system. Can only be set by hardware. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, paos, admin_status, 0x00, 8, 4); ++ ++/* reg_paos_oper_status ++ * Port operational state (the current state): ++ * 1 - Up. ++ * 2 - Down. ++ * 3 - Down by port failure. This means that the device will not let the ++ * port up again until explicitly specified by software. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, paos, oper_status, 0x00, 0, 4); ++ ++/* reg_paos_ase ++ * Admin state update enabled. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, paos, ase, 0x04, 31, 1); ++ ++/* reg_paos_ee ++ * Event update enable. If this bit is set, event generation will be ++ * updated based on the e field. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, paos, ee, 0x04, 30, 1); ++ ++/* reg_paos_e ++ * Event generation on operational state change: ++ * 0 - Do not generate event. ++ * 1 - Generate Event. ++ * 2 - Generate Single Event. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, paos, e, 0x04, 0, 2); ++ ++static inline void mlxsw_reg_paos_pack(char *payload, u8 local_port, ++ enum mlxsw_port_admin_status status) ++{ ++ MLXSW_REG_ZERO(paos, payload); ++ mlxsw_reg_paos_swid_set(payload, 0); ++ mlxsw_reg_paos_local_port_set(payload, local_port); ++ mlxsw_reg_paos_admin_status_set(payload, status); ++ mlxsw_reg_paos_oper_status_set(payload, 0); ++ mlxsw_reg_paos_ase_set(payload, 1); ++ mlxsw_reg_paos_ee_set(payload, 1); ++ mlxsw_reg_paos_e_set(payload, 1); ++} ++ ++/* PFCC - Ports Flow Control Configuration Register ++ * ------------------------------------------------ ++ * Configures and retrieves the per port flow control configuration. 
++ */ ++#define MLXSW_REG_PFCC_ID 0x5007 ++#define MLXSW_REG_PFCC_LEN 0x20 ++ ++MLXSW_REG_DEFINE(pfcc, MLXSW_REG_PFCC_ID, MLXSW_REG_PFCC_LEN); ++ ++/* reg_pfcc_local_port ++ * Local port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pfcc, local_port, 0x00, 16, 8); ++ ++/* reg_pfcc_pnat ++ * Port number access type. Determines the way local_port is interpreted: ++ * 0 - Local port number. ++ * 1 - IB / label port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pfcc, pnat, 0x00, 14, 2); ++ ++/* reg_pfcc_shl_cap ++ * Send to higher layers capabilities: ++ * 0 - No capability of sending Pause and PFC frames to higher layers. ++ * 1 - Device has capability of sending Pause and PFC frames to higher ++ * layers. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, pfcc, shl_cap, 0x00, 1, 1); ++ ++/* reg_pfcc_shl_opr ++ * Send to higher layers operation: ++ * 0 - Pause and PFC frames are handled by the port (default). ++ * 1 - Pause and PFC frames are handled by the port and also sent to ++ * higher layers. Only valid if shl_cap = 1. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pfcc, shl_opr, 0x00, 0, 1); ++ ++/* reg_pfcc_ppan ++ * Pause policy auto negotiation. ++ * 0 - Disabled. Generate / ignore Pause frames based on pptx / pprtx. ++ * 1 - Enabled. When auto-negotiation is performed, set the Pause policy ++ * based on the auto-negotiation resolution. ++ * Access: RW ++ * ++ * Note: The auto-negotiation advertisement is set according to pptx and ++ * pprtx. When PFC is set on Tx / Rx, ppan must be set to 0. ++ */ ++MLXSW_ITEM32(reg, pfcc, ppan, 0x04, 28, 4); ++ ++/* reg_pfcc_prio_mask_tx ++ * Bit per priority indicating if Tx flow control policy should be ++ * updated based on bit pfctx. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, pfcc, prio_mask_tx, 0x04, 16, 8); ++ ++/* reg_pfcc_prio_mask_rx ++ * Bit per priority indicating if Rx flow control policy should be ++ * updated based on bit pfcrx. 
++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, pfcc, prio_mask_rx, 0x04, 0, 8); ++ ++/* reg_pfcc_pptx ++ * Admin Pause policy on Tx. ++ * 0 - Never generate Pause frames (default). ++ * 1 - Generate Pause frames according to Rx buffer threshold. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pfcc, pptx, 0x08, 31, 1); ++ ++/* reg_pfcc_aptx ++ * Active (operational) Pause policy on Tx. ++ * 0 - Never generate Pause frames. ++ * 1 - Generate Pause frames according to Rx buffer threshold. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, pfcc, aptx, 0x08, 30, 1); ++ ++/* reg_pfcc_pfctx ++ * Priority based flow control policy on Tx[7:0]. Per-priority bit mask: ++ * 0 - Never generate priority Pause frames on the specified priority ++ * (default). ++ * 1 - Generate priority Pause frames according to Rx buffer threshold on ++ * the specified priority. ++ * Access: RW ++ * ++ * Note: pfctx and pptx must be mutually exclusive. ++ */ ++MLXSW_ITEM32(reg, pfcc, pfctx, 0x08, 16, 8); ++ ++/* reg_pfcc_pprx ++ * Admin Pause policy on Rx. ++ * 0 - Ignore received Pause frames (default). ++ * 1 - Respect received Pause frames. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pfcc, pprx, 0x0C, 31, 1); ++ ++/* reg_pfcc_aprx ++ * Active (operational) Pause policy on Rx. ++ * 0 - Ignore received Pause frames. ++ * 1 - Respect received Pause frames. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, pfcc, aprx, 0x0C, 30, 1); ++ ++/* reg_pfcc_pfcrx ++ * Priority based flow control policy on Rx[7:0]. Per-priority bit mask: ++ * 0 - Ignore incoming priority Pause frames on the specified priority ++ * (default). ++ * 1 - Respect incoming priority Pause frames on the specified priority. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pfcc, pfcrx, 0x0C, 16, 8); ++ ++#define MLXSW_REG_PFCC_ALL_PRIO 0xFF ++ ++static inline void mlxsw_reg_pfcc_prio_pack(char *payload, u8 pfc_en) ++{ ++ mlxsw_reg_pfcc_prio_mask_tx_set(payload, MLXSW_REG_PFCC_ALL_PRIO); ++ mlxsw_reg_pfcc_prio_mask_rx_set(payload, MLXSW_REG_PFCC_ALL_PRIO); ++ mlxsw_reg_pfcc_pfctx_set(payload, pfc_en); ++ mlxsw_reg_pfcc_pfcrx_set(payload, pfc_en); ++} ++ ++static inline void mlxsw_reg_pfcc_pack(char *payload, u8 local_port) ++{ ++ MLXSW_REG_ZERO(pfcc, payload); ++ mlxsw_reg_pfcc_local_port_set(payload, local_port); ++} ++ ++/* PPCNT - Ports Performance Counters Register ++ * ------------------------------------------- ++ * The PPCNT register retrieves per port performance counters. ++ */ ++#define MLXSW_REG_PPCNT_ID 0x5008 ++#define MLXSW_REG_PPCNT_LEN 0x100 ++#define MLXSW_REG_PPCNT_COUNTERS_OFFSET 0x08 ++ ++MLXSW_REG_DEFINE(ppcnt, MLXSW_REG_PPCNT_ID, MLXSW_REG_PPCNT_LEN); ++ ++/* reg_ppcnt_swid ++ * For HCA: must be always 0. ++ * Switch partition ID to associate port with. ++ * Switch partitions are numbered from 0 to 7 inclusively. ++ * Switch partition 254 indicates stacking ports. ++ * Switch partition 255 indicates all switch partitions. ++ * Only valid on Set() operation with local_port=255. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ppcnt, swid, 0x00, 24, 8); ++ ++/* reg_ppcnt_local_port ++ * Local port number. ++ * 255 indicates all ports on the device, and is only allowed ++ * for Set() operation. 
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, ppcnt, local_port, 0x00, 16, 8);
++
++/* reg_ppcnt_pnat
++ * Port number access type:
++ * 0 - Local port number
++ * 1 - IB port number
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, ppcnt, pnat, 0x00, 14, 2);
++
++enum mlxsw_reg_ppcnt_grp {
++ MLXSW_REG_PPCNT_IEEE_8023_CNT = 0x0,
++ MLXSW_REG_PPCNT_RFC_2819_CNT = 0x2,
++ MLXSW_REG_PPCNT_EXT_CNT = 0x5,
++ MLXSW_REG_PPCNT_PRIO_CNT = 0x10,
++ MLXSW_REG_PPCNT_TC_CNT = 0x11,
++ MLXSW_REG_PPCNT_TC_CONG_TC = 0x13,
++};
++
++/* reg_ppcnt_grp
++ * Performance counter group.
++ * Group 63 indicates all groups. Only valid on Set() operation with
++ * clr bit set.
++ * 0x0: IEEE 802.3 Counters
++ * 0x1: RFC 2863 Counters
++ * 0x2: RFC 2819 Counters
++ * 0x3: RFC 3635 Counters
++ * 0x5: Ethernet Extended Counters
++ * 0x8: Link Level Retransmission Counters
++ * 0x10: Per Priority Counters
++ * 0x11: Per Traffic Class Counters
++ * 0x12: Physical Layer Counters
++ * 0x13: Per Traffic Class Congestion Counters
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, ppcnt, grp, 0x00, 0, 6);
++
++/* reg_ppcnt_clr
++ * Clear counters. Setting the clr bit will reset the counter value
++ * for all counters in the counter group. This bit can be set
++ * for both Set() and Get() operation.
++ * Access: OP
++ */
++MLXSW_ITEM32(reg, ppcnt, clr, 0x04, 31, 1);
++
++/* reg_ppcnt_prio_tc
++ * Priority for counter sets that support per priority; valid values: 0-7.
++ * Traffic class for counter sets that support per traffic class;
++ * valid values: 0 to cap_max_tclass-1.
++ * For HCA: cap_max_tclass is always 8.
++ * Otherwise must be 0.
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ppcnt, prio_tc, 0x04, 0, 5); ++ ++/* Ethernet IEEE 802.3 Counter Group */ ++ ++/* reg_ppcnt_a_frames_transmitted_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_frames_transmitted_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x00, 0, 64); ++ ++/* reg_ppcnt_a_frames_received_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_frames_received_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); ++ ++/* reg_ppcnt_a_frame_check_sequence_errors ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_frame_check_sequence_errors, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x10, 0, 64); ++ ++/* reg_ppcnt_a_alignment_errors ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_alignment_errors, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x18, 0, 64); ++ ++/* reg_ppcnt_a_octets_transmitted_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_octets_transmitted_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x20, 0, 64); ++ ++/* reg_ppcnt_a_octets_received_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_octets_received_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x28, 0, 64); ++ ++/* reg_ppcnt_a_multicast_frames_xmitted_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_multicast_frames_xmitted_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x30, 0, 64); ++ ++/* reg_ppcnt_a_broadcast_frames_xmitted_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_broadcast_frames_xmitted_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x38, 0, 64); ++ ++/* reg_ppcnt_a_multicast_frames_received_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_multicast_frames_received_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x40, 0, 64); ++ ++/* reg_ppcnt_a_broadcast_frames_received_ok ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_broadcast_frames_received_ok, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x48, 0, 64); ++ ++/* reg_ppcnt_a_in_range_length_errors ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_in_range_length_errors, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x50, 0, 64); 
++ ++/* reg_ppcnt_a_out_of_range_length_field ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_out_of_range_length_field, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x58, 0, 64); ++ ++/* reg_ppcnt_a_frame_too_long_errors ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_frame_too_long_errors, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x60, 0, 64); ++ ++/* reg_ppcnt_a_symbol_error_during_carrier ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_symbol_error_during_carrier, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x68, 0, 64); ++ ++/* reg_ppcnt_a_mac_control_frames_transmitted ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_mac_control_frames_transmitted, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x70, 0, 64); ++ ++/* reg_ppcnt_a_mac_control_frames_received ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_mac_control_frames_received, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x78, 0, 64); ++ ++/* reg_ppcnt_a_unsupported_opcodes_received ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_unsupported_opcodes_received, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x80, 0, 64); ++ ++/* reg_ppcnt_a_pause_mac_ctrl_frames_received ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_received, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x88, 0, 64); ++ ++/* reg_ppcnt_a_pause_mac_ctrl_frames_transmitted ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_transmitted, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x90, 0, 64); ++ ++/* Ethernet RFC 2819 Counter Group */ ++ ++/* reg_ppcnt_ether_stats_pkts64octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts64octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x58, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts65to127octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts65to127octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x60, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts128to255octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts128to255octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 
0x68, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts256to511octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts256to511octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x70, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts512to1023octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts512to1023octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x78, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts1024to1518octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts1024to1518octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x80, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts1519to2047octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts1519to2047octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x88, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts2048to4095octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts2048to4095octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x90, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts4096to8191octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts4096to8191octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x98, 0, 64); ++ ++/* reg_ppcnt_ether_stats_pkts8192to10239octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts8192to10239octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0xA0, 0, 64); ++ ++/* Ethernet Extended Counter Group Counters */ ++ ++/* reg_ppcnt_ecn_marked ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, ecn_marked, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); ++ ++/* Ethernet Per Priority Group Counters */ ++ ++/* reg_ppcnt_rx_octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, rx_octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x00, 0, 64); ++ ++/* reg_ppcnt_rx_frames ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, rx_frames, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x20, 0, 64); ++ ++/* reg_ppcnt_tx_octets ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, tx_octets, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x28, 0, 64); ++ ++/* reg_ppcnt_tx_frames ++ * 
Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, tx_frames,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x48, 0, 64);
++
++/* reg_ppcnt_rx_pause
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, rx_pause,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x50, 0, 64);
++
++/* reg_ppcnt_rx_pause_duration
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, rx_pause_duration,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x58, 0, 64);
++
++/* reg_ppcnt_tx_pause
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, tx_pause,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x60, 0, 64);
++
++/* reg_ppcnt_tx_pause_duration
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, tx_pause_duration,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x68, 0, 64);
++
++/* reg_ppcnt_tx_pause_transition
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, tx_pause_transition,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x70, 0, 64);
++
++/* Ethernet Per Traffic Group Counters */
++
++/* reg_ppcnt_tc_transmit_queue
++ * Contains the transmit queue depth in cells of traffic class
++ * selected by prio_tc and the port selected by local_port.
++ * The field cannot be cleared.
++ * Access: RO
++ */
++MLXSW_ITEM64(reg, ppcnt, tc_transmit_queue,
++	     MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x00, 0, 64);
++
++/* reg_ppcnt_tc_no_buffer_discard_uc
++ * The number of unicast packets dropped due to lack of shared
++ * buffer resources.
++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, tc_no_buffer_discard_uc, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); ++ ++/* Ethernet Per Traffic Class Congestion Group Counters */ ++ ++/* reg_ppcnt_wred_discard ++ * Access: RO ++ */ ++MLXSW_ITEM64(reg, ppcnt, wred_discard, ++ MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x00, 0, 64); ++ ++static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port, ++ enum mlxsw_reg_ppcnt_grp grp, ++ u8 prio_tc) ++{ ++ MLXSW_REG_ZERO(ppcnt, payload); ++ mlxsw_reg_ppcnt_swid_set(payload, 0); ++ mlxsw_reg_ppcnt_local_port_set(payload, local_port); ++ mlxsw_reg_ppcnt_pnat_set(payload, 0); ++ mlxsw_reg_ppcnt_grp_set(payload, grp); ++ mlxsw_reg_ppcnt_clr_set(payload, 0); ++ mlxsw_reg_ppcnt_prio_tc_set(payload, prio_tc); ++} ++ ++/* PLIB - Port Local to InfiniBand Port ++ * ------------------------------------ ++ * The PLIB register performs mapping from Local Port into InfiniBand Port. ++ */ ++#define MLXSW_REG_PLIB_ID 0x500A ++#define MLXSW_REG_PLIB_LEN 0x10 ++ ++MLXSW_REG_DEFINE(plib, MLXSW_REG_PLIB_ID, MLXSW_REG_PLIB_LEN); ++ ++/* reg_plib_local_port ++ * Local port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, plib, local_port, 0x00, 16, 8); ++ ++/* reg_plib_ib_port ++ * InfiniBand port remapping for local_port. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, plib, ib_port, 0x00, 0, 8); ++ ++/* PPTB - Port Prio To Buffer Register ++ * ----------------------------------- ++ * Configures the switch priority to buffer table. ++ */ ++#define MLXSW_REG_PPTB_ID 0x500B ++#define MLXSW_REG_PPTB_LEN 0x10 ++ ++MLXSW_REG_DEFINE(pptb, MLXSW_REG_PPTB_ID, MLXSW_REG_PPTB_LEN); ++ ++enum { ++ MLXSW_REG_PPTB_MM_UM, ++ MLXSW_REG_PPTB_MM_UNICAST, ++ MLXSW_REG_PPTB_MM_MULTICAST, ++}; ++ ++/* reg_pptb_mm ++ * Mapping mode. ++ * 0 - Map both unicast and multicast packets to the same buffer. ++ * 1 - Map only unicast packets. ++ * 2 - Map only multicast packets. ++ * Access: Index ++ * ++ * Note: SwitchX-2 only supports the first option. 
++ */
++MLXSW_ITEM32(reg, pptb, mm, 0x00, 28, 2);
++
++/* reg_pptb_local_port
++ * Local port number.
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, pptb, local_port, 0x00, 16, 8);
++
++/* reg_pptb_um
++ * Enables the update of the untagged_buff field.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, pptb, um, 0x00, 8, 1);
++
++/* reg_pptb_pm
++ * Enables the update of the prio_to_buff field.
++ * Each bit is a flag for updating the mapping for a switch priority.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, pptb, pm, 0x00, 0, 8);
++
++/* reg_pptb_prio_to_buff
++ * Mapping of switch priority to one of the allocated receive port
++ * buffers.
++ * Access: RW
++ */
++MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff, 0x04, 0x04, 4);
++
++/* reg_pptb_pm_msb
++ * Enables the update of the prio_to_buff field.
++ * Each bit is a flag for updating the mapping for a switch priority.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, pptb, pm_msb, 0x08, 24, 8);
++
++/* reg_pptb_untagged_buff
++ * Mapping of untagged frames to one of the allocated receive port buffers.
++ * Access: RW
++ *
++ * Note: In SwitchX-2 this field must be mapped to buffer 8. Reserved for
++ * Spectrum, as it maps untagged packets based on the default switch priority.
++ */
++MLXSW_ITEM32(reg, pptb, untagged_buff, 0x08, 0, 4);
++
++/* reg_pptb_prio_to_buff_msb
++ * Mapping of switch priority to one of the allocated receive port
++ * buffers.
++ * Access: RW
++ */
++MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff_msb, 0x0C, 0x04, 4);
++
++#define MLXSW_REG_PPTB_ALL_PRIO 0xFF
++
++static inline void mlxsw_reg_pptb_pack(char *payload, u8 local_port)
++{
++	MLXSW_REG_ZERO(pptb, payload);
++	mlxsw_reg_pptb_mm_set(payload, MLXSW_REG_PPTB_MM_UM);
++	mlxsw_reg_pptb_local_port_set(payload, local_port);
++	mlxsw_reg_pptb_pm_set(payload, MLXSW_REG_PPTB_ALL_PRIO);
++	mlxsw_reg_pptb_pm_msb_set(payload, MLXSW_REG_PPTB_ALL_PRIO);
++}
++
++static inline void mlxsw_reg_pptb_prio_to_buff_pack(char *payload, u8 prio,
++						    u8 buff)
++{
++	mlxsw_reg_pptb_prio_to_buff_set(payload, prio, buff);
++	mlxsw_reg_pptb_prio_to_buff_msb_set(payload, prio, buff);
++}
++
++/* PBMC - Port Buffer Management Control Register
++ * ----------------------------------------------
++ * The PBMC register configures and retrieves the port packet buffer
++ * allocation for different Prios, and the Pause threshold management.
++ */
++#define MLXSW_REG_PBMC_ID 0x500C
++#define MLXSW_REG_PBMC_LEN 0x6C
++
++MLXSW_REG_DEFINE(pbmc, MLXSW_REG_PBMC_ID, MLXSW_REG_PBMC_LEN);
++
++/* reg_pbmc_local_port
++ * Local port number.
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, pbmc, local_port, 0x00, 16, 8);
++
++/* reg_pbmc_xoff_timer_value
++ * When device generates a pause frame, it uses this value as the pause
++ * timer (time for the peer port to pause in quota-512 bit time).
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, pbmc, xoff_timer_value, 0x04, 16, 16);
++
++/* reg_pbmc_xoff_refresh
++ * The time before a new pause frame should be sent to refresh the pause
++ * state. Using the same units as xoff_timer_value above (in quota-512 bit
++ * time).
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, pbmc, xoff_refresh, 0x04, 0, 16);
++
++#define MLXSW_REG_PBMC_PORT_SHARED_BUF_IDX 11
++
++/* reg_pbmc_buf_lossy
++ * The field indicates if the buffer is lossy.
++ * 0 - Lossless ++ * 1 - Lossy ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pbmc, buf_lossy, 0x0C, 25, 1, 0x08, 0x00, false); ++ ++/* reg_pbmc_buf_epsb ++ * Eligible for Port Shared buffer. ++ * If epsb is set, packets assigned to buffer are allowed to insert the port ++ * shared buffer. ++ * When buf_lossy is MLXSW_REG_PBMC_LOSSY_LOSSY this field is reserved. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pbmc, buf_epsb, 0x0C, 24, 1, 0x08, 0x00, false); ++ ++/* reg_pbmc_buf_size ++ * The part of the packet buffer array is allocated for the specific buffer. ++ * Units are represented in cells. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, pbmc, buf_size, 0x0C, 0, 16, 0x08, 0x00, false); ++ ++/* reg_pbmc_buf_xoff_threshold ++ * Once the amount of data in the buffer goes above this value, device ++ * starts sending PFC frames for all priorities associated with the ++ * buffer. Units are represented in cells. Reserved in case of lossy ++ * buffer. ++ * Access: RW ++ * ++ * Note: In Spectrum, reserved for buffer[9]. ++ */ ++MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xoff_threshold, 0x0C, 16, 16, ++ 0x08, 0x04, false); ++ ++/* reg_pbmc_buf_xon_threshold ++ * When the amount of data in the buffer goes below this value, device ++ * stops sending PFC frames for the priorities associated with the ++ * buffer. Units are represented in cells. Reserved in case of lossy ++ * buffer. ++ * Access: RW ++ * ++ * Note: In Spectrum, reserved for buffer[9]. 
++ */ ++MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xon_threshold, 0x0C, 0, 16, ++ 0x08, 0x04, false); ++ ++static inline void mlxsw_reg_pbmc_pack(char *payload, u8 local_port, ++ u16 xoff_timer_value, u16 xoff_refresh) ++{ ++ MLXSW_REG_ZERO(pbmc, payload); ++ mlxsw_reg_pbmc_local_port_set(payload, local_port); ++ mlxsw_reg_pbmc_xoff_timer_value_set(payload, xoff_timer_value); ++ mlxsw_reg_pbmc_xoff_refresh_set(payload, xoff_refresh); ++} ++ ++static inline void mlxsw_reg_pbmc_lossy_buffer_pack(char *payload, ++ int buf_index, ++ u16 size) ++{ ++ mlxsw_reg_pbmc_buf_lossy_set(payload, buf_index, 1); ++ mlxsw_reg_pbmc_buf_epsb_set(payload, buf_index, 0); ++ mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size); ++} ++ ++static inline void mlxsw_reg_pbmc_lossless_buffer_pack(char *payload, ++ int buf_index, u16 size, ++ u16 threshold) ++{ ++ mlxsw_reg_pbmc_buf_lossy_set(payload, buf_index, 0); ++ mlxsw_reg_pbmc_buf_epsb_set(payload, buf_index, 0); ++ mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size); ++ mlxsw_reg_pbmc_buf_xoff_threshold_set(payload, buf_index, threshold); ++ mlxsw_reg_pbmc_buf_xon_threshold_set(payload, buf_index, threshold); ++} ++ ++/* PSPA - Port Switch Partition Allocation ++ * --------------------------------------- ++ * Controls the association of a port with a switch partition and enables ++ * configuring ports as stacking ports. ++ */ ++#define MLXSW_REG_PSPA_ID 0x500D ++#define MLXSW_REG_PSPA_LEN 0x8 ++ ++MLXSW_REG_DEFINE(pspa, MLXSW_REG_PSPA_ID, MLXSW_REG_PSPA_LEN); ++ ++/* reg_pspa_swid ++ * Switch partition ID. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, pspa, swid, 0x00, 24, 8); ++ ++/* reg_pspa_local_port ++ * Local port number. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pspa, local_port, 0x00, 16, 8); ++ ++/* reg_pspa_sub_port ++ * Virtual port within the local port. Set to 0 when virtual ports are ++ * disabled on the local port. 
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, pspa, sub_port, 0x00, 8, 8); ++ ++static inline void mlxsw_reg_pspa_pack(char *payload, u8 swid, u8 local_port) ++{ ++ MLXSW_REG_ZERO(pspa, payload); ++ mlxsw_reg_pspa_swid_set(payload, swid); ++ mlxsw_reg_pspa_local_port_set(payload, local_port); ++ mlxsw_reg_pspa_sub_port_set(payload, 0); ++} ++ ++/* HTGT - Host Trap Group Table ++ * ---------------------------- ++ * Configures the properties for forwarding to CPU. ++ */ ++#define MLXSW_REG_HTGT_ID 0x7002 ++#define MLXSW_REG_HTGT_LEN 0x20 ++ ++MLXSW_REG_DEFINE(htgt, MLXSW_REG_HTGT_ID, MLXSW_REG_HTGT_LEN); ++ ++/* reg_htgt_swid ++ * Switch partition ID. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, htgt, swid, 0x00, 24, 8); ++ ++#define MLXSW_REG_HTGT_PATH_TYPE_LOCAL 0x0 /* For locally attached CPU */ ++ ++/* reg_htgt_type ++ * CPU path type. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, htgt, type, 0x00, 8, 4); ++ ++enum mlxsw_reg_htgt_trap_group { ++ MLXSW_REG_HTGT_TRAP_GROUP_EMAD, ++ MLXSW_REG_HTGT_TRAP_GROUP_SX2_RX, ++ MLXSW_REG_HTGT_TRAP_GROUP_SX2_CTRL, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_STP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_LACP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_LLDP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_IGMP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_BGP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_MULTICAST, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_ARP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_HOST_MISS, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_ROUTER_EXP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_REMOTE_ROUTE, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_IP2ME, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_DHCP, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_RPF, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_EVENT, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_MLD, ++ MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_ND, ++}; ++ ++/* reg_htgt_trap_group ++ * Trap group number. User defined number specifying which trap groups ++ * should be forwarded to the CPU. 
The mapping between trap IDs and trap
++ * groups is configured using the HPKT register.
++ * Access: Index
++ */
++MLXSW_ITEM32(reg, htgt, trap_group, 0x00, 0, 8);
++
++enum {
++	MLXSW_REG_HTGT_POLICER_DISABLE,
++	MLXSW_REG_HTGT_POLICER_ENABLE,
++};
++
++/* reg_htgt_pide
++ * Enable policer ID specified using 'pid' field.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, htgt, pide, 0x04, 15, 1);
++
++#define MLXSW_REG_HTGT_INVALID_POLICER 0xff
++
++/* reg_htgt_pid
++ * Policer ID for the trap group.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, htgt, pid, 0x04, 0, 8);
++
++#define MLXSW_REG_HTGT_TRAP_TO_CPU 0x0
++
++/* reg_htgt_mirror_action
++ * Mirror action to use.
++ * 0 - Trap to CPU.
++ * 1 - Trap to CPU and mirror to a mirroring agent.
++ * 2 - Mirror to a mirroring agent and do not trap to CPU.
++ * Access: RW
++ *
++ * Note: Mirroring to a mirroring agent is only supported in Spectrum.
++ */
++MLXSW_ITEM32(reg, htgt, mirror_action, 0x08, 8, 2);
++
++/* reg_htgt_mirroring_agent
++ * Mirroring agent.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, htgt, mirroring_agent, 0x08, 0, 3);
++
++#define MLXSW_REG_HTGT_DEFAULT_PRIORITY 0
++
++/* reg_htgt_priority
++ * Trap group priority.
++ * In case a packet matches multiple classification rules, the packet will
++ * only be trapped once, based on the trap ID associated with the group (via
++ * register HPKT) with the highest priority.
++ * Supported values are 0-7, with 7 representing the highest priority.
++ * Access: RW
++ *
++ * Note: In SwitchX-2 this field is ignored and the priority value is replaced
++ * by the 'trap_group' field.
++ */
++MLXSW_ITEM32(reg, htgt, priority, 0x0C, 0, 4);
++
++#define MLXSW_REG_HTGT_DEFAULT_TC 7
++
++/* reg_htgt_local_path_cpu_tclass
++ * CPU ingress traffic class for the trap group.
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, htgt, local_path_cpu_tclass, 0x10, 16, 6); ++ ++enum mlxsw_reg_htgt_local_path_rdq { ++ MLXSW_REG_HTGT_LOCAL_PATH_RDQ_SX2_CTRL = 0x13, ++ MLXSW_REG_HTGT_LOCAL_PATH_RDQ_SX2_RX = 0x14, ++ MLXSW_REG_HTGT_LOCAL_PATH_RDQ_SX2_EMAD = 0x15, ++ MLXSW_REG_HTGT_LOCAL_PATH_RDQ_SIB_EMAD = 0x15, ++}; ++/* reg_htgt_local_path_rdq ++ * Receive descriptor queue (RDQ) to use for the trap group. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, htgt, local_path_rdq, 0x10, 0, 6); ++ ++static inline void mlxsw_reg_htgt_pack(char *payload, u8 group, u8 policer_id, ++ u8 priority, u8 tc) ++{ ++ MLXSW_REG_ZERO(htgt, payload); ++ ++ if (policer_id == MLXSW_REG_HTGT_INVALID_POLICER) { ++ mlxsw_reg_htgt_pide_set(payload, ++ MLXSW_REG_HTGT_POLICER_DISABLE); ++ } else { ++ mlxsw_reg_htgt_pide_set(payload, ++ MLXSW_REG_HTGT_POLICER_ENABLE); ++ mlxsw_reg_htgt_pid_set(payload, policer_id); ++ } ++ ++ mlxsw_reg_htgt_type_set(payload, MLXSW_REG_HTGT_PATH_TYPE_LOCAL); ++ mlxsw_reg_htgt_trap_group_set(payload, group); ++ mlxsw_reg_htgt_mirror_action_set(payload, MLXSW_REG_HTGT_TRAP_TO_CPU); ++ mlxsw_reg_htgt_mirroring_agent_set(payload, 0); ++ mlxsw_reg_htgt_priority_set(payload, priority); ++ mlxsw_reg_htgt_local_path_cpu_tclass_set(payload, tc); ++ mlxsw_reg_htgt_local_path_rdq_set(payload, group); ++} ++ ++/* HPKT - Host Packet Trap ++ * ----------------------- ++ * Configures trap IDs inside trap groups. ++ */ ++#define MLXSW_REG_HPKT_ID 0x7003 ++#define MLXSW_REG_HPKT_LEN 0x10 ++ ++MLXSW_REG_DEFINE(hpkt, MLXSW_REG_HPKT_ID, MLXSW_REG_HPKT_LEN); ++ ++enum { ++ MLXSW_REG_HPKT_ACK_NOT_REQUIRED, ++ MLXSW_REG_HPKT_ACK_REQUIRED, ++}; ++ ++/* reg_hpkt_ack ++ * Require acknowledgements from the host for events. ++ * If set, then the device will wait for the event it sent to be acknowledged ++ * by the host. This option is only relevant for event trap IDs. ++ * Access: RW ++ * ++ * Note: Currently not supported by firmware. 
++ */ ++MLXSW_ITEM32(reg, hpkt, ack, 0x00, 24, 1); ++ ++enum mlxsw_reg_hpkt_action { ++ MLXSW_REG_HPKT_ACTION_FORWARD, ++ MLXSW_REG_HPKT_ACTION_TRAP_TO_CPU, ++ MLXSW_REG_HPKT_ACTION_MIRROR_TO_CPU, ++ MLXSW_REG_HPKT_ACTION_DISCARD, ++ MLXSW_REG_HPKT_ACTION_SOFT_DISCARD, ++ MLXSW_REG_HPKT_ACTION_TRAP_AND_SOFT_DISCARD, ++}; ++ ++/* reg_hpkt_action ++ * Action to perform on packet when trapped. ++ * 0 - No action. Forward to CPU based on switching rules. ++ * 1 - Trap to CPU (CPU receives sole copy). ++ * 2 - Mirror to CPU (CPU receives a replica of the packet). ++ * 3 - Discard. ++ * 4 - Soft discard (allow other traps to act on the packet). ++ * 5 - Trap and soft discard (allow other traps to overwrite this trap). ++ * Access: RW ++ * ++ * Note: Must be set to 0 (forward) for event trap IDs, as they are already ++ * addressed to the CPU. ++ */ ++MLXSW_ITEM32(reg, hpkt, action, 0x00, 20, 3); ++ ++/* reg_hpkt_trap_group ++ * Trap group to associate the trap with. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, hpkt, trap_group, 0x00, 12, 6); ++ ++/* reg_hpkt_trap_id ++ * Trap ID. ++ * Access: Index ++ * ++ * Note: A trap ID can only be associated with a single trap group. The device ++ * will associate the trap ID with the last trap group configured. ++ */ ++MLXSW_ITEM32(reg, hpkt, trap_id, 0x00, 0, 9); ++ ++enum { ++ MLXSW_REG_HPKT_CTRL_PACKET_DEFAULT, ++ MLXSW_REG_HPKT_CTRL_PACKET_NO_BUFFER, ++ MLXSW_REG_HPKT_CTRL_PACKET_USE_BUFFER, ++}; ++ ++/* reg_hpkt_ctrl ++ * Configure dedicated buffer resources for control packets. ++ * Ignored by SwitchX-2. ++ * 0 - Keep factory defaults. ++ * 1 - Do not use control buffer for this trap ID. ++ * 2 - Use control buffer for this trap ID. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, hpkt, ctrl, 0x04, 16, 2); ++ ++static inline void mlxsw_reg_hpkt_pack(char *payload, u8 action, u16 trap_id, ++ enum mlxsw_reg_htgt_trap_group trap_group, ++ bool is_ctrl) ++{ ++ MLXSW_REG_ZERO(hpkt, payload); ++ mlxsw_reg_hpkt_ack_set(payload, MLXSW_REG_HPKT_ACK_NOT_REQUIRED); ++ mlxsw_reg_hpkt_action_set(payload, action); ++ mlxsw_reg_hpkt_trap_group_set(payload, trap_group); ++ mlxsw_reg_hpkt_trap_id_set(payload, trap_id); ++ mlxsw_reg_hpkt_ctrl_set(payload, is_ctrl ? ++ MLXSW_REG_HPKT_CTRL_PACKET_USE_BUFFER : ++ MLXSW_REG_HPKT_CTRL_PACKET_NO_BUFFER); ++} ++ ++/* RGCR - Router General Configuration Register ++ * -------------------------------------------- ++ * The register is used for setting up the router configuration. ++ */ ++#define MLXSW_REG_RGCR_ID 0x8001 ++#define MLXSW_REG_RGCR_LEN 0x28 ++ ++MLXSW_REG_DEFINE(rgcr, MLXSW_REG_RGCR_ID, MLXSW_REG_RGCR_LEN); ++ ++/* reg_rgcr_ipv4_en ++ * IPv4 router enable. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rgcr, ipv4_en, 0x00, 31, 1); ++ ++/* reg_rgcr_ipv6_en ++ * IPv6 router enable. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rgcr, ipv6_en, 0x00, 30, 1); ++ ++/* reg_rgcr_max_router_interfaces ++ * Defines the maximum number of active router interfaces for all virtual ++ * routers. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rgcr, max_router_interfaces, 0x10, 0, 16); ++ ++/* reg_rgcr_usp ++ * Update switch priority and packet color. ++ * 0 - Preserve the value of Switch Priority and packet color. ++ * 1 - Recalculate the value of Switch Priority and packet color. ++ * Access: RW ++ * ++ * Note: Not supported by SwitchX and SwitchX-2. ++ */ ++MLXSW_ITEM32(reg, rgcr, usp, 0x18, 20, 1); ++ ++/* reg_rgcr_pcp_rw ++ * Indicates how to handle the pcp_rewrite_en value: ++ * 0 - Preserve the value of pcp_rewrite_en. ++ * 2 - Disable PCP rewrite. ++ * 3 - Enable PCP rewrite. ++ * Access: RW ++ * ++ * Note: Not supported by SwitchX and SwitchX-2. 
++ */ ++MLXSW_ITEM32(reg, rgcr, pcp_rw, 0x18, 16, 2); ++ ++/* reg_rgcr_activity_dis ++ * Activity disable: ++ * 0 - Activity will be set when an entry is hit (default). ++ * 1 - Activity will not be set when an entry is hit. ++ * ++ * Bit 0 - Disable activity bit in Router Algorithmic LPM Unicast Entry ++ * (RALUE). ++ * Bit 1 - Disable activity bit in Router Algorithmic LPM Unicast Host ++ * Entry (RAUHT). ++ * Bits 2:7 are reserved. ++ * Access: RW ++ * ++ * Note: Not supported by SwitchX, SwitchX-2 and Switch-IB. ++ */ ++MLXSW_ITEM32(reg, rgcr, activity_dis, 0x20, 0, 8); ++ ++static inline void mlxsw_reg_rgcr_pack(char *payload, bool ipv4_en, ++ bool ipv6_en) ++{ ++ MLXSW_REG_ZERO(rgcr, payload); ++ mlxsw_reg_rgcr_ipv4_en_set(payload, ipv4_en); ++ mlxsw_reg_rgcr_ipv6_en_set(payload, ipv6_en); ++} ++ ++/* RITR - Router Interface Table Register ++ * -------------------------------------- ++ * The register is used to configure the router interface table. ++ */ ++#define MLXSW_REG_RITR_ID 0x8002 ++#define MLXSW_REG_RITR_LEN 0x40 ++ ++MLXSW_REG_DEFINE(ritr, MLXSW_REG_RITR_ID, MLXSW_REG_RITR_LEN); ++ ++/* reg_ritr_enable ++ * Enables routing on the router interface. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, enable, 0x00, 31, 1); ++ ++/* reg_ritr_ipv4 ++ * IPv4 routing enable. Enables routing of IPv4 traffic on the router ++ * interface. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv4, 0x00, 29, 1); ++ ++/* reg_ritr_ipv6 ++ * IPv6 routing enable. Enables routing of IPv6 traffic on the router ++ * interface. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv6, 0x00, 28, 1); ++ ++/* reg_ritr_ipv4_mc ++ * IPv4 multicast routing enable. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv4_mc, 0x00, 27, 1); ++ ++/* reg_ritr_ipv6_mc ++ * IPv6 multicast routing enable. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv6_mc, 0x00, 26, 1); ++ ++enum mlxsw_reg_ritr_if_type { ++ /* VLAN interface. */ ++ MLXSW_REG_RITR_VLAN_IF, ++ /* FID interface. 
*/ ++ MLXSW_REG_RITR_FID_IF, ++ /* Sub-port interface. */ ++ MLXSW_REG_RITR_SP_IF, ++ /* Loopback Interface. */ ++ MLXSW_REG_RITR_LOOPBACK_IF, ++}; ++ ++/* reg_ritr_type ++ * Router interface type as per enum mlxsw_reg_ritr_if_type. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, type, 0x00, 23, 3); ++ ++enum { ++ MLXSW_REG_RITR_RIF_CREATE, ++ MLXSW_REG_RITR_RIF_DEL, ++}; ++ ++/* reg_ritr_op ++ * Opcode: ++ * 0 - Create or edit RIF. ++ * 1 - Delete RIF. ++ * Reserved for SwitchX-2. For Spectrum, editing of interface properties ++ * is not supported. An interface must be deleted and re-created in order ++ * to update properties. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, ritr, op, 0x00, 20, 2); ++ ++/* reg_ritr_rif ++ * Router interface index. A pointer to the Router Interface Table. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ritr, rif, 0x00, 0, 16); ++ ++/* reg_ritr_ipv4_fe ++ * IPv4 Forwarding Enable. ++ * Enables routing of IPv4 traffic on the router interface. When disabled, ++ * forwarding is blocked but local traffic (traps and IP2ME) will be enabled. ++ * Not supported in SwitchX-2. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv4_fe, 0x04, 29, 1); ++ ++/* reg_ritr_ipv6_fe ++ * IPv6 Forwarding Enable. ++ * Enables routing of IPv6 traffic on the router interface. When disabled, ++ * forwarding is blocked but local traffic (traps and IP2ME) will be enabled. ++ * Not supported in SwitchX-2. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv6_fe, 0x04, 28, 1); ++ ++/* reg_ritr_ipv4_mc_fe ++ * IPv4 Multicast Forwarding Enable. ++ * When disabled, forwarding is blocked but local traffic (traps and IP to me) ++ * will be enabled. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv4_mc_fe, 0x04, 27, 1); ++ ++/* reg_ritr_ipv6_mc_fe ++ * IPv6 Multicast Forwarding Enable. ++ * When disabled, forwarding is blocked but local traffic (traps and IP to me) ++ * will be enabled. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, ipv6_mc_fe, 0x04, 26, 1); ++ ++/* reg_ritr_lb_en ++ * Loop-back filter enable for unicast packets. ++ * If the flag is set then loop-back filter for unicast packets is ++ * implemented on the RIF. Multicast packets are always subject to ++ * loop-back filtering. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, lb_en, 0x04, 24, 1); ++ ++/* reg_ritr_virtual_router ++ * Virtual router ID associated with the router interface. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, virtual_router, 0x04, 0, 16); ++ ++/* reg_ritr_mtu ++ * Router interface MTU. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, mtu, 0x34, 0, 16); ++ ++/* reg_ritr_if_swid ++ * Switch partition ID. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, if_swid, 0x08, 24, 8); ++ ++/* reg_ritr_if_mac ++ * Router interface MAC address. ++ * In Spectrum, all MAC addresses must have the same 38 MSBits. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, ritr, if_mac, 0x12, 6); ++ ++/* reg_ritr_if_vrrp_id_ipv6 ++ * VRRP ID for IPv6 ++ * Note: Reserved for RIF types other than VLAN, FID and Sub-port. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, if_vrrp_id_ipv6, 0x1C, 8, 8); ++ ++/* reg_ritr_if_vrrp_id_ipv4 ++ * VRRP ID for IPv4 ++ * Note: Reserved for RIF types other than VLAN, FID and Sub-port. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, if_vrrp_id_ipv4, 0x1C, 0, 8); ++ ++/* VLAN Interface */ ++ ++/* reg_ritr_vlan_if_vid ++ * VLAN ID. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, vlan_if_vid, 0x08, 0, 12); ++ ++/* FID Interface */ ++ ++/* reg_ritr_fid_if_fid ++ * Filtering ID. Used to connect a bridge to the router. Only FIDs from ++ * the vFID range are supported. 
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, fid_if_fid, 0x08, 0, 16);
++
++static inline void mlxsw_reg_ritr_fid_set(char *payload,
++					  enum mlxsw_reg_ritr_if_type rif_type,
++					  u16 fid)
++{
++	if (rif_type == MLXSW_REG_RITR_FID_IF)
++		mlxsw_reg_ritr_fid_if_fid_set(payload, fid);
++	else
++		mlxsw_reg_ritr_vlan_if_vid_set(payload, fid);
++}
++
++/* Sub-port Interface */
++
++/* reg_ritr_sp_if_lag
++ * LAG indication. When this bit is set the system_port field holds the
++ * LAG identifier.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, sp_if_lag, 0x08, 24, 1);
++
++/* reg_ritr_sp_if_system_port
++ * Port unique identifier. When the lag bit is set, this field holds the
++ * lag_id in bits 0:9.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, sp_if_system_port, 0x08, 0, 16);
++
++/* reg_ritr_sp_if_vid
++ * VLAN ID.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, sp_if_vid, 0x18, 0, 12);
++
++/* Loopback Interface */
++
++enum mlxsw_reg_ritr_loopback_protocol {
++	/* IPinIP IPv4 underlay Unicast */
++	MLXSW_REG_RITR_LOOPBACK_PROTOCOL_IPIP_IPV4,
++	/* IPinIP IPv6 underlay Unicast */
++	MLXSW_REG_RITR_LOOPBACK_PROTOCOL_IPIP_IPV6,
++};
++
++/* reg_ritr_loopback_protocol
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, loopback_protocol, 0x08, 28, 4);
++
++enum mlxsw_reg_ritr_loopback_ipip_type {
++	/* Tunnel is IPinIP. */
++	MLXSW_REG_RITR_LOOPBACK_IPIP_TYPE_IP_IN_IP,
++	/* Tunnel is GRE, no key. */
++	MLXSW_REG_RITR_LOOPBACK_IPIP_TYPE_IP_IN_GRE_IN_IP,
++	/* Tunnel is GRE, with a key. */
++	MLXSW_REG_RITR_LOOPBACK_IPIP_TYPE_IP_IN_GRE_KEY_IN_IP,
++};
++
++/* reg_ritr_loopback_ipip_type
++ * Encapsulation type.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, loopback_ipip_type, 0x10, 24, 4);
++
++enum mlxsw_reg_ritr_loopback_ipip_options {
++	/* The key is defined by gre_key.
*/
++	MLXSW_REG_RITR_LOOPBACK_IPIP_OPTIONS_GRE_KEY_PRESET,
++};
++
++/* reg_ritr_loopback_ipip_options
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, loopback_ipip_options, 0x10, 20, 4);
++
++/* reg_ritr_loopback_ipip_uvr
++ * Underlay Virtual Router ID.
++ * Range is 0..cap_max_virtual_routers-1.
++ * Reserved for Spectrum-2.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, loopback_ipip_uvr, 0x10, 0, 16);
++
++/* reg_ritr_loopback_ipip_usip*
++ * Encapsulation Underlay source IP.
++ * Access: RW
++ */
++MLXSW_ITEM_BUF(reg, ritr, loopback_ipip_usip6, 0x18, 16);
++MLXSW_ITEM32(reg, ritr, loopback_ipip_usip4, 0x24, 0, 32);
++
++/* reg_ritr_loopback_ipip_gre_key
++ * GRE Key.
++ * Reserved when ipip_type is not IP_IN_GRE_KEY_IN_IP.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, loopback_ipip_gre_key, 0x28, 0, 32);
++
++/* Shared between ingress/egress */
++enum mlxsw_reg_ritr_counter_set_type {
++	/* No Count. */
++	MLXSW_REG_RITR_COUNTER_SET_TYPE_NO_COUNT = 0x0,
++	/* Basic. Used for router interfaces, counting the following:
++	 * - Error and Discard counters.
++	 * - Unicast, Multicast and Broadcast counters. Sharing the
++	 *   same set of counters for the different type of traffic
++	 *   (IPv4, IPv6 and mpls).
++	 */
++	MLXSW_REG_RITR_COUNTER_SET_TYPE_BASIC = 0x9,
++};
++
++/* reg_ritr_ingress_counter_index
++ * Counter Index for flow counter.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, ingress_counter_index, 0x38, 0, 24);
++
++/* reg_ritr_ingress_counter_set_type
++ * Ingress Counter Set Type for router interface counter.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, ingress_counter_set_type, 0x38, 24, 8);
++
++/* reg_ritr_egress_counter_index
++ * Counter Index for flow counter.
++ * Access: RW
++ */
++MLXSW_ITEM32(reg, ritr, egress_counter_index, 0x3C, 0, 24);
++
++/* reg_ritr_egress_counter_set_type
++ * Egress Counter Set Type for router interface counter.
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ritr, egress_counter_set_type, 0x3C, 24, 8); ++ ++static inline void mlxsw_reg_ritr_counter_pack(char *payload, u32 index, ++ bool enable, bool egress) ++{ ++ enum mlxsw_reg_ritr_counter_set_type set_type; ++ ++ if (enable) ++ set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_BASIC; ++ else ++ set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_NO_COUNT; ++ mlxsw_reg_ritr_egress_counter_set_type_set(payload, set_type); ++ ++ if (egress) ++ mlxsw_reg_ritr_egress_counter_index_set(payload, index); ++ else ++ mlxsw_reg_ritr_ingress_counter_index_set(payload, index); ++} ++ ++static inline void mlxsw_reg_ritr_rif_pack(char *payload, u16 rif) ++{ ++ MLXSW_REG_ZERO(ritr, payload); ++ mlxsw_reg_ritr_rif_set(payload, rif); ++} ++ ++static inline void mlxsw_reg_ritr_sp_if_pack(char *payload, bool lag, ++ u16 system_port, u16 vid) ++{ ++ mlxsw_reg_ritr_sp_if_lag_set(payload, lag); ++ mlxsw_reg_ritr_sp_if_system_port_set(payload, system_port); ++ mlxsw_reg_ritr_sp_if_vid_set(payload, vid); ++} ++ ++static inline void mlxsw_reg_ritr_pack(char *payload, bool enable, ++ enum mlxsw_reg_ritr_if_type type, ++ u16 rif, u16 vr_id, u16 mtu) ++{ ++ bool op = enable ? 
MLXSW_REG_RITR_RIF_CREATE : MLXSW_REG_RITR_RIF_DEL; ++ ++ MLXSW_REG_ZERO(ritr, payload); ++ mlxsw_reg_ritr_enable_set(payload, enable); ++ mlxsw_reg_ritr_ipv4_set(payload, 1); ++ mlxsw_reg_ritr_ipv6_set(payload, 1); ++ mlxsw_reg_ritr_ipv4_mc_set(payload, 1); ++ mlxsw_reg_ritr_ipv6_mc_set(payload, 1); ++ mlxsw_reg_ritr_type_set(payload, type); ++ mlxsw_reg_ritr_op_set(payload, op); ++ mlxsw_reg_ritr_rif_set(payload, rif); ++ mlxsw_reg_ritr_ipv4_fe_set(payload, 1); ++ mlxsw_reg_ritr_ipv6_fe_set(payload, 1); ++ mlxsw_reg_ritr_ipv4_mc_fe_set(payload, 1); ++ mlxsw_reg_ritr_ipv6_mc_fe_set(payload, 1); ++ mlxsw_reg_ritr_lb_en_set(payload, 1); ++ mlxsw_reg_ritr_virtual_router_set(payload, vr_id); ++ mlxsw_reg_ritr_mtu_set(payload, mtu); ++} ++ ++static inline void mlxsw_reg_ritr_mac_pack(char *payload, const char *mac) ++{ ++ mlxsw_reg_ritr_if_mac_memcpy_to(payload, mac); ++} ++ ++static inline void ++mlxsw_reg_ritr_loopback_ipip_common_pack(char *payload, ++ enum mlxsw_reg_ritr_loopback_ipip_type ipip_type, ++ enum mlxsw_reg_ritr_loopback_ipip_options options, ++ u16 uvr_id, u32 gre_key) ++{ ++ mlxsw_reg_ritr_loopback_ipip_type_set(payload, ipip_type); ++ mlxsw_reg_ritr_loopback_ipip_options_set(payload, options); ++ mlxsw_reg_ritr_loopback_ipip_uvr_set(payload, uvr_id); ++ mlxsw_reg_ritr_loopback_ipip_gre_key_set(payload, gre_key); ++} ++ ++static inline void ++mlxsw_reg_ritr_loopback_ipip4_pack(char *payload, ++ enum mlxsw_reg_ritr_loopback_ipip_type ipip_type, ++ enum mlxsw_reg_ritr_loopback_ipip_options options, ++ u16 uvr_id, u32 usip, u32 gre_key) ++{ ++ mlxsw_reg_ritr_loopback_protocol_set(payload, ++ MLXSW_REG_RITR_LOOPBACK_PROTOCOL_IPIP_IPV4); ++ mlxsw_reg_ritr_loopback_ipip_common_pack(payload, ipip_type, options, ++ uvr_id, gre_key); ++ mlxsw_reg_ritr_loopback_ipip_usip4_set(payload, usip); ++} ++ ++/* RTAR - Router TCAM Allocation Register ++ * -------------------------------------- ++ * This register is used for allocation of regions in the TCAM table. 
++ */ ++#define MLXSW_REG_RTAR_ID 0x8004 ++#define MLXSW_REG_RTAR_LEN 0x20 ++ ++MLXSW_REG_DEFINE(rtar, MLXSW_REG_RTAR_ID, MLXSW_REG_RTAR_LEN); ++ ++enum mlxsw_reg_rtar_op { ++ MLXSW_REG_RTAR_OP_ALLOCATE, ++ MLXSW_REG_RTAR_OP_RESIZE, ++ MLXSW_REG_RTAR_OP_DEALLOCATE, ++}; ++ ++/* reg_rtar_op ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, rtar, op, 0x00, 28, 4); ++ ++enum mlxsw_reg_rtar_key_type { ++ MLXSW_REG_RTAR_KEY_TYPE_IPV4_MULTICAST = 1, ++ MLXSW_REG_RTAR_KEY_TYPE_IPV6_MULTICAST = 3 ++}; ++ ++/* reg_rtar_key_type ++ * TCAM key type for the region. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, rtar, key_type, 0x00, 0, 8); ++ ++/* reg_rtar_region_size ++ * TCAM region size. When allocating/resizing this is the requested ++ * size, the response is the actual size. ++ * Note: Actual size may be larger than requested. ++ * Reserved for op = Deallocate ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, rtar, region_size, 0x04, 0, 16); ++ ++static inline void mlxsw_reg_rtar_pack(char *payload, ++ enum mlxsw_reg_rtar_op op, ++ enum mlxsw_reg_rtar_key_type key_type, ++ u16 region_size) ++{ ++ MLXSW_REG_ZERO(rtar, payload); ++ mlxsw_reg_rtar_op_set(payload, op); ++ mlxsw_reg_rtar_key_type_set(payload, key_type); ++ mlxsw_reg_rtar_region_size_set(payload, region_size); ++} ++ ++/* RATR - Router Adjacency Table Register ++ * -------------------------------------- ++ * The RATR register is used to configure the Router Adjacency (next-hop) ++ * Table. ++ */ ++#define MLXSW_REG_RATR_ID 0x8008 ++#define MLXSW_REG_RATR_LEN 0x2C ++ ++MLXSW_REG_DEFINE(ratr, MLXSW_REG_RATR_ID, MLXSW_REG_RATR_LEN); ++ ++enum mlxsw_reg_ratr_op { ++ /* Read */ ++ MLXSW_REG_RATR_OP_QUERY_READ = 0, ++ /* Read and clear activity */ ++ MLXSW_REG_RATR_OP_QUERY_READ_CLEAR = 2, ++ /* Write Adjacency entry */ ++ MLXSW_REG_RATR_OP_WRITE_WRITE_ENTRY = 1, ++ /* Write Adjacency entry only if the activity is cleared. ++ * The write may not succeed if the activity is set. 
There is no ++ * direct feedback if the write has succeeded or not, however ++ * the get will reveal the actual entry (SW can compare the get ++ * response to the set command). ++ */ ++ MLXSW_REG_RATR_OP_WRITE_WRITE_ENTRY_ON_ACTIVITY = 3, ++}; ++ ++/* reg_ratr_op ++ * Note that Write operation may also be used for updating ++ * counter_set_type and counter_index. In this case all other ++ * fields must not be updated. ++ * Access: OP ++ */ ++MLXSW_ITEM32(reg, ratr, op, 0x00, 28, 4); ++ ++/* reg_ratr_v ++ * Valid bit. Indicates if the adjacency entry is valid. ++ * Note: the device may need some time before reusing an invalidated ++ * entry. During this time the entry can not be reused. It is ++ * recommended to use another entry before reusing an invalidated ++ * entry (e.g. software can put it at the end of the list for ++ * reusing). Trying to access an invalidated entry not yet cleared ++ * by the device results in failure indicating "Try Again" status. ++ * When valid is '0' then egress_router_interface, trap_action, ++ * adjacency_parameters and counters are reserved ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, v, 0x00, 24, 1); ++ ++/* reg_ratr_a ++ * Activity. Set for new entries. Set if a packet lookup has hit on ++ * the specific entry. To clear the a bit, use "clear activity". ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, ratr, a, 0x00, 16, 1); ++ ++enum mlxsw_reg_ratr_type { ++ /* Ethernet */ ++ MLXSW_REG_RATR_TYPE_ETHERNET, ++ /* IPoIB Unicast without GRH. ++ * Reserved for Spectrum. ++ */ ++ MLXSW_REG_RATR_TYPE_IPOIB_UC, ++ /* IPoIB Unicast with GRH. Supported only in table 0 (Ethernet unicast ++ * adjacency). ++ * Reserved for Spectrum. ++ */ ++ MLXSW_REG_RATR_TYPE_IPOIB_UC_W_GRH, ++ /* IPoIB Multicast. ++ * Reserved for Spectrum. ++ */ ++ MLXSW_REG_RATR_TYPE_IPOIB_MC, ++ /* MPLS. ++ * Reserved for SwitchX/-2. ++ */ ++ MLXSW_REG_RATR_TYPE_MPLS, ++ /* IPinIP Encap. ++ * Reserved for SwitchX/-2. 
++ */ ++ MLXSW_REG_RATR_TYPE_IPIP, ++}; ++ ++/* reg_ratr_type ++ * Adjacency entry type. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, type, 0x04, 28, 4); ++ ++/* reg_ratr_adjacency_index_low ++ * Bits 15:0 of index into the adjacency table. ++ * For SwitchX and SwitchX-2, the adjacency table is linear and ++ * used for adjacency entries only. ++ * For Spectrum, the index is to the KVD linear. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ratr, adjacency_index_low, 0x04, 0, 16); ++ ++/* reg_ratr_egress_router_interface ++ * Range is 0 .. cap_max_router_interfaces - 1 ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, egress_router_interface, 0x08, 0, 16); ++ ++enum mlxsw_reg_ratr_trap_action { ++ MLXSW_REG_RATR_TRAP_ACTION_NOP, ++ MLXSW_REG_RATR_TRAP_ACTION_TRAP, ++ MLXSW_REG_RATR_TRAP_ACTION_MIRROR_TO_CPU, ++ MLXSW_REG_RATR_TRAP_ACTION_MIRROR, ++ MLXSW_REG_RATR_TRAP_ACTION_DISCARD_ERRORS, ++}; ++ ++/* reg_ratr_trap_action ++ * see mlxsw_reg_ratr_trap_action ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, trap_action, 0x0C, 28, 4); ++ ++/* reg_ratr_adjacency_index_high ++ * Bits 23:16 of the adjacency_index. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ratr, adjacency_index_high, 0x0C, 16, 8); ++ ++enum mlxsw_reg_ratr_trap_id { ++ MLXSW_REG_RATR_TRAP_ID_RTR_EGRESS0, ++ MLXSW_REG_RATR_TRAP_ID_RTR_EGRESS1, ++}; ++ ++/* reg_ratr_trap_id ++ * Trap ID to be reported to CPU. ++ * Trap-ID is RTR_EGRESS0 or RTR_EGRESS1. ++ * For trap_action of NOP, MIRROR and DISCARD_ERROR ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, trap_id, 0x0C, 0, 8); ++ ++/* reg_ratr_eth_destination_mac ++ * MAC address of the destination next-hop. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, ratr, eth_destination_mac, 0x12, 6); ++ ++enum mlxsw_reg_ratr_ipip_type { ++ /* IPv4, address set by mlxsw_reg_ratr_ipip_ipv4_udip. */ ++ MLXSW_REG_RATR_IPIP_TYPE_IPV4, ++ /* IPv6, address set by mlxsw_reg_ratr_ipip_ipv6_ptr. 
*/ ++ MLXSW_REG_RATR_IPIP_TYPE_IPV6, ++}; ++ ++/* reg_ratr_ipip_type ++ * Underlay destination ip type. ++ * Note: the type field must match the protocol of the router interface. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, ipip_type, 0x10, 16, 4); ++ ++/* reg_ratr_ipip_ipv4_udip ++ * Underlay ipv4 dip. ++ * Reserved when ipip_type is IPv6. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, ipip_ipv4_udip, 0x18, 0, 32); ++ ++/* reg_ratr_ipip_ipv6_ptr ++ * Pointer to IPv6 underlay destination ip address. ++ * For Spectrum: Pointer to KVD linear space. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, ipip_ipv6_ptr, 0x1C, 0, 24); ++ ++enum mlxsw_reg_flow_counter_set_type { ++ /* No count */ ++ MLXSW_REG_FLOW_COUNTER_SET_TYPE_NO_COUNT = 0x00, ++ /* Count packets and bytes */ ++ MLXSW_REG_FLOW_COUNTER_SET_TYPE_PACKETS_BYTES = 0x03, ++ /* Count only packets */ ++ MLXSW_REG_FLOW_COUNTER_SET_TYPE_PACKETS = 0x05, ++}; ++ ++/* reg_ratr_counter_set_type ++ * Counter set type for flow counters ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, counter_set_type, 0x28, 24, 8); ++ ++/* reg_ratr_counter_index ++ * Counter index for flow counters ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ratr, counter_index, 0x28, 0, 24); ++ ++static inline void ++mlxsw_reg_ratr_pack(char *payload, ++ enum mlxsw_reg_ratr_op op, bool valid, ++ enum mlxsw_reg_ratr_type type, ++ u32 adjacency_index, u16 egress_rif) ++{ ++ MLXSW_REG_ZERO(ratr, payload); ++ mlxsw_reg_ratr_op_set(payload, op); ++ mlxsw_reg_ratr_v_set(payload, valid); ++ mlxsw_reg_ratr_type_set(payload, type); ++ mlxsw_reg_ratr_adjacency_index_low_set(payload, adjacency_index); ++ mlxsw_reg_ratr_adjacency_index_high_set(payload, adjacency_index >> 16); ++ mlxsw_reg_ratr_egress_router_interface_set(payload, egress_rif); ++} ++ ++static inline void mlxsw_reg_ratr_eth_entry_pack(char *payload, ++ const char *dest_mac) ++{ ++ mlxsw_reg_ratr_eth_destination_mac_memcpy_to(payload, dest_mac); ++} ++ ++static inline void 
mlxsw_reg_ratr_ipip4_entry_pack(char *payload, u32 ipv4_udip) ++{ ++ mlxsw_reg_ratr_ipip_type_set(payload, MLXSW_REG_RATR_IPIP_TYPE_IPV4); ++ mlxsw_reg_ratr_ipip_ipv4_udip_set(payload, ipv4_udip); ++} ++ ++static inline void mlxsw_reg_ratr_counter_pack(char *payload, u64 counter_index, ++ bool counter_enable) ++{ ++ enum mlxsw_reg_flow_counter_set_type set_type; ++ ++ if (counter_enable) ++ set_type = MLXSW_REG_FLOW_COUNTER_SET_TYPE_PACKETS_BYTES; ++ else ++ set_type = MLXSW_REG_FLOW_COUNTER_SET_TYPE_NO_COUNT; ++ ++ mlxsw_reg_ratr_counter_index_set(payload, counter_index); ++ mlxsw_reg_ratr_counter_set_type_set(payload, set_type); ++} ++ ++/* RDPM - Router DSCP to Priority Mapping ++ * -------------------------------------- ++ * Controls the mapping from DSCP field to switch priority on routed packets ++ */ ++#define MLXSW_REG_RDPM_ID 0x8009 ++#define MLXSW_REG_RDPM_BASE_LEN 0x00 ++#define MLXSW_REG_RDPM_DSCP_ENTRY_REC_LEN 0x01 ++#define MLXSW_REG_RDPM_DSCP_ENTRY_REC_MAX_COUNT 64 ++#define MLXSW_REG_RDPM_LEN 0x40 ++#define MLXSW_REG_RDPM_LAST_ENTRY (MLXSW_REG_RDPM_BASE_LEN + \ ++ MLXSW_REG_RDPM_LEN - \ ++ MLXSW_REG_RDPM_DSCP_ENTRY_REC_LEN) ++ ++MLXSW_REG_DEFINE(rdpm, MLXSW_REG_RDPM_ID, MLXSW_REG_RDPM_LEN); ++ ++/* reg_dscp_entry_e ++ * Enable update of the specific entry ++ * Access: Index ++ */ ++MLXSW_ITEM8_INDEXED(reg, rdpm, dscp_entry_e, MLXSW_REG_RDPM_LAST_ENTRY, 7, 1, ++ -MLXSW_REG_RDPM_DSCP_ENTRY_REC_LEN, 0x00, false); ++ ++/* reg_dscp_entry_prio ++ * Switch Priority ++ * Access: RW ++ */ ++MLXSW_ITEM8_INDEXED(reg, rdpm, dscp_entry_prio, MLXSW_REG_RDPM_LAST_ENTRY, 0, 4, ++ -MLXSW_REG_RDPM_DSCP_ENTRY_REC_LEN, 0x00, false); ++ ++static inline void mlxsw_reg_rdpm_pack(char *payload, unsigned short index, ++ u8 prio) ++{ ++ mlxsw_reg_rdpm_dscp_entry_e_set(payload, index, 1); ++ mlxsw_reg_rdpm_dscp_entry_prio_set(payload, index, prio); ++} ++ ++/* RICNT - Router Interface Counter Register ++ * ----------------------------------------- ++ * The RICNT register 
retrieves per port performance counters ++ */ ++#define MLXSW_REG_RICNT_ID 0x800B ++#define MLXSW_REG_RICNT_LEN 0x100 ++ ++MLXSW_REG_DEFINE(ricnt, MLXSW_REG_RICNT_ID, MLXSW_REG_RICNT_LEN); ++ ++/* reg_ricnt_counter_index ++ * Counter index ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ricnt, counter_index, 0x04, 0, 24); ++ ++enum mlxsw_reg_ricnt_counter_set_type { ++ /* No Count. */ ++ MLXSW_REG_RICNT_COUNTER_SET_TYPE_NO_COUNT = 0x00, ++ /* Basic. Used for router interfaces, counting the following: ++ * - Error and Discard counters. ++ * - Unicast, Multicast and Broadcast counters. Sharing the ++ * same set of counters for the different type of traffic ++ * (IPv4, IPv6 and mpls). ++ */ ++ MLXSW_REG_RICNT_COUNTER_SET_TYPE_BASIC = 0x09, ++}; ++ ++/* reg_ricnt_counter_set_type ++ * Counter Set Type for router interface counter ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ricnt, counter_set_type, 0x04, 24, 8); ++ ++enum mlxsw_reg_ricnt_opcode { ++ /* Nop. Supported only for read access*/ ++ MLXSW_REG_RICNT_OPCODE_NOP = 0x00, ++ /* Clear. Setting the clr bit will reset the counter value for ++ * all counters of the specified Router Interface. ++ */ ++ MLXSW_REG_RICNT_OPCODE_CLEAR = 0x08, ++}; ++ ++/* reg_ricnt_opcode ++ * Opcode ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ricnt, op, 0x00, 28, 4); ++ ++/* reg_ricnt_good_unicast_packets ++ * good unicast packets. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_unicast_packets, 0x08, 0, 64); ++ ++/* reg_ricnt_good_multicast_packets ++ * good multicast packets. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_multicast_packets, 0x10, 0, 64); ++ ++/* reg_ricnt_good_broadcast_packets ++ * good broadcast packets ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_broadcast_packets, 0x18, 0, 64); ++ ++/* reg_ricnt_good_unicast_bytes ++ * A count of L3 data and padding octets not including L2 headers ++ * for good unicast frames. 
++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_unicast_bytes, 0x20, 0, 64); ++ ++/* reg_ricnt_good_multicast_bytes ++ * A count of L3 data and padding octets not including L2 headers ++ * for good multicast frames. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_multicast_bytes, 0x28, 0, 64); ++ ++/* reg_ricnt_good_broadcast_bytes ++ * A count of L3 data and padding octets not including L2 headers ++ * for good broadcast frames. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, good_broadcast_bytes, 0x30, 0, 64); ++ ++/* reg_ricnt_error_packets ++ * A count of errored frames that do not pass the router checks. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, error_packets, 0x38, 0, 64); ++ ++/* reg_ricnt_discard_packets ++ * A count of non-errored frames that do not pass the router checks. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, discard_packets, 0x40, 0, 64); ++ ++/* reg_ricnt_error_bytes ++ * A count of L3 data and padding octets not including L2 headers ++ * for errored frames. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, error_bytes, 0x48, 0, 64); ++ ++/* reg_ricnt_discard_bytes ++ * A count of L3 data and padding octets not including L2 headers ++ * for non-errored frames that do not pass the router checks. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, ricnt, discard_bytes, 0x50, 0, 64); ++ ++static inline void mlxsw_reg_ricnt_pack(char *payload, u32 index, ++ enum mlxsw_reg_ricnt_opcode op) ++{ ++ MLXSW_REG_ZERO(ricnt, payload); ++ mlxsw_reg_ricnt_op_set(payload, op); ++ mlxsw_reg_ricnt_counter_index_set(payload, index); ++ mlxsw_reg_ricnt_counter_set_type_set(payload, ++ MLXSW_REG_RICNT_COUNTER_SET_TYPE_BASIC); ++} ++ ++/* RRCR - Router Rules Copy Register Layout ++ * ---------------------------------------- ++ * This register is used for moving and copying route entry rules. 
++ */ ++#define MLXSW_REG_RRCR_ID 0x800F ++#define MLXSW_REG_RRCR_LEN 0x24 ++ ++MLXSW_REG_DEFINE(rrcr, MLXSW_REG_RRCR_ID, MLXSW_REG_RRCR_LEN); ++ ++enum mlxsw_reg_rrcr_op { ++ /* Move rules */ ++ MLXSW_REG_RRCR_OP_MOVE, ++ /* Copy rules */ ++ MLXSW_REG_RRCR_OP_COPY, ++}; ++ ++/* reg_rrcr_op ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, rrcr, op, 0x00, 28, 4); ++ ++/* reg_rrcr_offset ++ * Offset within the region from which to copy/move. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rrcr, offset, 0x00, 0, 16); ++ ++/* reg_rrcr_size ++ * The number of rules to copy/move. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, rrcr, size, 0x04, 0, 16); ++ ++/* reg_rrcr_table_id ++ * Identifier of the table on which to perform the operation. Encoding is the ++ * same as in RTAR.key_type ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rrcr, table_id, 0x10, 0, 4); ++ ++/* reg_rrcr_dest_offset ++ * Offset within the region to which to copy/move ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rrcr, dest_offset, 0x20, 0, 16); ++ ++static inline void mlxsw_reg_rrcr_pack(char *payload, enum mlxsw_reg_rrcr_op op, ++ u16 offset, u16 size, ++ enum mlxsw_reg_rtar_key_type table_id, ++ u16 dest_offset) ++{ ++ MLXSW_REG_ZERO(rrcr, payload); ++ mlxsw_reg_rrcr_op_set(payload, op); ++ mlxsw_reg_rrcr_offset_set(payload, offset); ++ mlxsw_reg_rrcr_size_set(payload, size); ++ mlxsw_reg_rrcr_table_id_set(payload, table_id); ++ mlxsw_reg_rrcr_dest_offset_set(payload, dest_offset); ++} ++ ++/* RALTA - Router Algorithmic LPM Tree Allocation Register ++ * ------------------------------------------------------- ++ * RALTA is used to allocate the LPM trees of the SHSPM method. 
++ */ ++#define MLXSW_REG_RALTA_ID 0x8010 ++#define MLXSW_REG_RALTA_LEN 0x04 ++ ++MLXSW_REG_DEFINE(ralta, MLXSW_REG_RALTA_ID, MLXSW_REG_RALTA_LEN); ++ ++/* reg_ralta_op ++ * opcode (valid for Write, must be 0 on Read) ++ * 0 - allocate a tree ++ * 1 - deallocate a tree ++ * Access: OP ++ */ ++MLXSW_ITEM32(reg, ralta, op, 0x00, 28, 2); ++ ++enum mlxsw_reg_ralxx_protocol { ++ MLXSW_REG_RALXX_PROTOCOL_IPV4, ++ MLXSW_REG_RALXX_PROTOCOL_IPV6, ++}; ++ ++/* reg_ralta_protocol ++ * Protocol. ++ * Deallocation opcode: Reserved. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralta, protocol, 0x00, 24, 4); ++ ++/* reg_ralta_tree_id ++ * An identifier (numbered from 1..cap_shspm_max_trees-1) representing ++ * the tree identifier (managed by software). ++ * Note that tree_id 0 is allocated for a default-route tree. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ralta, tree_id, 0x00, 0, 8); ++ ++static inline void mlxsw_reg_ralta_pack(char *payload, bool alloc, ++ enum mlxsw_reg_ralxx_protocol protocol, ++ u8 tree_id) ++{ ++ MLXSW_REG_ZERO(ralta, payload); ++ mlxsw_reg_ralta_op_set(payload, !alloc); ++ mlxsw_reg_ralta_protocol_set(payload, protocol); ++ mlxsw_reg_ralta_tree_id_set(payload, tree_id); ++} ++ ++/* RALST - Router Algorithmic LPM Structure Tree Register ++ * ------------------------------------------------------ ++ * RALST is used to set and query the structure of an LPM tree. ++ * The structure of the tree must be sorted as a sorted binary tree, while ++ * each node is a bin that is tagged as the length of the prefixes the lookup ++ * will refer to. Therefore, bin X refers to a set of entries with prefixes ++ * of X bits to match with the destination address. The bin 0 indicates ++ * the default action, when there is no match of any prefix. ++ */ ++#define MLXSW_REG_RALST_ID 0x8011 ++#define MLXSW_REG_RALST_LEN 0x104 ++ ++MLXSW_REG_DEFINE(ralst, MLXSW_REG_RALST_ID, MLXSW_REG_RALST_LEN); ++ ++/* reg_ralst_root_bin ++ * The bin number of the root bin. 
++ * >64 the entry consumes ++ * two entries in the physical HW table. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ralue, prefix_len, 0x08, 0, 8); ++ ++/* reg_ralue_dip* ++ * The prefix of the route or of the marker that the object of the LPM ++ * is compared with. The most significant bits of the dip are the prefix. ++ * The least significant bits must be '0' if the prefix_len is smaller ++ * than 128 for IPv6 or smaller than 32 for IPv4. ++ * IPv4 address uses bits dip[31:0] and bits dip[127:32] are reserved. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, ralue, dip4, 0x18, 0, 32); ++MLXSW_ITEM_BUF(reg, ralue, dip6, 0x0C, 16); ++ ++enum mlxsw_reg_ralue_entry_type { ++ MLXSW_REG_RALUE_ENTRY_TYPE_MARKER_ENTRY = 1, ++ MLXSW_REG_RALUE_ENTRY_TYPE_ROUTE_ENTRY = 2, ++ MLXSW_REG_RALUE_ENTRY_TYPE_MARKER_AND_ROUTE_ENTRY = 3, ++}; ++ ++/* reg_ralue_entry_type ++ * Entry type. ++ * Note - for Marker entries, the action_type and action fields are reserved. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, entry_type, 0x1C, 30, 2); ++ ++/* reg_ralue_bmp_len ++ * The best match prefix length in the case that there is no match for ++ * longer prefixes. ++ * If (entry_type != MARKER_ENTRY), bmp_len must be equal to prefix_len ++ * Note for any update operation with entry_type modification this ++ * field must be set. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, bmp_len, 0x1C, 16, 8); ++ ++enum mlxsw_reg_ralue_action_type { ++ MLXSW_REG_RALUE_ACTION_TYPE_REMOTE, ++ MLXSW_REG_RALUE_ACTION_TYPE_LOCAL, ++ MLXSW_REG_RALUE_ACTION_TYPE_IP2ME, ++}; ++ ++/* reg_ralue_action_type ++ * Action Type ++ * Indicates how the IP address is connected. ++ * It can be connected to a local subnet through local_erif or can be ++ * on a remote subnet connected through a next-hop router, ++ * or transmitted to the CPU. 
++ * Reserved when entry_type = MARKER_ENTRY ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, action_type, 0x1C, 0, 2); ++ ++enum mlxsw_reg_ralue_trap_action { ++ MLXSW_REG_RALUE_TRAP_ACTION_NOP, ++ MLXSW_REG_RALUE_TRAP_ACTION_TRAP, ++ MLXSW_REG_RALUE_TRAP_ACTION_MIRROR_TO_CPU, ++ MLXSW_REG_RALUE_TRAP_ACTION_MIRROR, ++ MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR, ++}; ++ ++/* reg_ralue_trap_action ++ * Trap action. ++ * For IP2ME action, only NOP and MIRROR are possible. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, trap_action, 0x20, 28, 4); ++ ++/* reg_ralue_trap_id ++ * Trap ID to be reported to CPU. ++ * Trap ID is RTR_INGRESS0 or RTR_INGRESS1. ++ * For trap_action of NOP, MIRROR and DISCARD_ERROR, trap_id is reserved. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, trap_id, 0x20, 0, 9); ++ ++/* reg_ralue_adjacency_index ++ * Points to the first entry of the group-based ECMP. ++ * Only relevant in case of REMOTE action. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, adjacency_index, 0x24, 0, 24); ++ ++/* reg_ralue_ecmp_size ++ * Amount of sequential entries starting ++ * from the adjacency_index (the number of ECMPs). ++ * The valid range is 1-64, 512, 1024, 2048 and 4096. ++ * Reserved when trap_action is TRAP or DISCARD_ERROR. ++ * Only relevant in case of REMOTE action. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, ecmp_size, 0x28, 0, 13); ++ ++/* reg_ralue_local_erif ++ * Egress Router Interface. ++ * Only relevant in case of LOCAL action. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, local_erif, 0x24, 0, 16); ++ ++/* reg_ralue_ip2me_v ++ * Valid bit for the tunnel_ptr field. ++ * If valid = 0 then trap to CPU as IP2ME trap ID. ++ * If valid = 1 and the packet format allows NVE or IPinIP tunnel ++ * decapsulation then tunnel decapsulation is done. ++ * If valid = 1 and packet format does not allow NVE or IPinIP tunnel ++ * decapsulation then trap as IP2ME trap ID. ++ * Only relevant in case of IP2ME action. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, ip2me_v, 0x24, 31, 1); ++ ++/* reg_ralue_ip2me_tunnel_ptr ++ * Tunnel Pointer for NVE or IPinIP tunnel decapsulation. ++ * For Spectrum, pointer to KVD Linear. ++ * Only relevant in case of IP2ME action. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, ralue, ip2me_tunnel_ptr, 0x24, 0, 24); ++ ++static inline void mlxsw_reg_ralue_pack(char *payload, ++ enum mlxsw_reg_ralxx_protocol protocol, ++ enum mlxsw_reg_ralue_op op, ++ u16 virtual_router, u8 prefix_len) ++{ ++ MLXSW_REG_ZERO(ralue, payload); ++ mlxsw_reg_ralue_protocol_set(payload, protocol); ++ mlxsw_reg_ralue_op_set(payload, op); ++ mlxsw_reg_ralue_virtual_router_set(payload, virtual_router); ++ mlxsw_reg_ralue_prefix_len_set(payload, prefix_len); ++ mlxsw_reg_ralue_entry_type_set(payload, ++ MLXSW_REG_RALUE_ENTRY_TYPE_ROUTE_ENTRY); ++ mlxsw_reg_ralue_bmp_len_set(payload, prefix_len); ++} ++ ++static inline void mlxsw_reg_ralue_pack4(char *payload, ++ enum mlxsw_reg_ralxx_protocol protocol, ++ enum mlxsw_reg_ralue_op op, ++ u16 virtual_router, u8 prefix_len, ++ u32 dip) ++{ ++ mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len); ++ mlxsw_reg_ralue_dip4_set(payload, dip); ++} ++ ++static inline void mlxsw_reg_ralue_pack6(char *payload, ++ enum mlxsw_reg_ralxx_protocol protocol, ++ enum mlxsw_reg_ralue_op op, ++ u16 virtual_router, u8 prefix_len, ++ const void *dip) ++{ ++ mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len); ++ mlxsw_reg_ralue_dip6_memcpy_to(payload, dip); ++} ++ ++static inline void ++mlxsw_reg_ralue_act_remote_pack(char *payload, ++ enum mlxsw_reg_ralue_trap_action trap_action, ++ u16 trap_id, u32 adjacency_index, u16 ecmp_size) ++{ ++ mlxsw_reg_ralue_action_type_set(payload, ++ MLXSW_REG_RALUE_ACTION_TYPE_REMOTE); ++ mlxsw_reg_ralue_trap_action_set(payload, trap_action); ++ mlxsw_reg_ralue_trap_id_set(payload, trap_id); ++ mlxsw_reg_ralue_adjacency_index_set(payload, adjacency_index); ++ 
mlxsw_reg_ralue_ecmp_size_set(payload, ecmp_size); ++} ++ ++static inline void ++mlxsw_reg_ralue_act_local_pack(char *payload, ++ enum mlxsw_reg_ralue_trap_action trap_action, ++ u16 trap_id, u16 local_erif) ++{ ++ mlxsw_reg_ralue_action_type_set(payload, ++ MLXSW_REG_RALUE_ACTION_TYPE_LOCAL); ++ mlxsw_reg_ralue_trap_action_set(payload, trap_action); ++ mlxsw_reg_ralue_trap_id_set(payload, trap_id); ++ mlxsw_reg_ralue_local_erif_set(payload, local_erif); ++} ++ ++static inline void ++mlxsw_reg_ralue_act_ip2me_pack(char *payload) ++{ ++ mlxsw_reg_ralue_action_type_set(payload, ++ MLXSW_REG_RALUE_ACTION_TYPE_IP2ME); ++} ++ ++static inline void ++mlxsw_reg_ralue_act_ip2me_tun_pack(char *payload, u32 tunnel_ptr) ++{ ++ mlxsw_reg_ralue_action_type_set(payload, ++ MLXSW_REG_RALUE_ACTION_TYPE_IP2ME); ++ mlxsw_reg_ralue_ip2me_v_set(payload, 1); ++ mlxsw_reg_ralue_ip2me_tunnel_ptr_set(payload, tunnel_ptr); ++} ++ ++/* RAUHT - Router Algorithmic LPM Unicast Host Table Register ++ * ---------------------------------------------------------- ++ * The RAUHT register is used to configure and query the Unicast Host table in ++ * devices that implement the Algorithmic LPM. ++ */ ++#define MLXSW_REG_RAUHT_ID 0x8014 ++#define MLXSW_REG_RAUHT_LEN 0x74 ++ ++MLXSW_REG_DEFINE(rauht, MLXSW_REG_RAUHT_ID, MLXSW_REG_RAUHT_LEN); ++ ++enum mlxsw_reg_rauht_type { ++ MLXSW_REG_RAUHT_TYPE_IPV4, ++ MLXSW_REG_RAUHT_TYPE_IPV6, ++}; ++ ++/* reg_rauht_type ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauht, type, 0x00, 24, 2); ++ ++enum mlxsw_reg_rauht_op { ++ MLXSW_REG_RAUHT_OP_QUERY_READ = 0, ++ /* Read operation */ ++ MLXSW_REG_RAUHT_OP_QUERY_CLEAR_ON_READ = 1, ++ /* Clear on read operation. Used to read entry and clear ++ * activity bit. ++ */ ++ MLXSW_REG_RAUHT_OP_WRITE_ADD = 0, ++ /* Add. Used to write a new entry to the table. All R/W fields are ++ * relevant for new entry. Activity bit is set for new entries. ++ */ ++ MLXSW_REG_RAUHT_OP_WRITE_UPDATE = 1, ++ /* Update action. 
Used to update an existing route entry and ++ * only update the following fields: ++ * trap_action, trap_id, mac, counter_set_type, counter_index ++ */ ++ MLXSW_REG_RAUHT_OP_WRITE_CLEAR_ACTIVITY = 2, ++ /* Clear activity. A bit is cleared for the entry. */ ++ MLXSW_REG_RAUHT_OP_WRITE_DELETE = 3, ++ /* Delete entry */ ++ MLXSW_REG_RAUHT_OP_WRITE_DELETE_ALL = 4, ++ /* Delete all host entries on a RIF. In this command, dip ++ * field is reserved. ++ */ ++}; ++ ++/* reg_rauht_op ++ * Access: OP ++ */ ++MLXSW_ITEM32(reg, rauht, op, 0x00, 20, 3); ++ ++/* reg_rauht_a ++ * Activity. Set for new entries. Set if a packet lookup has hit on ++ * the specific entry. ++ * To clear the a bit, use "clear activity" op. ++ * Enabled by activity_dis in RGCR ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, rauht, a, 0x00, 16, 1); ++ ++/* reg_rauht_rif ++ * Router Interface ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauht, rif, 0x00, 0, 16); ++ ++/* reg_rauht_dip* ++ * Destination address. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauht, dip4, 0x1C, 0x0, 32); ++MLXSW_ITEM_BUF(reg, rauht, dip6, 0x10, 16); ++ ++enum mlxsw_reg_rauht_trap_action { ++ MLXSW_REG_RAUHT_TRAP_ACTION_NOP, ++ MLXSW_REG_RAUHT_TRAP_ACTION_TRAP, ++ MLXSW_REG_RAUHT_TRAP_ACTION_MIRROR_TO_CPU, ++ MLXSW_REG_RAUHT_TRAP_ACTION_MIRROR, ++ MLXSW_REG_RAUHT_TRAP_ACTION_DISCARD_ERRORS, ++}; ++ ++/* reg_rauht_trap_action ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rauht, trap_action, 0x60, 28, 4); ++ ++enum mlxsw_reg_rauht_trap_id { ++ MLXSW_REG_RAUHT_TRAP_ID_RTR_EGRESS0, ++ MLXSW_REG_RAUHT_TRAP_ID_RTR_EGRESS1, ++}; ++ ++/* reg_rauht_trap_id ++ * Trap ID to be reported to CPU. ++ * Trap-ID is RTR_EGRESS0 or RTR_EGRESS1. ++ * For trap_action of NOP, MIRROR and DISCARD_ERROR, ++ * trap_id is reserved. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rauht, trap_id, 0x60, 0, 9); ++ ++/* reg_rauht_counter_set_type ++ * Counter set type for flow counters ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rauht, counter_set_type, 0x68, 24, 8); ++ ++/* reg_rauht_counter_index ++ * Counter index for flow counters ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rauht, counter_index, 0x68, 0, 24); ++ ++/* reg_rauht_mac ++ * MAC address. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rauht, mac, 0x6E, 6); ++ ++static inline void mlxsw_reg_rauht_pack(char *payload, ++ enum mlxsw_reg_rauht_op op, u16 rif, ++ const char *mac) ++{ ++ MLXSW_REG_ZERO(rauht, payload); ++ mlxsw_reg_rauht_op_set(payload, op); ++ mlxsw_reg_rauht_rif_set(payload, rif); ++ mlxsw_reg_rauht_mac_memcpy_to(payload, mac); ++} ++ ++static inline void mlxsw_reg_rauht_pack4(char *payload, ++ enum mlxsw_reg_rauht_op op, u16 rif, ++ const char *mac, u32 dip) ++{ ++ mlxsw_reg_rauht_pack(payload, op, rif, mac); ++ mlxsw_reg_rauht_dip4_set(payload, dip); ++} ++ ++static inline void mlxsw_reg_rauht_pack6(char *payload, ++ enum mlxsw_reg_rauht_op op, u16 rif, ++ const char *mac, const char *dip) ++{ ++ mlxsw_reg_rauht_pack(payload, op, rif, mac); ++ mlxsw_reg_rauht_type_set(payload, MLXSW_REG_RAUHT_TYPE_IPV6); ++ mlxsw_reg_rauht_dip6_memcpy_to(payload, dip); ++} ++ ++static inline void mlxsw_reg_rauht_pack_counter(char *payload, ++ u64 counter_index) ++{ ++ mlxsw_reg_rauht_counter_index_set(payload, counter_index); ++ mlxsw_reg_rauht_counter_set_type_set(payload, ++ MLXSW_REG_FLOW_COUNTER_SET_TYPE_PACKETS_BYTES); ++} ++ ++/* RALEU - Router Algorithmic LPM ECMP Update Register ++ * --------------------------------------------------- ++ * The register enables updating the ECMP section in the action for multiple ++ * LPM Unicast entries in a single operation. The update is executed to ++ * all entries of a {virtual router, protocol} tuple using the same ECMP group. 
++ */ ++#define MLXSW_REG_RALEU_ID 0x8015 ++#define MLXSW_REG_RALEU_LEN 0x28 ++ ++MLXSW_REG_DEFINE(raleu, MLXSW_REG_RALEU_ID, MLXSW_REG_RALEU_LEN); ++ ++/* reg_raleu_protocol ++ * Protocol. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, raleu, protocol, 0x00, 24, 4); ++ ++/* reg_raleu_virtual_router ++ * Virtual Router ID ++ * Range is 0..cap_max_virtual_routers-1 ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, raleu, virtual_router, 0x00, 0, 16); ++ ++/* reg_raleu_adjacency_index ++ * Adjacency Index used for matching on the existing entries. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, raleu, adjacency_index, 0x10, 0, 24); ++ ++/* reg_raleu_ecmp_size ++ * ECMP Size used for matching on the existing entries. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, raleu, ecmp_size, 0x14, 0, 13); ++ ++/* reg_raleu_new_adjacency_index ++ * New Adjacency Index. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, raleu, new_adjacency_index, 0x20, 0, 24); ++ ++/* reg_raleu_new_ecmp_size ++ * New ECMP Size. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, raleu, new_ecmp_size, 0x24, 0, 13); ++ ++static inline void mlxsw_reg_raleu_pack(char *payload, ++ enum mlxsw_reg_ralxx_protocol protocol, ++ u16 virtual_router, ++ u32 adjacency_index, u16 ecmp_size, ++ u32 new_adjacency_index, ++ u16 new_ecmp_size) ++{ ++ MLXSW_REG_ZERO(raleu, payload); ++ mlxsw_reg_raleu_protocol_set(payload, protocol); ++ mlxsw_reg_raleu_virtual_router_set(payload, virtual_router); ++ mlxsw_reg_raleu_adjacency_index_set(payload, adjacency_index); ++ mlxsw_reg_raleu_ecmp_size_set(payload, ecmp_size); ++ mlxsw_reg_raleu_new_adjacency_index_set(payload, new_adjacency_index); ++ mlxsw_reg_raleu_new_ecmp_size_set(payload, new_ecmp_size); ++} ++ ++/* RAUHTD - Router Algorithmic LPM Unicast Host Table Dump Register ++ * ---------------------------------------------------------------- ++ * The RAUHTD register allows dumping entries from the Router Unicast Host ++ * Table. For a given session an entry is dumped no more than one time. 
The ++ * first RAUHTD access after reset is a new session. A session ends when the ++ * num_rec response is smaller than num_rec request or for IPv4 when the ++ * num_entries is smaller than 4. The clear activity affects the current session ++ * or the last session if a new session has not started. ++ */ ++#define MLXSW_REG_RAUHTD_ID 0x8018 ++#define MLXSW_REG_RAUHTD_BASE_LEN 0x20 ++#define MLXSW_REG_RAUHTD_REC_LEN 0x20 ++#define MLXSW_REG_RAUHTD_REC_MAX_NUM 32 ++#define MLXSW_REG_RAUHTD_LEN (MLXSW_REG_RAUHTD_BASE_LEN + \ ++ MLXSW_REG_RAUHTD_REC_MAX_NUM * MLXSW_REG_RAUHTD_REC_LEN) ++#define MLXSW_REG_RAUHTD_IPV4_ENT_PER_REC 4 ++ ++MLXSW_REG_DEFINE(rauhtd, MLXSW_REG_RAUHTD_ID, MLXSW_REG_RAUHTD_LEN); ++ ++#define MLXSW_REG_RAUHTD_FILTER_A BIT(0) ++#define MLXSW_REG_RAUHTD_FILTER_RIF BIT(3) ++ ++/* reg_rauhtd_filter_fields ++ * if a bit is '0' then the relevant field is ignored and dump is done ++ * regardless of the field value ++ * Bit0 - filter by activity: entry_a ++ * Bit3 - filter by entry rif: entry_rif ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauhtd, filter_fields, 0x00, 0, 8); ++ ++enum mlxsw_reg_rauhtd_op { ++ MLXSW_REG_RAUHTD_OP_DUMP, ++ MLXSW_REG_RAUHTD_OP_DUMP_AND_CLEAR, ++}; ++ ++/* reg_rauhtd_op ++ * Access: OP ++ */ ++MLXSW_ITEM32(reg, rauhtd, op, 0x04, 24, 2); ++ ++/* reg_rauhtd_num_rec ++ * At request: number of records requested ++ * At response: number of records dumped ++ * For IPv4, each record has 4 entries at request and up to 4 entries ++ * at response ++ * Range is 0..MLXSW_REG_RAUHTD_REC_MAX_NUM ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauhtd, num_rec, 0x04, 0, 8); ++ ++/* reg_rauhtd_entry_a ++ * Dump only if activity has value of entry_a ++ * Reserved if filter_fields bit0 is '0' ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauhtd, entry_a, 0x08, 16, 1); ++ ++enum mlxsw_reg_rauhtd_type { ++ MLXSW_REG_RAUHTD_TYPE_IPV4, ++ MLXSW_REG_RAUHTD_TYPE_IPV6, ++}; ++ ++/* reg_rauhtd_type ++ * Dump only if record type is: ++ * 0 - IPv4 ++ * 1 - 
IPv6 ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauhtd, type, 0x08, 0, 4); ++ ++/* reg_rauhtd_entry_rif ++ * Dump only if RIF has value of entry_rif ++ * Reserved if filter_fields bit3 is '0' ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rauhtd, entry_rif, 0x0C, 0, 16); ++ ++static inline void mlxsw_reg_rauhtd_pack(char *payload, ++ enum mlxsw_reg_rauhtd_type type) ++{ ++ MLXSW_REG_ZERO(rauhtd, payload); ++ mlxsw_reg_rauhtd_filter_fields_set(payload, MLXSW_REG_RAUHTD_FILTER_A); ++ mlxsw_reg_rauhtd_op_set(payload, MLXSW_REG_RAUHTD_OP_DUMP_AND_CLEAR); ++ mlxsw_reg_rauhtd_num_rec_set(payload, MLXSW_REG_RAUHTD_REC_MAX_NUM); ++ mlxsw_reg_rauhtd_entry_a_set(payload, 1); ++ mlxsw_reg_rauhtd_type_set(payload, type); ++} ++ ++/* reg_rauhtd_ipv4_rec_num_entries ++ * Number of valid entries in this record: ++ * 0 - 1 valid entry ++ * 1 - 2 valid entries ++ * 2 - 3 valid entries ++ * 3 - 4 valid entries ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_rec_num_entries, ++ MLXSW_REG_RAUHTD_BASE_LEN, 28, 2, ++ MLXSW_REG_RAUHTD_REC_LEN, 0x00, false); ++ ++/* reg_rauhtd_rec_type ++ * Record type. ++ * 0 - IPv4 ++ * 1 - IPv6 ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, rec_type, MLXSW_REG_RAUHTD_BASE_LEN, 24, 2, ++ MLXSW_REG_RAUHTD_REC_LEN, 0x00, false); ++ ++#define MLXSW_REG_RAUHTD_IPV4_ENT_LEN 0x8 ++ ++/* reg_rauhtd_ipv4_ent_a ++ * Activity. Set for new entries. Set if a packet lookup has hit on the ++ * specific entry. ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_a, MLXSW_REG_RAUHTD_BASE_LEN, 16, 1, ++ MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x00, false); ++ ++/* reg_rauhtd_ipv4_ent_rif ++ * Router interface. ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_rif, MLXSW_REG_RAUHTD_BASE_LEN, 0, ++ 16, MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x00, false); ++ ++/* reg_rauhtd_ipv4_ent_dip ++ * Destination IPv4 address. 
++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_dip, MLXSW_REG_RAUHTD_BASE_LEN, 0, ++ 32, MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x04, false); ++ ++#define MLXSW_REG_RAUHTD_IPV6_ENT_LEN 0x20 ++ ++/* reg_rauhtd_ipv6_ent_a ++ * Activity. Set for new entries. Set if a packet lookup has hit on the ++ * specific entry. ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv6_ent_a, MLXSW_REG_RAUHTD_BASE_LEN, 16, 1, ++ MLXSW_REG_RAUHTD_IPV6_ENT_LEN, 0x00, false); ++ ++/* reg_rauhtd_ipv6_ent_rif ++ * Router interface. ++ * Access: RO ++ */ ++MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv6_ent_rif, MLXSW_REG_RAUHTD_BASE_LEN, 0, ++ 16, MLXSW_REG_RAUHTD_IPV6_ENT_LEN, 0x00, false); ++ ++/* reg_rauhtd_ipv6_ent_dip ++ * Destination IPv6 address. ++ * Access: RO ++ */ ++MLXSW_ITEM_BUF_INDEXED(reg, rauhtd, ipv6_ent_dip, MLXSW_REG_RAUHTD_BASE_LEN, ++ 16, MLXSW_REG_RAUHTD_IPV6_ENT_LEN, 0x10); ++ ++static inline void mlxsw_reg_rauhtd_ent_ipv4_unpack(char *payload, ++ int ent_index, u16 *p_rif, ++ u32 *p_dip) ++{ ++ *p_rif = mlxsw_reg_rauhtd_ipv4_ent_rif_get(payload, ent_index); ++ *p_dip = mlxsw_reg_rauhtd_ipv4_ent_dip_get(payload, ent_index); ++} ++ ++static inline void mlxsw_reg_rauhtd_ent_ipv6_unpack(char *payload, ++ int rec_index, u16 *p_rif, ++ char *p_dip) ++{ ++ *p_rif = mlxsw_reg_rauhtd_ipv6_ent_rif_get(payload, rec_index); ++ mlxsw_reg_rauhtd_ipv6_ent_dip_memcpy_from(payload, rec_index, p_dip); ++} ++ ++/* RTDP - Routing Tunnel Decap Properties Register ++ * ----------------------------------------------- ++ * The RTDP register is used for configuring the tunnel decap properties of NVE ++ * and IPinIP. ++ */ ++#define MLXSW_REG_RTDP_ID 0x8020 ++#define MLXSW_REG_RTDP_LEN 0x44 ++ ++MLXSW_REG_DEFINE(rtdp, MLXSW_REG_RTDP_ID, MLXSW_REG_RTDP_LEN); ++ ++enum mlxsw_reg_rtdp_type { ++ MLXSW_REG_RTDP_TYPE_NVE, ++ MLXSW_REG_RTDP_TYPE_IPIP, ++}; ++ ++/* reg_rtdp_type ++ * Type of the RTDP entry as per enum mlxsw_reg_rtdp_type. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, type, 0x00, 28, 4); ++ ++/* reg_rtdp_tunnel_index ++ * Index to the Decap entry. ++ * For Spectrum, Index to KVD Linear. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rtdp, tunnel_index, 0x00, 0, 24); ++ ++/* IPinIP */ ++ ++/* reg_rtdp_ipip_irif ++ * Ingress Router Interface for the overlay router ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_irif, 0x04, 16, 16); ++ ++enum mlxsw_reg_rtdp_ipip_sip_check { ++ /* No sip checks. */ ++ MLXSW_REG_RTDP_IPIP_SIP_CHECK_NO, ++ /* Filter packet if underlay is not IPv4 or if underlay SIP does not ++ * equal ipv4_usip. ++ */ ++ MLXSW_REG_RTDP_IPIP_SIP_CHECK_FILTER_IPV4, ++ /* Filter packet if underlay is not IPv6 or if underlay SIP does not ++ * equal ipv6_usip. ++ */ ++ MLXSW_REG_RTDP_IPIP_SIP_CHECK_FILTER_IPV6 = 3, ++}; ++ ++/* reg_rtdp_ipip_sip_check ++ * SIP check to perform. If decapsulation failed due to these configurations ++ * then trap_id is IPIP_DECAP_ERROR. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_sip_check, 0x04, 0, 3); ++ ++/* If set, allow decapsulation of IPinIP (without GRE). */ ++#define MLXSW_REG_RTDP_IPIP_TYPE_CHECK_ALLOW_IPIP BIT(0) ++/* If set, allow decapsulation of IPinGREinIP without a key. */ ++#define MLXSW_REG_RTDP_IPIP_TYPE_CHECK_ALLOW_GRE BIT(1) ++/* If set, allow decapsulation of IPinGREinIP with a key. */ ++#define MLXSW_REG_RTDP_IPIP_TYPE_CHECK_ALLOW_GRE_KEY BIT(2) ++ ++/* reg_rtdp_ipip_type_check ++ * Flags as per MLXSW_REG_RTDP_IPIP_TYPE_CHECK_*. If decapsulation failed due to ++ * these configurations then trap_id is IPIP_DECAP_ERROR. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_type_check, 0x08, 24, 3); ++ ++/* reg_rtdp_ipip_gre_key_check ++ * Whether GRE key should be checked. When check is enabled: ++ * - A packet received as IPinIP (without GRE) will always pass. ++ * - A packet received as IPinGREinIP without a key will not pass the check. 
++ * - A packet received as IPinGREinIP with a key will pass the check only if the ++ * key in the packet is equal to expected_gre_key. ++ * If decapsulation failed due to GRE key then trap_id is IPIP_DECAP_ERROR. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_gre_key_check, 0x08, 23, 1); ++ ++/* reg_rtdp_ipip_ipv4_usip ++ * Underlay IPv4 address for ipv4 source address check. ++ * Reserved when sip_check is not '1'. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_ipv4_usip, 0x0C, 0, 32); ++ ++/* reg_rtdp_ipip_ipv6_usip_ptr ++ * This field is valid when sip_check is "sipv6 check explicitly". This is a ++ * pointer to the IPv6 DIP which is configured by RIPS. For Spectrum, the index ++ * is to the KVD linear. ++ * Reserved when sip_check is not MLXSW_REG_RTDP_IPIP_SIP_CHECK_FILTER_IPV6. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_ipv6_usip_ptr, 0x10, 0, 24); ++ ++/* reg_rtdp_ipip_expected_gre_key ++ * GRE key for checking. ++ * Reserved when gre_key_check is '0'. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rtdp, ipip_expected_gre_key, 0x14, 0, 32); ++ ++static inline void mlxsw_reg_rtdp_pack(char *payload, ++ enum mlxsw_reg_rtdp_type type, ++ u32 tunnel_index) ++{ ++ MLXSW_REG_ZERO(rtdp, payload); ++ mlxsw_reg_rtdp_type_set(payload, type); ++ mlxsw_reg_rtdp_tunnel_index_set(payload, tunnel_index); ++} ++ ++static inline void ++mlxsw_reg_rtdp_ipip4_pack(char *payload, u16 irif, ++ enum mlxsw_reg_rtdp_ipip_sip_check sip_check, ++ unsigned int type_check, bool gre_key_check, ++ u32 ipv4_usip, u32 expected_gre_key) ++{ ++ mlxsw_reg_rtdp_ipip_irif_set(payload, irif); ++ mlxsw_reg_rtdp_ipip_sip_check_set(payload, sip_check); ++ mlxsw_reg_rtdp_ipip_type_check_set(payload, type_check); ++ mlxsw_reg_rtdp_ipip_gre_key_check_set(payload, gre_key_check); ++ mlxsw_reg_rtdp_ipip_ipv4_usip_set(payload, ipv4_usip); ++ mlxsw_reg_rtdp_ipip_expected_gre_key_set(payload, expected_gre_key); ++} ++ ++/* RIGR-V2 - Router Interface Group Register Version 2 ++ * 
--------------------------------------------------- ++ * The RIGR_V2 register is used to add, remove and query the egress interface ++ * list of a multicast forwarding entry. ++ */ ++#define MLXSW_REG_RIGR2_ID 0x8023 ++#define MLXSW_REG_RIGR2_LEN 0xB0 ++ ++#define MLXSW_REG_RIGR2_MAX_ERIFS 32 ++ ++MLXSW_REG_DEFINE(rigr2, MLXSW_REG_RIGR2_ID, MLXSW_REG_RIGR2_LEN); ++ ++/* reg_rigr2_rigr_index ++ * KVD Linear index. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rigr2, rigr_index, 0x04, 0, 24); ++ ++/* reg_rigr2_vnext ++ * Next RIGR Index is valid. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rigr2, vnext, 0x08, 31, 1); ++ ++/* reg_rigr2_next_rigr_index ++ * Next RIGR Index. The index is to the KVD linear. ++ * Reserved when vnext = '0'. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rigr2, next_rigr_index, 0x08, 0, 24); ++ ++/* reg_rigr2_vrmid ++ * RMID Index is valid. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rigr2, vrmid, 0x20, 31, 1); ++ ++/* reg_rigr2_rmid_index ++ * RMID Index. ++ * Range 0 .. max_mid - 1 ++ * Reserved when vrmid = '0'. ++ * The index is to the Port Group Table (PGT) ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rigr2, rmid_index, 0x20, 0, 16); ++ ++/* reg_rigr2_erif_entry_v ++ * Egress Router Interface is valid. ++ * Note that low-entries must be set if high-entries are set. For ++ * example: if erif_entry[2].v is set then erif_entry[1].v and ++ * erif_entry[0].v must be set. ++ * Index can be from 0 to cap_mc_erif_list_entries-1 ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, rigr2, erif_entry_v, 0x24, 31, 1, 4, 0, false); ++ ++/* reg_rigr2_erif_entry_erif ++ * Egress Router Interface. 
++ * Valid range is from 0 to cap_max_router_interfaces - 1 ++ * Index can be from 0 to MLXSW_REG_RIGR2_MAX_ERIFS - 1 ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, rigr2, erif_entry_erif, 0x24, 0, 16, 4, 0, false); ++ ++static inline void mlxsw_reg_rigr2_pack(char *payload, u32 rigr_index, ++ bool vnext, u32 next_rigr_index) ++{ ++ MLXSW_REG_ZERO(rigr2, payload); ++ mlxsw_reg_rigr2_rigr_index_set(payload, rigr_index); ++ mlxsw_reg_rigr2_vnext_set(payload, vnext); ++ mlxsw_reg_rigr2_next_rigr_index_set(payload, next_rigr_index); ++ mlxsw_reg_rigr2_vrmid_set(payload, 0); ++ mlxsw_reg_rigr2_rmid_index_set(payload, 0); ++} ++ ++static inline void mlxsw_reg_rigr2_erif_entry_pack(char *payload, int index, ++ bool v, u16 erif) ++{ ++ mlxsw_reg_rigr2_erif_entry_v_set(payload, index, v); ++ mlxsw_reg_rigr2_erif_entry_erif_set(payload, index, erif); ++} ++ ++/* RECR-V2 - Router ECMP Configuration Version 2 Register ++ * ------------------------------------------------------ ++ */ ++#define MLXSW_REG_RECR2_ID 0x8025 ++#define MLXSW_REG_RECR2_LEN 0x38 ++ ++MLXSW_REG_DEFINE(recr2, MLXSW_REG_RECR2_ID, MLXSW_REG_RECR2_LEN); ++ ++/* reg_recr2_pp ++ * Per-port configuration ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, recr2, pp, 0x00, 24, 1); ++ ++/* reg_recr2_sh ++ * Symmetric hash ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, recr2, sh, 0x00, 8, 1); ++ ++/* reg_recr2_seed ++ * Seed ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, recr2, seed, 0x08, 0, 32); ++ ++enum { ++ /* Enable IPv4 fields if packet is not TCP and not UDP */ ++ MLXSW_REG_RECR2_IPV4_EN_NOT_TCP_NOT_UDP = 3, ++ /* Enable IPv4 fields if packet is TCP or UDP */ ++ MLXSW_REG_RECR2_IPV4_EN_TCP_UDP = 4, ++ /* Enable IPv6 fields if packet is not TCP and not UDP */ ++ MLXSW_REG_RECR2_IPV6_EN_NOT_TCP_NOT_UDP = 5, ++ /* Enable IPv6 fields if packet is TCP or UDP */ ++ MLXSW_REG_RECR2_IPV6_EN_TCP_UDP = 6, ++ /* Enable TCP/UDP header fields if packet is IPv4 */ ++ MLXSW_REG_RECR2_TCP_UDP_EN_IPV4 = 7, ++ /* Enable TCP/UDP header 
fields if packet is IPv6 */ ++ MLXSW_REG_RECR2_TCP_UDP_EN_IPV6 = 8, ++}; ++ ++/* reg_recr2_outer_header_enables ++ * Bit mask where each bit enables a specific layer to be included in ++ * the hash calculation. ++ * Access: RW ++ */ ++MLXSW_ITEM_BIT_ARRAY(reg, recr2, outer_header_enables, 0x10, 0x04, 1); ++ ++enum { ++ /* IPv4 Source IP */ ++ MLXSW_REG_RECR2_IPV4_SIP0 = 9, ++ MLXSW_REG_RECR2_IPV4_SIP3 = 12, ++ /* IPv4 Destination IP */ ++ MLXSW_REG_RECR2_IPV4_DIP0 = 13, ++ MLXSW_REG_RECR2_IPV4_DIP3 = 16, ++ /* IP Protocol */ ++ MLXSW_REG_RECR2_IPV4_PROTOCOL = 17, ++ /* IPv6 Source IP */ ++ MLXSW_REG_RECR2_IPV6_SIP0_7 = 21, ++ MLXSW_REG_RECR2_IPV6_SIP8 = 29, ++ MLXSW_REG_RECR2_IPV6_SIP15 = 36, ++ /* IPv6 Destination IP */ ++ MLXSW_REG_RECR2_IPV6_DIP0_7 = 37, ++ MLXSW_REG_RECR2_IPV6_DIP8 = 45, ++ MLXSW_REG_RECR2_IPV6_DIP15 = 52, ++ /* IPv6 Next Header */ ++ MLXSW_REG_RECR2_IPV6_NEXT_HEADER = 53, ++ /* IPv6 Flow Label */ ++ MLXSW_REG_RECR2_IPV6_FLOW_LABEL = 57, ++ /* TCP/UDP Source Port */ ++ MLXSW_REG_RECR2_TCP_UDP_SPORT = 74, ++ /* TCP/UDP Destination Port */ ++ MLXSW_REG_RECR2_TCP_UDP_DPORT = 75, ++}; ++ ++/* reg_recr2_outer_header_fields_enable ++ * Packet fields to enable for ECMP hash subject to outer_header_enable. 
++ * Access: RW ++ */ ++MLXSW_ITEM_BIT_ARRAY(reg, recr2, outer_header_fields_enable, 0x14, 0x14, 1); ++ ++static inline void mlxsw_reg_recr2_ipv4_sip_enable(char *payload) ++{ ++ int i; ++ ++ for (i = MLXSW_REG_RECR2_IPV4_SIP0; i <= MLXSW_REG_RECR2_IPV4_SIP3; i++) ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, ++ true); ++} ++ ++static inline void mlxsw_reg_recr2_ipv4_dip_enable(char *payload) ++{ ++ int i; ++ ++ for (i = MLXSW_REG_RECR2_IPV4_DIP0; i <= MLXSW_REG_RECR2_IPV4_DIP3; i++) ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, ++ true); ++} ++ ++static inline void mlxsw_reg_recr2_ipv6_sip_enable(char *payload) ++{ ++ int i = MLXSW_REG_RECR2_IPV6_SIP0_7; ++ ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, true); ++ ++ i = MLXSW_REG_RECR2_IPV6_SIP8; ++ for (; i <= MLXSW_REG_RECR2_IPV6_SIP15; i++) ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, ++ true); ++} ++ ++static inline void mlxsw_reg_recr2_ipv6_dip_enable(char *payload) ++{ ++ int i = MLXSW_REG_RECR2_IPV6_DIP0_7; ++ ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, true); ++ ++ i = MLXSW_REG_RECR2_IPV6_DIP8; ++ for (; i <= MLXSW_REG_RECR2_IPV6_DIP15; i++) ++ mlxsw_reg_recr2_outer_header_fields_enable_set(payload, i, ++ true); ++} ++ ++static inline void mlxsw_reg_recr2_pack(char *payload, u32 seed) ++{ ++ MLXSW_REG_ZERO(recr2, payload); ++ mlxsw_reg_recr2_pp_set(payload, false); ++ mlxsw_reg_recr2_sh_set(payload, true); ++ mlxsw_reg_recr2_seed_set(payload, seed); ++} ++ ++/* RMFT-V2 - Router Multicast Forwarding Table Version 2 Register ++ * -------------------------------------------------------------- ++ * The RMFT_V2 register is used to configure and query the multicast table. 
++ */ ++#define MLXSW_REG_RMFT2_ID 0x8027 ++#define MLXSW_REG_RMFT2_LEN 0x174 ++ ++MLXSW_REG_DEFINE(rmft2, MLXSW_REG_RMFT2_ID, MLXSW_REG_RMFT2_LEN); ++ ++/* reg_rmft2_v ++ * Valid ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rmft2, v, 0x00, 31, 1); ++ ++enum mlxsw_reg_rmft2_type { ++ MLXSW_REG_RMFT2_TYPE_IPV4, ++ MLXSW_REG_RMFT2_TYPE_IPV6 ++}; ++ ++/* reg_rmft2_type ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rmft2, type, 0x00, 28, 2); ++ ++enum mlxsw_sp_reg_rmft2_op { ++ /* For Write: ++ * Write operation. Used to write a new entry to the table. All RW ++ * fields are relevant for new entry. Activity bit is set for new ++ * entries - Note write with v (Valid) 0 will delete the entry. ++ * For Query: ++ * Read operation ++ */ ++ MLXSW_REG_RMFT2_OP_READ_WRITE, ++}; ++ ++/* reg_rmft2_op ++ * Operation. ++ * Access: OP ++ */ ++MLXSW_ITEM32(reg, rmft2, op, 0x00, 20, 2); ++ ++/* reg_rmft2_a ++ * Activity. Set for new entries. Set if a packet lookup has hit on the specific ++ * entry. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, rmft2, a, 0x00, 16, 1); ++ ++/* reg_rmft2_offset ++ * Offset within the multicast forwarding table to write to. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, rmft2, offset, 0x00, 0, 16); ++ ++/* reg_rmft2_virtual_router ++ * Virtual Router ID. Range from 0..cap_max_virtual_routers-1 ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rmft2, virtual_router, 0x04, 0, 16); ++ ++enum mlxsw_reg_rmft2_irif_mask { ++ MLXSW_REG_RMFT2_IRIF_MASK_IGNORE, ++ MLXSW_REG_RMFT2_IRIF_MASK_COMPARE ++}; ++ ++/* reg_rmft2_irif_mask ++ * Ingress RIF mask. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rmft2, irif_mask, 0x08, 24, 1); ++ ++/* reg_rmft2_irif ++ * Ingress RIF index. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, rmft2, irif, 0x08, 0, 16); ++ ++/* reg_rmft2_dip{4,6} ++ * Destination IPv4/6 address ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rmft2, dip6, 0x10, 16); ++MLXSW_ITEM32(reg, rmft2, dip4, 0x1C, 0, 32); ++ ++/* reg_rmft2_dip{4,6}_mask ++ * A bit that is set directs the TCAM to compare the corresponding bit in key. A ++ * bit that is clear directs the TCAM to ignore the corresponding bit in key. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rmft2, dip6_mask, 0x20, 16); ++MLXSW_ITEM32(reg, rmft2, dip4_mask, 0x2C, 0, 32); ++ ++/* reg_rmft2_sip{4,6} ++ * Source IPv4/6 address ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rmft2, sip6, 0x30, 16); ++MLXSW_ITEM32(reg, rmft2, sip4, 0x3C, 0, 32); ++ ++/* reg_rmft2_sip{4,6}_mask ++ * A bit that is set directs the TCAM to compare the corresponding bit in key. A ++ * bit that is clear directs the TCAM to ignore the corresponding bit in key. ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rmft2, sip6_mask, 0x40, 16); ++MLXSW_ITEM32(reg, rmft2, sip4_mask, 0x4C, 0, 32); ++ ++/* reg_rmft2_flexible_action_set ++ * ACL action set. The only supported action types in this field and in any ++ * action-set pointed from here are as follows: ++ * 00h: ACTION_NULL ++ * 01h: ACTION_MAC_TTL, only TTL configuration is supported. 
++ * 03h: ACTION_TRAP ++ * 06h: ACTION_QOS ++ * 08h: ACTION_POLICING_MONITORING ++ * 10h: ACTION_ROUTER_MC ++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, rmft2, flexible_action_set, 0x80, ++ MLXSW_REG_FLEX_ACTION_SET_LEN); ++ ++static inline void ++mlxsw_reg_rmft2_common_pack(char *payload, bool v, u16 offset, ++ u16 virtual_router, ++ enum mlxsw_reg_rmft2_irif_mask irif_mask, u16 irif, ++ const char *flex_action_set) ++{ ++ MLXSW_REG_ZERO(rmft2, payload); ++ mlxsw_reg_rmft2_v_set(payload, v); ++ mlxsw_reg_rmft2_op_set(payload, MLXSW_REG_RMFT2_OP_READ_WRITE); ++ mlxsw_reg_rmft2_offset_set(payload, offset); ++ mlxsw_reg_rmft2_virtual_router_set(payload, virtual_router); ++ mlxsw_reg_rmft2_irif_mask_set(payload, irif_mask); ++ mlxsw_reg_rmft2_irif_set(payload, irif); ++ if (flex_action_set) ++ mlxsw_reg_rmft2_flexible_action_set_memcpy_to(payload, ++ flex_action_set); ++} ++ ++static inline void ++mlxsw_reg_rmft2_ipv4_pack(char *payload, bool v, u16 offset, u16 virtual_router, ++ enum mlxsw_reg_rmft2_irif_mask irif_mask, u16 irif, ++ u32 dip4, u32 dip4_mask, u32 sip4, u32 sip4_mask, ++ const char *flexible_action_set) ++{ ++ mlxsw_reg_rmft2_common_pack(payload, v, offset, virtual_router, ++ irif_mask, irif, flexible_action_set); ++ mlxsw_reg_rmft2_type_set(payload, MLXSW_REG_RMFT2_TYPE_IPV4); ++ mlxsw_reg_rmft2_dip4_set(payload, dip4); ++ mlxsw_reg_rmft2_dip4_mask_set(payload, dip4_mask); ++ mlxsw_reg_rmft2_sip4_set(payload, sip4); ++ mlxsw_reg_rmft2_sip4_mask_set(payload, sip4_mask); ++} ++ ++static inline void ++mlxsw_reg_rmft2_ipv6_pack(char *payload, bool v, u16 offset, u16 virtual_router, ++ enum mlxsw_reg_rmft2_irif_mask irif_mask, u16 irif, ++ struct in6_addr dip6, struct in6_addr dip6_mask, ++ struct in6_addr sip6, struct in6_addr sip6_mask, ++ const char *flexible_action_set) ++{ ++ mlxsw_reg_rmft2_common_pack(payload, v, offset, virtual_router, ++ irif_mask, irif, flexible_action_set); ++ mlxsw_reg_rmft2_type_set(payload, MLXSW_REG_RMFT2_TYPE_IPV6); ++ 
mlxsw_reg_rmft2_dip6_memcpy_to(payload, (void *)&dip6); ++ mlxsw_reg_rmft2_dip6_mask_memcpy_to(payload, (void *)&dip6_mask); ++ mlxsw_reg_rmft2_sip6_memcpy_to(payload, (void *)&sip6); ++ mlxsw_reg_rmft2_sip6_mask_memcpy_to(payload, (void *)&sip6_mask); ++} ++ ++/* MFCR - Management Fan Control Register ++ * -------------------------------------- ++ * This register controls the settings of the Fan Speed PWM mechanism. ++ */ ++#define MLXSW_REG_MFCR_ID 0x9001 ++#define MLXSW_REG_MFCR_LEN 0x08 ++ ++MLXSW_REG_DEFINE(mfcr, MLXSW_REG_MFCR_ID, MLXSW_REG_MFCR_LEN); ++ ++enum mlxsw_reg_mfcr_pwm_frequency { ++ MLXSW_REG_MFCR_PWM_FEQ_11HZ = 0x00, ++ MLXSW_REG_MFCR_PWM_FEQ_14_7HZ = 0x01, ++ MLXSW_REG_MFCR_PWM_FEQ_22_1HZ = 0x02, ++ MLXSW_REG_MFCR_PWM_FEQ_1_4KHZ = 0x40, ++ MLXSW_REG_MFCR_PWM_FEQ_5KHZ = 0x41, ++ MLXSW_REG_MFCR_PWM_FEQ_20KHZ = 0x42, ++ MLXSW_REG_MFCR_PWM_FEQ_22_5KHZ = 0x43, ++ MLXSW_REG_MFCR_PWM_FEQ_25KHZ = 0x44, ++}; ++ ++/* reg_mfcr_pwm_frequency ++ * Controls the frequency of the PWM signal. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mfcr, pwm_frequency, 0x00, 0, 7); ++ ++#define MLXSW_MFCR_TACHOS_MAX 10 ++ ++/* reg_mfcr_tacho_active ++ * Indicates which of the tachometer is active (bit per tachometer). ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mfcr, tacho_active, 0x04, 16, MLXSW_MFCR_TACHOS_MAX); ++ ++#define MLXSW_MFCR_PWMS_MAX 5 ++ ++/* reg_mfcr_pwm_active ++ * Indicates which of the PWM control is active (bit per PWM). 
++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mfcr, pwm_active, 0x04, 0, MLXSW_MFCR_PWMS_MAX); ++ ++static inline void ++mlxsw_reg_mfcr_pack(char *payload, ++ enum mlxsw_reg_mfcr_pwm_frequency pwm_frequency) ++{ ++ MLXSW_REG_ZERO(mfcr, payload); ++ mlxsw_reg_mfcr_pwm_frequency_set(payload, pwm_frequency); ++} ++ ++static inline void ++mlxsw_reg_mfcr_unpack(char *payload, ++ enum mlxsw_reg_mfcr_pwm_frequency *p_pwm_frequency, ++ u16 *p_tacho_active, u8 *p_pwm_active) ++{ ++ *p_pwm_frequency = mlxsw_reg_mfcr_pwm_frequency_get(payload); ++ *p_tacho_active = mlxsw_reg_mfcr_tacho_active_get(payload); ++ *p_pwm_active = mlxsw_reg_mfcr_pwm_active_get(payload); ++} ++ ++/* MFSC - Management Fan Speed Control Register ++ * -------------------------------------------- ++ * This register controls the settings of the Fan Speed PWM mechanism. ++ */ ++#define MLXSW_REG_MFSC_ID 0x9002 ++#define MLXSW_REG_MFSC_LEN 0x08 ++ ++MLXSW_REG_DEFINE(mfsc, MLXSW_REG_MFSC_ID, MLXSW_REG_MFSC_LEN); ++ ++/* reg_mfsc_pwm ++ * Fan pwm to control / monitor. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mfsc, pwm, 0x00, 24, 3); ++ ++/* reg_mfsc_pwm_duty_cycle ++ * Controls the duty cycle of the PWM. Value range from 0..255 to ++ * represent duty cycle of 0%...100%. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mfsc, pwm_duty_cycle, 0x04, 0, 8); ++ ++static inline void mlxsw_reg_mfsc_pack(char *payload, u8 pwm, ++ u8 pwm_duty_cycle) ++{ ++ MLXSW_REG_ZERO(mfsc, payload); ++ mlxsw_reg_mfsc_pwm_set(payload, pwm); ++ mlxsw_reg_mfsc_pwm_duty_cycle_set(payload, pwm_duty_cycle); ++} ++ ++/* MFSM - Management Fan Speed Measurement ++ * --------------------------------------- ++ * This register controls the settings of the Tacho measurements and ++ * enables reading the Tachometer measurements. ++ */ ++#define MLXSW_REG_MFSM_ID 0x9003 ++#define MLXSW_REG_MFSM_LEN 0x08 ++ ++MLXSW_REG_DEFINE(mfsm, MLXSW_REG_MFSM_ID, MLXSW_REG_MFSM_LEN); ++ ++/* reg_mfsm_tacho ++ * Fan tachometer index. 
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mfsm, tacho, 0x00, 24, 4); ++ ++/* reg_mfsm_rpm ++ * Fan speed (rounds per minute). ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mfsm, rpm, 0x04, 0, 16); ++ ++static inline void mlxsw_reg_mfsm_pack(char *payload, u8 tacho) ++{ ++ MLXSW_REG_ZERO(mfsm, payload); ++ mlxsw_reg_mfsm_tacho_set(payload, tacho); ++} ++ ++/* MFSL - Management Fan Speed Limit Register ++ * ------------------------------------------ ++ * The Fan Speed Limit register is used to configure the fan speed ++ * event / interrupt notification mechanism. Fan speed thresholds are ++ * defined for both under-speed and over-speed. ++ */ ++#define MLXSW_REG_MFSL_ID 0x9004 ++#define MLXSW_REG_MFSL_LEN 0x0C ++ ++MLXSW_REG_DEFINE(mfsl, MLXSW_REG_MFSL_ID, MLXSW_REG_MFSL_LEN); ++ ++/* reg_mfsl_tacho ++ * Fan tachometer index. ++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mfsl, tacho, 0x00, 24, 4); ++ ++/* reg_mfsl_tach_min ++ * Tachometer minimum value (minimum RPM). ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mfsl, tach_min, 0x04, 0, 16); ++ ++/* reg_mfsl_tach_max ++ * Tachometer maximum value (maximum RPM). ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mfsl, tach_max, 0x08, 0, 16); ++ ++static inline void mlxsw_reg_mfsl_pack(char *payload, u8 tacho, ++ u16 tach_min, u16 tach_max) ++{ ++ MLXSW_REG_ZERO(mfsl, payload); ++ mlxsw_reg_mfsl_tacho_set(payload, tacho); ++ mlxsw_reg_mfsl_tach_min_set(payload, tach_min); ++ mlxsw_reg_mfsl_tach_max_set(payload, tach_max); ++} ++ ++static inline void mlxsw_reg_mfsl_unpack(char *payload, u8 tacho, ++ u16 *p_tach_min, u16 *p_tach_max) ++{ ++ if (p_tach_min) ++ *p_tach_min = mlxsw_reg_mfsl_tach_min_get(payload); ++ ++ if (p_tach_max) ++ *p_tach_max = mlxsw_reg_mfsl_tach_max_get(payload); ++} ++ ++/* FORE - Fan Out of Range Event Register ++ * -------------------------------------- ++ * This register reports the status of the controlled fans compared to the ++ * range defined by the MFSL register. 
++ */ ++#define MLXSW_REG_FORE_ID 0x9007 ++#define MLXSW_REG_FORE_LEN 0x0C ++ ++MLXSW_REG_DEFINE(fore, MLXSW_REG_FORE_ID, MLXSW_REG_FORE_LEN); ++ ++/* fan_under_limit ++ * Fan speed is below the low limit defined in the MFSL register. Each bit ++ * relates to a single tachometer and indicates the specific tachometer ++ * reading is below the threshold. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, fore, fan_under_limit, 0x00, 16, 10); ++ ++static inline void mlxsw_reg_fore_unpack(char *payload, u8 tacho, ++ bool *fan_under_limit) ++{ ++ u16 limit; ++ ++ if (fan_under_limit) { ++ limit = mlxsw_reg_fore_fan_under_limit_get(payload); ++ *fan_under_limit = !!(limit & BIT(tacho)); ++ } ++} ++ ++/* MTCAP - Management Temperature Capabilities ++ * ------------------------------------------- ++ * This register exposes the capabilities of the device and ++ * system temperature sensing. ++ */ ++#define MLXSW_REG_MTCAP_ID 0x9009 ++#define MLXSW_REG_MTCAP_LEN 0x08 ++ ++MLXSW_REG_DEFINE(mtcap, MLXSW_REG_MTCAP_ID, MLXSW_REG_MTCAP_LEN); ++ ++/* reg_mtcap_sensor_count ++ * Number of sensors supported by the device. ++ * This includes the QSFP module sensors (if they exist in the QSFP module). ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mtcap, sensor_count, 0x00, 0, 7); ++ ++/* MTMP - Management Temperature ++ * ----------------------------- ++ * This register controls the settings of the temperature measurements ++ * and enables reading the temperature measurements. Note that temperature ++ * is in 0.125 degrees Celsius. ++ */ ++#define MLXSW_REG_MTMP_ID 0x900A ++#define MLXSW_REG_MTMP_LEN 0x20 ++ ++MLXSW_REG_DEFINE(mtmp, MLXSW_REG_MTMP_ID, MLXSW_REG_MTMP_LEN); ++ ++/* reg_mtmp_sensor_index ++ * Sensor index to access. ++ * 64-127 of sensor_index are mapped to the SFP+/QSFP modules sequentially ++ * (module 0 is mapped to sensor_index 64). 
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mtmp, sensor_index, 0x00, 0, 7); ++ ++/* Convert to milli degrees Celsius */ ++#define MLXSW_REG_MTMP_TEMP_TO_MC(val) (val * 125) ++ ++/* reg_mtmp_temperature ++ * Temperature reading from the sensor. Reading is in 0.125 Celsius ++ * degrees units. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mtmp, temperature, 0x04, 0, 16); ++ ++/* reg_mtmp_mte ++ * Max Temperature Enable - enables measuring the max temperature on a sensor. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mtmp, mte, 0x08, 31, 1); ++ ++/* reg_mtmp_mtr ++ * Max Temperature Reset - clears the value of the max temperature register. ++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, mtmp, mtr, 0x08, 30, 1); ++ ++/* reg_mtmp_max_temperature ++ * The highest measured temperature from the sensor. ++ * When the bit mte is cleared, the field max_temperature is reserved. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mtmp, max_temperature, 0x08, 0, 16); ++ ++/* reg_mtmp_tee ++ * Temperature Event Enable. ++ * 0 - Do not generate event ++ * 1 - Generate event ++ * 2 - Generate single event ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mtmp, tee, 0x0C, 30, 2); ++ ++#define MLXSW_REG_MTMP_THRESH_HI 0x348 /* 105 Celsius */ ++ ++/* reg_mtmp_temperature_threshold_hi ++ * High threshold for Temperature Warning Event. In 0.125 Celsius. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, mtmp, temperature_threshold_hi, 0x0C, 0, 16); ++ ++/* reg_mtmp_temperature_threshold_lo ++ * Low threshold for Temperature Warning Event. In 0.125 Celsius. + * Access: RW + */ +-MLXSW_ITEM32(reg, ratr, v, 0x00, 24, 1); ++MLXSW_ITEM32(reg, mtmp, temperature_threshold_lo, 0x10, 0, 16); + +-/* reg_ratr_a +- * Activity. Set for new entries. Set if a packet lookup has hit on +- * the specific entry. To clear the a bit, use "clear activity". 
++#define MLXSW_REG_MTMP_SENSOR_NAME_SIZE 8 ++ ++/* reg_mtmp_sensor_name ++ * Sensor Name + * Access: RO + */ +-MLXSW_ITEM32(reg, ratr, a, 0x00, 16, 1); ++MLXSW_ITEM_BUF(reg, mtmp, sensor_name, 0x18, MLXSW_REG_MTMP_SENSOR_NAME_SIZE); + +-/* reg_ratr_adjacency_index_low +- * Bits 15:0 of index into the adjacency table. +- * For SwitchX and SwitchX-2, the adjacency table is linear and +- * used for adjacency entries only. +- * For Spectrum, the index is to the KVD linear. +- * Access: Index +- */ +-MLXSW_ITEM32(reg, ratr, adjacency_index_low, 0x04, 0, 16); ++static inline void mlxsw_reg_mtmp_pack(char *payload, u8 sensor_index, ++ bool max_temp_enable, ++ bool max_temp_reset) ++{ ++ MLXSW_REG_ZERO(mtmp, payload); ++ mlxsw_reg_mtmp_sensor_index_set(payload, sensor_index); ++ mlxsw_reg_mtmp_mte_set(payload, max_temp_enable); ++ mlxsw_reg_mtmp_mtr_set(payload, max_temp_reset); ++ mlxsw_reg_mtmp_temperature_threshold_hi_set(payload, ++ MLXSW_REG_MTMP_THRESH_HI); ++} + +-/* reg_ratr_egress_router_interface +- * Range is 0 .. 
cap_max_router_interfaces - 1 +- * Access: RW +- */ +-MLXSW_ITEM32(reg, ratr, egress_router_interface, 0x08, 0, 16); ++static inline void mlxsw_reg_mtmp_unpack(char *payload, unsigned int *p_temp, ++ unsigned int *p_max_temp, ++ char *sensor_name) ++{ ++ u16 temp; + +-enum mlxsw_reg_ratr_trap_action { +- MLXSW_REG_RATR_TRAP_ACTION_NOP, +- MLXSW_REG_RATR_TRAP_ACTION_TRAP, +- MLXSW_REG_RATR_TRAP_ACTION_MIRROR_TO_CPU, +- MLXSW_REG_RATR_TRAP_ACTION_MIRROR, +- MLXSW_REG_RATR_TRAP_ACTION_DISCARD_ERRORS, +-}; ++ if (p_temp) { ++ temp = mlxsw_reg_mtmp_temperature_get(payload); ++ *p_temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp); ++ } ++ if (p_max_temp) { ++ temp = mlxsw_reg_mtmp_max_temperature_get(payload); ++ *p_max_temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp); ++ } ++ if (sensor_name) ++ mlxsw_reg_mtmp_sensor_name_memcpy_from(payload, sensor_name); ++} + +-/* reg_ratr_trap_action +- * see mlxsw_reg_ratr_trap_action +- * Access: RW ++/* MTBR - Management Temperature Bulk Register ++ * ------------------------------------------- ++ * This register is used for bulk temperature reading. + */ +-MLXSW_ITEM32(reg, ratr, trap_action, 0x0C, 28, 4); ++#define MLXSW_REG_MTBR_ID 0x900F ++#define MLXSW_REG_MTBR_BASE_LEN 0x10 /* base length, without records */ ++#define MLXSW_REG_MTBR_REC_LEN 0x04 /* record length */ ++#define MLXSW_REG_MTBR_REC_MAX_COUNT 47 /* firmware limitation */ ++#define MLXSW_REG_MTBR_LEN (MLXSW_REG_MTBR_BASE_LEN + \ ++ MLXSW_REG_MTBR_REC_LEN * \ ++ MLXSW_REG_MTBR_REC_MAX_COUNT) + +-enum mlxsw_reg_ratr_trap_id { +- MLXSW_REG_RATR_TRAP_ID_RTR_EGRESS0 = 0, +- MLXSW_REG_RATR_TRAP_ID_RTR_EGRESS1 = 1, +-}; ++MLXSW_REG_DEFINE(mtbr, MLXSW_REG_MTBR_ID, MLXSW_REG_MTBR_LEN); + +-/* reg_ratr_adjacency_index_high +- * Bits 23:16 of the adjacency_index. ++/* reg_mtbr_base_sensor_index ++ * Base sensors index to access (0 - ASIC sensor, 1-63 - ambient sensors, ++ * 64-127 are mapped to the SFP+/QSFP modules sequentially). 
+ * Access: Index + */ +-MLXSW_ITEM32(reg, ratr, adjacency_index_high, 0x0C, 16, 8); ++MLXSW_ITEM32(reg, mtbr, base_sensor_index, 0x00, 0, 7); + +-/* reg_ratr_trap_id +- * Trap ID to be reported to CPU. +- * Trap-ID is RTR_EGRESS0 or RTR_EGRESS1. +- * For trap_action of NOP, MIRROR and DISCARD_ERROR ++/* reg_mtbr_num_rec ++ * Request: Number of records to read ++ * Response: Number of records read ++ * See above description for more details. ++ * Range 1..255 + * Access: RW + */ +-MLXSW_ITEM32(reg, ratr, trap_id, 0x0C, 0, 8); ++MLXSW_ITEM32(reg, mtbr, num_rec, 0x04, 0, 8); + +-/* reg_ratr_eth_destination_mac +- * MAC address of the destination next-hop. +- * Access: RW ++/* reg_mtbr_rec_max_temp ++ * The highest measured temperature from the sensor. ++ * When the bit mte is cleared, the field max_temperature is reserved. ++ * Access: RO + */ +-MLXSW_ITEM_BUF(reg, ratr, eth_destination_mac, 0x12, 6); ++MLXSW_ITEM32_INDEXED(reg, mtbr, rec_max_temp, MLXSW_REG_MTBR_BASE_LEN, 16, ++ 16, MLXSW_REG_MTBR_REC_LEN, 0x00, false); + +-static inline void +-mlxsw_reg_ratr_pack(char *payload, +- enum mlxsw_reg_ratr_op op, bool valid, +- u32 adjacency_index, u16 egress_rif) ++/* reg_mtbr_rec_temp ++ * Temperature reading from the sensor. Reading is in 0.125 Celsius ++ * degrees units. 
++ * Access: RO
++ */
++MLXSW_ITEM32_INDEXED(reg, mtbr, rec_temp, MLXSW_REG_MTBR_BASE_LEN, 0, 16,
++		     MLXSW_REG_MTBR_REC_LEN, 0x00, false);
++
++static inline void mlxsw_reg_mtbr_pack(char *payload, u8 base_sensor_index,
++				       u8 num_rec)
+ {
+-	MLXSW_REG_ZERO(ratr, payload);
+-	mlxsw_reg_ratr_op_set(payload, op);
+-	mlxsw_reg_ratr_v_set(payload, valid);
+-	mlxsw_reg_ratr_adjacency_index_low_set(payload, adjacency_index);
+-	mlxsw_reg_ratr_adjacency_index_high_set(payload, adjacency_index >> 16);
+-	mlxsw_reg_ratr_egress_router_interface_set(payload, egress_rif);
++	MLXSW_REG_ZERO(mtbr, payload);
++	mlxsw_reg_mtbr_base_sensor_index_set(payload, base_sensor_index);
++	mlxsw_reg_mtbr_num_rec_set(payload, num_rec);
+ }
+
+-static inline void mlxsw_reg_ratr_eth_entry_pack(char *payload,
+-						 const char *dest_mac)
++/* Error codes from temperature reading */
++enum mlxsw_reg_mtbr_temp_status {
++	MLXSW_REG_MTBR_NO_CONN = 0x8000,
++	MLXSW_REG_MTBR_NO_TEMP_SENS = 0x8001,
++	MLXSW_REG_MTBR_INDEX_NA = 0x8002,
++	MLXSW_REG_MTBR_BAD_SENS_INFO = 0x8003,
++};
++
++/* Base index for reading modules temperature */
++#define MLXSW_REG_MTBR_BASE_MODULE_INDEX 64
++
++static inline void mlxsw_reg_mtbr_temp_unpack(char *payload, int rec_ind,
++					      u16 *p_temp, u16 *p_max_temp)
+ {
+-	mlxsw_reg_ratr_eth_destination_mac_memcpy_to(payload, dest_mac);
++	if (p_temp)
++		*p_temp = mlxsw_reg_mtbr_rec_temp_get(payload, rec_ind);
++	if (p_max_temp)
++		*p_max_temp = mlxsw_reg_mtbr_rec_max_temp_get(payload, rec_ind);
+ }
+
+-/* RALTA - Router Algorithmic LPM Tree Allocation Register
+- * -------------------------------------------------------
+- * RALTA is used to allocate the LPM trees of the SHSPM method.
++/* MCIA - Management Cable Info Access
++ * -----------------------------------
++ * MCIA register is used to access the SFP+ and QSFP connector's EPROM.
+ */
+-#define MLXSW_REG_RALTA_ID 0x8010
+-#define MLXSW_REG_RALTA_LEN 0x04
+
+-static const struct mlxsw_reg_info mlxsw_reg_ralta = {
+-	.id = MLXSW_REG_RALTA_ID,
+-	.len = MLXSW_REG_RALTA_LEN,
+-};
+-
+-/* reg_ralta_op
+- * opcode (valid for Write, must be 0 on Read)
+- * 0 - allocate a tree
+- * 1 - deallocate a tree
+- * Access: OP
+- */
+-MLXSW_ITEM32(reg, ralta, op, 0x00, 28, 2);
++#define MLXSW_REG_MCIA_ID 0x9014
++#define MLXSW_REG_MCIA_LEN 0x40
+
+-enum mlxsw_reg_ralxx_protocol {
+-	MLXSW_REG_RALXX_PROTOCOL_IPV4,
+-	MLXSW_REG_RALXX_PROTOCOL_IPV6,
+-};
++MLXSW_REG_DEFINE(mcia, MLXSW_REG_MCIA_ID, MLXSW_REG_MCIA_LEN);
+
+-/* reg_ralta_protocol
+- * Protocol.
+- * Deallocation opcode: Reserved.
++/* reg_mcia_l
++ * Lock bit. Setting this bit will lock the access to the specific
++ * cable. Used for updating a full page in a cable EPROM. Any access
++ * other than subsequent writes will fail while the port is locked.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralta, protocol, 0x00, 24, 4);
++MLXSW_ITEM32(reg, mcia, l, 0x00, 31, 1);
+
+-/* reg_ralta_tree_id
+- * An identifier (numbered from 1..cap_shspm_max_trees-1) representing
+- * the tree identifier (managed by software).
+- * Note that tree_id 0 is allocated for a default-route tree.
++/* reg_mcia_module
++ * Module number.
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, ralta, tree_id, 0x00, 0, 8);
+-
+-static inline void mlxsw_reg_ralta_pack(char *payload, bool alloc,
+-					enum mlxsw_reg_ralxx_protocol protocol,
+-					u8 tree_id)
+-{
+-	MLXSW_REG_ZERO(ralta, payload);
+-	mlxsw_reg_ralta_op_set(payload, !alloc);
+-	mlxsw_reg_ralta_protocol_set(payload, protocol);
+-	mlxsw_reg_ralta_tree_id_set(payload, tree_id);
+-}
++MLXSW_ITEM32(reg, mcia, module, 0x00, 16, 8);
+
+-/* RALST - Router Algorithmic LPM Structure Tree Register
+- * ------------------------------------------------------
+- * RALST is used to set and query the structure of an LPM tree.
+- * The structure of the tree must be sorted as a sorted binary tree, while
+- * each node is a bin that is tagged as the length of the prefixes the lookup
+- * will refer to. Therefore, bin X refers to a set of entries with prefixes
+- * of X bits to match with the destination address. The bin 0 indicates
+- * the default action, when there is no match of any prefix.
++/* reg_mcia_status
++ * Module status.
++ * Access: RO
+ */
+-#define MLXSW_REG_RALST_ID 0x8011
+-#define MLXSW_REG_RALST_LEN 0x104
+-
+-static const struct mlxsw_reg_info mlxsw_reg_ralst = {
+-	.id = MLXSW_REG_RALST_ID,
+-	.len = MLXSW_REG_RALST_LEN,
+-};
++MLXSW_ITEM32(reg, mcia, status, 0x00, 0, 8);
+
+-/* reg_ralst_root_bin
+- * The bin number of the root bin.
+- * 064 the entry consumes
+- * two entries in the physical HW table.
+- * Access: Index
++/* reg_mpat_eth_rspan_vid
++ * Encapsulation header VLAN ID.
++ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, prefix_len, 0x08, 0, 8);
++MLXSW_ITEM32(reg, mpat, eth_rspan_vid, 0x18, 0, 12);
+
+-/* reg_ralue_dip*
+- * The prefix of the route or of the marker that the object of the LPM
+- * is compared with. The most significant bits of the dip are the prefix.
+- * The list significant bits must be '0' if the prefix_len is smaller
+- * than 128 for IPv6 or smaller than 32 for IPv4.
+- * IPv4 address uses bits dip[31:0] and bits dip[127:32] are reserved.
+- * Access: Index
++/* Encapsulated Remote SPAN - Ethernet L2
++ * - - - - - - - - - - - - - - - - - - -
+ */
+-MLXSW_ITEM32(reg, ralue, dip4, 0x18, 0, 32);
+
+-enum mlxsw_reg_ralue_entry_type {
+-	MLXSW_REG_RALUE_ENTRY_TYPE_MARKER_ENTRY = 1,
+-	MLXSW_REG_RALUE_ENTRY_TYPE_ROUTE_ENTRY = 2,
+-	MLXSW_REG_RALUE_ENTRY_TYPE_MARKER_AND_ROUTE_ENTRY = 3,
++enum mlxsw_reg_mpat_eth_rspan_version {
++	MLXSW_REG_MPAT_ETH_RSPAN_VERSION_NO_HEADER = 15,
+ };
+
+-/* reg_ralue_entry_type
+- * Entry type.
+- * Note - for Marker entries, the action_type and action fields are reserved.
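The register-definition macros used throughout this patch (MLXSW_ITEM32(), MLXSW_ITEM32_INDEXED(), MLXSW_ITEM_BUF()) generate get/set accessors over a big-endian register payload from a (byte offset, bit shift, bit width) triple. As an illustration only, the standalone helper below is a stand-in for such a generated getter, not the kernel macro itself; it is paired with the 0.125 °C-per-unit conversion that MLXSW_REG_MTMP_TEMP_TO_MC performs in upstream mlxsw (multiply the raw value by 125 to get millidegrees):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for an MLXSW_ITEM32()-generated getter:
 * read the big-endian 32-bit word at byte offset `off` from the
 * payload, then extract `width` bits starting at bit `shift`. */
static uint32_t reg_field_get(const uint8_t *payload, unsigned int off,
			      unsigned int shift, unsigned int width)
{
	uint32_t word = ((uint32_t)payload[off] << 24) |
			((uint32_t)payload[off + 1] << 16) |
			((uint32_t)payload[off + 2] << 8) |
			(uint32_t)payload[off + 3];
	uint32_t mask = (width == 32) ? 0xffffffffu : ((1u << width) - 1);

	return (word >> shift) & mask;
}

/* MTMP/MTBR temperature fields are in units of 0.125 degrees Celsius;
 * upstream converts to millidegrees by multiplying by 125. */
static unsigned int mtmp_temp_to_mc(uint16_t temp)
{
	return (unsigned int)temp * 125;
}
```

For example, a 16-bit temperature field holding 224 units corresponds to 28 °C, i.e. 28000 m°C.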
++/* reg_mpat_eth_rspan_version
++ * RSPAN mirror header version.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, entry_type, 0x1C, 30, 2);
++MLXSW_ITEM32(reg, mpat, eth_rspan_version, 0x10, 18, 4);
+
+-/* reg_ralue_bmp_len
+- * The best match prefix length in the case that there is no match for
+- * longer prefixes.
+- * If (entry_type != MARKER_ENTRY), bmp_len must be equal to prefix_len
+- * Note for any update operation with entry_type modification this
+- * field must be set.
++/* reg_mpat_eth_rspan_mac
++ * Destination MAC address.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, bmp_len, 0x1C, 16, 8);
+-
+-enum mlxsw_reg_ralue_action_type {
+-	MLXSW_REG_RALUE_ACTION_TYPE_REMOTE,
+-	MLXSW_REG_RALUE_ACTION_TYPE_LOCAL,
+-	MLXSW_REG_RALUE_ACTION_TYPE_IP2ME,
+-};
++MLXSW_ITEM_BUF(reg, mpat, eth_rspan_mac, 0x12, 6);
+
+-/* reg_ralue_action_type
+- * Action Type
+- * Indicates how the IP address is connected.
+- * It can be connected to a local subnet through local_erif or can be
+- * on a remote subnet connected through a next-hop router,
+- * or transmitted to the CPU.
+- * Reserved when entry_type = MARKER_ENTRY
++/* reg_mpat_eth_rspan_tp
++ * Tag Packet. Indicates whether the mirroring header should be VLAN tagged.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, action_type, 0x1C, 0, 2);
+-
+-enum mlxsw_reg_ralue_trap_action {
+-	MLXSW_REG_RALUE_TRAP_ACTION_NOP,
+-	MLXSW_REG_RALUE_TRAP_ACTION_TRAP,
+-	MLXSW_REG_RALUE_TRAP_ACTION_MIRROR_TO_CPU,
+-	MLXSW_REG_RALUE_TRAP_ACTION_MIRROR,
+-	MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR,
+-};
++MLXSW_ITEM32(reg, mpat, eth_rspan_tp, 0x18, 16, 1);
+
+-/* reg_ralue_trap_action
+- * Trap action.
+- * For IP2ME action, only NOP and MIRROR are possible.
+- * Access: RW
++/* Encapsulated Remote SPAN - Ethernet L3
++ * - - - - - - - - - - - - - - - - - - -
+ */
+-MLXSW_ITEM32(reg, ralue, trap_action, 0x20, 28, 4);
+
+-/* reg_ralue_trap_id
+- * Trap ID to be reported to CPU.
+- * Trap ID is RTR_INGRESS0 or RTR_INGRESS1.
+- * For trap_action of NOP, MIRROR and DISCARD_ERROR, trap_id is reserved.
+- * Access: RW
+- */
+-MLXSW_ITEM32(reg, ralue, trap_id, 0x20, 0, 9);
++enum mlxsw_reg_mpat_eth_rspan_protocol {
++	MLXSW_REG_MPAT_ETH_RSPAN_PROTOCOL_IPV4,
++	MLXSW_REG_MPAT_ETH_RSPAN_PROTOCOL_IPV6,
++};
+
+-/* reg_ralue_adjacency_index
+- * Points to the first entry of the group-based ECMP.
+- * Only relevant in case of REMOTE action.
++/* reg_mpat_eth_rspan_protocol
++ * SPAN encapsulation protocol.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, adjacency_index, 0x24, 0, 24);
++MLXSW_ITEM32(reg, mpat, eth_rspan_protocol, 0x18, 24, 4);
+
+-/* reg_ralue_ecmp_size
+- * Amount of sequential entries starting
+- * from the adjacency_index (the number of ECMPs).
+- * The valid range is 1-64, 512, 1024, 2048 and 4096.
+- * Reserved when trap_action is TRAP or DISCARD_ERROR.
+- * Only relevant in case of REMOTE action.
++/* reg_mpat_eth_rspan_ttl
++ * Encapsulation header Time-to-Live/HopLimit.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, ecmp_size, 0x28, 0, 13);
++MLXSW_ITEM32(reg, mpat, eth_rspan_ttl, 0x1C, 4, 8);
+
+-/* reg_ralue_local_erif
+- * Egress Router Interface.
+- * Only relevant in case of LOCAL action.
++/* reg_mpat_eth_rspan_smac
++ * Source MAC address
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, local_erif, 0x24, 0, 16);
++MLXSW_ITEM_BUF(reg, mpat, eth_rspan_smac, 0x22, 6);
+
+-/* reg_ralue_v
+- * Valid bit for the tunnel_ptr field.
+- * If valid = 0 then trap to CPU as IP2ME trap ID.
+- * If valid = 1 and the packet format allows NVE or IPinIP tunnel
+- * decapsulation then tunnel decapsulation is done.
+- * If valid = 1 and packet format does not allow NVE or IPinIP tunnel
+- * decapsulation then trap as IP2ME trap ID.
+- * Only relevant in case of IP2ME action.
++/* reg_mpat_eth_rspan_dip*
++ * Destination IP address. The IP version is configured by protocol.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, v, 0x24, 31, 1);
++MLXSW_ITEM32(reg, mpat, eth_rspan_dip4, 0x4C, 0, 32);
++MLXSW_ITEM_BUF(reg, mpat, eth_rspan_dip6, 0x40, 16);
+
+-/* reg_ralue_tunnel_ptr
+- * Tunnel Pointer for NVE or IPinIP tunnel decapsulation.
+- * For Spectrum, pointer to KVD Linear.
+- * Only relevant in case of IP2ME action.
++/* reg_mpat_eth_rspan_sip*
++ * Source IP address. The IP version is configured by protocol.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, ralue, tunnel_ptr, 0x24, 0, 24);
++MLXSW_ITEM32(reg, mpat, eth_rspan_sip4, 0x5C, 0, 32);
++MLXSW_ITEM_BUF(reg, mpat, eth_rspan_sip6, 0x50, 16);
+
+-static inline void mlxsw_reg_ralue_pack(char *payload,
+-					enum mlxsw_reg_ralxx_protocol protocol,
+-					enum mlxsw_reg_ralue_op op,
+-					u16 virtual_router, u8 prefix_len)
++static inline void mlxsw_reg_mpat_pack(char *payload, u8 pa_id,
++				       u16 system_port, bool e,
++				       enum mlxsw_reg_mpat_span_type span_type)
+ {
+-	MLXSW_REG_ZERO(ralue, payload);
+-	mlxsw_reg_ralue_protocol_set(payload, protocol);
+-	mlxsw_reg_ralue_op_set(payload, op);
+-	mlxsw_reg_ralue_virtual_router_set(payload, virtual_router);
+-	mlxsw_reg_ralue_prefix_len_set(payload, prefix_len);
+-	mlxsw_reg_ralue_entry_type_set(payload,
+-				       MLXSW_REG_RALUE_ENTRY_TYPE_ROUTE_ENTRY);
+-	mlxsw_reg_ralue_bmp_len_set(payload, prefix_len);
++	MLXSW_REG_ZERO(mpat, payload);
++	mlxsw_reg_mpat_pa_id_set(payload, pa_id);
++	mlxsw_reg_mpat_system_port_set(payload, system_port);
++	mlxsw_reg_mpat_e_set(payload, e);
++	mlxsw_reg_mpat_qos_set(payload, 1);
++	mlxsw_reg_mpat_be_set(payload, 1);
++	mlxsw_reg_mpat_span_type_set(payload, span_type);
+ }
+
+-static inline void mlxsw_reg_ralue_pack4(char *payload,
+-					 enum mlxsw_reg_ralxx_protocol protocol,
+-					 enum mlxsw_reg_ralue_op op,
+-					 u16 virtual_router, u8 prefix_len,
+-					 u32 dip)
++static inline void mlxsw_reg_mpat_eth_rspan_pack(char *payload, u16 vid)
+ {
+-	mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len);
+-	mlxsw_reg_ralue_dip4_set(payload, dip);
++	mlxsw_reg_mpat_eth_rspan_vid_set(payload, vid);
+ }
+
+ static inline void
+-mlxsw_reg_ralue_act_remote_pack(char *payload,
+-				enum mlxsw_reg_ralue_trap_action trap_action,
+-				u16 trap_id, u32 adjacency_index, u16 ecmp_size)
++mlxsw_reg_mpat_eth_rspan_l2_pack(char *payload,
++				 enum mlxsw_reg_mpat_eth_rspan_version version,
++				 const char *mac,
++				 bool tp)
+ {
+-	mlxsw_reg_ralue_action_type_set(payload,
+-					MLXSW_REG_RALUE_ACTION_TYPE_REMOTE);
+-	mlxsw_reg_ralue_trap_action_set(payload, trap_action);
+-	mlxsw_reg_ralue_trap_id_set(payload, trap_id);
+-	mlxsw_reg_ralue_adjacency_index_set(payload, adjacency_index);
+-	mlxsw_reg_ralue_ecmp_size_set(payload, ecmp_size);
++	mlxsw_reg_mpat_eth_rspan_version_set(payload, version);
++	mlxsw_reg_mpat_eth_rspan_mac_memcpy_to(payload, mac);
++	mlxsw_reg_mpat_eth_rspan_tp_set(payload, tp);
+ }
+
+ static inline void
+-mlxsw_reg_ralue_act_local_pack(char *payload,
+-			       enum mlxsw_reg_ralue_trap_action trap_action,
+-			       u16 trap_id, u16 local_erif)
++mlxsw_reg_mpat_eth_rspan_l3_ipv4_pack(char *payload, u8 ttl,
++				      const char *smac,
++				      u32 sip, u32 dip)
+ {
+-	mlxsw_reg_ralue_action_type_set(payload,
+-					MLXSW_REG_RALUE_ACTION_TYPE_LOCAL);
+-	mlxsw_reg_ralue_trap_action_set(payload, trap_action);
+-	mlxsw_reg_ralue_trap_id_set(payload, trap_id);
+-	mlxsw_reg_ralue_local_erif_set(payload, local_erif);
++	mlxsw_reg_mpat_eth_rspan_ttl_set(payload, ttl);
++	mlxsw_reg_mpat_eth_rspan_smac_memcpy_to(payload, smac);
++	mlxsw_reg_mpat_eth_rspan_protocol_set(payload,
++					      MLXSW_REG_MPAT_ETH_RSPAN_PROTOCOL_IPV4);
++	mlxsw_reg_mpat_eth_rspan_sip4_set(payload, sip);
++	mlxsw_reg_mpat_eth_rspan_dip4_set(payload, dip);
+ }
+
+ static inline void
+-mlxsw_reg_ralue_act_ip2me_pack(char *payload)
++mlxsw_reg_mpat_eth_rspan_l3_ipv6_pack(char *payload, u8 ttl,
++				      const char *smac,
++				      struct in6_addr sip, struct in6_addr dip)
+ {
+-	mlxsw_reg_ralue_action_type_set(payload,
+-					MLXSW_REG_RALUE_ACTION_TYPE_IP2ME);
++	mlxsw_reg_mpat_eth_rspan_ttl_set(payload, ttl);
++	mlxsw_reg_mpat_eth_rspan_smac_memcpy_to(payload, smac);
++	mlxsw_reg_mpat_eth_rspan_protocol_set(payload,
++					      MLXSW_REG_MPAT_ETH_RSPAN_PROTOCOL_IPV6);
++	mlxsw_reg_mpat_eth_rspan_sip6_memcpy_to(payload, (void *)&sip);
++	mlxsw_reg_mpat_eth_rspan_dip6_memcpy_to(payload, (void *)&dip);
+ }
+
+-/* RAUHT - Router Algorithmic LPM Unicast Host Table Register
+- * ----------------------------------------------------------
+- * The RAUHT register is used to configure and query the Unicast Host table in
+- * devices that implement the Algorithmic LPM.
++/* MPAR - Monitoring Port Analyzer Register
++ * ----------------------------------------
++ * MPAR register is used to query and configure the port analyzer port mirroring
++ * properties.
+ */
+-#define MLXSW_REG_RAUHT_ID 0x8014
+-#define MLXSW_REG_RAUHT_LEN 0x74
+-
+-static const struct mlxsw_reg_info mlxsw_reg_rauht = {
+-	.id = MLXSW_REG_RAUHT_ID,
+-	.len = MLXSW_REG_RAUHT_LEN,
+-};
++#define MLXSW_REG_MPAR_ID 0x901B
++#define MLXSW_REG_MPAR_LEN 0x08
+
+-enum mlxsw_reg_rauht_type {
+-	MLXSW_REG_RAUHT_TYPE_IPV4,
+-	MLXSW_REG_RAUHT_TYPE_IPV6,
+-};
++MLXSW_REG_DEFINE(mpar, MLXSW_REG_MPAR_ID, MLXSW_REG_MPAR_LEN);
+
+-/* reg_rauht_type
++/* reg_mpar_local_port
++ * The local port to mirror the packets from.
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, rauht, type, 0x00, 24, 2);
++MLXSW_ITEM32(reg, mpar, local_port, 0x00, 16, 8);
+
+-enum mlxsw_reg_rauht_op {
+-	MLXSW_REG_RAUHT_OP_QUERY_READ = 0,
+-	/* Read operation */
+-	MLXSW_REG_RAUHT_OP_QUERY_CLEAR_ON_READ = 1,
+-	/* Clear on read operation. Used to read entry and clear
+-	 * activity bit.
+-	 */
+-	MLXSW_REG_RAUHT_OP_WRITE_ADD = 0,
+-	/* Add. Used to write a new entry to the table. All R/W fields are
+-	 * relevant for new entry. Activity bit is set for new entries.
+-	 */
+-	MLXSW_REG_RAUHT_OP_WRITE_UPDATE = 1,
+-	/* Update action. Used to update an existing route entry and
+-	 * only update the following fields:
+-	 * trap_action, trap_id, mac, counter_set_type, counter_index
+-	 */
+-	MLXSW_REG_RAUHT_OP_WRITE_CLEAR_ACTIVITY = 2,
+-	/* Clear activity. A bit is cleared for the entry. */
+-	MLXSW_REG_RAUHT_OP_WRITE_DELETE = 3,
+-	/* Delete entry */
+-	MLXSW_REG_RAUHT_OP_WRITE_DELETE_ALL = 4,
+-	/* Delete all host entries on a RIF. In this command, dip
+-	 * field is reserved.
+-	 */
++enum mlxsw_reg_mpar_i_e {
++	MLXSW_REG_MPAR_TYPE_EGRESS,
++	MLXSW_REG_MPAR_TYPE_INGRESS,
+ };
+
+-/* reg_rauht_op
+- * Access: OP
+- */
+-MLXSW_ITEM32(reg, rauht, op, 0x00, 20, 3);
+-
+-/* reg_rauht_a
+- * Activity. Set for new entries. Set if a packet lookup has hit on
+- * the specific entry.
+- * To clear the a bit, use "clear activity" op.
+- * Enabled by activity_dis in RGCR
+- * Access: RO
+- */
+-MLXSW_ITEM32(reg, rauht, a, 0x00, 16, 1);
+-
+-/* reg_rauht_rif
+- * Router Interface
+- * Access: Index
+- */
+-MLXSW_ITEM32(reg, rauht, rif, 0x00, 0, 16);
+-
+-/* reg_rauht_dip*
+- * Destination address.
++/* reg_mpar_i_e
++ * Ingress/Egress
+ * Access: Index
+ */
+-MLXSW_ITEM32(reg, rauht, dip4, 0x1C, 0x0, 32);
+-
+-enum mlxsw_reg_rauht_trap_action {
+-	MLXSW_REG_RAUHT_TRAP_ACTION_NOP,
+-	MLXSW_REG_RAUHT_TRAP_ACTION_TRAP,
+-	MLXSW_REG_RAUHT_TRAP_ACTION_MIRROR_TO_CPU,
+-	MLXSW_REG_RAUHT_TRAP_ACTION_MIRROR,
+-	MLXSW_REG_RAUHT_TRAP_ACTION_DISCARD_ERRORS,
+-};
++MLXSW_ITEM32(reg, mpar, i_e, 0x00, 0, 4);
+
+-/* reg_rauht_trap_action
++/* reg_mpar_enable
++ * Enable mirroring
++ * By default, port mirroring is disabled for all ports.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, rauht, trap_action, 0x60, 28, 4);
+-
+-enum mlxsw_reg_rauht_trap_id {
+-	MLXSW_REG_RAUHT_TRAP_ID_RTR_EGRESS0,
+-	MLXSW_REG_RAUHT_TRAP_ID_RTR_EGRESS1,
+-};
++MLXSW_ITEM32(reg, mpar, enable, 0x04, 31, 1);
+
+-/* reg_rauht_trap_id
+- * Trap ID to be reported to CPU.
+- * Trap-ID is RTR_EGRESS0 or RTR_EGRESS1.
+- * For trap_action of NOP, MIRROR and DISCARD_ERROR,
+- * trap_id is reserved.
++/* reg_mpar_pa_id
++ * Port Analyzer ID.
+ * Access: RW
+ */
+-MLXSW_ITEM32(reg, rauht, trap_id, 0x60, 0, 9);
++MLXSW_ITEM32(reg, mpar, pa_id, 0x04, 0, 4);
+
+-/* reg_rauht_counter_set_type
+- * Counter set type for flow counters
+- * Access: RW
+- */
+-MLXSW_ITEM32(reg, rauht, counter_set_type, 0x68, 24, 8);
++static inline void mlxsw_reg_mpar_pack(char *payload, u8 local_port,
++				       enum mlxsw_reg_mpar_i_e i_e,
++				       bool enable, u8 pa_id)
++{
++	MLXSW_REG_ZERO(mpar, payload);
++	mlxsw_reg_mpar_local_port_set(payload, local_port);
++	mlxsw_reg_mpar_enable_set(payload, enable);
++	mlxsw_reg_mpar_i_e_set(payload, i_e);
++	mlxsw_reg_mpar_pa_id_set(payload, pa_id);
++}
+
+-/* reg_rauht_counter_index
+- * Counter index for flow counters
+- * Access: RW
++/* MRSR - Management Reset and Shutdown Register
++ * ---------------------------------------------
++ * MRSR register is used to reset or shutdown the switch or
++ * the entire system (when applicable).
+ */
+-MLXSW_ITEM32(reg, rauht, counter_index, 0x68, 0, 24);
++#define MLXSW_REG_MRSR_ID 0x9023
++#define MLXSW_REG_MRSR_LEN 0x08
+
+-/* reg_rauht_mac
+- * MAC address.
+- * Access: RW
+- */
+-MLXSW_ITEM_BUF(reg, rauht, mac, 0x6E, 6);
++MLXSW_REG_DEFINE(mrsr, MLXSW_REG_MRSR_ID, MLXSW_REG_MRSR_LEN);
+
+-static inline void mlxsw_reg_rauht_pack(char *payload,
+-					enum mlxsw_reg_rauht_op op, u16 rif,
+-					const char *mac)
+-{
+-	MLXSW_REG_ZERO(rauht, payload);
+-	mlxsw_reg_rauht_op_set(payload, op);
+-	mlxsw_reg_rauht_rif_set(payload, rif);
+-	mlxsw_reg_rauht_mac_memcpy_to(payload, mac);
+-}
++/* reg_mrsr_command
++ * Reset/shutdown command
++ * 0 - do nothing
++ * 1 - software reset
++ * Access: WO
++ */
++MLXSW_ITEM32(reg, mrsr, command, 0x00, 0, 4);
+
+-static inline void mlxsw_reg_rauht_pack4(char *payload,
+-					 enum mlxsw_reg_rauht_op op, u16 rif,
+-					 const char *mac, u32 dip)
++static inline void mlxsw_reg_mrsr_pack(char *payload)
+ {
+-	mlxsw_reg_rauht_pack(payload, op, rif, mac);
+-	mlxsw_reg_rauht_dip4_set(payload, dip);
++	MLXSW_REG_ZERO(mrsr, payload);
++	mlxsw_reg_mrsr_command_set(payload, 1);
+ }
+
+-/* RALEU - Router Algorithmic LPM ECMP Update Register
+- * ---------------------------------------------------
+- * The register enables updating the ECMP section in the action for multiple
+- * LPM Unicast entries in a single operation. The update is executed to
+- * all entries of a {virtual router, protocol} tuple using the same ECMP group.
+- */
+-#define MLXSW_REG_RALEU_ID 0x8015
+-#define MLXSW_REG_RALEU_LEN 0x28
+-
+-static const struct mlxsw_reg_info mlxsw_reg_raleu = {
+-	.id = MLXSW_REG_RALEU_ID,
+-	.len = MLXSW_REG_RALEU_LEN,
+-};
+-
+-/* reg_raleu_protocol
+- * Protocol.
+- * Access: Index
++/* MLCR - Management LED Control Register
++ * --------------------------------------
++ * Controls the system LEDs.
+ */
+-MLXSW_ITEM32(reg, raleu, protocol, 0x00, 24, 4);
++#define MLXSW_REG_MLCR_ID 0x902B
++#define MLXSW_REG_MLCR_LEN 0x0C
+
+-/* reg_raleu_virtual_router
+- * Virtual Router ID
+- * Range is 0..cap_max_virtual_routers-1
+- * Access: Index
+- */
+-MLXSW_ITEM32(reg, raleu, virtual_router, 0x00, 0, 16);
++MLXSW_REG_DEFINE(mlcr, MLXSW_REG_MLCR_ID, MLXSW_REG_MLCR_LEN);
+
+-/* reg_raleu_adjacency_index
+- * Adjacency Index used for matching on the existing entries.
+- * Access: Index
++/* reg_mlcr_local_port
++ * Local port number.
++ * Access: RW
+ */
+-MLXSW_ITEM32(reg, raleu, adjacency_index, 0x10, 0, 24);
++MLXSW_ITEM32(reg, mlcr, local_port, 0x00, 16, 8);
+
+-/* reg_raleu_ecmp_size
+- * ECMP Size used for matching on the existing entries.
+- * Access: Index
+- */
+-MLXSW_ITEM32(reg, raleu, ecmp_size, 0x14, 0, 13);
++#define MLXSW_REG_MLCR_DURATION_MAX 0xFFFF
+
+-/* reg_raleu_new_adjacency_index
+- * New Adjacency Index.
+- * Access: WO
++/* reg_mlcr_beacon_duration
++ * Duration of the beacon to be active, in seconds.
++ * 0x0 - Will turn off the beacon.
++ * 0xFFFF - Will turn on the beacon until explicitly turned off.
++ * Access: RW
+ */
+-MLXSW_ITEM32(reg, raleu, new_adjacency_index, 0x20, 0, 24);
++MLXSW_ITEM32(reg, mlcr, beacon_duration, 0x04, 0, 16);
+
+-/* reg_raleu_new_ecmp_size
+- * New ECMP Size.
+- * Access: WO
++/* reg_mlcr_beacon_remain
++ * Remaining duration of the beacon, in seconds.
++ * 0xFFFF indicates an infinite amount of time.
++ * Access: RO + */ +-MLXSW_ITEM32(reg, raleu, new_ecmp_size, 0x24, 0, 13); ++MLXSW_ITEM32(reg, mlcr, beacon_remain, 0x08, 0, 16); + +-static inline void mlxsw_reg_raleu_pack(char *payload, +- enum mlxsw_reg_ralxx_protocol protocol, +- u16 virtual_router, +- u32 adjacency_index, u16 ecmp_size, +- u32 new_adjacency_index, +- u16 new_ecmp_size) ++static inline void mlxsw_reg_mlcr_pack(char *payload, u8 local_port, ++ bool active) + { +- MLXSW_REG_ZERO(raleu, payload); +- mlxsw_reg_raleu_protocol_set(payload, protocol); +- mlxsw_reg_raleu_virtual_router_set(payload, virtual_router); +- mlxsw_reg_raleu_adjacency_index_set(payload, adjacency_index); +- mlxsw_reg_raleu_ecmp_size_set(payload, ecmp_size); +- mlxsw_reg_raleu_new_adjacency_index_set(payload, new_adjacency_index); +- mlxsw_reg_raleu_new_ecmp_size_set(payload, new_ecmp_size); ++ MLXSW_REG_ZERO(mlcr, payload); ++ mlxsw_reg_mlcr_local_port_set(payload, local_port); ++ mlxsw_reg_mlcr_beacon_duration_set(payload, active ? ++ MLXSW_REG_MLCR_DURATION_MAX : 0); + } + +-/* RAUHTD - Router Algorithmic LPM Unicast Host Table Dump Register +- * ---------------------------------------------------------------- +- * The RAUHTD register allows dumping entries from the Router Unicast Host +- * Table. For a given session an entry is dumped no more than one time. The +- * first RAUHTD access after reset is a new session. A session ends when the +- * num_rec response is smaller than num_rec request or for IPv4 when the +- * num_entries is smaller than 4. The clear activity affect the current session +- * or the last session if a new session has not started. ++/* MSCI - Management System CPLD Information Register ++ * --------------------------------------------------- ++ * This register allows querying for the System CPLD(s) information. 
+ */ +-#define MLXSW_REG_RAUHTD_ID 0x8018 +-#define MLXSW_REG_RAUHTD_BASE_LEN 0x20 +-#define MLXSW_REG_RAUHTD_REC_LEN 0x20 +-#define MLXSW_REG_RAUHTD_REC_MAX_NUM 32 +-#define MLXSW_REG_RAUHTD_LEN (MLXSW_REG_RAUHTD_BASE_LEN + \ +- MLXSW_REG_RAUHTD_REC_MAX_NUM * MLXSW_REG_RAUHTD_REC_LEN) +-#define MLXSW_REG_RAUHTD_IPV4_ENT_PER_REC 4 ++#define MLXSW_REG_MSCI_ID 0x902A ++#define MLXSW_REG_MSCI_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_rauhtd = { +- .id = MLXSW_REG_RAUHTD_ID, +- .len = MLXSW_REG_RAUHTD_LEN, ++static const struct mlxsw_reg_info mlxsw_reg_msci = { ++ .id = MLXSW_REG_MSCI_ID, ++ .len = MLXSW_REG_MSCI_LEN, + }; + +-#define MLXSW_REG_RAUHTD_FILTER_A BIT(0) +-#define MLXSW_REG_RAUHTD_FILTER_RIF BIT(3) +- +-/* reg_rauhtd_filter_fields +- * if a bit is '0' then the relevant field is ignored and dump is done +- * regardless of the field value +- * Bit0 - filter by activity: entry_a +- * Bit3 - filter by entry rip: entry_rif ++/* reg_msci_index ++ * Index to access. + * Access: Index + */ +-MLXSW_ITEM32(reg, rauhtd, filter_fields, 0x00, 0, 8); +- +-enum mlxsw_reg_rauhtd_op { +- MLXSW_REG_RAUHTD_OP_DUMP, +- MLXSW_REG_RAUHTD_OP_DUMP_AND_CLEAR, +-}; ++MLXSW_ITEM32(reg, msci, index, 0x00, 0, 4); + +-/* reg_rauhtd_op +- * Access: OP ++/* reg_msci_version ++ * CPLD version. 
++ * Access: R0 + */ +-MLXSW_ITEM32(reg, rauhtd, op, 0x04, 24, 2); ++MLXSW_ITEM32(reg, msci, version, 0x04, 0, 32); + +-/* reg_rauhtd_num_rec +- * At request: number of records requested +- * At response: number of records dumped +- * For IPv4, each record has 4 entries at request and up to 4 entries +- * at response +- * Range is 0..MLXSW_REG_RAUHTD_REC_MAX_NUM +- * Access: Index ++static inline void ++mlxsw_reg_msci_pack(char *payload, u8 index) ++{ ++ MLXSW_REG_ZERO(msci, payload); ++ mlxsw_reg_msci_index_set(payload, index); ++} ++ ++static inline void ++mlxsw_reg_msci_unpack(char *payload, u16 *p_version) ++{ ++ *p_version = mlxsw_reg_msci_version_get(payload); ++} ++ ++/* MCQI - Management Component Query Information ++ * --------------------------------------------- ++ * This register allows querying information about firmware components. + */ +-MLXSW_ITEM32(reg, rauhtd, num_rec, 0x04, 0, 8); ++#define MLXSW_REG_MCQI_ID 0x9061 ++#define MLXSW_REG_MCQI_BASE_LEN 0x18 ++#define MLXSW_REG_MCQI_CAP_LEN 0x14 ++#define MLXSW_REG_MCQI_LEN (MLXSW_REG_MCQI_BASE_LEN + MLXSW_REG_MCQI_CAP_LEN) + +-/* reg_rauhtd_entry_a +- * Dump only if activity has value of entry_a +- * Reserved if filter_fields bit0 is '0' ++MLXSW_REG_DEFINE(mcqi, MLXSW_REG_MCQI_ID, MLXSW_REG_MCQI_LEN); ++ ++/* reg_mcqi_component_index ++ * Index of the accessed component. + * Access: Index + */ +-MLXSW_ITEM32(reg, rauhtd, entry_a, 0x08, 16, 1); ++MLXSW_ITEM32(reg, mcqi, component_index, 0x00, 0, 16); + +-enum mlxsw_reg_rauhtd_type { +- MLXSW_REG_RAUHTD_TYPE_IPV4, +- MLXSW_REG_RAUHTD_TYPE_IPV6, ++enum mlxfw_reg_mcqi_info_type { ++ MLXSW_REG_MCQI_INFO_TYPE_CAPABILITIES, + }; + +-/* reg_rauhtd_type +- * Dump only if record type is: +- * 0 - IPv4 +- * 1 - IPv6 +- * Access: Index ++/* reg_mcqi_info_type ++ * Component properties set. 
++ * Access: RW + */ +-MLXSW_ITEM32(reg, rauhtd, type, 0x08, 0, 4); ++MLXSW_ITEM32(reg, mcqi, info_type, 0x08, 0, 5); + +-/* reg_rauhtd_entry_rif +- * Dump only if RIF has value of entry_rif +- * Reserved if filter_fields bit3 is '0' +- * Access: Index ++/* reg_mcqi_offset ++ * The requested/returned data offset from the section start, given in bytes. ++ * Must be DWORD aligned. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, rauhtd, entry_rif, 0x0C, 0, 16); +- +-static inline void mlxsw_reg_rauhtd_pack(char *payload, +- enum mlxsw_reg_rauhtd_type type) +-{ +- MLXSW_REG_ZERO(rauhtd, payload); +- mlxsw_reg_rauhtd_filter_fields_set(payload, MLXSW_REG_RAUHTD_FILTER_A); +- mlxsw_reg_rauhtd_op_set(payload, MLXSW_REG_RAUHTD_OP_DUMP_AND_CLEAR); +- mlxsw_reg_rauhtd_num_rec_set(payload, MLXSW_REG_RAUHTD_REC_MAX_NUM); +- mlxsw_reg_rauhtd_entry_a_set(payload, 1); +- mlxsw_reg_rauhtd_type_set(payload, type); +-} ++MLXSW_ITEM32(reg, mcqi, offset, 0x10, 0, 32); + +-/* reg_rauhtd_ipv4_rec_num_entries +- * Number of valid entries in this record: +- * 0 - 1 valid entry +- * 1 - 2 valid entries +- * 2 - 3 valid entries +- * 3 - 4 valid entries +- * Access: RO ++/* reg_mcqi_data_size ++ * The requested/returned data size, given in bytes. If data_size is not DWORD ++ * aligned, the last bytes are zero padded. ++ * Access: RW + */ +-MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_rec_num_entries, +- MLXSW_REG_RAUHTD_BASE_LEN, 28, 2, +- MLXSW_REG_RAUHTD_REC_LEN, 0x00, false); ++MLXSW_ITEM32(reg, mcqi, data_size, 0x14, 0, 16); + +-/* reg_rauhtd_rec_type +- * Record type. +- * 0 - IPv4 +- * 1 - IPv6 ++/* reg_mcqi_cap_max_component_size ++ * Maximum size for this component, given in bytes. + * Access: RO + */ +-MLXSW_ITEM32_INDEXED(reg, rauhtd, rec_type, MLXSW_REG_RAUHTD_BASE_LEN, 24, 2, +- MLXSW_REG_RAUHTD_REC_LEN, 0x00, false); ++MLXSW_ITEM32(reg, mcqi, cap_max_component_size, 0x20, 0, 32); + +-#define MLXSW_REG_RAUHTD_IPV4_ENT_LEN 0x8 +- +-/* reg_rauhtd_ipv4_ent_a +- * Activity. Set for new entries. 
Set if a packet lookup has hit on the +- * specific entry. ++/* reg_mcqi_cap_log_mcda_word_size ++ * Log 2 of the access word size in bytes. Read and write access must be aligned ++ * to the word size. Write access must be done for an integer number of words. + * Access: RO + */ +-MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_a, MLXSW_REG_RAUHTD_BASE_LEN, 16, 1, +- MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x00, false); ++MLXSW_ITEM32(reg, mcqi, cap_log_mcda_word_size, 0x24, 28, 4); + +-/* reg_rauhtd_ipv4_ent_rif +- * Router interface. ++/* reg_mcqi_cap_mcda_max_write_size ++ * Maximal write size for MCDA register + * Access: RO + */ +-MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_rif, MLXSW_REG_RAUHTD_BASE_LEN, 0, +- 16, MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x00, false); ++MLXSW_ITEM32(reg, mcqi, cap_mcda_max_write_size, 0x24, 0, 16); + +-/* reg_rauhtd_ipv4_ent_dip +- * Destination IPv4 address. +- * Access: RO +- */ +-MLXSW_ITEM32_INDEXED(reg, rauhtd, ipv4_ent_dip, MLXSW_REG_RAUHTD_BASE_LEN, 0, +- 32, MLXSW_REG_RAUHTD_IPV4_ENT_LEN, 0x04, false); ++static inline void mlxsw_reg_mcqi_pack(char *payload, u16 component_index) ++{ ++ MLXSW_REG_ZERO(mcqi, payload); ++ mlxsw_reg_mcqi_component_index_set(payload, component_index); ++ mlxsw_reg_mcqi_info_type_set(payload, ++ MLXSW_REG_MCQI_INFO_TYPE_CAPABILITIES); ++ mlxsw_reg_mcqi_offset_set(payload, 0); ++ mlxsw_reg_mcqi_data_size_set(payload, MLXSW_REG_MCQI_CAP_LEN); ++} + +-static inline void mlxsw_reg_rauhtd_ent_ipv4_unpack(char *payload, +- int ent_index, u16 *p_rif, +- u32 *p_dip) ++static inline void mlxsw_reg_mcqi_unpack(char *payload, ++ u32 *p_cap_max_component_size, ++ u8 *p_cap_log_mcda_word_size, ++ u16 *p_cap_mcda_max_write_size) + { +- *p_rif = mlxsw_reg_rauhtd_ipv4_ent_rif_get(payload, ent_index); +- *p_dip = mlxsw_reg_rauhtd_ipv4_ent_dip_get(payload, ent_index); ++ *p_cap_max_component_size = ++ mlxsw_reg_mcqi_cap_max_component_size_get(payload); ++ *p_cap_log_mcda_word_size = ++ 
mlxsw_reg_mcqi_cap_log_mcda_word_size_get(payload); ++ *p_cap_mcda_max_write_size = ++ mlxsw_reg_mcqi_cap_mcda_max_write_size_get(payload); + } + +-/* MFCR - Management Fan Control Register +- * -------------------------------------- +- * This register controls the settings of the Fan Speed PWM mechanism. ++/* MCC - Management Component Control ++ * ---------------------------------- ++ * Controls the firmware component and updates the FSM. + */ +-#define MLXSW_REG_MFCR_ID 0x9001 +-#define MLXSW_REG_MFCR_LEN 0x08 ++#define MLXSW_REG_MCC_ID 0x9062 ++#define MLXSW_REG_MCC_LEN 0x1C + +-static const struct mlxsw_reg_info mlxsw_reg_mfcr = { +- .id = MLXSW_REG_MFCR_ID, +- .len = MLXSW_REG_MFCR_LEN, +-}; ++MLXSW_REG_DEFINE(mcc, MLXSW_REG_MCC_ID, MLXSW_REG_MCC_LEN); + +-enum mlxsw_reg_mfcr_pwm_frequency { +- MLXSW_REG_MFCR_PWM_FEQ_11HZ = 0x00, +- MLXSW_REG_MFCR_PWM_FEQ_14_7HZ = 0x01, +- MLXSW_REG_MFCR_PWM_FEQ_22_1HZ = 0x02, +- MLXSW_REG_MFCR_PWM_FEQ_1_4KHZ = 0x40, +- MLXSW_REG_MFCR_PWM_FEQ_5KHZ = 0x41, +- MLXSW_REG_MFCR_PWM_FEQ_20KHZ = 0x42, +- MLXSW_REG_MFCR_PWM_FEQ_22_5KHZ = 0x43, +- MLXSW_REG_MFCR_PWM_FEQ_25KHZ = 0x44, ++enum mlxsw_reg_mcc_instruction { ++ MLXSW_REG_MCC_INSTRUCTION_LOCK_UPDATE_HANDLE = 0x01, ++ MLXSW_REG_MCC_INSTRUCTION_RELEASE_UPDATE_HANDLE = 0x02, ++ MLXSW_REG_MCC_INSTRUCTION_UPDATE_COMPONENT = 0x03, ++ MLXSW_REG_MCC_INSTRUCTION_VERIFY_COMPONENT = 0x04, ++ MLXSW_REG_MCC_INSTRUCTION_ACTIVATE = 0x06, ++ MLXSW_REG_MCC_INSTRUCTION_CANCEL = 0x08, + }; + +-/* reg_mfcr_pwm_frequency +- * Controls the frequency of the PWM signal. ++/* reg_mcc_instruction ++ * Command to be executed by the FSM. ++ * Applicable for write operation only. + * Access: RW + */ +-MLXSW_ITEM32(reg, mfcr, pwm_frequency, 0x00, 0, 6); ++MLXSW_ITEM32(reg, mcc, instruction, 0x00, 0, 8); + +-#define MLXSW_MFCR_TACHOS_MAX 12 ++/* reg_mcc_component_index ++ * Index of the accessed component. Applicable only for commands that ++ * refer to components. Otherwise, this field is reserved. 
++ * Access: Index ++ */ ++MLXSW_ITEM32(reg, mcc, component_index, 0x04, 0, 16); + +-/* reg_mfcr_tacho_active +- * Indicates which of the tachometer is active (bit per tachometer). +- * Access: RO ++/* reg_mcc_update_handle ++ * Token representing the current flow executed by the FSM. ++ * Access: WO + */ +-MLXSW_ITEM32(reg, mfcr, tacho_active, 0x04, 16, MLXSW_MFCR_TACHOS_MAX); ++MLXSW_ITEM32(reg, mcc, update_handle, 0x08, 0, 24); + +-#define MLXSW_MFCR_PWMS_MAX 5 ++/* reg_mcc_error_code ++ * Indicates the successful completion of the instruction, or the reason it ++ * failed. ++ * Access: RO ++ */ ++MLXSW_ITEM32(reg, mcc, error_code, 0x0C, 8, 8); + +-/* reg_mfcr_pwm_active +- * Indicates which of the PWM control is active (bit per PWM). ++/* reg_mcc_control_state ++ * Current FSM state. + * Access: RO + */ +-MLXSW_ITEM32(reg, mfcr, pwm_active, 0x04, 0, MLXSW_MFCR_PWMS_MAX); ++MLXSW_ITEM32(reg, mcc, control_state, 0x0C, 0, 4); + +-static inline void +-mlxsw_reg_mfcr_pack(char *payload, +- enum mlxsw_reg_mfcr_pwm_frequency pwm_frequency) ++/* reg_mcc_component_size ++ * Component size in bytes. Valid for UPDATE_COMPONENT instruction. Specifying ++ * the size may shorten the update time. Value 0x0 means that size is ++ * unspecified. 
++ * Access: WO ++ */ ++MLXSW_ITEM32(reg, mcc, component_size, 0x10, 0, 32); ++ ++static inline void mlxsw_reg_mcc_pack(char *payload, ++ enum mlxsw_reg_mcc_instruction instr, ++ u16 component_index, u32 update_handle, ++ u32 component_size) + { +- MLXSW_REG_ZERO(mfcr, payload); +- mlxsw_reg_mfcr_pwm_frequency_set(payload, pwm_frequency); ++ MLXSW_REG_ZERO(mcc, payload); ++ mlxsw_reg_mcc_instruction_set(payload, instr); ++ mlxsw_reg_mcc_component_index_set(payload, component_index); ++ mlxsw_reg_mcc_update_handle_set(payload, update_handle); ++ mlxsw_reg_mcc_component_size_set(payload, component_size); + } + +-static inline void +-mlxsw_reg_mfcr_unpack(char *payload, +- enum mlxsw_reg_mfcr_pwm_frequency *p_pwm_frequency, +- u16 *p_tacho_active, u8 *p_pwm_active) ++static inline void mlxsw_reg_mcc_unpack(char *payload, u32 *p_update_handle, ++ u8 *p_error_code, u8 *p_control_state) + { +- *p_pwm_frequency = mlxsw_reg_mfcr_pwm_frequency_get(payload); +- *p_tacho_active = mlxsw_reg_mfcr_tacho_active_get(payload); +- *p_pwm_active = mlxsw_reg_mfcr_pwm_active_get(payload); ++ if (p_update_handle) ++ *p_update_handle = mlxsw_reg_mcc_update_handle_get(payload); ++ if (p_error_code) ++ *p_error_code = mlxsw_reg_mcc_error_code_get(payload); ++ if (p_control_state) ++ *p_control_state = mlxsw_reg_mcc_control_state_get(payload); + } + +-/* MFSC - Management Fan Speed Control Register +- * -------------------------------------------- +- * This register controls the settings of the Fan Speed PWM mechanism. ++/* MCDA - Management Component Data Access ++ * --------------------------------------- ++ * This register allows reading and writing a firmware component. 
+ */ +-#define MLXSW_REG_MFSC_ID 0x9002 +-#define MLXSW_REG_MFSC_LEN 0x08 ++#define MLXSW_REG_MCDA_ID 0x9063 ++#define MLXSW_REG_MCDA_BASE_LEN 0x10 ++#define MLXSW_REG_MCDA_MAX_DATA_LEN 0x80 ++#define MLXSW_REG_MCDA_LEN \ ++ (MLXSW_REG_MCDA_BASE_LEN + MLXSW_REG_MCDA_MAX_DATA_LEN) + +-static const struct mlxsw_reg_info mlxsw_reg_mfsc = { +- .id = MLXSW_REG_MFSC_ID, +- .len = MLXSW_REG_MFSC_LEN, +-}; +- +-/* reg_mfsc_pwm +- * Fan pwm to control / monitor. +- * Access: Index +- */ +-MLXSW_ITEM32(reg, mfsc, pwm, 0x00, 24, 3); ++MLXSW_REG_DEFINE(mcda, MLXSW_REG_MCDA_ID, MLXSW_REG_MCDA_LEN); + +-/* reg_mfsc_pwm_duty_cycle +- * Controls the duty cycle of the PWM. Value range from 0..255 to +- * represent duty cycle of 0%...100%. ++/* reg_mcda_update_handle ++ * Token representing the current flow executed by the FSM. + * Access: RW + */ +-MLXSW_ITEM32(reg, mfsc, pwm_duty_cycle, 0x04, 0, 8); +- +-static inline void mlxsw_reg_mfsc_pack(char *payload, u8 pwm, +- u8 pwm_duty_cycle) +-{ +- MLXSW_REG_ZERO(mfsc, payload); +- mlxsw_reg_mfsc_pwm_set(payload, pwm); +- mlxsw_reg_mfsc_pwm_duty_cycle_set(payload, pwm_duty_cycle); +-} ++MLXSW_ITEM32(reg, mcda, update_handle, 0x00, 0, 24); + +-/* MFSM - Management Fan Speed Measurement +- * --------------------------------------- +- * This register controls the settings of the Tacho measurements and +- * enables reading the Tachometer measurements. ++/* reg_mcda_offset ++ * Offset of accessed address relative to component start. Accesses must be in ++ * accordance to log_mcda_word_size in MCQI reg. ++ * Access: RW + */ +-#define MLXSW_REG_MFSM_ID 0x9003 +-#define MLXSW_REG_MFSM_LEN 0x08 +- +-static const struct mlxsw_reg_info mlxsw_reg_mfsm = { +- .id = MLXSW_REG_MFSM_ID, +- .len = MLXSW_REG_MFSM_LEN, +-}; ++MLXSW_ITEM32(reg, mcda, offset, 0x04, 0, 32); + +-/* reg_mfsm_tacho +- * Fan tachometer index. +- * Access: Index ++/* reg_mcda_size ++ * Size of the data accessed, given in bytes. 
++ * Access: RW + */ +-MLXSW_ITEM32(reg, mfsm, tacho, 0x00, 24, 4); ++MLXSW_ITEM32(reg, mcda, size, 0x08, 0, 16); + +-/* reg_mfsm_rpm +- * Fan speed (round per minute). +- * Access: RO ++/* reg_mcda_data ++ * Data block accessed. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, mfsm, rpm, 0x04, 0, 16); ++MLXSW_ITEM32_INDEXED(reg, mcda, data, 0x10, 0, 32, 4, 0, false); + +-static inline void mlxsw_reg_mfsm_pack(char *payload, u8 tacho) ++static inline void mlxsw_reg_mcda_pack(char *payload, u32 update_handle, ++ u32 offset, u16 size, u8 *data) + { +- MLXSW_REG_ZERO(mfsm, payload); +- mlxsw_reg_mfsm_tacho_set(payload, tacho); ++ int i; ++ ++ MLXSW_REG_ZERO(mcda, payload); ++ mlxsw_reg_mcda_update_handle_set(payload, update_handle); ++ mlxsw_reg_mcda_offset_set(payload, offset); ++ mlxsw_reg_mcda_size_set(payload, size); ++ ++ for (i = 0; i < size / 4; i++) ++ mlxsw_reg_mcda_data_set(payload, i, *(u32 *) &data[i * 4]); + } + +-/* MFSL - Management Fan Speed Limit Register +- * ------------------------------------------ +- * The Fan Speed Limit register is used to configure the fan speed +- * event / interrupt notification mechanism. Fan speed threshold are +- * defined for both under-speed and over-speed. ++/* MPSC - Monitoring Packet Sampling Configuration Register ++ * -------------------------------------------------------- ++ * MPSC Register is used to configure the Packet Sampling mechanism. + */ +-#define MLXSW_REG_MFSL_ID 0x9004 +-#define MLXSW_REG_MFSL_LEN 0x0C ++#define MLXSW_REG_MPSC_ID 0x9080 ++#define MLXSW_REG_MPSC_LEN 0x1C + +-MLXSW_REG_DEFINE(mfsl, MLXSW_REG_MFSL_ID, MLXSW_REG_MFSL_LEN); ++MLXSW_REG_DEFINE(mpsc, MLXSW_REG_MPSC_ID, MLXSW_REG_MPSC_LEN); + +-/* reg_mfsl_tacho +- * Fan tachometer index. 
++/* reg_mpsc_local_port ++ * Local port number ++ * Not supported for CPU port + * Access: Index + */ +-MLXSW_ITEM32(reg, mfsl, tacho, 0x00, 24, 4); ++MLXSW_ITEM32(reg, mpsc, local_port, 0x00, 16, 8); + +-/* reg_mfsl_tach_min +- * Tachometer minimum value (minimum RPM). ++/* reg_mpsc_e ++ * Enable sampling on port local_port + * Access: RW + */ +-MLXSW_ITEM32(reg, mfsl, tach_min, 0x04, 0, 16); ++MLXSW_ITEM32(reg, mpsc, e, 0x04, 30, 1); + +-/* reg_mfsl_tach_max +- * Tachometer maximum value (maximum RPM). ++#define MLXSW_REG_MPSC_RATE_MAX 3500000000UL ++ ++/* reg_mpsc_rate ++ * Sampling rate = 1 out of rate packets (with randomization around ++ * the point). Valid values are: 1 to MLXSW_REG_MPSC_RATE_MAX + * Access: RW + */ +-MLXSW_ITEM32(reg, mfsl, tach_max, 0x08, 0, 16); +- +-static inline void mlxsw_reg_mfsl_pack(char *payload, u8 tacho, +- u16 tach_min, u16 tach_max) +-{ +- MLXSW_REG_ZERO(mfsl, payload); +- mlxsw_reg_mfsl_tacho_set(payload, tacho); +- mlxsw_reg_mfsl_tach_min_set(payload, tach_min); +- mlxsw_reg_mfsl_tach_max_set(payload, tach_max); +-} ++MLXSW_ITEM32(reg, mpsc, rate, 0x08, 0, 32); + +-static inline void mlxsw_reg_mfsl_unpack(char *payload, u8 tacho, +- u16 *p_tach_min, u16 *p_tach_max) ++static inline void mlxsw_reg_mpsc_pack(char *payload, u8 local_port, bool e, ++ u32 rate) + { +- if (p_tach_min) +- *p_tach_min = mlxsw_reg_mfsl_tach_min_get(payload); +- +- if (p_tach_max) +- *p_tach_max = mlxsw_reg_mfsl_tach_max_get(payload); ++ MLXSW_REG_ZERO(mpsc, payload); ++ mlxsw_reg_mpsc_local_port_set(payload, local_port); ++ mlxsw_reg_mpsc_e_set(payload, e); ++ mlxsw_reg_mpsc_rate_set(payload, rate); + } + +-/* FORE - Fan Out of Range Event Register +- * -------------------------------------- +- * This register reports the status of the controlled fans compared to the +- * range defined by the MFSL register. ++/* MGPC - Monitoring General Purpose Counter Set Register ++ * The MGPC register retrieves and sets the General Purpose Counter Set. 
+ */ +-#define MLXSW_REG_FORE_ID 0x9007 +-#define MLXSW_REG_FORE_LEN 0x0C ++#define MLXSW_REG_MGPC_ID 0x9081 ++#define MLXSW_REG_MGPC_LEN 0x18 + +-MLXSW_REG_DEFINE(fore, MLXSW_REG_FORE_ID, MLXSW_REG_FORE_LEN); ++MLXSW_REG_DEFINE(mgpc, MLXSW_REG_MGPC_ID, MLXSW_REG_MGPC_LEN); + +-/* fan_under_limit +- * Fan speed is below the low limit defined in MFSL register. Each bit relates +- * to a single tachometer and indicates the specific tachometer reading is +- * below the threshold. +- * Access: RO ++/* reg_mgpc_counter_set_type ++ * Counter set type. ++ * Access: OP + */ +-MLXSW_ITEM32(reg, fore, fan_under_limit, 0x00, 16, 10); +- +-static inline void mlxsw_reg_fore_unpack(char *payload, u8 tacho, +- bool *fan_under_limit) +-{ +- u16 limit; +- +- if (fan_under_limit) { +- limit = mlxsw_reg_fore_fan_under_limit_get(payload); +- *fan_under_limit = !!(limit & BIT(tacho)); +- } +-} ++MLXSW_ITEM32(reg, mgpc, counter_set_type, 0x00, 24, 8); + +-/* MTCAP - Management Temperature Capabilities +- * ------------------------------------------- +- * This register exposes the capabilities of the device and +- * system temperature sensing. ++/* reg_mgpc_counter_index ++ * Counter index. ++ * Access: Index + */ +-#define MLXSW_REG_MTCAP_ID 0x9009 +-#define MLXSW_REG_MTCAP_LEN 0x08 ++MLXSW_ITEM32(reg, mgpc, counter_index, 0x00, 0, 24); + +-static const struct mlxsw_reg_info mlxsw_reg_mtcap = { +- .id = MLXSW_REG_MTCAP_ID, +- .len = MLXSW_REG_MTCAP_LEN, ++enum mlxsw_reg_mgpc_opcode { ++ /* Nop */ ++ MLXSW_REG_MGPC_OPCODE_NOP = 0x00, ++ /* Clear counters */ ++ MLXSW_REG_MGPC_OPCODE_CLEAR = 0x08, + }; + +-/* reg_mtcap_sensor_count +- * Number of sensors supported by the device. +- * This includes the QSFP module sensors (if exists in the QSFP module). +- * Access: RO ++/* reg_mgpc_opcode ++ * Opcode. 
++ * Access: OP + */ +-MLXSW_ITEM32(reg, mtcap, sensor_count, 0x00, 0, 7); ++MLXSW_ITEM32(reg, mgpc, opcode, 0x04, 28, 4); + +-/* MTMP - Management Temperature +- * ----------------------------- +- * This register controls the settings of the temperature measurements +- * and enables reading the temperature measurements. Note that temperature +- * is in 0.125 degrees Celsius. ++/* reg_mgpc_byte_counter ++ * Byte counter value. ++ * Access: RW + */ +-#define MLXSW_REG_MTMP_ID 0x900A +-#define MLXSW_REG_MTMP_LEN 0x20 ++MLXSW_ITEM64(reg, mgpc, byte_counter, 0x08, 0, 64); + +-static const struct mlxsw_reg_info mlxsw_reg_mtmp = { +- .id = MLXSW_REG_MTMP_ID, +- .len = MLXSW_REG_MTMP_LEN, +-}; ++/* reg_mgpc_packet_counter ++ * Packet counter value. ++ * Access: RW ++ */ ++MLXSW_ITEM64(reg, mgpc, packet_counter, 0x10, 0, 64); + +-/* reg_mtmp_sensor_index +- * Sensors index to access. +- * 64-127 of sensor_index are mapped to the SFP+/QSFP modules sequentially +- * (module 0 is mapped to sensor_index 64). +- * Access: Index ++static inline void mlxsw_reg_mgpc_pack(char *payload, u32 counter_index, ++ enum mlxsw_reg_mgpc_opcode opcode, ++ enum mlxsw_reg_flow_counter_set_type set_type) ++{ ++ MLXSW_REG_ZERO(mgpc, payload); ++ mlxsw_reg_mgpc_counter_index_set(payload, counter_index); ++ mlxsw_reg_mgpc_counter_set_type_set(payload, set_type); ++ mlxsw_reg_mgpc_opcode_set(payload, opcode); ++} ++ ++/* MPRS - Monitoring Parsing State Register ++ * ---------------------------------------- ++ * The MPRS register is used for setting up the parsing for hash, ++ * policy-engine and routing. + */ +-MLXSW_ITEM32(reg, mtmp, sensor_index, 0x00, 0, 7); ++#define MLXSW_REG_MPRS_ID 0x9083 ++#define MLXSW_REG_MPRS_LEN 0x14 + +-/* Convert to milli degrees Celsius */ +-#define MLXSW_REG_MTMP_TEMP_TO_MC(val) (val * 125) ++MLXSW_REG_DEFINE(mprs, MLXSW_REG_MPRS_ID, MLXSW_REG_MPRS_LEN); + +-/* reg_mtmp_temperature +- * Temperature reading from the sensor. 
Reading is in 0.125 Celsius +- * degrees units. +- * Access: RO ++/* reg_mprs_parsing_depth ++ * Minimum parsing depth. ++ * Need to enlarge parsing depth according to L3, MPLS, tunnels, ACL ++ * rules, traps, hash, etc. Default is 96 bytes. Reserved when SwitchX-2. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, temperature, 0x04, 0, 16); ++MLXSW_ITEM32(reg, mprs, parsing_depth, 0x00, 0, 16); + +-/* reg_mtmp_mte +- * Max Temperature Enable - enables measuring the max temperature on a sensor. ++/* reg_mprs_parsing_en ++ * Parsing enable. ++ * Bit 0 - Enable parsing of NVE of types VxLAN, VxLAN-GPE, GENEVE and ++ * NVGRE. Default is enabled. Reserved when SwitchX-2. + * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, mte, 0x08, 31, 1); ++MLXSW_ITEM32(reg, mprs, parsing_en, 0x04, 0, 16); + +-/* reg_mtmp_mtr +- * Max Temperature Reset - clears the value of the max temperature register. +- * Access: WO ++/* reg_mprs_vxlan_udp_dport ++ * VxLAN UDP destination port. ++ * Used for identifying VxLAN packets and for dport field in ++ * encapsulation. Default is 4789. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, mtr, 0x08, 30, 1); ++MLXSW_ITEM32(reg, mprs, vxlan_udp_dport, 0x10, 0, 16); + +-/* reg_mtmp_max_temperature +- * The highest measured temperature from the sensor. +- * When the bit mte is cleared, the field max_temperature is reserved. +- * Access: RO ++static inline void mlxsw_reg_mprs_pack(char *payload, u16 parsing_depth, ++ u16 vxlan_udp_dport) ++{ ++ MLXSW_REG_ZERO(mprs, payload); ++ mlxsw_reg_mprs_parsing_depth_set(payload, parsing_depth); ++ mlxsw_reg_mprs_parsing_en_set(payload, true); ++ mlxsw_reg_mprs_vxlan_udp_dport_set(payload, vxlan_udp_dport); ++} ++ ++/* TNGCR - Tunneling NVE General Configuration Register ++ * ---------------------------------------------------- ++ * The TNGCR register is used for setting up the NVE Tunneling configuration. 
+ */ +-MLXSW_ITEM32(reg, mtmp, max_temperature, 0x08, 0, 16); ++#define MLXSW_REG_TNGCR_ID 0xA001 ++#define MLXSW_REG_TNGCR_LEN 0x44 + +-/* reg_mtmp_tee +- * Temperature Event Enable. +- * 0 - Do not generate event +- * 1 - Generate event +- * 2 - Generate single event ++MLXSW_REG_DEFINE(tngcr, MLXSW_REG_TNGCR_ID, MLXSW_REG_TNGCR_LEN); ++ ++enum mlxsw_reg_tngcr_type { ++ MLXSW_REG_TNGCR_TYPE_VXLAN, ++ MLXSW_REG_TNGCR_TYPE_VXLAN_GPE, ++ MLXSW_REG_TNGCR_TYPE_GENEVE, ++ MLXSW_REG_TNGCR_TYPE_NVGRE, ++}; ++ ++/* reg_tngcr_type ++ * Tunnel type for encapsulation and decapsulation. The types are mutually ++ * exclusive. ++ * Note: For Spectrum the NVE parsing must be enabled in MPRS. + * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, tee, 0x0C, 30, 2); ++MLXSW_ITEM32(reg, tngcr, type, 0x00, 0, 4); + +-#define MLXSW_REG_MTMP_THRESH_HI 0x348 /* 105 Celsius */ ++/* reg_tngcr_nve_valid ++ * The VTEP is valid. Allows adding FDB entries for tunnel encapsulation. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, nve_valid, 0x04, 31, 1); + +-/* reg_mtmp_temperature_threshold_hi +- * High threshold for Temperature Warning Event. In 0.125 Celsius. ++/* reg_tngcr_nve_ttl_uc ++ * The TTL for NVE tunnel encapsulation underlay unicast packets. + * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, temperature_threshold_hi, 0x0C, 0, 16); ++MLXSW_ITEM32(reg, tngcr, nve_ttl_uc, 0x04, 0, 8); + +-/* reg_mtmp_temperature_threshold_lo +- * Low threshold for Temperature Warning Event. In 0.125 Celsius. ++/* reg_tngcr_nve_ttl_mc ++ * The TTL for NVE tunnel encapsulation underlay multicast packets. + * Access: RW + */ +-MLXSW_ITEM32(reg, mtmp, temperature_threshold_lo, 0x10, 0, 16); ++MLXSW_ITEM32(reg, tngcr, nve_ttl_mc, 0x08, 0, 8); + +-#define MLXSW_REG_MTMP_SENSOR_NAME_SIZE 8 ++enum { ++ /* Do not copy flow label. Calculate flow label using nve_flh. */ ++ MLXSW_REG_TNGCR_FL_NO_COPY, ++ /* Copy flow label from inner packet if packet is IPv6 and ++ * encapsulation is by IPv6. 
Otherwise, calculate flow label using ++ * nve_flh. ++ */ ++ MLXSW_REG_TNGCR_FL_COPY, ++}; + +-/* reg_mtmp_sensor_name +- * Sensor Name +- * Access: RO ++/* reg_tngcr_nve_flc ++ * For NVE tunnel encapsulation: Flow label copy from inner packet. ++ * Access: RW + */ +-MLXSW_ITEM_BUF(reg, mtmp, sensor_name, 0x18, MLXSW_REG_MTMP_SENSOR_NAME_SIZE); ++MLXSW_ITEM32(reg, tngcr, nve_flc, 0x0C, 25, 1); + +-static inline void mlxsw_reg_mtmp_pack(char *payload, u8 sensor_index, +- bool max_temp_enable, +- bool max_temp_reset) +-{ +- MLXSW_REG_ZERO(mtmp, payload); +- mlxsw_reg_mtmp_sensor_index_set(payload, sensor_index); +- mlxsw_reg_mtmp_mte_set(payload, max_temp_enable); +- mlxsw_reg_mtmp_mtr_set(payload, max_temp_reset); +- mlxsw_reg_mtmp_temperature_threshold_hi_set(payload, +- MLXSW_REG_MTMP_THRESH_HI); +-} ++enum { ++ /* Flow label is static. In Spectrum this means '0'. Spectrum-2 ++ * uses {nve_fl_prefix, nve_fl_suffix}. ++ */ ++ MLXSW_REG_TNGCR_FL_NO_HASH, ++ /* 8 LSBs of the flow label are calculated from ECMP hash of the ++ * inner packet. 12 MSBs are configured by nve_fl_prefix. ++ */ ++ MLXSW_REG_TNGCR_FL_HASH, ++}; + +-static inline void mlxsw_reg_mtmp_unpack(char *payload, unsigned int *p_temp, +- unsigned int *p_max_temp, +- char *sensor_name) +-{ +- u16 temp; ++/* reg_tngcr_nve_flh ++ * NVE flow label hash. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, nve_flh, 0x0C, 24, 1); + +- if (p_temp) { +- temp = mlxsw_reg_mtmp_temperature_get(payload); +- *p_temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp); +- } +- if (p_max_temp) { +- temp = mlxsw_reg_mtmp_max_temperature_get(payload); +- *p_max_temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp); +- } +- if (sensor_name) +- mlxsw_reg_mtmp_sensor_name_memcpy_from(payload, sensor_name); +-} ++/* reg_tngcr_nve_fl_prefix ++ * NVE flow label prefix. Constant 12 MSBs of the flow label. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, nve_fl_prefix, 0x0C, 8, 12); + +-/* MTBR - Management Temperature Bulk Register +- * ------------------------------------------- +- * This register is used for bulk temperature reading. ++/* reg_tngcr_nve_fl_suffix ++ * NVE flow label suffix. Constant 8 LSBs of the flow label. ++ * Reserved when nve_flh=1 and for Spectrum. ++ * Access: RW + */ +-#define MLXSW_REG_MTBR_ID 0x900F +-#define MLXSW_REG_MTBR_BASE_LEN 0x10 /* base length, without records */ +-#define MLXSW_REG_MTBR_REC_LEN 0x04 /* record length */ +-#define MLXSW_REG_MTBR_REC_MAX_COUNT 47 /* firmware limitation */ +-#define MLXSW_REG_MTBR_LEN (MLXSW_REG_MTBR_BASE_LEN + \ +- MLXSW_REG_MTBR_REC_LEN * \ +- MLXSW_REG_MTBR_REC_MAX_COUNT) ++MLXSW_ITEM32(reg, tngcr, nve_fl_suffix, 0x0C, 0, 8); + +-MLXSW_REG_DEFINE(mtbr, MLXSW_REG_MTBR_ID, MLXSW_REG_MTBR_LEN); ++enum { ++ /* Source UDP port is fixed (default '0') */ ++ MLXSW_REG_TNGCR_UDP_SPORT_NO_HASH, ++ /* Source UDP port is calculated based on hash */ ++ MLXSW_REG_TNGCR_UDP_SPORT_HASH, ++}; + +-/* reg_mtbr_base_sensor_index +- * Base sensors index to access (0 - ASIC sensor, 1-63 - ambient sensors, +- * 64-127 are mapped to the SFP+/QSFP modules sequentially). +- * Access: Index ++/* reg_tngcr_nve_udp_sport_type ++ * NVE UDP source port type. ++ * Spectrum uses LAG hash (SLCRv2). Spectrum-2 uses ECMP hash (RECRv2). ++ * When the source UDP port is calculated based on hash, then the 8 LSBs ++ * are calculated from hash and the 8 MSBs are configured by ++ * nve_udp_sport_prefix. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, mtbr, base_sensor_index, 0x00, 0, 7); ++MLXSW_ITEM32(reg, tngcr, nve_udp_sport_type, 0x10, 24, 1); + +-/* reg_mtbr_num_rec +- * Request: Number of records to read +- * Response: Number of records read +- * See above description for more details. +- * Range 1..255 ++/* reg_tngcr_nve_udp_sport_prefix ++ * NVE UDP source port prefix. Constant 8 MSBs of the UDP source port. 
++ * Reserved when NVE type is NVGRE. + * Access: RW + */ +-MLXSW_ITEM32(reg, mtbr, num_rec, 0x04, 0, 8); ++MLXSW_ITEM32(reg, tngcr, nve_udp_sport_prefix, 0x10, 8, 8); + +-/* reg_mtbr_rec_max_temp +- * The highest measured temperature from the sensor. +- * When the bit mte is cleared, the field max_temperature is reserved. +- * Access: RO ++/* reg_tngcr_nve_group_size_mc ++ * The amount of sequential linked lists of MC entries. The first linked ++ * list is configured by SFD.underlay_mc_ptr. ++ * Valid values: 1, 2, 4, 8, 16, 32, 64 ++ * The linked lists are configured by TNUMT. ++ * The hash is set by LAG hash. ++ * Access: RW + */ +-MLXSW_ITEM32_INDEXED(reg, mtbr, rec_max_temp, MLXSW_REG_MTBR_BASE_LEN, 16, +- 16, MLXSW_REG_MTBR_REC_LEN, 0x00, false); ++MLXSW_ITEM32(reg, tngcr, nve_group_size_mc, 0x18, 0, 8); + +-/* reg_mtbr_rec_temp +- * Temperature reading from the sensor. Reading is in 0..125 Celsius +- * degrees units. +- * Access: RO ++/* reg_tngcr_nve_group_size_flood ++ * The amount of sequential linked lists of flooding entries. The first ++ * linked list is configured by SFMR.nve_tunnel_flood_ptr. ++ * Valid values: 1, 2, 4, 8, 16, 32, 64 ++ * The linked lists are configured by TNUMT. ++ * The hash is set by LAG hash. ++ * Access: RW + */ +-MLXSW_ITEM32_INDEXED(reg, mtbr, rec_temp, MLXSW_REG_MTBR_BASE_LEN, 0, 16, +- MLXSW_REG_MTBR_REC_LEN, 0x00, false); ++MLXSW_ITEM32(reg, tngcr, nve_group_size_flood, 0x1C, 0, 8); + +-static inline void mlxsw_reg_mtbr_pack(char *payload, u8 base_sensor_index, +- u8 num_rec) +-{ +- MLXSW_REG_ZERO(mtbr, payload); +- mlxsw_reg_mtbr_base_sensor_index_set(payload, base_sensor_index); +- mlxsw_reg_mtbr_num_rec_set(payload, num_rec); +-} ++/* reg_tngcr_learn_enable ++ * During decapsulation, whether to learn from NVE port. ++ * Reserved when Spectrum-2. See TNPC. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, learn_enable, 0x20, 31, 1); + +-/* Error codes from temperatute reading */ +-enum mlxsw_reg_mtbr_temp_status { +- MLXSW_REG_MTBR_NO_CONN = 0x8000, +- MLXSW_REG_MTBR_NO_TEMP_SENS = 0x8001, +- MLXSW_REG_MTBR_INDEX_NA = 0x8002, +- MLXSW_REG_MTBR_BAD_SENS_INFO = 0x8003, +-}; ++/* reg_tngcr_underlay_virtual_router ++ * Underlay virtual router. ++ * Reserved when Spectrum-2. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, underlay_virtual_router, 0x20, 0, 16); + +-/* Base index for reading modules temperature */ +-#define MLXSW_REG_MTBR_BASE_MODULE_INDEX 64 ++/* reg_tngcr_underlay_rif ++ * Underlay ingress router interface. RIF type should be loopback generic. ++ * Reserved when Spectrum. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, underlay_rif, 0x24, 0, 16); + +-static inline void mlxsw_reg_mtbr_temp_unpack(char *payload, int rec_ind, +- u16 *p_temp, u16 *p_max_temp) ++/* reg_tngcr_usipv4 ++ * Underlay source IPv4 address of the NVE. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tngcr, usipv4, 0x28, 0, 32); ++ ++/* reg_tngcr_usipv6 ++ * Underlay source IPv6 address of the NVE. For Spectrum, must not be ++ * modified under traffic of NVE tunneling encapsulation. 
++ * Access: RW ++ */ ++MLXSW_ITEM_BUF(reg, tngcr, usipv6, 0x30, 16); ++ ++static inline void mlxsw_reg_tngcr_pack(char *payload, ++ enum mlxsw_reg_tngcr_type type, ++ bool valid, u8 ttl) + { +- if (p_temp) +- *p_temp = mlxsw_reg_mtbr_rec_temp_get(payload, rec_ind); +- if (p_max_temp) +- *p_max_temp = mlxsw_reg_mtbr_rec_max_temp_get(payload, rec_ind); ++ MLXSW_REG_ZERO(tngcr, payload); ++ mlxsw_reg_tngcr_type_set(payload, type); ++ mlxsw_reg_tngcr_nve_valid_set(payload, valid); ++ mlxsw_reg_tngcr_nve_ttl_uc_set(payload, ttl); ++ mlxsw_reg_tngcr_nve_ttl_mc_set(payload, ttl); ++ mlxsw_reg_tngcr_nve_flc_set(payload, MLXSW_REG_TNGCR_FL_NO_COPY); ++ mlxsw_reg_tngcr_nve_flh_set(payload, 0); ++ mlxsw_reg_tngcr_nve_udp_sport_type_set(payload, ++ MLXSW_REG_TNGCR_UDP_SPORT_HASH); ++ mlxsw_reg_tngcr_nve_udp_sport_prefix_set(payload, 0); ++ mlxsw_reg_tngcr_nve_group_size_mc_set(payload, 1); ++ mlxsw_reg_tngcr_nve_group_size_flood_set(payload, 1); + } + +-/* MCIA - Management Cable Info Access +- * ----------------------------------- +- * MCIA register is used to access the SFP+ and QSFP connector's EPROM. ++/* TNUMT - Tunneling NVE Underlay Multicast Table Register ++ * ------------------------------------------------------- ++ * The TNUMT register is for building the underlay MC table. It is used ++ * for MC, flooding and BC traffic into the NVE tunnel. + */ ++#define MLXSW_REG_TNUMT_ID 0xA003 ++#define MLXSW_REG_TNUMT_LEN 0x20 + +-#define MLXSW_REG_MCIA_ID 0x9014 +-#define MLXSW_REG_MCIA_LEN 0x40 ++MLXSW_REG_DEFINE(tnumt, MLXSW_REG_TNUMT_ID, MLXSW_REG_TNUMT_LEN); + +-MLXSW_REG_DEFINE(mcia, MLXSW_REG_MCIA_ID, MLXSW_REG_MCIA_LEN); ++enum mlxsw_reg_tnumt_record_type { ++ MLXSW_REG_TNUMT_RECORD_TYPE_IPV4, ++ MLXSW_REG_TNUMT_RECORD_TYPE_IPV6, ++ MLXSW_REG_TNUMT_RECORD_TYPE_LABEL, ++}; + +-/* reg_mcia_l +- * Lock bit. Setting this bit will lock the access to the specific +- * cable. Used for updating a full page in a cable EPROM. 
Any access +- * other then subsequence writes will fail while the port is locked. ++/* reg_tnumt_record_type ++ * Record type. + * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, l, 0x00, 31, 1); ++MLXSW_ITEM32(reg, tnumt, record_type, 0x00, 28, 4); + +-/* reg_mcia_module +- * Module number. +- * Access: Index ++enum mlxsw_reg_tnumt_tunnel_port { ++ MLXSW_REG_TNUMT_TUNNEL_PORT_NVE, ++ MLXSW_REG_TNUMT_TUNNEL_PORT_VPLS, ++ MLXSW_REG_TNUMT_TUNNEL_FLEX_TUNNEL0, ++ MLXSW_REG_TNUMT_TUNNEL_FLEX_TUNNEL1, ++}; ++ ++/* reg_tnumt_tunnel_port ++ * Tunnel port. ++ * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, module, 0x00, 16, 8); ++MLXSW_ITEM32(reg, tnumt, tunnel_port, 0x00, 24, 4); + +-/* reg_mcia_status +- * Module status. +- * Access: RO ++/* reg_tnumt_underlay_mc_ptr ++ * Index to the underlay multicast table. ++ * For Spectrum the index is to the KVD linear. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, mcia, status, 0x00, 0, 8); ++MLXSW_ITEM32(reg, tnumt, underlay_mc_ptr, 0x00, 0, 24); + +-/* reg_mcia_i2c_device_address +- * I2C device address. ++/* reg_tnumt_vnext ++ * The next_underlay_mc_ptr is valid. + * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, i2c_device_address, 0x04, 24, 8); ++MLXSW_ITEM32(reg, tnumt, vnext, 0x04, 31, 1); + +-/* reg_mcia_page_number +- * Page number. ++/* reg_tnumt_next_underlay_mc_ptr ++ * The next index to the underlay multicast table. + * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, page_number, 0x04, 16, 8); ++MLXSW_ITEM32(reg, tnumt, next_underlay_mc_ptr, 0x04, 0, 24); + +-/* reg_mcia_device_address +- * Device address. ++/* reg_tnumt_record_size ++ * Number of IP addresses in the record. ++ * Range is 1..cap_max_nve_mc_entries_ipv{4,6} + * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, device_address, 0x04, 0, 16); ++MLXSW_ITEM32(reg, tnumt, record_size, 0x08, 0, 3); + +-/* reg_mcia_size +- * Number of bytes to read/write (up to 48 bytes). ++/* reg_tnumt_udip ++ * The underlay IPv4 addresses. 
udip[i] is reserved if i >= size + * Access: RW + */ +-MLXSW_ITEM32(reg, mcia, size, 0x08, 0, 16); ++MLXSW_ITEM32_INDEXED(reg, tnumt, udip, 0x0C, 0, 32, 0x04, 0x00, false); + +-#define MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH 256 +-#define MLXSW_REG_MCIA_EEPROM_SIZE 48 +-#define MLXSW_REG_MCIA_I2C_ADDR_LOW 0x50 +-#define MLXSW_REG_MCIA_I2C_ADDR_HIGH 0x51 +-#define MLXSW_REG_MCIA_PAGE0_LO_OFF 0xa0 +-#define MLXSW_REG_MCIA_TH_ITEM_SIZE 2 +-#define MLXSW_REG_MCIA_TH_PAGE_NUM 3 +-#define MLXSW_REG_MCIA_PAGE0_LO 0 +-#define MLXSW_REG_MCIA_TH_PAGE_OFF 0x80 ++/* reg_tnumt_udip_ptr ++ * The pointer to the underlay IPv6 addresses. udip_ptr[i] is reserved if ++ * i >= size. The IPv6 addresses are configured by RIPS. ++ * Access: RW ++ */ ++MLXSW_ITEM32_INDEXED(reg, tnumt, udip_ptr, 0x0C, 0, 24, 0x04, 0x00, false); + +-enum mlxsw_reg_mcia_eeprom_module_info_rev_id { +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_UNSPC = 0x00, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_8436 = 0x01, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID_8636 = 0x03, +-}; ++static inline void mlxsw_reg_tnumt_pack(char *payload, ++ enum mlxsw_reg_tnumt_record_type type, ++ enum mlxsw_reg_tnumt_tunnel_port tport, ++ u32 underlay_mc_ptr, bool vnext, ++ u32 next_underlay_mc_ptr, ++ u8 record_size) ++{ ++ MLXSW_REG_ZERO(tnumt, payload); ++ mlxsw_reg_tnumt_record_type_set(payload, type); ++ mlxsw_reg_tnumt_tunnel_port_set(payload, tport); ++ mlxsw_reg_tnumt_underlay_mc_ptr_set(payload, underlay_mc_ptr); ++ mlxsw_reg_tnumt_vnext_set(payload, vnext); ++ mlxsw_reg_tnumt_next_underlay_mc_ptr_set(payload, next_underlay_mc_ptr); ++ mlxsw_reg_tnumt_record_size_set(payload, record_size); ++} + +-enum mlxsw_reg_mcia_eeprom_module_info_id { +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_SFP = 0x03, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP = 0x0C, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_PLUS = 0x0D, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP28 = 0x11, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID_QSFP_DD = 0x18, +-}; ++/* TNQCR - 
Tunneling NVE QoS Configuration Register ++ * ------------------------------------------------ ++ * The TNQCR register configures how QoS is set in encapsulation into the ++ * underlay network. ++ */ ++#define MLXSW_REG_TNQCR_ID 0xA010 ++#define MLXSW_REG_TNQCR_LEN 0x0C + +-enum mlxsw_reg_mcia_eeprom_module_info { +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_ID, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_REV_ID, +- MLXSW_REG_MCIA_EEPROM_MODULE_INFO_SIZE, +-}; ++MLXSW_REG_DEFINE(tnqcr, MLXSW_REG_TNQCR_ID, MLXSW_REG_TNQCR_LEN); + +-/* reg_mcia_eeprom +- * Bytes to read/write. ++/* reg_tnqcr_enc_set_dscp ++ * For encapsulation: How to set DSCP field: ++ * 0 - Copy the DSCP from the overlay (inner) IP header to the underlay ++ * (outer) IP header. If there is no IP header, use TNQDR.dscp ++ * 1 - Set the DSCP field as TNQDR.dscp + * Access: RW + */ +-MLXSW_ITEM_BUF(reg, mcia, eeprom, 0x10, MLXSW_REG_MCIA_EEPROM_SIZE); ++MLXSW_ITEM32(reg, tnqcr, enc_set_dscp, 0x04, 28, 1); + +-static inline void mlxsw_reg_mcia_pack(char *payload, u8 module, u8 lock, +- u8 page_number, u16 device_addr, +- u8 size, u8 i2c_device_addr) ++static inline void mlxsw_reg_tnqcr_pack(char *payload) + { +- MLXSW_REG_ZERO(mcia, payload); +- mlxsw_reg_mcia_module_set(payload, module); +- mlxsw_reg_mcia_l_set(payload, lock); +- mlxsw_reg_mcia_page_number_set(payload, page_number); +- mlxsw_reg_mcia_device_address_set(payload, device_addr); +- mlxsw_reg_mcia_size_set(payload, size); +- mlxsw_reg_mcia_i2c_device_address_set(payload, i2c_device_addr); ++ MLXSW_REG_ZERO(tnqcr, payload); ++ mlxsw_reg_tnqcr_enc_set_dscp_set(payload, 0); + } + +-/* MPAT - Monitoring Port Analyzer Table +- * ------------------------------------- +- * MPAT Register is used to query and configure the Switch PortAnalyzer Table. +- * For an enabled analyzer, all fields except e (enable) cannot be modified. 
++/* TNQDR - Tunneling NVE QoS Default Register ++ * ------------------------------------------ ++ * The TNQDR register configures the default QoS settings for NVE ++ * encapsulation. + */ +-#define MLXSW_REG_MPAT_ID 0x901A +-#define MLXSW_REG_MPAT_LEN 0x78 ++#define MLXSW_REG_TNQDR_ID 0xA011 ++#define MLXSW_REG_TNQDR_LEN 0x08 + +-static const struct mlxsw_reg_info mlxsw_reg_mpat = { +- .id = MLXSW_REG_MPAT_ID, +- .len = MLXSW_REG_MPAT_LEN, +-}; ++MLXSW_REG_DEFINE(tnqdr, MLXSW_REG_TNQDR_ID, MLXSW_REG_TNQDR_LEN); + +-/* reg_mpat_pa_id +- * Port Analyzer ID. ++/* reg_tnqdr_local_port ++ * Local port number (receive port). CPU port is supported. + * Access: Index + */ +-MLXSW_ITEM32(reg, mpat, pa_id, 0x00, 28, 4); ++MLXSW_ITEM32(reg, tnqdr, local_port, 0x00, 16, 8); + +-/* reg_mpat_system_port +- * A unique port identifier for the final destination of the packet. ++/* reg_tnqdr_dscp ++ * For encapsulation, the default DSCP. + * Access: RW + */ +-MLXSW_ITEM32(reg, mpat, system_port, 0x00, 0, 16); ++MLXSW_ITEM32(reg, tnqdr, dscp, 0x04, 0, 6); + +-/* reg_mpat_e +- * Enable. Indicating the Port Analyzer is enabled. +- * Access: RW ++static inline void mlxsw_reg_tnqdr_pack(char *payload, u8 local_port) ++{ ++ MLXSW_REG_ZERO(tnqdr, payload); ++ mlxsw_reg_tnqdr_local_port_set(payload, local_port); ++ mlxsw_reg_tnqdr_dscp_set(payload, 0); ++} ++ ++/* TNEEM - Tunneling NVE Encapsulation ECN Mapping Register ++ * -------------------------------------------------------- ++ * The TNEEM register maps ECN of the IP header at the ingress to the ++ * encapsulation to the ECN of the underlay network. + */ +-MLXSW_ITEM32(reg, mpat, e, 0x04, 31, 1); ++#define MLXSW_REG_TNEEM_ID 0xA012 ++#define MLXSW_REG_TNEEM_LEN 0x0C + +-/* reg_mpat_qos +- * Quality Of Service Mode. +- * 0: CONFIGURED - QoS parameters (Switch Priority, and encapsulation +- * PCP, DEI, DSCP or VL) are configured. 
+- * 1: MAINTAIN - QoS parameters (Switch Priority, Color) are the +- * same as in the original packet that has triggered the mirroring. For +- * SPAN also the pcp,dei are maintained. +- * Access: RW ++MLXSW_REG_DEFINE(tneem, MLXSW_REG_TNEEM_ID, MLXSW_REG_TNEEM_LEN); ++ ++/* reg_tneem_overlay_ecn ++ * ECN of the IP header in the overlay network. ++ * Access: Index + */ +-MLXSW_ITEM32(reg, mpat, qos, 0x04, 26, 1); ++MLXSW_ITEM32(reg, tneem, overlay_ecn, 0x04, 24, 2); + +-/* reg_mpat_be +- * Best effort mode. Indicates mirroring traffic should not cause packet +- * drop or back pressure, but will discard the mirrored packets. Mirrored +- * packets will be forwarded on a best effort manner. +- * 0: Do not discard mirrored packets +- * 1: Discard mirrored packets if causing congestion ++/* reg_tneem_underlay_ecn ++ * ECN of the IP header in the underlay network. + * Access: RW + */ +-MLXSW_ITEM32(reg, mpat, be, 0x04, 25, 1); ++MLXSW_ITEM32(reg, tneem, underlay_ecn, 0x04, 16, 2); + +-static inline void mlxsw_reg_mpat_pack(char *payload, u8 pa_id, +- u16 system_port, bool e) ++static inline void mlxsw_reg_tneem_pack(char *payload, u8 overlay_ecn, ++ u8 underlay_ecn) + { +- MLXSW_REG_ZERO(mpat, payload); +- mlxsw_reg_mpat_pa_id_set(payload, pa_id); +- mlxsw_reg_mpat_system_port_set(payload, system_port); +- mlxsw_reg_mpat_e_set(payload, e); +- mlxsw_reg_mpat_qos_set(payload, 1); +- mlxsw_reg_mpat_be_set(payload, 1); ++ MLXSW_REG_ZERO(tneem, payload); ++ mlxsw_reg_tneem_overlay_ecn_set(payload, overlay_ecn); ++ mlxsw_reg_tneem_underlay_ecn_set(payload, underlay_ecn); + } + +-/* MPAR - Monitoring Port Analyzer Register +- * ---------------------------------------- +- * MPAR register is used to query and configure the port analyzer port mirroring +- * properties. ++/* TNDEM - Tunneling NVE Decapsulation ECN Mapping Register ++ * -------------------------------------------------------- ++ * The TNDEM register configures the actions that are done in the ++ * decapsulation. 
+ */ +-#define MLXSW_REG_MPAR_ID 0x901B +-#define MLXSW_REG_MPAR_LEN 0x08 ++#define MLXSW_REG_TNDEM_ID 0xA013 ++#define MLXSW_REG_TNDEM_LEN 0x0C + +-static const struct mlxsw_reg_info mlxsw_reg_mpar = { +- .id = MLXSW_REG_MPAR_ID, +- .len = MLXSW_REG_MPAR_LEN, +-}; ++MLXSW_REG_DEFINE(tndem, MLXSW_REG_TNDEM_ID, MLXSW_REG_TNDEM_LEN); + +-/* reg_mpar_local_port +- * The local port to mirror the packets from. ++/* reg_tndem_underlay_ecn ++ * ECN field of the IP header in the underlay network. + * Access: Index + */ +-MLXSW_ITEM32(reg, mpar, local_port, 0x00, 16, 8); +- +-enum mlxsw_reg_mpar_i_e { +- MLXSW_REG_MPAR_TYPE_EGRESS, +- MLXSW_REG_MPAR_TYPE_INGRESS, +-}; ++MLXSW_ITEM32(reg, tndem, underlay_ecn, 0x04, 24, 2); + +-/* reg_mpar_i_e +- * Ingress/Egress ++/* reg_tndem_overlay_ecn ++ * ECN field of the IP header in the overlay network. + * Access: Index + */ +-MLXSW_ITEM32(reg, mpar, i_e, 0x00, 0, 4); ++MLXSW_ITEM32(reg, tndem, overlay_ecn, 0x04, 16, 2); + +-/* reg_mpar_enable +- * Enable mirroring +- * By default, port mirroring is disabled for all ports. ++/* reg_tndem_eip_ecn ++ * Egress IP ECN. ECN field of the IP header of the packet which goes out ++ * from the decapsulation. + * Access: RW + */ +-MLXSW_ITEM32(reg, mpar, enable, 0x04, 31, 1); ++MLXSW_ITEM32(reg, tndem, eip_ecn, 0x04, 8, 2); + +-/* reg_mpar_pa_id +- * Port Analyzer ID. ++/* reg_tndem_trap_en ++ * Trap enable: ++ * 0 - No trap due to decap ECN ++ * 1 - Trap enable with trap_id + * Access: RW + */ +-MLXSW_ITEM32(reg, mpar, pa_id, 0x04, 0, 4); ++MLXSW_ITEM32(reg, tndem, trap_en, 0x08, 28, 4); + +-static inline void mlxsw_reg_mpar_pack(char *payload, u8 local_port, +- enum mlxsw_reg_mpar_i_e i_e, +- bool enable, u8 pa_id) ++/* reg_tndem_trap_id ++ * Trap ID. Either DECAP_ECN0 or DECAP_ECN1. ++ * Reserved when trap_en is '0'. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tndem, trap_id, 0x08, 0, 9); ++ ++static inline void mlxsw_reg_tndem_pack(char *payload, u8 underlay_ecn, ++ u8 overlay_ecn, u8 ecn, bool trap_en, ++ u16 trap_id) + { +- MLXSW_REG_ZERO(mpar, payload); +- mlxsw_reg_mpar_local_port_set(payload, local_port); +- mlxsw_reg_mpar_enable_set(payload, enable); +- mlxsw_reg_mpar_i_e_set(payload, i_e); +- mlxsw_reg_mpar_pa_id_set(payload, pa_id); ++ MLXSW_REG_ZERO(tndem, payload); ++ mlxsw_reg_tndem_underlay_ecn_set(payload, underlay_ecn); ++ mlxsw_reg_tndem_overlay_ecn_set(payload, overlay_ecn); ++ mlxsw_reg_tndem_eip_ecn_set(payload, ecn); ++ mlxsw_reg_tndem_trap_en_set(payload, trap_en); ++ mlxsw_reg_tndem_trap_id_set(payload, trap_id); + } + +-/* MSCI - Management System CPLD Information Register +- * --------------------------------------------------- +- * This register allows querying for the System CPLD(s) information. ++/* TNPC - Tunnel Port Configuration Register ++ * ----------------------------------------- ++ * The TNPC register is used for tunnel port configuration. ++ * Reserved when Spectrum. + */ +-#define MLXSW_REG_MSCI_ID 0x902A +-#define MLXSW_REG_MSCI_LEN 0x10 ++#define MLXSW_REG_TNPC_ID 0xA020 ++#define MLXSW_REG_TNPC_LEN 0x18 + +-static const struct mlxsw_reg_info mlxsw_reg_msci = { +- .id = MLXSW_REG_MSCI_ID, +- .len = MLXSW_REG_MSCI_LEN, ++MLXSW_REG_DEFINE(tnpc, MLXSW_REG_TNPC_ID, MLXSW_REG_TNPC_LEN); ++ ++enum mlxsw_reg_tnpc_tunnel_port { ++ MLXSW_REG_TNPC_TUNNEL_PORT_NVE, ++ MLXSW_REG_TNPC_TUNNEL_PORT_VPLS, ++ MLXSW_REG_TNPC_TUNNEL_FLEX_TUNNEL0, ++ MLXSW_REG_TNPC_TUNNEL_FLEX_TUNNEL1, + }; + +-/* reg_msci_index +- * Index to access. ++/* reg_tnpc_tunnel_port ++ * Tunnel port. + * Access: Index + */ +-MLXSW_ITEM32(reg, msci, index, 0x00, 0, 4); ++MLXSW_ITEM32(reg, tnpc, tunnel_port, 0x00, 0, 4); + +-/* reg_msci_version +- * CPLD version. +- * Access: R0 ++/* reg_tnpc_learn_enable_v6 ++ * During IPv6 underlay decapsulation, whether to learn from tunnel port. 
++ * Access: RW + */ +-MLXSW_ITEM32(reg, msci, version, 0x04, 0, 32); ++MLXSW_ITEM32(reg, tnpc, learn_enable_v6, 0x04, 1, 1); + +-static inline void +-mlxsw_reg_msci_pack(char *payload, u8 index) +-{ +- MLXSW_REG_ZERO(msci, payload); +- mlxsw_reg_msci_index_set(payload, index); +-} ++/* reg_tnpc_learn_enable_v4 ++ * During IPv4 underlay decapsulation, whether to learn from tunnel port. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, tnpc, learn_enable_v4, 0x04, 0, 1); + +-static inline void +-mlxsw_reg_msci_unpack(char *payload, u16 *p_version) ++static inline void mlxsw_reg_tnpc_pack(char *payload, ++ enum mlxsw_reg_tnpc_tunnel_port tport, ++ bool learn_enable) + { +- *p_version = mlxsw_reg_msci_version_get(payload); ++ MLXSW_REG_ZERO(tnpc, payload); ++ mlxsw_reg_tnpc_tunnel_port_set(payload, tport); ++ mlxsw_reg_tnpc_learn_enable_v4_set(payload, learn_enable); ++ mlxsw_reg_tnpc_learn_enable_v6_set(payload, learn_enable); + } + +-/* MLCR - Management LED Control Register +- * -------------------------------------- +- * Controls the system LEDs. ++/* TIGCR - Tunneling IPinIP General Configuration Register ++ * ------------------------------------------------------- ++ * The TIGCR register is used for setting up the IPinIP Tunnel configuration. + */ +-#define MLXSW_REG_MLCR_ID 0x902B +-#define MLXSW_REG_MLCR_LEN 0x0C ++#define MLXSW_REG_TIGCR_ID 0xA801 ++#define MLXSW_REG_TIGCR_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_mlcr = { +- .id = MLXSW_REG_MLCR_ID, +- .len = MLXSW_REG_MLCR_LEN, +-}; ++MLXSW_REG_DEFINE(tigcr, MLXSW_REG_TIGCR_ID, MLXSW_REG_TIGCR_LEN); + +-/* reg_mlcr_local_port +- * Local port number. ++/* reg_tigcr_ipip_ttlc ++ * For IPinIP Tunnel encapsulation: whether to copy the ttl from the packet ++ * header. 
+ * Access: RW + */ +-MLXSW_ITEM32(reg, mlcr, local_port, 0x00, 16, 8); ++MLXSW_ITEM32(reg, tigcr, ttlc, 0x04, 8, 1); + +-#define MLXSW_REG_MLCR_DURATION_MAX 0xFFFF +- +-/* reg_mlcr_beacon_duration +- * Duration of the beacon to be active, in seconds. +- * 0x0 - Will turn off the beacon. +- * 0xFFFF - Will turn on the beacon until explicitly turned off. ++/* reg_tigcr_ipip_ttl_uc ++ * The TTL for IPinIP Tunnel encapsulation of unicast packets if ++ * reg_tigcr_ipip_ttlc is unset. + * Access: RW + */ +-MLXSW_ITEM32(reg, mlcr, beacon_duration, 0x04, 0, 16); +- +-/* reg_mlcr_beacon_remain +- * Remaining duration of the beacon, in seconds. +- * 0xFFFF indicates an infinite amount of time. +- * Access: RO +- */ +-MLXSW_ITEM32(reg, mlcr, beacon_remain, 0x08, 0, 16); ++MLXSW_ITEM32(reg, tigcr, ttl_uc, 0x04, 0, 8); + +-static inline void mlxsw_reg_mlcr_pack(char *payload, u8 local_port, +- bool active) ++static inline void mlxsw_reg_tigcr_pack(char *payload, bool ttlc, u8 ttl_uc) + { +- MLXSW_REG_ZERO(mlcr, payload); +- mlxsw_reg_mlcr_local_port_set(payload, local_port); +- mlxsw_reg_mlcr_beacon_duration_set(payload, active ? +- MLXSW_REG_MLCR_DURATION_MAX : 0); ++ MLXSW_REG_ZERO(tigcr, payload); ++ mlxsw_reg_tigcr_ttlc_set(payload, ttlc); ++ mlxsw_reg_tigcr_ttl_uc_set(payload, ttl_uc); + } + + /* SBPR - Shared Buffer Pools Register +@@ -5161,10 +9075,7 @@ static inline void mlxsw_reg_mlcr_pack(char *payload, u8 local_port, + #define MLXSW_REG_SBPR_ID 0xB001 + #define MLXSW_REG_SBPR_LEN 0x14 + +-static const struct mlxsw_reg_info mlxsw_reg_sbpr = { +- .id = MLXSW_REG_SBPR_ID, +- .len = MLXSW_REG_SBPR_LEN, +-}; ++MLXSW_REG_DEFINE(sbpr, MLXSW_REG_SBPR_ID, MLXSW_REG_SBPR_LEN); + + /* shared direstion enum for SBPR, SBCM, SBPM */ + enum mlxsw_reg_sbxx_dir { +@@ -5184,8 +9095,15 @@ MLXSW_ITEM32(reg, sbpr, dir, 0x00, 24, 2); + */ + MLXSW_ITEM32(reg, sbpr, pool, 0x00, 0, 4); + ++/* reg_sbpr_infi_size ++ * Size is infinite. 
++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, sbpr, infi_size, 0x04, 31, 1); ++ + /* reg_sbpr_size + * Pool size in buffer cells. ++ * Reserved when infi_size = 1. + * Access: RW + */ + MLXSW_ITEM32(reg, sbpr, size, 0x04, 0, 24); +@@ -5203,13 +9121,15 @@ MLXSW_ITEM32(reg, sbpr, mode, 0x08, 0, 4); + + static inline void mlxsw_reg_sbpr_pack(char *payload, u8 pool, + enum mlxsw_reg_sbxx_dir dir, +- enum mlxsw_reg_sbpr_mode mode, u32 size) ++ enum mlxsw_reg_sbpr_mode mode, u32 size, ++ bool infi_size) + { + MLXSW_REG_ZERO(sbpr, payload); + mlxsw_reg_sbpr_pool_set(payload, pool); + mlxsw_reg_sbpr_dir_set(payload, dir); + mlxsw_reg_sbpr_mode_set(payload, mode); + mlxsw_reg_sbpr_size_set(payload, size); ++ mlxsw_reg_sbpr_infi_size_set(payload, infi_size); + } + + /* SBCM - Shared Buffer Class Management Register +@@ -5221,10 +9141,7 @@ static inline void mlxsw_reg_sbpr_pack(char *payload, u8 pool, + #define MLXSW_REG_SBCM_ID 0xB002 + #define MLXSW_REG_SBCM_LEN 0x28 + +-static const struct mlxsw_reg_info mlxsw_reg_sbcm = { +- .id = MLXSW_REG_SBCM_ID, +- .len = MLXSW_REG_SBCM_LEN, +-}; ++MLXSW_REG_DEFINE(sbcm, MLXSW_REG_SBCM_ID, MLXSW_REG_SBCM_LEN); + + /* reg_sbcm_local_port + * Local port number. +@@ -5260,6 +9177,12 @@ MLXSW_ITEM32(reg, sbcm, min_buff, 0x18, 0, 24); + #define MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN 1 + #define MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX 14 + ++/* reg_sbcm_infi_max ++ * Max buffer is infinite. ++ * Access: RW ++ */ ++MLXSW_ITEM32(reg, sbcm, infi_max, 0x1C, 31, 1); ++ + /* reg_sbcm_max_buff + * When the pool associated to the port-pg/tclass is configured to + * static, Maximum buffer size for the limiter configured in cells. +@@ -5269,6 +9192,7 @@ MLXSW_ITEM32(reg, sbcm, min_buff, 0x18, 0, 24); + * 0: 0 + * i: (1/128)*2^(i-1), for i=1..14 + * 0xFF: Infinity ++ * Reserved when infi_max = 1. 
+ * Access: RW + */ + MLXSW_ITEM32(reg, sbcm, max_buff, 0x1C, 0, 24); +@@ -5281,7 +9205,8 @@ MLXSW_ITEM32(reg, sbcm, pool, 0x24, 0, 4); + + static inline void mlxsw_reg_sbcm_pack(char *payload, u8 local_port, u8 pg_buff, + enum mlxsw_reg_sbxx_dir dir, +- u32 min_buff, u32 max_buff, u8 pool) ++ u32 min_buff, u32 max_buff, ++ bool infi_max, u8 pool) + { + MLXSW_REG_ZERO(sbcm, payload); + mlxsw_reg_sbcm_local_port_set(payload, local_port); +@@ -5289,6 +9214,7 @@ static inline void mlxsw_reg_sbcm_pack(char *payload, u8 local_port, u8 pg_buff, + mlxsw_reg_sbcm_dir_set(payload, dir); + mlxsw_reg_sbcm_min_buff_set(payload, min_buff); + mlxsw_reg_sbcm_max_buff_set(payload, max_buff); ++ mlxsw_reg_sbcm_infi_max_set(payload, infi_max); + mlxsw_reg_sbcm_pool_set(payload, pool); + } + +@@ -5301,10 +9227,7 @@ static inline void mlxsw_reg_sbcm_pack(char *payload, u8 local_port, u8 pg_buff, + #define MLXSW_REG_SBPM_ID 0xB003 + #define MLXSW_REG_SBPM_LEN 0x28 + +-static const struct mlxsw_reg_info mlxsw_reg_sbpm = { +- .id = MLXSW_REG_SBPM_ID, +- .len = MLXSW_REG_SBPM_LEN, +-}; ++MLXSW_REG_DEFINE(sbpm, MLXSW_REG_SBPM_ID, MLXSW_REG_SBPM_LEN); + + /* reg_sbpm_local_port + * Local port number. +@@ -5395,10 +9318,7 @@ static inline void mlxsw_reg_sbpm_unpack(char *payload, u32 *p_buff_occupancy, + #define MLXSW_REG_SBMM_ID 0xB004 + #define MLXSW_REG_SBMM_LEN 0x28 + +-static const struct mlxsw_reg_info mlxsw_reg_sbmm = { +- .id = MLXSW_REG_SBMM_ID, +- .len = MLXSW_REG_SBMM_LEN, +-}; ++MLXSW_REG_DEFINE(sbmm, MLXSW_REG_SBMM_ID, MLXSW_REG_SBMM_LEN); + + /* reg_sbmm_prio + * Switch Priority. +@@ -5457,10 +9377,7 @@ static inline void mlxsw_reg_sbmm_pack(char *payload, u8 prio, u32 min_buff, + MLXSW_REG_SBSR_REC_LEN * \ + MLXSW_REG_SBSR_REC_MAX_COUNT) + +-static const struct mlxsw_reg_info mlxsw_reg_sbsr = { +- .id = MLXSW_REG_SBSR_ID, +- .len = MLXSW_REG_SBSR_LEN, +-}; ++MLXSW_REG_DEFINE(sbsr, MLXSW_REG_SBSR_ID, MLXSW_REG_SBSR_LEN); + + /* reg_sbsr_clr + * Clear Max Buffer Occupancy. 
When this bit is set, the max_buff_occupancy +@@ -5550,10 +9467,7 @@ static inline void mlxsw_reg_sbsr_rec_unpack(char *payload, int rec_index, + #define MLXSW_REG_SBIB_ID 0xB006 + #define MLXSW_REG_SBIB_LEN 0x10 + +-static const struct mlxsw_reg_info mlxsw_reg_sbib = { +- .id = MLXSW_REG_SBIB_ID, +- .len = MLXSW_REG_SBIB_LEN, +-}; ++MLXSW_REG_DEFINE(sbib, MLXSW_REG_SBIB_ID, MLXSW_REG_SBIB_LEN); + + /* reg_sbib_local_port + * Local port number +@@ -5578,136 +9492,132 @@ static inline void mlxsw_reg_sbib_pack(char *payload, u8 local_port, + mlxsw_reg_sbib_buff_size_set(payload, buff_size); + } + ++static const struct mlxsw_reg_info *mlxsw_reg_infos[] = { ++ MLXSW_REG(sgcr), ++ MLXSW_REG(spad), ++ MLXSW_REG(smid), ++ MLXSW_REG(sspr), ++ MLXSW_REG(sfdat), ++ MLXSW_REG(sfd), ++ MLXSW_REG(sfn), ++ MLXSW_REG(spms), ++ MLXSW_REG(spvid), ++ MLXSW_REG(spvm), ++ MLXSW_REG(spaft), ++ MLXSW_REG(sfgc), ++ MLXSW_REG(sftr), ++ MLXSW_REG(sfdf), ++ MLXSW_REG(sldr), ++ MLXSW_REG(slcr), ++ MLXSW_REG(slcor), ++ MLXSW_REG(spmlr), ++ MLXSW_REG(svfa), ++ MLXSW_REG(svpe), ++ MLXSW_REG(sfmr), ++ MLXSW_REG(spvmlr), ++ MLXSW_REG(cwtp), ++ MLXSW_REG(cwtpm), ++ MLXSW_REG(pgcr), ++ MLXSW_REG(ppbt), ++ MLXSW_REG(pacl), ++ MLXSW_REG(pagt), ++ MLXSW_REG(ptar), ++ MLXSW_REG(ppbs), ++ MLXSW_REG(prcr), ++ MLXSW_REG(pefa), ++ MLXSW_REG(ptce2), ++ MLXSW_REG(perpt), ++ MLXSW_REG(perar), ++ MLXSW_REG(ptce3), ++ MLXSW_REG(percr), ++ MLXSW_REG(pererp), ++ MLXSW_REG(iedr), ++ MLXSW_REG(qpts), ++ MLXSW_REG(qpcr), ++ MLXSW_REG(qtct), ++ MLXSW_REG(qeec), ++ MLXSW_REG(qrwe), ++ MLXSW_REG(qpdsm), ++ MLXSW_REG(qpdpm), ++ MLXSW_REG(qtctm), ++ MLXSW_REG(pmlp), ++ MLXSW_REG(pmtu), ++ MLXSW_REG(ptys), ++ MLXSW_REG(ppad), ++ MLXSW_REG(paos), ++ MLXSW_REG(pfcc), ++ MLXSW_REG(ppcnt), ++ MLXSW_REG(plib), ++ MLXSW_REG(pptb), ++ MLXSW_REG(pbmc), ++ MLXSW_REG(pspa), ++ MLXSW_REG(htgt), ++ MLXSW_REG(hpkt), ++ MLXSW_REG(rgcr), ++ MLXSW_REG(ritr), ++ MLXSW_REG(rtar), ++ MLXSW_REG(ratr), ++ MLXSW_REG(rtdp), ++ MLXSW_REG(rdpm), 
++ MLXSW_REG(ricnt), ++ MLXSW_REG(rrcr), ++ MLXSW_REG(ralta), ++ MLXSW_REG(ralst), ++ MLXSW_REG(raltb), ++ MLXSW_REG(ralue), ++ MLXSW_REG(rauht), ++ MLXSW_REG(raleu), ++ MLXSW_REG(rauhtd), ++ MLXSW_REG(rigr2), ++ MLXSW_REG(recr2), ++ MLXSW_REG(rmft2), ++ MLXSW_REG(mfcr), ++ MLXSW_REG(mfsc), ++ MLXSW_REG(mfsm), ++ MLXSW_REG(mfsl), ++ MLXSW_REG(fore), ++ MLXSW_REG(mtcap), ++ MLXSW_REG(mtmp), ++ MLXSW_REG(mtbr), ++ MLXSW_REG(mcia), ++ MLXSW_REG(mpat), ++ MLXSW_REG(mpar), ++ MLXSW_REG(mrsr), ++ MLXSW_REG(mlcr), ++ MLXSW_REG(msci), ++ MLXSW_REG(mpsc), ++ MLXSW_REG(mcqi), ++ MLXSW_REG(mcc), ++ MLXSW_REG(mcda), ++ MLXSW_REG(mgpc), ++ MLXSW_REG(mprs), ++ MLXSW_REG(tngcr), ++ MLXSW_REG(tnumt), ++ MLXSW_REG(tnqcr), ++ MLXSW_REG(tnqdr), ++ MLXSW_REG(tneem), ++ MLXSW_REG(tndem), ++ MLXSW_REG(tnpc), ++ MLXSW_REG(tigcr), ++ MLXSW_REG(sbpr), ++ MLXSW_REG(sbcm), ++ MLXSW_REG(sbpm), ++ MLXSW_REG(sbmm), ++ MLXSW_REG(sbsr), ++ MLXSW_REG(sbib), ++}; ++ + static inline const char *mlxsw_reg_id_str(u16 reg_id) + { +- switch (reg_id) { +- case MLXSW_REG_SGCR_ID: +- return "SGCR"; +- case MLXSW_REG_SPAD_ID: +- return "SPAD"; +- case MLXSW_REG_SMID_ID: +- return "SMID"; +- case MLXSW_REG_SSPR_ID: +- return "SSPR"; +- case MLXSW_REG_SFDAT_ID: +- return "SFDAT"; +- case MLXSW_REG_SFD_ID: +- return "SFD"; +- case MLXSW_REG_SFN_ID: +- return "SFN"; +- case MLXSW_REG_SPMS_ID: +- return "SPMS"; +- case MLXSW_REG_SPVID_ID: +- return "SPVID"; +- case MLXSW_REG_SPVM_ID: +- return "SPVM"; +- case MLXSW_REG_SPAFT_ID: +- return "SPAFT"; +- case MLXSW_REG_SFGC_ID: +- return "SFGC"; +- case MLXSW_REG_SFTR_ID: +- return "SFTR"; +- case MLXSW_REG_SFDF_ID: +- return "SFDF"; +- case MLXSW_REG_SLDR_ID: +- return "SLDR"; +- case MLXSW_REG_SLCR_ID: +- return "SLCR"; +- case MLXSW_REG_SLCOR_ID: +- return "SLCOR"; +- case MLXSW_REG_SPMLR_ID: +- return "SPMLR"; +- case MLXSW_REG_SVFA_ID: +- return "SVFA"; +- case MLXSW_REG_SVPE_ID: +- return "SVPE"; +- case MLXSW_REG_SFMR_ID: +- return "SFMR"; +- case 
MLXSW_REG_SPVMLR_ID: +- return "SPVMLR"; +- case MLXSW_REG_QTCT_ID: +- return "QTCT"; +- case MLXSW_REG_QEEC_ID: +- return "QEEC"; +- case MLXSW_REG_PMLP_ID: +- return "PMLP"; +- case MLXSW_REG_PMTU_ID: +- return "PMTU"; +- case MLXSW_REG_PTYS_ID: +- return "PTYS"; +- case MLXSW_REG_PPAD_ID: +- return "PPAD"; +- case MLXSW_REG_PAOS_ID: +- return "PAOS"; +- case MLXSW_REG_PFCC_ID: +- return "PFCC"; +- case MLXSW_REG_PPCNT_ID: +- return "PPCNT"; +- case MLXSW_REG_PPTB_ID: +- return "PPTB"; +- case MLXSW_REG_PBMC_ID: +- return "PBMC"; +- case MLXSW_REG_PSPA_ID: +- return "PSPA"; +- case MLXSW_REG_HTGT_ID: +- return "HTGT"; +- case MLXSW_REG_HPKT_ID: +- return "HPKT"; +- case MLXSW_REG_RGCR_ID: +- return "RGCR"; +- case MLXSW_REG_RITR_ID: +- return "RITR"; +- case MLXSW_REG_RATR_ID: +- return "RATR"; +- case MLXSW_REG_RALTA_ID: +- return "RALTA"; +- case MLXSW_REG_RALST_ID: +- return "RALST"; +- case MLXSW_REG_RALTB_ID: +- return "RALTB"; +- case MLXSW_REG_RALUE_ID: +- return "RALUE"; +- case MLXSW_REG_RAUHT_ID: +- return "RAUHT"; +- case MLXSW_REG_RALEU_ID: +- return "RALEU"; +- case MLXSW_REG_RAUHTD_ID: +- return "RAUHTD"; +- case MLXSW_REG_MFCR_ID: +- return "MFCR"; +- case MLXSW_REG_MFSC_ID: +- return "MFSC"; +- case MLXSW_REG_MFSM_ID: +- return "MFSM"; +- case MLXSW_REG_FORE_ID: +- return "FORE"; +- case MLXSW_REG_MTCAP_ID: +- return "MTCAP"; +- case MLXSW_REG_MPAT_ID: +- return "MPAT"; +- case MLXSW_REG_MPAR_ID: +- return "MPAR"; +- case MLXSW_REG_MTMP_ID: +- return "MTMP"; +- case MLXSW_REG_MTBR_ID: +- return "MTBR"; +- case MLXSW_REG_MLCR_ID: +- return "MLCR"; +- case MLXSW_REG_SBPR_ID: +- return "SBPR"; +- case MLXSW_REG_SBCM_ID: +- return "SBCM"; +- case MLXSW_REG_SBPM_ID: +- return "SBPM"; +- case MLXSW_REG_SBMM_ID: +- return "SBMM"; +- case MLXSW_REG_SBSR_ID: +- return "SBSR"; +- case MLXSW_REG_SBIB_ID: +- return "SBIB"; +- default: +- return "*UNKNOWN*"; ++ const struct mlxsw_reg_info *reg_info; ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(mlxsw_reg_infos); 
i++) { ++ reg_info = mlxsw_reg_infos[i]; ++ if (reg_info->id == reg_id) ++ return reg_info->name; + } ++ return "*UNKNOWN*"; + } + + /* PUDE - Port Up / Down Event +diff --git a/drivers/net/ethernet/mellanox/mlxsw/resources.h b/drivers/net/ethernet/mellanox/mlxsw/resources.h +new file mode 100644 +index 0000000..99b3415 +--- /dev/null ++++ b/drivers/net/ethernet/mellanox/mlxsw/resources.h +@@ -0,0 +1,152 @@ ++/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ ++/* Copyright (c) 2016-2018 Mellanox Technologies. All rights reserved */ ++ ++#ifndef _MLXSW_RESOURCES_H ++#define _MLXSW_RESOURCES_H ++ ++#include <linux/kernel.h> ++#include <linux/types.h> ++ ++enum mlxsw_res_id { ++ MLXSW_RES_ID_KVD_SIZE, ++ MLXSW_RES_ID_KVD_SINGLE_MIN_SIZE, ++ MLXSW_RES_ID_KVD_DOUBLE_MIN_SIZE, ++ MLXSW_RES_ID_MAX_KVD_LINEAR_RANGE, ++ MLXSW_RES_ID_MAX_KVD_ACTION_SETS, ++ MLXSW_RES_ID_MAX_TRAP_GROUPS, ++ MLXSW_RES_ID_CQE_V0, ++ MLXSW_RES_ID_CQE_V1, ++ MLXSW_RES_ID_CQE_V2, ++ MLXSW_RES_ID_COUNTER_POOL_SIZE, ++ MLXSW_RES_ID_MAX_SPAN, ++ MLXSW_RES_ID_COUNTER_SIZE_PACKETS_BYTES, ++ MLXSW_RES_ID_COUNTER_SIZE_ROUTER_BASIC, ++ MLXSW_RES_ID_MAX_SYSTEM_PORT, ++ MLXSW_RES_ID_MAX_LAG, ++ MLXSW_RES_ID_MAX_LAG_MEMBERS, ++ MLXSW_RES_ID_MAX_BUFFER_SIZE, ++ MLXSW_RES_ID_CELL_SIZE, ++ MLXSW_RES_ID_ACL_MAX_TCAM_REGIONS, ++ MLXSW_RES_ID_ACL_MAX_TCAM_RULES, ++ MLXSW_RES_ID_ACL_MAX_REGIONS, ++ MLXSW_RES_ID_ACL_MAX_GROUPS, ++ MLXSW_RES_ID_ACL_MAX_GROUP_SIZE, ++ MLXSW_RES_ID_ACL_FLEX_KEYS, ++ MLXSW_RES_ID_ACL_MAX_ACTION_PER_RULE, ++ MLXSW_RES_ID_ACL_ACTIONS_PER_SET, ++ MLXSW_RES_ID_ACL_MAX_ERPT_BANKS, ++ MLXSW_RES_ID_ACL_MAX_ERPT_BANK_SIZE, ++ MLXSW_RES_ID_ACL_MAX_LARGE_KEY_ID, ++ MLXSW_RES_ID_ACL_ERPT_ENTRIES_2KB, ++ MLXSW_RES_ID_ACL_ERPT_ENTRIES_4KB, ++ MLXSW_RES_ID_ACL_ERPT_ENTRIES_8KB, ++ MLXSW_RES_ID_ACL_ERPT_ENTRIES_12KB, ++ MLXSW_RES_ID_MAX_CPU_POLICERS, ++ MLXSW_RES_ID_MAX_VRS, ++ MLXSW_RES_ID_MAX_RIFS, ++ MLXSW_RES_ID_MC_ERIF_LIST_ENTRIES, ++ MLXSW_RES_ID_MAX_LPM_TREES, ++ MLXSW_RES_ID_MAX_NVE_MC_ENTRIES_IPV4, ++
MLXSW_RES_ID_MAX_NVE_MC_ENTRIES_IPV6, ++ ++ /* Internal resources. ++ * Determined by the SW, not queried from the HW. ++ */ ++ MLXSW_RES_ID_KVD_SINGLE_SIZE, ++ MLXSW_RES_ID_KVD_DOUBLE_SIZE, ++ MLXSW_RES_ID_KVD_LINEAR_SIZE, ++ ++ __MLXSW_RES_ID_MAX, ++}; ++ ++static u16 mlxsw_res_ids[] = { ++ [MLXSW_RES_ID_KVD_SIZE] = 0x1001, ++ [MLXSW_RES_ID_KVD_SINGLE_MIN_SIZE] = 0x1002, ++ [MLXSW_RES_ID_KVD_DOUBLE_MIN_SIZE] = 0x1003, ++ [MLXSW_RES_ID_MAX_KVD_LINEAR_RANGE] = 0x1005, ++ [MLXSW_RES_ID_MAX_KVD_ACTION_SETS] = 0x1007, ++ [MLXSW_RES_ID_MAX_TRAP_GROUPS] = 0x2201, ++ [MLXSW_RES_ID_CQE_V0] = 0x2210, ++ [MLXSW_RES_ID_CQE_V1] = 0x2211, ++ [MLXSW_RES_ID_CQE_V2] = 0x2212, ++ [MLXSW_RES_ID_COUNTER_POOL_SIZE] = 0x2410, ++ [MLXSW_RES_ID_MAX_SPAN] = 0x2420, ++ [MLXSW_RES_ID_COUNTER_SIZE_PACKETS_BYTES] = 0x2443, ++ [MLXSW_RES_ID_COUNTER_SIZE_ROUTER_BASIC] = 0x2449, ++ [MLXSW_RES_ID_MAX_SYSTEM_PORT] = 0x2502, ++ [MLXSW_RES_ID_MAX_LAG] = 0x2520, ++ [MLXSW_RES_ID_MAX_LAG_MEMBERS] = 0x2521, ++ [MLXSW_RES_ID_MAX_BUFFER_SIZE] = 0x2802, /* Bytes */ ++ [MLXSW_RES_ID_CELL_SIZE] = 0x2803, /* Bytes */ ++ [MLXSW_RES_ID_ACL_MAX_TCAM_REGIONS] = 0x2901, ++ [MLXSW_RES_ID_ACL_MAX_TCAM_RULES] = 0x2902, ++ [MLXSW_RES_ID_ACL_MAX_REGIONS] = 0x2903, ++ [MLXSW_RES_ID_ACL_MAX_GROUPS] = 0x2904, ++ [MLXSW_RES_ID_ACL_MAX_GROUP_SIZE] = 0x2905, ++ [MLXSW_RES_ID_ACL_FLEX_KEYS] = 0x2910, ++ [MLXSW_RES_ID_ACL_MAX_ACTION_PER_RULE] = 0x2911, ++ [MLXSW_RES_ID_ACL_ACTIONS_PER_SET] = 0x2912, ++ [MLXSW_RES_ID_ACL_MAX_ERPT_BANKS] = 0x2940, ++ [MLXSW_RES_ID_ACL_MAX_ERPT_BANK_SIZE] = 0x2941, ++ [MLXSW_RES_ID_ACL_MAX_LARGE_KEY_ID] = 0x2942, ++ [MLXSW_RES_ID_ACL_ERPT_ENTRIES_2KB] = 0x2950, ++ [MLXSW_RES_ID_ACL_ERPT_ENTRIES_4KB] = 0x2951, ++ [MLXSW_RES_ID_ACL_ERPT_ENTRIES_8KB] = 0x2952, ++ [MLXSW_RES_ID_ACL_ERPT_ENTRIES_12KB] = 0x2953, ++ [MLXSW_RES_ID_MAX_CPU_POLICERS] = 0x2A13, ++ [MLXSW_RES_ID_MAX_VRS] = 0x2C01, ++ [MLXSW_RES_ID_MAX_RIFS] = 0x2C02, ++ [MLXSW_RES_ID_MC_ERIF_LIST_ENTRIES] = 0x2C10, ++ 
[MLXSW_RES_ID_MAX_LPM_TREES] = 0x2C30, ++ [MLXSW_RES_ID_MAX_NVE_MC_ENTRIES_IPV4] = 0x2E02, ++ [MLXSW_RES_ID_MAX_NVE_MC_ENTRIES_IPV6] = 0x2E03, ++}; ++ ++struct mlxsw_res { ++ bool valid[__MLXSW_RES_ID_MAX]; ++ u64 values[__MLXSW_RES_ID_MAX]; ++}; ++ ++static inline bool mlxsw_res_valid(struct mlxsw_res *res, ++ enum mlxsw_res_id res_id) ++{ ++ return res->valid[res_id]; ++} ++ ++#define MLXSW_RES_VALID(res, short_res_id) \ ++ mlxsw_res_valid(res, MLXSW_RES_ID_##short_res_id) ++ ++static inline u64 mlxsw_res_get(struct mlxsw_res *res, ++ enum mlxsw_res_id res_id) ++{ ++ if (WARN_ON(!res->valid[res_id])) ++ return 0; ++ return res->values[res_id]; ++} ++ ++#define MLXSW_RES_GET(res, short_res_id) \ ++ mlxsw_res_get(res, MLXSW_RES_ID_##short_res_id) ++ ++static inline void mlxsw_res_set(struct mlxsw_res *res, ++ enum mlxsw_res_id res_id, u64 value) ++{ ++ res->valid[res_id] = true; ++ res->values[res_id] = value; ++} ++ ++#define MLXSW_RES_SET(res, short_res_id, value) \ ++ mlxsw_res_set(res, MLXSW_RES_ID_##short_res_id, value) ++ ++static inline void mlxsw_res_parse(struct mlxsw_res *res, u16 id, u64 value) ++{ ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(mlxsw_res_ids); i++) { ++ if (mlxsw_res_ids[i] == id) { ++ mlxsw_res_set(res, i, value); ++ return; ++ } ++ } ++} ++ ++#endif +-- +2.1.4 + diff --git a/patch/0032-sonic-update-kernel-config-mlsxw-pci.patch b/patch/0032-sonic-update-kernel-config-mlsxw-pci.patch new file mode 100644 index 000000000000..eebdfa2206a2 --- /dev/null +++ b/patch/0032-sonic-update-kernel-config-mlsxw-pci.patch @@ -0,0 +1,26 @@ +From f022b1aad3c9bfe2ac13efa8b86020e3779fc82a Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Tue, 15 Jan 2019 21:17:00 +0200 +Subject: [PATCH] sonic update kernel config mlsxw pci + +Signed-off-by: Vadim Pasternak +--- + debian/build/build_amd64_none_amd64/.config | 2 +- + 1 files changed, 1 insertions(+), 1 deletions(-) + +diff --git a/debian/build/build_amd64_none_amd64/.config 
b/debian/build/build_amd64_none_amd64/.config +index 21d0d6f..9169c30 100644 +--- a/debian/build/build_amd64_none_amd64/.config ++++ b/debian/build/build_amd64_none_amd64/.config +@@ -2642,7 +2642,7 @@ CONFIG_MLXSW_CORE=m + CONFIG_MLXSW_CORE_HWMON=y + CONFIG_MLXSW_CORE_THERMAL=y + CONFIG_MLXSW_CORE_QSFP=y +-CONFIG_MLXSW_PCI=m ++# CONFIG_MLXSW_PCI is not set + CONFIG_MLXSW_I2C=m + CONFIG_MLXSW_MINIMAL=m + CONFIG_NET_VENDOR_MICREL=y +-- +1.7.1 + diff --git a/patch/0033-hwmon-pmbus-Fix-driver-info-initialization-in-probe-.patch b/patch/0033-hwmon-pmbus-Fix-driver-info-initialization-in-probe-.patch new file mode 100644 index 000000000000..261c5f898e53 --- /dev/null +++ b/patch/0033-hwmon-pmbus-Fix-driver-info-initialization-in-probe-.patch @@ -0,0 +1,42 @@ +From 81443e4927ec84223f8305e0e4cec647521856e4 Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Mon, 21 Jan 2019 14:54:58 +0000 +Subject: [PATCH hwmon] hwmon: (pmbus) Fix driver info initialization in probe + routine + +Fix tps53679_probe() by using a dynamically allocated "pmbus_driver_info" +structure instead of a static one. Using a static structure causes the +field "vrm_version" to be overwritten - when several tps53679 +devices with different "vrm_version" values are used within the same +system, the last probed device overwrites this field for all the others.
+ +Fixes: 610526527a13e4c9 ("hwmon: (pmbus) Add support for Texas Instruments tps53679 device") +Signed-off-by: Vadim Pasternak +--- + drivers/hwmon/pmbus/tps53679.c | 10 +++++++++- + 1 file changed, 9 insertions(+), 1 deletion(-) + +diff --git a/drivers/hwmon/pmbus/tps53679.c b/drivers/hwmon/pmbus/tps53679.c +index 85b515c..45eacc5 100644 +--- a/drivers/hwmon/pmbus/tps53679.c ++++ b/drivers/hwmon/pmbus/tps53679.c +@@ -80,7 +80,15 @@ static struct pmbus_driver_info tps53679_info = { + static int tps53679_probe(struct i2c_client *client, + const struct i2c_device_id *id) + { +- return pmbus_do_probe(client, id, &tps53679_info); ++ struct pmbus_driver_info *info; ++ ++ info = devm_kzalloc(&client->dev, sizeof(*info), GFP_KERNEL); ++ if (!info) ++ return -ENOMEM; ++ ++ memcpy(info, &tps53679_info, sizeof(*info)); ++ ++ return pmbus_do_probe(client, id, info); + } + + static const struct i2c_device_id tps53679_id[] = { +-- +2.1.4 + diff --git a/patch/0034-mlxsw-thermal-disable-highest-zone-calculation.patch b/patch/0034-mlxsw-thermal-disable-highest-zone-calculation.patch new file mode 100644 index 000000000000..830e1ae52573 --- /dev/null +++ b/patch/0034-mlxsw-thermal-disable-highest-zone-calculation.patch @@ -0,0 +1,35 @@ +From 5da6a599af1072f278b5f41ed648bed2142d42f0 Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Mon, 21 Jan 2019 19:13:29 +0000 +Subject: [PATCH mlxsw] mlxsw thermal: disable highest zone calculation + +Signed-off-by: Vadim Pasternak +--- + drivers/net/ethernet/mellanox/mlxsw/core_thermal.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +index 444455c..0a2e7a2 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +@@ -433,7 +433,7 @@ static int mlxsw_thermal_get_temp(struct thermal_zone_device *tzdev, + return err; + } + 
mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL); +- ++#if 0 + if (thermal->tz_module_arr) { + err = mlxsw_thermal_highest_tz_notify(dev, tzdev, thermal, + thermal->tz_module_num, +@@ -441,7 +441,7 @@ static int mlxsw_thermal_get_temp(struct thermal_zone_device *tzdev, + if (err) + dev_err(dev, "Failed to query module temp sensor\n"); + } +- ++#endif + *p_temp = (int) temp; + return 0; + } +-- +2.1.4 + diff --git a/patch/0035-platform-x86-mlx-platform-Add-CPLD4-register.patch b/patch/0035-platform-x86-mlx-platform-Add-CPLD4-register.patch new file mode 100644 index 000000000000..5d993b423b77 --- /dev/null +++ b/patch/0035-platform-x86-mlx-platform-Add-CPLD4-register.patch @@ -0,0 +1,63 @@ +From 87859cbda6affc45fc8d513c12eb4318e2c81073 Mon Sep 17 00:00:00 2001 +From: Vadim Pasternak +Date: Thu, 24 Jan 2019 09:58:23 +0000 +Subject: [PATCH platform] platform/x86: mlx-platform: Add CPLD4 register + +Signed-off-by: +--- + drivers/platform/x86/mlx-platform.c | 11 ++++++++++- + 1 file changed, 10 insertions(+), 1 deletion(-) + +diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c +index fc8d655..a80b968 100644 +--- a/drivers/platform/x86/mlx-platform.c ++++ b/drivers/platform/x86/mlx-platform.c +@@ -25,6 +25,7 @@ + #define MLXPLAT_CPLD_LPC_REG_CPLD1_VER_OFFSET 0x00 + #define MLXPLAT_CPLD_LPC_REG_CPLD2_VER_OFFSET 0x01 + #define MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET 0x02 ++#define MLXPLAT_CPLD_LPC_REG_CPLD4_VER_OFFSET 0x03 + #define MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET 0x1d + #define MLXPLAT_CPLD_LPC_REG_RST_CAUSE1_OFFSET 0x1e + #define MLXPLAT_CPLD_LPC_REG_RST_CAUSE2_OFFSET 0x1f +@@ -1163,6 +1164,12 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_regs_io_data[] = { + .mode = 0444, + }, + { ++ .label = "cpld4_version", ++ .reg = MLXPLAT_CPLD_LPC_REG_CPLD4_VER_OFFSET, ++ .bit = GENMASK(7, 0), ++ .mode = 0444, ++ }, ++ { + .label = "reset_long_pb", + .reg = MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET, + .mask = GENMASK(7, 0) & 
~BIT(0), +@@ -1251,7 +1258,7 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_regs_io_data[] = { + .label = "fan_dir", + .reg = MLXPLAT_CPLD_LPC_REG_FAN_DIRECTION, + .bit = GENMASK(7, 0), +- .mode = 0200, ++ .mode = 0444, + }, + }; + +@@ -1531,6 +1538,7 @@ static bool mlxplat_mlxcpld_readable_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_CPLD1_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_CPLD2_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_CPLD4_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RST_CAUSE1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RST_CAUSE2_OFFSET: +@@ -1597,6 +1605,7 @@ static bool mlxplat_mlxcpld_volatile_reg(struct device *dev, unsigned int reg) + case MLXPLAT_CPLD_LPC_REG_CPLD1_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_CPLD2_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_CPLD3_VER_OFFSET: ++ case MLXPLAT_CPLD_LPC_REG_CPLD4_VER_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RESET_CAUSE_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RST_CAUSE1_OFFSET: + case MLXPLAT_CPLD_LPC_REG_RST_CAUSE2_OFFSET: +-- +2.1.4 + diff --git a/patch/series b/patch/series index f0062ffbdbb5..cd944a55067b 100644 --- a/patch/series +++ b/patch/series @@ -60,6 +60,16 @@ linux-4.13-thermal-intel_pch_thermal-Fix-enable-check-on.patch 0023-platform-x86-mlx-platform-Add-support-for-register-a.patch 0024-config-mellanox-fan-configuration.patch 0025-net-udp_l3mdev_accept-support.patch +0026-mlxsw-qsfp_sysfs-Support-extended-port-numbers-for-S.patch +0027-mlxsw-thermal-monitoring-amendments.patch +0028-watchdog-mlx-wdt-introduce-watchdog-driver-for-Mella.patch +0029-mlxsw-qsfp_sysfs-Support-port-numbers-initialization.patch +0030-update-kernel-config.patch +0031-mlxsw-Align-code-with-kernel-v-5.0.patch +0032-sonic-update-kernel-config-mlsxw-pci.patch +0033-hwmon-pmbus-Fix-driver-info-initialization-in-probe-.patch +0034-mlxsw-thermal-disable-highest-zone-calculation.patch 
+0035-platform-x86-mlx-platform-Add-CPLD4-register.patch # # This series applies on GIT commit 1451b36b2b0d62178e42f648d8a18131af18f7d8 # Tkernel-sched-core-fix-cgroup-fork-race.patch