## Obtain images for overcloud nodes RHEL/RHOSP Tripleo > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_installing-director-on-the-undercloud#proc_single-cpu-architecture-overcloud-images_overcloud-images Download images direct from Redhat and upload to undercloud swift API. ```sh sudo su - stack source ~/stackrc sudo dnf install -y rhosp-director-images-ipa-x86_64 rhosp-director-images-x86_64 mkdir ~/images cd ~/images for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.2.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.2.tar; do tar -xvf $i; done openstack overcloud image upload --image-path /home/stack/images/ openstack image list ll /var/lib/ironic/httpboot # look for inspector ipxe config and the kernel and initramfs files ``` ## Import bare metal nodes ### Build node definition list This is commonly refered to as the `instackenv.json` file, Redhat references this as the node definition template nodes.json. > the schema reference for this file: > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#instackenv Gather all IP addresses for the IPMI interfaces. - `.[].ports.address` is the MAC address for iPXE boot, typically eth0. - `.[].pm_addr` is the IP address of the IPMI adapter. - If the IPMI interface is shared with the eth0 control plane interface the MAC address will be used for iPXE boot. - If the IPMI interface and eth0 interface are not shared (have different MAC address) you may have a tedious task ahead of you searching through the XClarity out of band adapters or looking through the switch MAC table and then correlating the switch port to the node to enumerate the MAC address. - University nodes do share a single interface for IPMI and iPXE but the MAC addresses are different. ```sh # METHOD 1 - will not work for University SR630 servers # where the IPMI and PXE interfaces share the same MAC address ( NOTE this is not the case for the Lenovo SR630 with OCP network adapter working to bridge the XClarity/IPMI) # Scan the IPMI port of all hosts. sudo dnf install nmap -y nmap -p 623 10.122.1.0/24 # Query the arp table to return the MAC addresses of the IPMI(thus PXE) interfaces. ip neigh show dev enp6s19 # controller 10-12, 20-21 networker, 30-77 compute, (54 temporary proxmox, 55-57 temporary storage nodes - remove from compute range) #ipmitool -N 1 -R 0 -I lanplus -H 10.122.1.10 -U USERID -P Password0 lan print for i in {10..80}; do j=10.122.1.$i ; ip --json neigh show dev enp6s19 | jq -r " .[] | select(.dst==\"$j\") | \"\(.dst) \(.lladdr)\""; done | grep -v null 10.122.1.10 38:68:dd:4a:56:3c 10.122.1.11 38:68:dd:4a:55:94 10.122.1.12 38:68:dd:4a:42:4c 10.122.1.20 38:68:dd:4a:4a:34 10.122.1.21 38:68:dd:4a:52:1c 10.122.1.30 38:68:dd:4c:17:ec 10.122.1.31 38:68:dd:4c:17:b4 10.122.1.32 38:68:dd:4d:1e:84 10.122.1.33 38:68:dd:4d:0f:f4 10.122.1.34 38:68:dd:4d:26:ac 10.122.1.35 38:68:dd:4d:1b:f4 10.122.1.36 38:68:dd:4a:46:4c 10.122.1.37 38:68:dd:4d:16:7c 10.122.1.38 38:68:dd:4d:15:8c 10.122.1.39 38:68:dd:4d:1a:4c 10.122.1.40 38:68:dd:4a:75:94 10.122.1.41 38:68:dd:4d:1c:fc 10.122.1.42 38:68:dd:4d:19:0c 10.122.1.43 38:68:dd:4a:43:ec 10.122.1.44 38:68:dd:4a:41:4c 10.122.1.45 38:68:dd:4d:14:24 10.122.1.46 38:68:dd:4d:18:c4 10.122.1.47 38:68:dd:4d:18:cc 10.122.1.48 38:68:dd:4a:41:8c 10.122.1.49 38:68:dd:4c:17:8c 10.122.1.50 38:68:dd:4c:17:2c 10.122.1.51 38:68:dd:4d:1d:cc 10.122.1.52 38:68:dd:4c:17:e4 10.122.1.53 38:68:dd:4c:17:5c 10.122.1.54 38:68:dd:70:a8:e8 10.122.1.55 38:68:dd:70:a0:84 10.122.1.56 38:68:dd:70:a4:cc 10.122.1.57 38:68:dd:70:aa:cc 10.122.1.58 38:68:dd:70:a8:88 10.122.1.59 38:68:dd:70:a5:bc 10.122.1.60 38:68:dd:70:a5:54 10.122.1.61 38:68:dd:70:a2:e0 10.122.1.62 38:68:dd:70:a2:b8 10.122.1.63 38:68:dd:70:a7:10 10.122.1.64 38:68:dd:70:a2:0c 10.122.1.65 38:68:dd:70:9f:38 10.122.1.66 38:68:dd:70:a8:74 10.122.1.67 38:68:dd:70:a2:ac 10.122.1.68 38:68:dd:70:a5:18 10.122.1.69 38:68:dd:70:a7:88 10.122.1.70 38:68:dd:70:a4:d8 10.122.1.71 38:68:dd:70:a6:b0 10.122.1.72 38:68:dd:70:aa:c4 10.122.1.73 38:68:dd:70:9e:e0 10.122.1.74 38:68:dd:70:a3:40 10.122.1.75 38:68:dd:70:a2:08 10.122.1.76 38:68:dd:70:a4:a0 10.122.1.77 38:68:dd:70:a1:6c # METHOD 2 - used for University SR630 servers # where the IPMI interface and eth0 interface are not shared (or have different MAC addresses) ## install XClarity CLI mkdir onecli cd onecli curl -o lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz https://download.lenovo.com/servers/mig/2022/06/01/55726/lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz tar -xvzf lnvgy_utl_lxce_onecli02a-3.5.0_rhel_x86-64.tgz ## XClarity CLI - find the MAC of the eth0 device ### find all config items ./onecli config show all --bmc USERID:Password0@10.122.1.10 --never-check-trust --nolog ### find specific item ./onecli config show IMM.HostIPAddress1 --bmc USERID:Password0@10.122.1.10 --never-check-trust --nolog --quiet ./onecli config show IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress --never-check-trust --nolog --quiet ### find MAC address for eth0 (assuming eth0 is connected) #### for the origional SR630 University nodes for i in {10..53}; do j=10.122.1.$i ; echo $j $(sudo ./onecli config show IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress --bmc USERID:Password0@$j --never-check-trust --nolog --quiet | grep IntelREthernetConnectionX722for1GbE--OnboardLAN1PhysicalPort1LogicalPort1.MACAddress | awk -F '=' '{print $2}' | tr '[:upper:]' '[:lower:]'); done ## SR630 # controllers 10.122.1.10 38:68:dd:4a:56:38 10.122.1.11 38:68:dd:4a:55:90 10.122.1.12 38:68:dd:4a:42:48 # networkers 10.122.1.20 38:68:dd:4a:4a:30 10.122.1.21 38:68:dd:4a:52:18 # compute 10.122.1.30 38:68:dd:4c:17:e8 10.122.1.31 38:68:dd:4c:17:b0 10.122.1.32 38:68:dd:4d:1e:80 10.122.1.33 38:68:dd:4d:0f:f0 10.122.1.34 38:68:dd:4d:26:a8 10.122.1.35 38:68:dd:4d:1b:f0 10.122.1.36 38:68:dd:4a:46:48 10.122.1.37 38:68:dd:4d:16:78 10.122.1.38 38:68:dd:4d:15:88 10.122.1.39 38:68:dd:4d:1a:48 10.122.1.40 38:68:dd:4a:75:90 10.122.1.41 38:68:dd:4d:1c:f8 10.122.1.42 38:68:dd:4d:19:08 10.122.1.43 38:68:dd:4a:43:e8 10.122.1.44 38:68:dd:4a:41:48 10.122.1.45 38:68:dd:4d:14:20 10.122.1.46 38:68:dd:4d:18:c0 10.122.1.47 38:68:dd:4d:18:c8 10.122.1.48 38:68:dd:4a:41:88 10.122.1.49 38:68:dd:4c:17:88 10.122.1.50 38:68:dd:4c:17:28 10.122.1.51 38:68:dd:4d:1d:c8 10.122.1.52 38:68:dd:4c:17:e0 10.122.1.53 38:68:dd:4c:17:58 ## SR630v2 node have a different OCP network adapter for i in {54..77}; do j=10.122.1.$i ; echo $j $(sudo ./onecli config show IntelREthernetNetworkAdapterI350-T4forOCPNIC30--Slot4PhysicalPort1LogicalPort1.MACAddress --bmc USERID:Password0@$j --never-check-trust --nolog --quiet | grep IntelREthernetNetworkAdapterI350-T4forOCPNIC30--Slot4PhysicalPort1LogicalPort1.MACAddress | awk -F '=' '{print $2}' | tr '[:upper:]' '[:lower:]'); done 10.122.1.54 6c:fe:54:32:b8:60 10.122.1.55 6c:fe:54:33:4f:3c 10.122.1.56 6c:fe:54:33:55:74 10.122.1.57 6c:fe:54:33:4b:5c 10.122.1.58 6c:fe:54:33:4f:d2 10.122.1.59 6c:fe:54:33:53:ae 10.122.1.60 6c:fe:54:33:4f:7e 10.122.1.61 6c:fe:54:33:97:46 10.122.1.62 6c:fe:54:33:57:18 10.122.1.63 6c:fe:54:33:4e:fa 10.122.1.64 6c:fe:54:33:53:ea 10.122.1.65 6c:fe:54:33:4d:f8 10.122.1.66 6c:fe:54:33:4d:2c 10.122.1.67 6c:fe:54:32:e8:4e 10.122.1.68 6c:fe:54:33:55:fe 10.122.1.69 6c:fe:54:33:4b:86 10.122.1.70 6c:fe:54:33:55:56 10.122.1.71 6c:fe:54:33:4e:b2 10.122.1.72 6c:fe:54:33:57:12 10.122.1.73 6c:fe:54:33:4e:d6 10.122.1.74 6c:fe:54:33:51:98 10.122.1.75 6c:fe:54:33:4d:62 10.122.1.76 6c:fe:54:33:55:50 10.122.1.77 6c:fe:54:32:f0:2a ``` Create each node configuration in the "nodes" list `/home/stack/instackenv.json`. ```json { "nodes": [ { "ports": [ { "address": "38:68:dd:4a:42:4c", "physical_network": "ctlplane" } ], "name": "osctl0", "cpu": "4", "memory": "6144", "disk": "120", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "USERID", "pm_password": "Password0", "pm_addr": "10.122.1.10", "capabilities": "profile:baremetal,boot_option:local", "_comment": "rack - openstack - location - u5" }, { "ports": [ { "address": "38:68:dd:4a:4a:34", "physical_network": "ctlplane" } ], "name": "osnet1", "cpu": "4", "memory": "6144", "disk": "120", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "USERID", "pm_password": "Password0", "pm_addr": "10.122.1.21", "capabilities": "profile:baremetal,boot_option:local", "_comment": "rack - openstack - location - u9" }, { "ports": [ { "address": "38:68:dd:4c:17:e4", "physical_network": "ctlplane" } ], "name": "oscomp1", "cpu": "4", "memory": "6144", "disk": "120", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "USERID", "pm_password": "Password0", "pm_addr": "10.122.1.31", "capabilities": "profile:baremetal,boot_option:local", "_comment": "rack - openstack - location - u11" } ] } ``` - Do not have to include capabilities, we later add these for the overcloud deployment. - The capabilities 'profile:flavour' and 'boot_option:local' are good defaults, more capabilities will be automatically added during introspection and manually added when binding a node to a role. ## Setup RAID + Legacy BIOS boot mode > IMPORTANT: UEFI boot does work on the SR650 as expected, however it can take a very long time to cycle through the interfaces to the PXE boot interface. > On large deployments you may reach the timeout on the DHCP server entry, BIOS mode is quicker to get to the PXE rom. Use `/home/stack/instackenv.json` to start each node, login to each nodes XClarity web interface and setup a RAID1 array of the boot disks. ```sh # check nodes power state for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis status | grep ^System;done # start all nodes for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis power on ;done for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis status | grep ^System;done # get IP login to XClarity web console # configure RAID1 array on each node # set boot option from UEFI to LEGACY/BIOS boot mode for i in `jq -r .nodes[].pm_addr instackenv.json`; do echo $i ;done # stop all nodes for i in `jq -r .nodes[].pm_addr instackenv.json`; do ipmitool -N 1 -R 0 -I lanplus -H $i -U USERID -P Password0 chassis power off ;done ``` ## Import nodes into the undercloud > WARNING: the capabilities field keypair value 'node:compute-0, node:compute-1, node:compute-N' value must be contiguous, the University has a node with broken hardware 'oscomp9' that is not in the `instackenv.json` file. > WARNING: Each capability keypair 'node:\-#' must be in sequence, with oscomp9 removed from the `instackenv.json` we add the keypairs as so: `oscomp8 = computeA-8 AND oscomp10 = computeA-9`. **Notice the Univerity cluster has 2 different server hardware types, with different network interface mappings, the node capabilities (computeA-0 VS node:computeB-0) will be used in the `scheduler_hints.yaml` to bind nodes to roles, there need to be 2 roles for the compute nodes to allow each server type to have a different 'associated' network interface mapping schemes.** ```sh # load credentials source ~/stackrc # remove nodes if not first run #for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node manage $i;done #for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node delete $i;done # ping all nodes to update the arp cache #for i in `jq -r .nodes[].pm_addr instackenv.json`; do sudo ping -c 3 -W 5 $i ;done nmap -p 623 10.122.1.0/24 # import nodes openstack overcloud node import instackenv.json # set nodes to use BIOS boot mode for overcloud installation for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node set --property capabilities="boot_mode:bios,$(openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | sed "s/boot_mode:[^,]*,//g")" $i; done # set nodes for baremetal profile for the schedule_hints.yaml to select the nodes as candidates for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node set --property capabilities="profile:baremetal,$(openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities | sed "s/profile:baremetal[^,]*,//g")" $i; done ## where some nodes cannot deploy # oscomp4, oscomp7 have been removed from the instackenv.json owing to network card issues # owing to the way we are setting the node capability using a loop index we will see that the oscomp8 will be named in openstack as computeA-6 # # openstack baremetal node show oscomp8 -f json -c properties | jq .properties.capabilities # "node:computeA-6,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal" # # if you do not have a full compliment of nodes ensure templates/scheduler_hints_env.yaml has the correct amount of nodes, in this case 22 computeA nodes # ControllerCount: 3 # NetworkerCount: 2 # #2 nodes removed owing to network card issues # #ComputeACount: 24 # ComputeACount: 22 # ComputeBCount: 24 # set 'node:name' capability to allow scheduler_hints.yaml to match roles to nodes ## set capability for controller and networker nodes openstack baremetal node set --property capabilities="node:controller-0,$(openstack baremetal node show osctl0 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl0 ;\ openstack baremetal node set --property capabilities="node:controller-1,$(openstack baremetal node show osctl1 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl1 ;\ openstack baremetal node set --property capabilities="node:controller-2,$(openstack baremetal node show osctl2 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osctl2 ;\ openstack baremetal node set --property capabilities="node:networker-0,$(openstack baremetal node show osnet0 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osnet0 ;\ openstack baremetal node set --property capabilities="node:networker-1,$(openstack baremetal node show osnet1 -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" osnet1 ## capability for compute nodes index=0 ; for i in {0..23}; do openstack baremetal node set --property capabilities="node:computeA-$index,$(openstack baremetal node show oscomp$i -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" oscomp$i && index=$((index + 1)) ;done ## capability for *NEW* compute nodes (oscomp-24..27 are being used for temporary proxmox and ceph thus removed from the instackenv.json) - CHECK index=0 ; for i in {24..47}; do openstack baremetal node set --property capabilities="node:computeB-$index,$(openstack baremetal node show oscomp$i -f json -c properties | jq -r .properties.capabilities | sed "s/node:[^,]*,//g")" oscomp$i && index=$((index + 1)) ;done # check capabilities are set for all nodes #for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do echo $i && openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities; done for i in `openstack baremetal node list -f json | jq -r .[].Name` ; do openstack baremetal node show $i -f json -c properties | jq -r .properties.capabilities; done # output, notice the order of the nodes #node:controller-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:controller-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:controller-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:networker-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:networker-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeA-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeA-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeA-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeA-3,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #... #node:computeB-0,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeB-1,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeB-2,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #node:computeB-3,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal #... #node:computeB-23,profile:baremetal,boot_mode:bios,boot_option:local,profile:baremetal # all in one command for inspection and provisioning #openstack overcloud node introspect --all-manageable --provide # inspect all nodes hardware for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node inspect $i;done # if a node fails inspection openstack baremetal node maintenance unset oscomp9 openstack baremetal node manage oscomp9 openstack baremetal node power off oscomp9 # wait for node to power off openstack baremetal node inspect oscomp9 # wait until all nodes are in a 'managable' state to continue, this may take around 15 minutes openstack baremetal node list # set nodes to provide state and invokes node cleaning (uses the overcloud image) for i in `openstack baremetal node list -f json | jq -r ' .[] | select(."Provisioning State" == "manageable") | .Name'`; do openstack baremetal node provide $i;done # if a node fails provision openstack baremetal node maintenance unset osnet1 openstack baremetal node manage osnet1 openstack baremetal node provide osnet1 # wait until all nodes are in an 'available' state to deploy the overcloud baremetal node list # set all nodes back to 'manage' state to rerun introspection/provide # for i in `openstack baremetal node list -f json | jq -r .[].Name`; do openstack baremetal node manage $i;done ``` ## Checking networking via inspection data Once the node inspections complete, we can check the list of network adapters in a chassis to assist with the network configuration in the deployment configuration files. ```sh # load credentials source ~/stackrc # find the UUID of a sample node openstack baremetal node list -f json | jq . # check collected metadata, commands will show all interfaces and if they have carrier signal #openstack baremetal node show f409dad9-1c1e-4ca0-b8af-7eab1b7f878d -f json | jq -r . #openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq .inventory.interfaces #openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq .all_interfaces #openstack baremetal introspection data save f409dad9-1c1e-4ca0-b8af-7eab1b7f878d | jq '.all_interfaces | keys[]' # origional server hardware SR630 (faedafa5-5fa4-432e-b3aa-85f7f30f10fb | oscomp23) (undercloud) [stack@undercloud ~]$ openstack baremetal introspection data save faedafa5-5fa4-432e-b3aa-85f7f30f10fb | jq '.all_interfaces | keys[]' "eno1" "eno2" "eno3" "eno4" "enp0s20f0u1u6" "ens2f0" "ens2f1" # new server hardware SR630v2 (b239f8b7-3b97-47f8-a057-4542ca6c7ab7 | oscomp28) (undercloud) [stack@undercloud ~]$ openstack baremetal introspection data save b239f8b7-3b97-47f8-a057-4542ca6c7ab7 | jq '.all_interfaces | keys[]' "enp0s20f0u1u6" "ens2f0" "ens2f1" "ens4f0" "ens4f1" "ens4f2" "ens4f3" ``` Interfaces are shown in the order that they are seen on the PCI bus, modern linux OS' have an interafce naming scheme triggered by udev. This naming scheme is often described as: - Predictable Network Interface Names - Consistent Network Device Naming - Persistent names (https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/) ```sh # example interface naming scheme enp0s10: | | | v | | --> virtual (qemu) en| | --> ethernet v | p0| --> bus number (0) v s10 --> slot number (10) f0 --> function (multiport card) ``` Openstack adopts a interface mapping scheme to help identify the network interfaces by the notation by notation 'nic1, nic2, nicN'. Only interfaces with a carrier signal (connected to switch) will be participate in the interface mapping scheme. For the University nodes we the following Openstack mapping scheme is created. Server classA: | mapping | interface | purpose | | --- | --- | --- | | nic1 | eno1 | Control Plane | | nic2 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | | nic3 | ens2f0 | LACP bond, guest/storage | | nic4 | ens2f1 | LACP bond, guest/storage | Server classB: | mapping | interface | purpose | | --- | --- | --- | | nic1 | enp0s20f0u1u6 | USB ethernet, likely from the XClarity controller | | nic2 | ens2f0 | Control Plane | | nic3 | ens2f1 | LACP bond, guest/storage | | nic4 | ens4f0 | LACP bond, guest/storage | The 'Server classA' nodes will be used for roles 'controller', 'networker' and 'compute'. the Server classB' hardware will be used for roles 'compute'. The mapping 'nic1' is not consistent for 'Control Plane' network across both classes of server hardware, necessitating multiple roles (thus multiple network interface templates) for the compute nodes. You may notice some LLDP information (Cumulus switch must be running the LLDP service), this is very helpful to determine the switch port that the network interface is connected to and verify your point-to-point list. Owing to the name of the switch we can quickly see this is the 100G cumulus switch. ``` "ens2f0": { "ip": "fe80::d57c:2432:d78d:e15d", "mac": "10:70:fd:24:62:e0", "client_id": null, "pxe": false, "lldp_processed": { "switch_chassis_id": "b8:ce:f6:18:c3:4a", "switch_port_id": "swp9s0", "switch_system_name": "sw100g0", "switch_system_description": "Cumulus Linux version 4.2.0 running on Mellanox Technologies Ltd. MSN3700C", "switch_capabilities_support": [ "Bridge", "Router" ], "switch_capabilities_enabled": [ "Bridge", "Router" ], "switch_mgmt_addresses": [ "172.31.31.11", "fe80::bace:f6ff:fe18:c34a" ], "switch_port_description": "swp9s0", "switch_port_link_aggregation_enabled": false, "switch_port_link_aggregation_support": true, "switch_port_link_aggregation_id": 0, "switch_port_autonegotiation_enabled": true, "switch_port_autonegotiation_support": true, "switch_port_physical_capabilities": [ "1000BASE-T fdx", "PAUSE fdx" ], "switch_port_mau_type": "Unknown" } }, ```