Bonding configuration with Netplan

- I have renamed ens3 to mgmt as it is used for management.
- I have used routing tables because the server is connected to different overlays for testing. Each VLAN interface has its own default route towards its gateway, and I do not want to use namespaces.
- I was using this setup in GNS3 with Ubuntu Bionic 18.04; you may need to change the interface type to e1000 if the bond does not come up. You should see the duplex and speed settings on the interfaces (see the verification commands after the configuration below).


# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    ethernets:
        ens3:
            dhcp4: false
            match:
                macaddress: 0c:5d:18:69:00:00
            set-name: mgmt
            addresses:
              - 192.168.122.101/24
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]
            routes:
              - to: 0.0.0.0/0
                via: 192.168.122.1
                table: 99
            routing-policy:
              - from: 192.168.122.0/24
                table: 99
        ens4:
            dhcp4: false
        ens5:
            dhcp4: false
    bonds:
        bond0:
            dhcp4: false
            interfaces:
               - ens4
               - ens5
            parameters:
                mode: 802.3ad
                mii-monitor-interval: 100
    vlans:
        bond0.10:
            dhcp4: false
            addresses: [10.1.10.101/24]
            gateway4: 10.1.10.1
            id: 10
            link: bond0
            routes:
              - to: 0.0.0.0/0
                via: 10.1.10.1
                table: 10
            routing-policy:
              - from: 10.1.10.0/24
                table: 10
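Once the file is in place, a few commands I use to verify the bond and the policy routing. This is a quick-check sketch; interface and table names are the ones from the configuration above, and ethtool may need to be installed separately:

sudo netplan apply
cat /proc/net/bonding/bond0     # bond mode, LACP partner, speed/duplex of each member
ethtool ens4                    # speed/duplex of a single member link
ip rule show                    # from-prefix lookups into tables 99 and 10
ip route show table 10          # default route of the VLAN interface
ip route show table 99          # default route of the mgmt interface
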
You may also use Netplan and ifupdown together with namespaces while testing different networks via the same server.
- Edit the Netplan configuration:
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    ethernets:
        ens3:
            dhcp4: false
            match:
                macaddress: 0c:5d:18:69:00:00
            set-name: mgmt
            addresses:
              - 192.168.122.101/24
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]
            gateway4: 192.168.122.1
            routes:
              - to: 0.0.0.0/0
                via: 192.168.122.1
                table: 99
            routing-policy:
              - from: 192.168.122.0/24
                table: 99
        ens4:
            dhcp4: false
        ens5:
            dhcp4: false
    bonds:
        bond0:
            dhcp4: false
            interfaces: [ens4,ens5]
            parameters:
                mode: 802.3ad
    vlans:
        bond0.10:
            dhcp4: false
            id: 10
            link: bond0
        bond0.11:
            dhcp4: false
            id: 11
            link: bond0
        bond0.20:
            dhcp4: false
            id: 20
            link: bond0
        bond0.21:
            dhcp4: false
            id: 21
            link: bond0
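Before moving on, it is worth checking that the bond and the four VLAN subinterfaces exist in the default namespace, since the hook below will move them out of it. A quick check sketch (apply the file now, or simply reboot later as in the steps below):

sudo netplan apply
ip link show type vlan          # bond0.10, bond0.11, bond0.20 and bond0.21 should be listed
ip -d link show bond0.10        # shows the 802.1Q id and the bond0 parent
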
- sudo apt-get update
- sudo apt-get install ifupdown
- create your namespaces:
more /etc/network/if-up.d/namespaces


#!/bin/sh
# Create one namespace per VLAN, move the VLAN subinterface into it,
# then bring it up and add its address and default route inside the namespace.

ip netns add zone_x_vl10
ip link set dev bond0.10 netns zone_x_vl10
ip netns exec zone_x_vl10 ip link set dev bond0.10 up
ip netns exec zone_x_vl10 ip addr add 10.1.10.101/24 dev bond0.10
ip netns exec zone_x_vl10 ip route add 0.0.0.0/0 via 10.1.10.1 dev bond0.10

ip netns add zone_x_vl11
ip link set dev bond0.11 netns zone_x_vl11
ip netns exec zone_x_vl11 ip link set dev bond0.11 up
ip netns exec zone_x_vl11 ip addr add 10.1.11.101/24 dev bond0.11
ip netns exec zone_x_vl11 ip route add 0.0.0.0/0 via 10.1.11.1 dev bond0.11

ip netns add zone_y_vl20
ip link set dev bond0.20 netns zone_y_vl20
ip netns exec zone_y_vl20 ip link set dev bond0.20 up
ip netns exec zone_y_vl20 ip addr add 10.1.20.101/24 dev bond0.20
ip netns exec zone_y_vl20 ip route add 0.0.0.0/0 via 10.1.20.1 dev bond0.20

ip netns add zone_y_vl21
ip link set dev bond0.21 netns zone_y_vl21
ip netns exec zone_y_vl21 ip link set dev bond0.21 up
ip netns exec zone_y_vl21 ip addr add 10.1.21.101/24 dev bond0.21
ip netns exec zone_y_vl21 ip route add 0.0.0.0/0 via 10.1.21.1 dev bond0.21
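
One caveat: if-up.d scripts can be run more than once (ifupdown invokes them for each interface it brings up, with the interface name in the IFACE variable), and ip netns add fails when the namespace already exists. A minimal idempotency guard sketch that could go right after the shebang:

# skip silently if the namespaces were already created on an earlier run
ip netns list | grep -q zone_x_vl10 && exit 0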

- make it executable
sudo chmod +x /etc/network/if-up.d/namespaces

- simply reboot the machine
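After the reboot, each namespace should hold its VLAN interface, address and default route. A quick verification sketch, using the names from the script above:

ip netns list
sudo ip netns exec zone_x_vl10 ip addr show bond0.10
sudo ip netns exec zone_x_vl10 ip route
sudo ip netns exec zone_x_vl10 ping -c 3 10.1.10.1   # gateway of VLAN 10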

ESI Multihoming with an eBGP-only Underlay on Nexus

If you are not careful, you may miss the prerequisite below while configuring ESI multihoming on Nexus switches:

If eBGP is used with VXLAN EVPN multi-homing, the administrative distance for locally learned endpoints must be lower than the value of eBGP. The administrative distance can be changed by entering the fabric forwarding admin-distance <distance> command.

My switches are connected to the server (SRV), with the SVI configured on both of them.

Switch-side configuration, which is the same on both of them:

interface port-channel9
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  ethernet-segment 9
    system-mac 0000.0000.2011
  spanning-tree port type edge

interface Vlan100
  no shutdown
  vrf member FW_ZONE_X
  ip address 10.1.100.0/31
  fabric forwarding mode anycast-gateway

When you look at the IP routing table:

S1-BL2# sh ip route 10.1.100.1 vrf FW_ZONE_X
IP Route Table for VRF "FW_ZONE_X"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.1.100.1/32, ubest/mbest: 1/0
*via 192.168.1.98%default, [20/0], 1d11h, bgp-61099, external, tag 61201, segid: 100100 tunnelid: 0xc0a80162 encap: VXLAN

via 10.1.100.1, Vlan100, [190/0], 00:08:55, hmm

This causes the traffic for 10.1.100.1 to loop between the two switches, as each has a route pointing to the other. Thus, after lowering the admin distance of the locally learned routes:

fabric forwarding admin-distance 19
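
Note that this is a global configuration command and, since the loop involves both switches, it should be applied on both of them. A sketch with a generic prompt, as only S1-BL2 appears in the outputs here:

switch(config)# fabric forwarding admin-distance 19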

S1-BL2# sh ip route 10.1.100.1 vrf FW_ZONE_X
IP Route Table for VRF "FW_ZONE_X"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.1.100.1/32, ubest/mbest: 1/0, attached
*via 10.1.100.1, Vlan100, [19/0], 00:00:26, hmm
via 192.168.1.98%default, [20/0], 1d11h, bgp-61099, external, tag 61201, segid: 100100 tunnelid: 0xc0a80162 encap: VXLAN