EVPN-VXLAN on (v)QFX-Series Devices

What could be more refreshing than setting up a nice little EVPN-VXLAN fabric on your vQFX just for fun?
This blog post will show you how to do it and break down the important parts.

We will be looking at the following topology (designed in EVE-NG) and implementing an EVPN-VXLAN spine-and-leaf configuration, so that our virtual servers named Win and Winserver are able to communicate with each other. On top of that, we will configure Winserver for multihoming:

EVPN-VXLAN Topology on EVE-NG

In case you already know EVPN and just want to take a look at the sample config, you can jump to the end of this blog post, where I post the full configuration for the setup (excluding Windows Server NIC teaming with LACP). If you want to know how to use NIC teaming on Windows Server 2016, simply search for it – there are tons of instructions (with and without LACP); this setup uses one with LACP.

Step 1: Create the Topology

Before implementing EVPN-VXLAN you should think carefully about your setup. I’ve seen a lot of setups where the spine-and-leaf topology is placed suboptimally and causes trouble later on. Therefore I advise you to think carefully about the topology itself and also about the surroundings you might need (like route reflectors or VC fabrics).

Once you have that out of the way, it’s time to sit down, build the setup and prepare the connections needed. For my tests I usually use EVE-NG, because it makes it very easy to “patch” all the cords on the fly (with the Pro version even while the devices are powered on). I also advise you to always run a PoC for a setup like this to avoid unnecessary surprises when implementing it – there are many things you will see differently after the PoC, and that helps you find the optimal setup for your company.


We start with the connections from Spine-1 to all four leaf devices:

set interfaces xe-0/0/0 unit 0 description "to Leaf 1"
set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.100/24
set interfaces xe-0/0/2 unit 0 description "to Leaf 2"
set interfaces xe-0/0/2 unit 0 family inet address 172.16.3.100/24
set interfaces xe-0/0/4 unit 0 description "to Leaf 3"
set interfaces xe-0/0/4 unit 0 family inet address 172.16.5.100/24
set interfaces xe-0/0/6 unit 0 description "to Leaf 4"
set interfaces xe-0/0/6 unit 0 family inet address 172.16.7.100/24
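The leaf side mirrors this with the matching host address in each transfer network. As a sketch (the leaf-side interface numbers and host addresses are not shown in the screenshots, so these are assumptions), Leaf 1 could look like:

set interfaces xe-0/0/0 unit 0 description "to Spine 1"
set interfaces xe-0/0/0 unit 0 family inet address 172.16.1.1/24
set interfaces xe-0/0/1 unit 0 description "to Spine 2"
set interfaces xe-0/0/1 unit 0 family inet address 172.16.2.1/24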


You can change the address scheme as you like – I personally use 172.16.0.0/16 quite often for labs and setups where I need private IPv4 addresses.
You should also configure lo0 addresses for management and, later, for identification of your local device:

set interfaces lo0 unit 0 family inet address 172.16.50.1/32

As the last part of this step, you can already configure the xe/ae interface towards your server(s) and equip it with an ESI:

set interfaces xe-0/0/8 description "to Server"
set interfaces xe-0/0/8 ether-options 802.3ad ae0
set interfaces ae0 encapsulation ethernet-bridge
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
set interfaces ae0 unit 0 family ethernet-switching vlan members vlan10
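For all-active multihoming, the second leaf connecting to Winserver needs a matching configuration: the ESI and the LACP system-id must be identical on both leaves so that the server sees a single LACP partner. A sketch for the second leaf (the interface number is an assumption):

set interfaces xe-0/0/8 description "to Server"
set interfaces xe-0/0/8 ether-options 802.3ad ae0
set interfaces ae0 encapsulation ethernet-bridge
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:01:01:01
set interfaces ae0 unit 0 family ethernet-switching vlan members vlan10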


Step 2: Create the Underlay

Next, you should define some local system settings for your underlay network:

set routing-options router-id 172.16.50.1
set routing-options autonomous-system 65500


Your underlay is vital for your overlay – obvious, right? But this part can be tricky, especially with route reflectors. In our case we will keep it simple by using plain eBGP.
Why eBGP? Because with eBGP you can scale out your EVPN to infinity and beyond 😉
You could, of course, use OSPF for your underlay network, but the drawback is scaling – most customers choose EVPN precisely because VC, VCF, Junos Fusion and so on do not scale out the way EVPN does.
And when comparing OSPF vs. BGP in terms of scale – well, you already know who wins, right?
So we start by creating a group called “underlay”. I personally advise you to use a name that fits the purpose. It doesn’t help to call your underlay group G884F6S2 or similar, because in a week or two nobody will remember what you meant by it. That may be different if you manage a lot of devices (maybe because you are a systems integrator for your customer) and keep clean documentation:

set protocols bgp group underlay type external
set protocols bgp group underlay description "to Spines 1/2"
set protocols bgp group underlay export directs
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay neighbor 172.16.3.100 peer-as 65500
set protocols bgp group underlay neighbor 172.16.4.100 peer-as 65600


You should also create a policy to export your directly connected networks into the underlay, so that you have full-mesh connectivity and your loopback addresses get redistributed into your BGP underlay:

set policy-options policy-statement directs term 1 from protocol direct
set policy-options policy-statement directs term 1 then accept
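If you want to keep the underlay routing table small, you can optionally tighten this policy so that only the /32 loopbacks are exported instead of every directly connected network. A variant, assuming all loopbacks live inside 172.16.0.0/16 (the transfer networks are directly connected between neighbors anyway, and the loopbacks are what the VTEPs need):

set policy-options policy-statement directs term 1 from protocol direct
set policy-options policy-statement directs term 1 from route-filter 172.16.0.0/16 prefix-length-range /32-/32
set policy-options policy-statement directs term 1 then accept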


After doing this on all spines and leaves (of course with different AS numbers and IPs), you should have a nice BGP fabric.

BGP Topology

Time for the next step – the overlay.

Step 3: Create the EVPN-VXLAN Overlay

Now it’s time for the fun part – the EVPN-VXLAN overlay.
Start by adding an overlay group for your MP-iBGP sessions between the leaf devices. Because this iBGP session carries the EVPN address family, it is often referred to as MP-BGP or MP-iBGP (Multiprotocol BGP):

set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 172.16.20.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay local-as 65700
set protocols bgp group overlay multipath
set protocols bgp group overlay neighbor 172.16.10.1
set protocols bgp group overlay neighbor 172.16.30.1
set protocols bgp group overlay neighbor 172.16.40.1
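Each leaf gets the same overlay group, only with its own loopback as local-address and the other three leaf loopbacks as neighbors. A sketch for the leaf with loopback 172.16.10.1 (taken from the neighbor list above):

set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 172.16.10.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay local-as 65700
set protocols bgp group overlay multipath
set protocols bgp group overlay neighbor 172.16.20.1
set protocols bgp group overlay neighbor 172.16.30.1
set protocols bgp group overlay neighbor 172.16.40.1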


Now it’s time for your loopback address to really shine.
Specify the loopback interface as the source address for the VTEP tunnel, and also specify a route distinguisher to uniquely identify routes sent from this device:

set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 172.16.20.1:1

Doesn’t look scary if you break it down, right?
The key in complex setups is to break the config down to smaller parts.
This way, you can solve almost any problem.

Next, you specify the VRF import and export policy and add your EVPN protocol options regarding VNIs and the multicast mode:

set switch-options vrf-import LEAF-IN
set switch-options vrf-target target:9999:9999

set protocols evpn vni-options vni 10 vrf-target export target:1:10
set protocols evpn encapsulation vxlan
set protocols evpn multicast-mode ingress-replication
set protocols evpn extended-vni-list 10


Follow up with the VRF import policy to accept EVPN routes advertised from your other leaf devices:

set policy-options policy-statement LEAF-IN term import_leaf_esi from community comm-leaf_esi
set policy-options policy-statement LEAF-IN term import_leaf_esi then accept
set policy-options policy-statement LEAF-IN term import_vni10 from community com10
set policy-options policy-statement LEAF-IN term import_vni10 then accept


We also set the community targets and configure some load balancing:

set policy-options community com10 members target:1:10
set policy-options community comm-leaf_esi members target:9999:9999

set policy-options policy-statement loadbalance then load-balance per-packet
set routing-options forwarding-table export loadbalance


Finally, we define a server-facing VLAN (in our example, VLAN 10) and equip it with a VNI:

set vlans vlan10 vlan-id 10
set vlans vlan10 vxlan vni 10
set vlans vlan10 vxlan ingress-node-replication
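If you later need more VLANs/VNIs, the same pattern simply repeats across the EVPN options, the communities, the LEAF-IN policy and the VLAN definition. A sketch for a hypothetical vlan20 with VNI 20 (names and numbers are assumptions, not part of this lab):

set protocols evpn extended-vni-list 20
set protocols evpn vni-options vni 20 vrf-target export target:1:20
set policy-options community com20 members target:1:20
set policy-options policy-statement LEAF-IN term import_vni20 from community com20
set policy-options policy-statement LEAF-IN term import_vni20 then accept
set vlans vlan20 vlan-id 20
set vlans vlan20 vxlan vni 20
set vlans vlan20 vxlan ingress-node-replication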

Step 4: Add your Clients and verify the Setup

Congrats – your EVPN is just a commit away. Do it – this part is about what happens “after the BANG”. At this point your EVPN should be up and running – YAY. But what now? How can we check what the EVPN does for us? Let’s get to it.
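The screenshots below were taken with the usual Junos verification commands; on a (v)QFX you would typically run:

show bgp summary
show evpn database
show evpn instance extensive
show ethernet-switching table
show interfaces vtep
show route table bgp.evpn.0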

show bgp summary - EVPN-VXLAN

As you can see in the given topology, our BGP receives routes from the underlay, and our bgp.evpn table gets entries from our overlay. So far so good. But what about our EVPN database?

EVPN Database

Sweet – our two hosts are already in the EVPN database, and as you can see, one is multihomed and one is single-homed. You can tell immediately because the active source differs: the multihomed server is sourced from an ESI (hence the ESI as source), so effectively from multiple leaf devices, while the single-homed device is sourced from the leaf with the loopback address 172.16.40.1 (Leaf 4).

Ethernet Switching Table - EVPN

Both of them are also added to each leaf device’s local Ethernet switching table. So regardless of which leaf you connect the devices to, they will have full Layer 2 reachability across your EVPN-VXLAN – AWESOME! You can simply test this by pinging from one device to another 😉

Now imagine each leaf device resides in a different DC or building – you no longer need to worry about DC stretching; all you need is a solid underlay to build your infrastructure on. With Contrail, managing your EVPN-VXLAN is even more convenient – but that is a topic for a later blog post.

I also tried to add it to JunOS Space, because (thanks to EVE-NG) you can link your lab to the real world and discover all the devices into your JunOS Space. I was actually impressed that the vQFX can be added to Space (just discover the devices with ping only and add SNMP later, otherwise the discovery will fail because of the SNMP string used by the vQFX).

With the “IP Connectivity” view, however, Space seems to be a bit drunk – but since I only wanted to see whether it could roughly manage my EVPN, I would say: not at this point 😀

Hopefully you now have less fear when someone mentions EVPN-VXLAN.
And for those who came here just to snag the config to play with it, here it is 😉

7 thoughts on “EVPN-VXLAN on (v)QFX-Series Devices”

    christianscholz (post author)

      Hi togi,
      for the vMX you need 2 GB of RAM for the VCP and 4 GB of RAM for the VFP (so 6 GB per vMX).

    christianscholz (post author)

      Hi Tony,
      the template for JunOS Space is already defined in EVE-NG. Just download the qcow2 (KVM) image from Juniper, upload it to your EVE-NG and start using it. I prefer version 18.4 or newer, since you already have the Juniper app store on Space to download ND and SD.

      Tony

        Hi Chris,

        thank you for your reply. I meant how to access JunOS Space – I’ve already booted it up. Do we use VNC or SSH to access it? Thank you.

        Tony

        christianscholz (post author)

          The “Console” is VNC; after the initial config you can access it via SSH.
