The final Countdown for the JNCIE-DC

Just 38 days of labbing until JNCIE-DC exam day arrives.
The exam is already booked and I am excited to do this (much more than before my first JNCIE, because I already know what awaits me in terms of the trip to Amsterdam, the exam room and so on) – and I feel far more confident about the topics (EVPN really kicks ass) 😉

I would love to share some EVE-NG topologies – however, I am only working with the official inetZero superlabs this time and therefore can’t legally share anything.

After the exam I will try to develop some training material for the SEC and DC exams to prep you for the final day – in the meantime –> stay tuned 😉

Disable IPv6 Router-Advertisements on Windows Server 2012 / 2016

Lately I have done a huge amount of IPv6 setups, and I noticed something in vCenter: all the boxes with static IPs still had two IPv6 addresses (one static and one learned via router advertisements).

Since I didn’t want them to use the address they got via RA, and disabling RAs on the router was not an option, I googled a bit and found this:
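Presumably a command along these lines (run from an elevated prompt; the interface name "Ethernet" is just an example – check yours with `netsh interface ipv6 show interfaces`):

```powershell
# Disable processing of IPv6 Router Advertisements on the interface
Set-NetIPInterface -InterfaceAlias "Ethernet" -AddressFamily IPv6 -RouterDiscovery Disabled

# Classic netsh equivalent:
# netsh interface ipv6 set interface "Ethernet" routerdiscovery=disabled
```

After this the SLAAC-learned address disappears and only the static one remains.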

Tadaa – only my static IP is left 😉
Maybe this does not impact anything, but it still feels wrong to me that a statically configured IPv6 host gains a second address from the same subnet…

Maybe this will help you on your way to IPv6 – if so, please leave a comment.

On Linux (RHEL/CentOS) you would simply put this into your /etc/sysconfig/network:
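Presumably something like the following (the per-interface sysctl alternative is my addition, and eth0 is a placeholder):

```
# /etc/sysconfig/network – disable SLAAC globally
NETWORKING_IPV6=yes
IPV6_AUTOCONF=no

# alternatively, per interface via sysctl:
# net.ipv6.conf.eth0.accept_ra = 0
```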





NAT64 with vSRX 15.1X49-D120

Yesterday, as part of my JNCIE-SEC training, I reviewed NAT64 with the following topology:


I pinged from Win to Winserver, with traffic going over Gemini (vSRX 15.1X49-D120), Pisces (vMX 17.3R1-S1.6), Pyxis (vMX 17.3R1-S1.6) and Virgo (vSRX 15.1X49-D120).

So far everything seems to run fine – sometimes a single ping gets dropped, but with 1% loss that is okay for me:

For me, D120 runs stable, and so far I have not experienced any problems.
Below I pasted the configs in case anyone wants to recreate this lab:
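As a rough sketch, the NAT64-relevant part on the ingress vSRX looks something like this – zone names, interface names and addresses (including the 64:ff9b::/96 well-known prefix) are my assumptions, not necessarily the original lab values:

```
# Static destination NAT: v6 clients reach the v4 server via a mapped v6 address
set security nat static rule-set NAT64 from zone v6-side
set security nat static rule-set NAT64 rule DST64 match destination-address 64:ff9b::c000:201/128
set security nat static rule-set NAT64 rule DST64 then static-nat prefix 192.0.2.1/32

# Source NAT: translate the v6 source into a v4 address on the way out
set security nat source pool V4-POOL address 192.0.2.100/32
set security nat source rule-set SRC64 from zone v6-side
set security nat source rule-set SRC64 to zone v4-side
set security nat source rule-set SRC64 rule SRC1 match source-address 2001:db8::/64
set security nat source rule-set SRC64 rule SRC1 then source-nat pool V4-POOL

# Proxy-NDP so the SRX answers neighbor solicitations for the mapped v6 address
set security nat proxy-ndp interface ge-0/0/0.0 address 64:ff9b::c000:201/128
```

Plus the usual zones and policies permitting the traffic, of course.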









NAT64 – Practical Example

On my way to the JNCIE, NAT64 is also a topic – below you will find a working example of how I achieved this. Comments are welcome 🙂

Site 1 (running 15.1 code)


Site 2 (running 17.3 code)

Hope this helps you all.


Today I experimented a bit with NAT64 / NAT46.
The setup to test this is relatively simple:

I took two Windows Servers (2008 R2), one IPv4-only and one IPv6-only.
The two SRXes are dual-stack capable and have an IPv4 transfer subnet between them.

Test scenario: Serverv4 pings Serverv6’s “v4 address”, which is actually a proxy address on the SRX, and gets a reply – without the v6 server needing an IPv4 address. The reverse works the same way: the v6 server pings a v6 address, but in the background it is the v4-only server that replies. This worked very nicely, and I hope to see more of it at customer sites in the future – and I bet I will, since IPv6 is gaining more and more traction here in Germany 😉
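The NAT46 direction can be sketched like this – a simplified example with assumed zones, interfaces and addresses, not the exact lab config:

```
# The v4 server reaches the v6 server via a "proxy" v4 address on the SRX
set security nat static rule-set NAT46 from zone v4-side
set security nat static rule-set NAT46 rule DST46 match destination-address 10.0.0.100/32
set security nat static rule-set NAT46 rule DST46 then static-nat prefix 2001:db8::10/128

# Translate the v4 source into a v6 address so the v6-only server can answer
set security nat source pool V6-POOL address 2001:db8::100/128
set security nat source rule-set SRC46 from zone v4-side
set security nat source rule-set SRC46 to zone v6-side
set security nat source rule-set SRC46 rule SRC1 match source-address 10.0.0.0/24
set security nat source rule-set SRC46 rule SRC1 then source-nat pool V6-POOL

# Answer ARP for the proxy address so the v4 side can reach it
set security nat proxy-arp interface ge-0/0/1.0 address 10.0.0.100/32
```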

EVE-NG and the vQFX

Just wanted to give you a short update on my attempt to run the vQFX on the latest EVE-NG.

Here is how I managed to run it:

1.) Connect to your EVE-NG server via SSH and create two folders following the EVE-NG naming scheme (important – otherwise your vQFX will not be recognized!)

2.) Copy your vmdk images to the EVE-NG server via SCP or SFTP (I used /tmp as the directory)

3.) Convert your hard disks:

4.) Run the script that fixes the file permissions:
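Condensed, steps 1–4 look roughly like this on the EVE-NG server – the image file names and the 15.1X53-D60 version string are from my setup, so adjust them to whatever you downloaded:

```
# 1.) folders following the EVE-NG naming scheme (vqfxre-... / vqfxpfe-...)
mkdir -p /opt/unetlab/addons/qemu/vqfxre-15.1X53-D60
mkdir -p /opt/unetlab/addons/qemu/vqfxpfe-15.1X53-D60

# 3.) convert the vmdk images to qcow2 – the disk file must be called hda.qcow2
/opt/qemu/bin/qemu-img convert -f vmdk -O qcow2 /tmp/vqfx10k-re-15.1X53-D60.vmdk \
    /opt/unetlab/addons/qemu/vqfxre-15.1X53-D60/hda.qcow2
/opt/qemu/bin/qemu-img convert -f vmdk -O qcow2 /tmp/vqfx10k-pfe-20160609-2.vmdk \
    /opt/unetlab/addons/qemu/vqfxpfe-15.1X53-D60/hda.qcow2

# 4.) fix the file permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
```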

5.) Go to your EVE-NG web interface and create two nodes inside your lab.
Leave settings such as CPU and RAM at their defaults.

6.) Enjoy your vQFX 10k on EVE-NG 🙂

vQFX 10k Testlab on ESX 6.0 / 6.5

Currently the vQFX is not officially supported on ESX at all, let alone on ESX 6.0 / 6.5.
My goal is always to run the latest versions, so all the tutorials for ESX 5.5 are of no use to me.

Here are the steps to make the vQFX run on ESX 6.0 / 6.5:

1.) Download the vmdk images from Juniper (RE + PFE)

2.) Upload both files into your datastore

3.) Convert the vmdk images:
vmkfstools -i vqfx10k-re-15.1X53-D60.vmdk vqfx10kRE.vmdk -d thin
vmkfstools -i vqfx10k-pfe-20160609-2.vmdk vqfx10kPFE.vmdk -d thin

4.) Create a new vSwitch for inter-chassis communication between the PFE and RE, with promiscuous mode enabled and an MTU of 9000 (jumbo frames)

5.) Create the necessary VMs:

RE VM:
1 CPU – 2 cores
OS: FreeBSD (64-bit)
Adapter: BusLogic – ignore the “not recommended” warning
Disk: vqfx10kRE.vmdk
Add at least 2 NICs:
1st NIC (E1000) – OOB management
2nd NIC (E1000) – inter-chassis communication between PFE and RE
3rd to 10th NIC (E1000) – data links

PFE VM:
1 CPU – 1 core
OS: FreeBSD (64-bit)
Adapter: BusLogic – ignore the “not recommended” warning
Disk: vqfx10kPFE.vmdk
1st NIC (E1000) – OOB management
2nd NIC (E1000) – inter-chassis communication between PFE and RE

6.) Run both VMs

login: root
password: Juniper

Enter the “cli” and configure em0 for OOB management.
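For example (the management subnet and address are placeholders from my lab, not a requirement):

```
configure
set interfaces em0 unit 0 family inet address 192.168.1.50/24
commit and-quit
```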


7.) Enjoy – repeat the steps above for as many switches as you want 🙂


Edit on 16.02.2017:

I want to thank Alexander Marhold for providing a script that sets the correct MAC addresses on the corresponding interfaces.

I had written a procedure for the vMX and adapted it for the vQFX; it does this automatically on each commit.


The script sets the correct MAC address on every configured xe interface (taken from the corresponding em interface, offset by +3).

  • The MAC address is visible under “Current address” in show interfaces.
  • If a MAC address is already set in the configuration, it is overwritten with the correct one.
  • If the interface belongs to an AE bundle, no MAC address is set, as the MAC address is provided by the AE interface.
  • If the configuration contains an interface without a corresponding em interface, the script signals an error on commit.
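As an illustration only (this is not Alexander’s original script), an untested, simplified sketch of a transient-change commit script along those lines could look like this:

```
version 1.0;

ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";

import "../import/junos.xsl";

match configuration {
    /* walk every configured xe- interface that is not part of an AE bundle */
    for-each (interfaces/interface[starts-with(name, "xe-") &&
              not(gigether-options/ieee-802.3ad)]) {
        var $ifname = name;
        /* xe-0/0/N maps to emM with M = N + 3 */
        var $port = substring-after($ifname, "0/0/");
        var $em-name = "em" _ ($port + 3);
        /* read the real MAC address from the em interface */
        var $out = jcs:invoke(<get-interface-information> {
            <interface-name> $em-name;
        });
        var $mac = $out/physical-interface/current-physical-address;
        if (jcs:empty($mac)) {
            /* no matching em interface -> fail the commit */
            <xnm:error> {
                <message> "no " _ $em-name _ " found for " _ $ifname;
            }
        } else {
            /* transient change: set the MAC address on the xe interface */
            <transient-change> {
                <interfaces> {
                    <interface> {
                        <name> $ifname;
                        <mac> $mac;
                    }
                }
            }
        }
    }
}
```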


Installation on RE


> file copy <location>/set-em-mac-to-xe-ae-vQFX.slax /var/db/scripts/commit/

# set system scripts commit allow-transients
# set system scripts commit file set-em-mac-to-xe-ae-vQFX.slax
# commit


Hope that helps to install the vQFX 10k on ESXi. I assume the MAC setting is also needed on VMware Workstation, but I have not tested it.


Another hint: the factory-default startup configuration contains a bunch of et/xe interfaces with DHCP configured – clear them all before starting with your own configuration.

And yes: independent of your ESXi physical interfaces, the vQFX interfaces are 10-Gigabit xe interfaces.