EX2200-C Recommended Junos (15.1R5) broken – Temperature Sensor Crisis…

Just updated 30 switches (EX2200-C) to the new recommended OS (15.1R5). Every switch – and I mean EVERY one – shows “broken” temperature sensors after the new version comes up.

A downgrade to 15.1R4 solved this – however, what the f happened there?
Juniper moved the recommended release from the 12 tree to the 15 tree and hasn’t noticed this bug so far…

Will investigate more into this.


EDIT 04.06.2017: Juniper released 15.1R6.7 for the EX2200-C – this somehow “fixed” the issue (it set my temperature sensor to 0 degrees, making my alarm go away).
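Two standard Junos operational commands are enough to see whether a switch is affected – one shows the sensor readings, the other any resulting chassis alarm:

```
> show chassis environment
> show chassis alarms
```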

vQFX 10k Testlab on ESX 6.0 / 6.5

Currently the vQFX is not officially supported on ESX at all, let alone on ESX 6.0 / 6.5.
My goal is always to have the latest versions in place – so all the tutorials for ESX 5.5 are of no use to me.

Here are the steps to make the vQFX run on ESX 6.0 / 6.5:

1.) Download the vmdk images from Juniper (RE + PFE)

2.) Upload both files into your datastore

3.) Convert the vmdk images:
vmkfstools -i vqfx10k-re-15.1X53-D60.vmdk vqfx10kRE.vmdk -d thin
vmkfstools -i vqfx10k-pfe-20160609-2.vmdk vqfx10kPFE.vmdk -d thin

4.) Create a new vSwitch for inter-chassis communication between PFE and RE, with promiscuous mode enabled and an MTU of 9000 (jumbo frames)
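If you prefer the ESXi shell over the vSphere client, step 4 can be sketched with esxcli – the vSwitch and portgroup names here are my own placeholders:

```
esxcli network vswitch standard add --vswitch-name=vSwitch-vqfx-int
esxcli network vswitch standard set --vswitch-name=vSwitch-vqfx-int --mtu=9000
esxcli network vswitch standard policy security set --vswitch-name=vSwitch-vqfx-int --allow-promiscuous=true
esxcli network vswitch standard portgroup add --portgroup-name=vqfx-int --vswitch-name=vSwitch-vqfx-int
```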

5.) Create the necessary VMs:

RE VM:
1 CPU – 2 cores
OS: FreeBSD (64-bit)
Adapter: BusLogic – ignore the “not recommended” warning
Disk: vqfx10kRE.vmdk
Add at least 2 NICs:
1st NIC (E1000) – OOB management
2nd NIC (E1000) – inter-chassis communication between PFE and RE
3rd to 10th NIC (E1000) – data links

PFE VM:
1 CPU – 1 core
OS: FreeBSD (64-bit)
Adapter: BusLogic – ignore the “not recommended” warning
Disk: vqfx10kPFE.vmdk
1st NIC (E1000) – OOB management
2nd NIC (E1000) – inter-chassis communication between PFE and RE

6.) Run both VMs

login: root
pwd: Juniper

Go to “cli” and configure em0 for OOB-Management.
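A minimal em0 setup from the CLI might look like this – the address, gateway, and SSH service are my own example choices for a typical management network:

```
configure
set system root-authentication plain-text-password
set interfaces em0 unit 0 family inet address 192.0.2.10/24
set routing-options static route 0.0.0.0/0 next-hop 192.0.2.1
set system services ssh
commit
```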


7.) Enjoy – repeat steps 3-6 (with new disk names) for as many switches as you want 🙂


Edit on 16.02.2017:

I want to thank Alexander Marhold for providing a script that sets the correct MAC addresses on the corresponding interfaces.

I have written a procedure for the vMX and adapted it for the vQFX, which does this automatically on each commit.


The script sets the correct MAC address on any configured XE interface (taken from the corresponding em(+3) interface).

  • The MAC address is visible under “Current address” in show interfaces.
  • If a MAC address is already set in the configuration, it will be overwritten with the correct one.
  • If the interface belongs to an AE bundle, no MAC address is set, as the MAC address is set by the AE.
  • If the config contains an interface without a corresponding em interface, it signals an error on commit.
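To check that the script did its job, compare the xe interface against the em interface its MAC was taken from – following the em(+3) mapping described above, xe-0/0/0 would correspond to em3:

```
> show interfaces xe-0/0/0 | match "Current address"
> show interfaces em3 | match "Current address"
```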


Installation on RE


> file copy <location>/set-em-mac-to-xe-ae-vQFX.slax /var/db/scripts/commit/

# set system scripts commit allow-transients
# set system scripts commit file set-em-mac-to-xe-ae-vQFX.slax
# commit


Hope that helps to install the vQFX10k on ESXi. I assume that the MAC setting is also needed on VMware Workstation, but I have not tested it.


Another hint: there is a bunch of et/xe… interfaces with DHCP in the factory-default startup config – clear them all before starting with your configuration.
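Junos supports wildcard deletes in configuration mode, which makes clearing those stanzas quick – double-check the pattern before confirming, as this removes the whole interface configuration:

```
configure
wildcard delete interfaces xe-0/0/*
wildcard delete interfaces et-0/0/*
commit
```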

And yes – independent of your ESXi physical interfaces, the data interfaces are 10-Gig XE interfaces.

Download: set-em-mac-to-xe-ae-vQFX.zip

IPsec Site-to-Site Tunnel between SRX100 and PfSense (Policy-Based VPN)


Today (with the help of my friend and skillful netadmin Malte) we finally figured out how to bring up an IPsec site-to-site policy-based VPN with multiple phase 2 entries behind the PfSense and a single subnet behind the SRX100.

For this to work with a policy-based VPN (since PfSense can’t do route-based VPNs), you need to create a policy for each combination of the subnets so that Junos can generate the correct proxy-IDs. If you miss one, phase 2 fails with a proxy-ID mismatch error.


Here’s the config from the J-Point of view:

It took us some time to figure out why we still had problems, but in the end we found the culprit:

It seems to me that PfSense and Juniper don’t play very nicely together when PFS is enabled.
After deleting the PFS group, all 3 subnets came up and traffic was able to flow.
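For reference, the per-subnet policy pairs on the SRX follow this pattern – all addresses, zone and policy names here are hypothetical; repeat the pair for each remote subnet behind the PfSense:

```
set security address-book global address local-lan 192.168.1.0/24
set security address-book global address remote-net1 10.10.1.0/24
set security policies from-zone trust to-zone untrust policy vpn-out-net1 match source-address local-lan
set security policies from-zone trust to-zone untrust policy vpn-out-net1 match destination-address remote-net1
set security policies from-zone trust to-zone untrust policy vpn-out-net1 match application any
set security policies from-zone trust to-zone untrust policy vpn-out-net1 then permit tunnel ipsec-vpn vpn-pfsense
set security policies from-zone trust to-zone untrust policy vpn-out-net1 then permit tunnel pair-policy vpn-in-net1
set security policies from-zone untrust to-zone trust policy vpn-in-net1 match source-address remote-net1
set security policies from-zone untrust to-zone trust policy vpn-in-net1 match destination-address local-lan
set security policies from-zone untrust to-zone trust policy vpn-in-net1 match application any
set security policies from-zone untrust to-zone trust policy vpn-in-net1 then permit tunnel ipsec-vpn vpn-pfsense
set security policies from-zone untrust to-zone trust policy vpn-in-net1 then permit tunnel pair-policy vpn-out-net1
```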

Hopefully this short article can save you some pain in the ass 😉