NAT64 with vSRX 15.1X49-D120

Yesterday, as part of my JNCIE-SEC training, I reviewed NAT64 with the following topology:

 

I pinged from Win to Winserver, with the traffic going over Gemini (vSRX 15.1X49-D120), Pisces (vMX 17.3R1-S1.6), Pyxis (vMX 17.3R1-S1.6) and Virgo (vSRX 15.1X49-D120).

So far everything seems to run fine – occasionally a single ping gets dropped, but 1% loss is okay for me.
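If you want to check that the translations actually happen, the standard NAT and session show commands on Gemini and Virgo are all you need (just a quick pointer, no outputs reproduced here):

show security flow session
show security nat static rule all
show security nat source rule all
show security nat source pool all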

For me the D120 runs stably, and so far I have not experienced any problems.
Below I have pasted the configs in case anyone wants to recreate this lab:

 

Gemini:

 

Pisces:

 

Pyxis:

 

Virgo:

vSRX D120 is out – and runs fine on EVE

The new vSRX 15.1X49-D120 is out and of course I already spun it up with EVE 😉

What can I say – it runs just fine, just like D100 and D110.
The D120 brings two new features (see the config sketch after the list):

+ Support for applying IEEE 802.1 rewrite rules to inner and outer VLAN tags [QoS]

+ Packet size configuration for IPsec datapath verification [VPN]
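I have not labbed either feature in depth yet, but going by the release notes the configuration should look roughly like this – interface, rewrite-rule name, VPN name and values are placeholders of mine, so double-check the exact syntax against the documentation:

# QoS: rewrite the 802.1p bits on both the outer and the inner VLAN tag (placeholder interface and rule name)
set class-of-service interfaces ge-0/0/1 unit 0 rewrite-rules ieee-802.1 default vlan-tag outer-and-inner
# VPN: packet size for IPsec datapath verification (placeholder VPN name, destination and size)
set security ipsec vpn MY-VPN vpn-monitor verify-path destination-ip 192.0.2.1
set security ipsec vpn MY-VPN vpn-monitor verify-path packet-size 1400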

Many people have asked me whether it is okay to run vSRX on EVE on VirtualBox on Linux on bare metal.
I personally think this is a bad idea, because every layer you add will impact your performance significantly.
I recommend EVE-Bare (EVE on bare metal) if you really want to run big labs.
But be careful – some servers (like the HP ones) need special treatment regarding the network interfaces.
You can find more info in the EVE forums.

NAT64 – Practical Example

On my way to the JNCIE, NAT64 is also a topic – below you will find a working example of how I achieved this. Comments are welcome 🙂
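The core of it on the translating SRX boils down to flow-based IPv6, a static NAT rule that maps the NAT64 prefix back to IPv4, and an IPv4 source NAT pool for the IPv6 clients. A stripped-down sketch (prefix, zone names, subnets and pool address are placeholders, not my exact lab values):

set security forwarding-options family inet6 mode flow-based
# map the well-known NAT64 prefix back to the embedded IPv4 destination
set security nat static rule-set NAT64 from zone v6-side
set security nat static rule-set NAT64 rule R1 match destination-address 64:ff9b::/96
set security nat static rule-set NAT64 rule R1 then static-nat inet
# hide the IPv6 clients behind an IPv4 address
set security nat source pool V4-SRC address 192.0.2.10/32
set security nat source rule-set V6-TO-V4 from zone v6-side
set security nat source rule-set V6-TO-V4 to zone v4-side
set security nat source rule-set V6-TO-V4 rule SRC1 match source-address 2001:db8:1::/64
set security nat source rule-set V6-TO-V4 rule SRC1 then source-nat pool V4-SRC

On top of that you need the usual security policies between the two zones (and proxy-arp/proxy-ndp where the pool addresses are not interface addresses).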

Site 1 (running 15.1 code)

 

Site 2 (running 17.3 code)

Hope this helps you all

NAT64/46

Today I experimented with NAT64 / NAT46 a bit.
The setup to test this is relatively easy:

I took two Windows Servers (2008 R2), one with only IPv4 and one with only IPv6.
The two SRXes are dual-stack capable and have an IPv4 transfer subnet between them.

Test scenario: Serverv4 pings Serverv6’s “v4 address”, which is actually a proxy address on the SRX, and gets the reply – without the v6 server needing an IPv4 address of its own. The reverse works the same way: the v6 server pings a v6 address, but in the background it is the v4-only server that replies. This worked very nicely, and I hope to see more of it at customer sites in the future – and I bet I will, since IPv6 is gaining more and more traction here in Germany 😉
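For reference, the “proxy address” for the v4-to-v6 direction is essentially just a static NAT entry plus an IPv6 source pool on the SRX in front of the v6 server. Something along these lines – all addresses are examples, and the exact static-nat option can differ between releases, so check it with “then static-nat ?” on your box:

# v4 clients reach the v6 server via a proxy IPv4 address (example addresses)
set security nat static rule-set NAT46 from zone v4-side
set security nat static rule-set NAT46 rule V6-SERVER match destination-address 10.1.1.100/32
set security nat static rule-set NAT46 rule V6-SERVER then static-nat prefix 2001:db8::100/128
# give the translated sessions an IPv6 source as well
set security nat source pool V6-SRC address 2001:db8:ffff::/96
set security nat source rule-set V4-TO-V6 from zone v4-side
set security nat source rule-set V4-TO-V6 to zone v6-side
set security nat source rule-set V4-TO-V6 rule SRC1 match source-address 10.1.1.0/24
set security nat source rule-set V4-TO-V6 rule SRC1 then source-nat pool V6-SRC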

vSRX D100 (vSRX 15.1X49-D100) is out

I just tested the new vSRX D100 version on EVE and ESX.
Compared to the D90 it feels way slower (tested on both ESX and EVE), but it seems to run very well once booted up (I tested IPsec, DHCP server, DHCP client, policies, OSPF, BGP and clustering).

The following graphic shows the time in seconds that the SRX needed from the (Amnesiac) login to the CLI prompt:

In general this time is almost identical – however, the D100 took significantly longer to boot than the D90.
While the D90 took around 2 minutes from “click on start” to “login prompt”, the D100 needed a whopping 8 minutes and reacted very slowly afterwards. After a night of sitting idle it was as responsive as the D90. I am reading through the changelog, but so far I could not find a valid reason for this behavior.

 

I will keep testing the D100 and of course compare the D90 with the D100, so you can make a good choice for your home labs 🙂

OSPF between a vSRX-Cluster and a standalone vSRX over vQFX on EVE-NG

I promised to deliver this and here it is: OSPF over vQFX 😉
These days I lab a lot with EVE and I love it more every day – the possibilities are endless, and labs are configured and up and running very quickly. With two new CPUs my EVE now runs at a decent speed, so compared to VMware ESX 6.0 there is no extreme performance difference anymore. I can live with that. Since D63 on the vQFX runs very stable and smooth, I came up with this small OSPF lab – I will add more “Quick-Labs” in the future.

WARNING:
The SRX in cluster mode runs very well on EVE – however, there is a cosmetic issue: if you build a cluster, the interface mappings in EVE appear to be completely wrong. This is because the SRX gets a new interface (em0) as its second interface – so if you select ge-0/0/0 in EVE, you actually select em0.

But why is that, you ask? The answer is simple:
EVE is not aware of cluster naming or cluster interfaces – so you have to think twice about what you select. I needed Wireshark to see what was actually happening…
From top to bottom, the first interface in EVE is fxp0, the second is em0, the third is ge-0/0/0 (ge-7/0/0 on node 1), the fourth is ge-0/0/1 (ge-7/0/1) and so on (see the mapping below):
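As a quick reference (derived from the behavior described above – I am not reproducing the Juniper table here):

EVE port   node0      node1
1st        fxp0       fxp0
2nd        em0        em0
3rd        ge-0/0/0   ge-7/0/0
4th        ge-0/0/1   ge-7/0/1
5th        ge-0/0/2   ge-7/0/2
…and so on for the remaining ports.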

Once I figured that out, I could successfully build the cluster (this time fully working, not just partially) – here is the lab:

Topology:

 

vSRX-NG5+6 (the SRX-Cluster):
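The full configuration is in the lab download at the bottom of this post. Stripped down to the essentials, the cluster side is just a reth interface towards the vQFX plus OSPF on it – something like this (redundancy-group priorities, interface names and addresses are placeholders, not the exact lab values):

set chassis cluster reth-count 2
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
# one member link per node into the reth
set interfaces ge-0/0/1 gigether-options redundant-parent reth0
set interfaces ge-7/0/1 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.0.0.1/24
# allow OSPF in on the zone and enable it on the reth
set security zones security-zone trust interfaces reth0.0 host-inbound-traffic protocols ospf
set protocols ospf area 0.0.0.0 interface reth0.0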

 

 

vSRX-NG7 (the standalone SRX):

 

 

Lab-C01 (Coreswitch 01, vQFX running 15.1X53-D63.9):

 

 

Lab-C02 (Coreswitch 02, vQFX running 15.1X53-D63.9):

 

Download this lab for your EVE here (16 kB zip archive):
EVE-OSPF-vSRX-vQFX-Lab

Running vQFX 15.1X53-D63 on EVE (KVM)

The KVM version of the latest vQFX routing engine VM (vqfx10k-re-15_X53-D63) seems to be broken. If you try to run it, it will crash with a kernel panic and will never boot up completely.
However, you can “cheat your way around this”:

1.) Simply download the Vagrant .box file of the D63 RE.
2.) Extract the .box file with 7-Zip – you will get a file with no extension.
3.) Extract this file again and “magically” a file called packer-virtualbox-ovf-1491593710-disk001.vmdk will appear.
4.) Upload this to EVE, convert it to hda.qcow2, fix the permissions and run it (see the command sketch below) – voilà: D63 on EVE.
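Roughly, the commands on the EVE box look like this – the image folder name is only an example, so adjust it to whatever vQFX RE naming your EVE version expects:

# on the EVE host, after uploading the extracted vmdk
mkdir -p /opt/unetlab/addons/qemu/vqfxre-15.1X53-D63
cd /opt/unetlab/addons/qemu/vqfxre-15.1X53-D63
qemu-img convert -f vmdk -O qcow2 packer-virtualbox-ovf-1491593710-disk001.vmdk hda.qcow2
# fix the permissions so EVE can use the new image
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions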

 

The PFE image stays the same (there is no D63 version of it, since the PFE is not tied to a specific software version).

It’s always a good idea to dig around in the files Juniper provides once something breaks 😉

Download the Files here: http://www.juniper.net/support/downloads/?p=vqfxeval#sw

ESX vs EVE-NG

I got the opportunity to play with EVE-NG last week. Luckily it was running on a DL360 G7 – the same hardware as my ESX 6.0 server – so I can compare the two very well 🙂

Getting EVE-NG to run was pretty easy – I downloaded the “bare metal” ISO, installed it on the server and it was ready to go.
I installed a fresh ESX 6.0u3 on the other lab server to make sure both of them were freshly installed.

I created six vSRX and two vMX instances – the vQFX was not part of this setup.

 

First test – boot time:
ESX 6.0 took about 1 minute from the moment I pressed the power button to the login screen – pretty solid.
EVE-NG needed about 30 seconds – damn, that thing is fast 🙂 Thanks to Ubuntu 16.04, EVE-NG starts up very quickly.

Second test – booting the whole lab at once (powering up all 10 machines):
ESX 6.0 took about 10 minutes until every VM was powered up and showed the login prompt. All devices were blazingly fast after booting.
EVE-NG took about 30 minutes until I could log in to each VM – however, the VMs were practically useless, since the login itself took about 5 minutes. After two more hours all devices reacted reasonably fast – but still slow compared to ESX. I searched the forums but couldn’t find out why.

Third test – resources:
ESX 6.0 needed around 40 GB of RAM and around 50% of the 16 usable CPU cores.
EVE-NG needed around 70 GB of RAM (don’t ask me why) and almost 100% of all CPU cores.

Fourth test – access:
ESX needs the web client or the ESX Windows client.
The Windows client is not installed on every PC – so “lab everywhere” is not possible. The web client is an option.
EVE-NG comes with an HTML5 web client – easy to access from anywhere – and EVE shows a Visio-like lab topology with drag and drop, which is a huge plus compared to ESX.

All in all I will stick with the ESX server, for several reasons:
+ ESX itself runs very smoothly and I know it well (compared to KVM)
+ EVE-NG’s resource usage was way too high (on ESX I could run about twice as many devices with the same resources)
+ ESX needs less time to power up the whole lab. My lab runs mostly on Fridays, so I power my server down after labbing – I don’t like the idea of powering it up on Thursday evening just to waste money on the power bill so I can lab on Friday. That feels wrong…

Of course there are still ways to tweak both hypervisors – a tweaked ESX runs all the test devices at about 10% CPU – I can’t tell how far KVM can be tuned.

Hopefully this gives you a small overview of both systems.

SRX ssh brute-force countermeasures

It’s always a good idea to secure and harden your SRX if it is reachable from the Internet.
Today I labbed a bit to see if these countermeasures actually work.

 

For this lab we set up system login retry-options:
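A minimal example could look like this – the values are arbitrary test values within the allowed ranges, not a recommendation:

set system login retry-options tries-before-disconnect 4
set system login retry-options backoff-threshold 2
set system login retry-options backoff-factor 5
set system login retry-options lockout-period 10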

Now to the options we have:

tries-before-disconnect: Sets the maximum number of times the user is allowed to enter a password to attempt to log in to the device through SSH or Telnet. When the user reaches the maximum number of failed login attempts, the user is locked out of the device.

backoff-threshold: Sets the threshold for the number of failed login attempts on the device before the user experiences a delay when attempting to reenter a password.

backoff-factor: Sets the length of delay in seconds after each failed login attempt. When a user incorrectly logs in to the device, the user must wait the configured amount of time before attempting to log in to the device again.

lockout-period: Sets the amount of time in minutes before the user can attempt to log in to the device after being locked out due to the number of failed login attempts specified in the tries-before-disconnect statement.
You can read the full explanations here:
https://www.juniper.net/documentation/en_US/junos/topics/example/system-retry-options-configuring.html

 

 

After that (to make it easier to see), we create a syslog file just for the failed SSH attempts:
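For example – the file name ssh-logs is the one referenced below, and the match string assumes the standard SSHD_LOGIN_FAILED syslog tag:

set system syslog file ssh-logs authorization info
set system syslog file ssh-logs match "SSHD_LOGIN_FAILED"
set system syslog file ssh-logs archive size 100k files 5

You can then check the collected entries with “show log ssh-logs”.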

This basically tells your SRX to log all failed SSH attempts to a file called ssh-logs.

This way, your SRX is ready to take on almost every script-kiddie brute-force attack and logs every failed attempt.

Be sure to check the file from time to time – and remember: change your passwords regularly and use at least 64 letters, numbers, hash signs, virgin blood and so on – you get the idea, right? 😉

 

vSRX on Hyper-V – I still prefer VMware…

Yesterday, with the release of the new vSRX (15.1X49-D80), I thought “why not give Hyper-V a try?”.
I spun up a Windows Server 2012 R2, installed Hyper-V and deployed the new vSRX.
In fact I was surprised – everything (including cluster mode) seems to run decently. Of course I know that this vSRX has only limited functionality under Hyper-V and can’t scale up very well.
However, it was nice to see that the vSRX now runs on VMware, KVM and Hyper-V – what else do you want? 😉

The interface mapping can be found here:
Interface Mapping for vSRX in Hyper-V