JUNOS installation and upgrade on SRX and EX platforms

JUNOS installation and upgrade on SRX and EX platforms (standalone, SRX chassis cluster and EX virtual chassis). The reason I am putting this here is that I often re-use these procedures and don't want to search the Juniper knowledge base every time! Other people might also find this useful.

Step 1: Transfer the JUNOS installation file to the SRX/EX devices

Transferring the JUNOS installation file can be done in many ways.

If the JUNOS system is already up and running on the network, I prefer to transfer the installation file over SSH. Just use any standard SCP/SFTP client to do the transfer (scp, FileZilla, WinSCP). The destination should be a temporary directory; I prefer "/var/tmp/".
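For example, from a Linux or Mac workstation (the device IP below is hypothetical – substitute your own management IP and credentials) –

$ scp junos-srxme-15.xxxx.tgz admin@192.168.1.1:/var/tmp/    # 192.168.1.1 is a hypothetical device IP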

If the JUNOS system is not connected to the network, or is a brand new system, then I prefer to transfer the installation file via USB stick. Just attach the USB stick to any of the USB ports on the JUNOS system. You then need to find out the UNIX device name for the USB on JUNOS; do the following –

>start shell
%dmesg

The above will show the USB device name/number at the end of the "dmesg" output; in most cases this is "/dev/da1", so the first UNIX disk partition (it's called a "slice" – s) within this is /dev/da1s1. Now mount the USB to a temporary directory; I prefer /var/tmp/usb, so create the usb directory and mount it.

%mkdir /var/tmp/usb
%mount -t msdosfs /dev/da1s1 /var/tmp/usb

Now copy the JUNOS installer from the USB to the local JUNOS partition /var/tmp (this is optional – installation can also be done from the mounted USB directory).

%cp /var/tmp/usb/junos-srxme-15.xxxx.tgz /var/tmp

If you are installing JUNOS from the local partition, you can disconnect the USB at this stage.

Step 2: Install JUNOS

If it is a standalone system (SRX, EX or other) – you can go straight to the installation.

If it is an SRX chassis cluster without ISSU – you should install JUNOS on both devices but make sure to REBOOT them together at the same time. If you cannot afford downtime during the upgrade, there are a few other methods (such as disconnecting the fabric and control links during installation), and you might also consider upgrading to a higher-end Juniper system that does support ISSU.

The commands are as follows –

>request system software add /var/tmp/junos-xxxx.tgz no-copy validate ; make sure the installation was successful
>request system reboot ; (reboot both non-ISSU/NSSU SRX together at same time)
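After both nodes come back up, it is worth confirming the new version and that the cluster has re-formed, e.g. –

>show version
>show chassis cluster status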

If your system supports nonstop software upgrade (NSSU – available on EX33xx to EX82xx virtual chassis), then the following is the procedure to perform it (assuming all the VC members are the same model) –

a. Copy the JUNOS installation file to /var/tmp on the master switch

b. Make sure nonstop active routing (NSR) and graceful Routing Engine switchover (GRES) are enabled on the virtual chassis; example commands are as follows (on a two member VC) –

#set chassis redundancy graceful-switchover
#set routing-options nonstop-routing
 
#set virtual-chassis member 0 role routing-engine
#set virtual-chassis member 0 serial-number PEXXXXXX (serial number of the switch 1)
#set virtual-chassis member 1 role routing-engine
#set virtual-chassis member 1 serial-number PEXXXXXX (serial number of the switch 2)

c. The installation command is as follows –

>request system software nonstop-upgrade /var/tmp/junos-version.tgz

d. Reboot the members

>request system reboot ; this will reboot one member at a time
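Once the upgrade completes, you can verify that every VC member is running the new version, e.g. –

>show version all-members
>show virtual-chassis status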

 

Step 3: Back up the JUNOS software to the alternate partition (JUNOS snapshot)

If the primary partition fails, the JUNOS system will boot from the backup partition (UNIX slice), which – after the snapshot – has the same JUNOS version installed.

>request system snapshot slice alternate ; on standalone SRX or EX
>request system snapshot slice alternate all-members ; on EX virtual chassis
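To verify the snapshot afterwards, something like the following can be used (the exact media keyword and output vary by platform) –

>show system snapshot media internal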

 

Step 4: Perform disk cleanup after installation (optional)

I love to perform a disk cleanup after a JUNOS upgrade.

>request system storage cleanup dry-run ; this will show the files to be deleted
>request system storage cleanup ; this will delete the files
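To check how much disk space is available before and after the cleanup –

>show system storage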

 

Juniper SRX IDP (IDS/IPS) and SCREEN (DoS) Logs to Splunk

Juniper SRX IDP (IDS/IPS) and SCREEN (DoS) logs can be sent to a remote host via Syslog.

You might have come across IT security compliance requirements asking for visibility of your IDP and DoS attack event logs. One solution is to send all your security logs to a centralised logging system such as Splunk and then perform all the required actions – creating reports, dashboards and alerts – from there.

In this example I have documented the configuration required to send Juniper SRX IDP and SCREEN logs to Splunk via syslog.

Step 1: Setup Splunk to listen on UDP 514 (Syslog)

Make sure you have a running Splunk instance and that it is configured to listen on UDP port 514 as syslog. This can be done by adding the following to the file "/opt/splunk/etc/system/local/inputs.conf":

[udp://514]
sourcetype = syslog
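After editing inputs.conf, restart Splunk so the new input takes effect (assuming the default /opt/splunk install path) –

/opt/splunk/bin/splunk restart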

You can install the following Juniper Apps available in the Splunk app store:

-Splunk Add-on for Juniper
-Juniper Networks App for Splunk

If you do not have the above apps installed, you can still create your Splunk dashboards, reports & alerts manually based on the fields within the captured IDP and SCREEN logs.

Make sure SRX firewalls are able to talk to the Splunk server over the network.
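A quick connectivity check can be run from the SRX itself, using the syslog source and destination addresses configured later in this example –

>ping 172.16.xx.10 source 172.16.xx.5 count 3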

Step 2: Setup SCREEN options

Make sure you have implemented SCREEN options. A bunch of options are available for SCREEN; here are some examples (the zone binding that activates the profile is shown after the list):

#set security screen ids-option internet-screen-options icmp ip-sweep
#set security screen ids-option internet-screen-options icmp ping-death
#set security screen ids-option internet-screen-options ip bad-option
#set security screen ids-option internet-screen-options ip spoofing
#set security screen ids-option internet-screen-options ip tear-drop
#set security screen ids-option internet-screen-options tcp syn-fin
#set security screen ids-option internet-screen-options tcp tcp-no-flag
#set security screen ids-option internet-screen-options tcp syn-frag
#set security screen ids-option internet-screen-options tcp port-scan
#set security screen ids-option internet-screen-options tcp syn-ack-ack-proxy
#set security screen ids-option internet-screen-options tcp syn-flood white-list PenTest-TempWhitelist source-address 123.xxx.xxx.xxx/32
#set security screen ids-option internet-screen-options tcp land
#set security screen ids-option internet-screen-options tcp winnuke
#set security screen ids-option internet-screen-options tcp tcp-sweep
#set security screen ids-option internet-screen-options udp flood
#set security screen ids-option internet-screen-options udp udp-sweep
#set security screen ids-option internet-screen-options udp port-scan
#set security screen ids-option internet-screen-options limit-session source-ip-based 1000
#set security screen ids-option internet-screen-options limit-session destination-ip-based 1000
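Note that a screen profile only takes effect once it is attached to a security zone. Assuming the "internet-screen-options" profile above protects the internet-facing zone "sec-zone-internet" (the zone name that appears in the sample logs later), the binding is –

#set security zones security-zone sec-zone-internet screen internet-screen-options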

Step 3: Enable logging within IDP Rulebase

Make sure you have an active IDP policy and you have also enabled IDP within security policies.

#show security idp active-policy
active-policy Recommended;

The above command shows the current active policy, "Recommended"; the default "Recommended" policy comes with "notification log-attacks" along with "action recommended", as follows:

then {
    action {
        recommended;
    }
    notification {
        log-attacks;
    }
}

If you create a custom policy, make sure it is configured with "notification log-attacks".

Also make sure you have enabled IDP within the relevant "security policies". The following is an example of enabling IDP within a security policy:

#set security policies from-zone sec-zone-source to-zone sec-zone-destination policy name-of-sec-policy then permit application-services idp
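To confirm the IDP engine is up and which policy is loaded, you can check –

>show security idp status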

Step 4: Setup SRX firewalls to send logs to Syslog

SRX IDP logs are marked with RT_IDP.
SRX SCREEN logs are marked with RT_IDS.

You need to filter logs to capture the above while sending them to a remote syslog server.

#set system syslog host 172.16.xx.10 any any
#set system syslog host 172.16.xx.10 match "RT_IDP|RT_IDS"
#set system syslog host 172.16.xx.10 source-address 172.16.xx.5
#set system syslog host 172.16.xx.10 structured-data brief
#set system syslog file messages any any

Now generate some port scans towards the firewall interfaces where the SCREEN and IDP policies are applied. You can use "https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap" to run a quick scan.

You should be able to see SCREEN logs like the following:

root@firewall-host-name> show log messages | match RT_IDS
Oct 13 14:53:22 firewall-host-name RT_IDS: RT_SCREEN_TCP: TCP port scan! source: 178.79.138.22:39267, destination: 118.xxx.xxx.xxx:990, zone name: sec-zone-internet, interface name: reth0.XXX, action: drop
Oct 13 14:53:43 firewall-host-name RT_IDS: RT_SCREEN_TCP: No TCP flag! source: 178.79.138.22:50779, destination: 118.xxx.xx.xxx:443, zone name: sec-zone-internet, interface name: reth0.XXX, action: drop
Oct 13 14:53:43 firewall-host-name RT_IDS: RT_SCREEN_TCP: SYN and FIN bits! source: 178.79.138.22:50780, destination: 118.xxx.xxx.xxx:443, zone name: sec-zone-internet, interface name: reth0.XXX, action: drop

The following is an example IDP attack event log:

Oct 13 08:55:55 firewall-host-name 1 2017-10-13T08:55:55.792+11:00 firewall-host-name RT_IDP - IDP_ATTACK_LOG_EVENT [junos@2636.1.1.1.2.135 epoch-time="1507845354" message-type="SIG" source-address="183.78.180.27" source-port="45610" destination-address="118.127.xx.xx" destination-port="80" protocol-name="TCP" service-name="SERVICE_IDP" application-name="HTTP" rule-name="9" rulebase-name="IPS" policy-name="Recommended" export-id="15229" repeat-count="0" action="DROP" threat-severity="HIGH" attack-name="TROJAN:ZMEU-BOT-SCAN" nat-source-address="0.0.0.0" nat-source-port="0" nat-destination-address="172.xx.xx.xx" nat-destination-port="0" elapsed-time="0" inbound-bytes="0" outbound-bytes="0" inbound-packets="0" outbound-packets="0" source-zone-name="sec-zone-name-internet" source-interface-name="reth0.XXX" destination-zone-name="dst-sec-zone1-outside" destination-interface-name="reth1.xxx" packet-log-id="0" alert="no" username="N/A" roles="N/A" message="-"]

Now search in Splunk for RT_SCREEN to find SCREEN logs and IDP_ATTACK_LOG to find IDP logs.

Here are a few example screenshots from Splunk.

[Screenshot – Official Juniper App from Splunk App Store]


[Screenshot – IDP_ATTACK_LOG within Splunk]


[Screenshot – SCREEN action logs]


[Screenshot – Splunk Dashboard IDP Attack Events]


The above dashboard has been created with the following search parameter:

IDP_ATTACK_LOG_EVENT 
| rename host as Firewall-Name
| rename attack_name as Attack-Name
| rename threat_severity as Threat-Severity
| rename action as Action
| rename policy_name as IDP-Policy-Name
| rename source_address as Attacker-IP
| rename source_interface_name as Src-Interface
| rename source_zone_name as Src-Security-Zone
| rename destination_address as Dst-Address
| rename destination_interface_name as Dst-Interface
| rename destination_zone_name as Dst-Security-Zone
| rename destination_port as Dst-Port
| rename nat_destination_address as Internal-Dst-NAT-Address
| table Firewall-Name, Attack-Name, Threat-Severity, Action, IDP-Policy-Name, Attacker-IP, Src-Interface, Src-Security-Zone, Dst-Address, Dst-Interface, Dst-Port, Internal-Dst-NAT-Address, Dst-Security-Zone, _time

[Screenshot – Splunk Dashboard SCREEN Attack Events]


You can create Splunk "alerts" based on the same searches!

Junos “flow traceoptions” and managing flow trace “log files”

Junos "flow traceoptions" is a debugging utility to trace how traffic is handled by the device – how traffic traverses from source to destination and from one interface to another; whether the traffic finds the correct destination path; which security zones are involved in the traffic path; which security policies are applied; and whether the traffic is permitted or dropped by a firewall rule, and by which rule or policy.

A few things need to be addressed while working with flow traceoptions –

  • Enable "flow traceoptions" and send the logs to a flow trace log file.
  • Analyse the flow trace log file to find out what is actually happening.
  • Make sure to disable flow traceoptions afterwards.
  • Once finished with the analysis & inspection, clean up the flow trace log files to maintain available disk space on the Juniper box.

To enable flow traceoptions, the following are the popular syntaxes –

#set security flow traceoptions file Flow-Trace-LogFile
#set security flow traceoptions flag basic-datapath

#set security flow traceoptions packet-filter PF1 source-prefix 1.1.1.1/32
#set security flow traceoptions packet-filter PF1 destination-prefix 2.2.2.2/32

#set security flow traceoptions packet-filter PF2 source-prefix 2.2.2.2/32
#set security flow traceoptions packet-filter PF2 destination-prefix 1.1.1.1/32

Optionally, we can enter the following to set limits so we are not hammered by huge log files.

#set security flow traceoptions file files 2 ; keep a maximum of two trace files
#set security flow traceoptions file size 2m ; size of each log file is 2MB

The above will create the log file "Flow-Trace-LogFile"; to view it, enter the following command –

>show log Flow-Trace-LogFile
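The trace file grows quickly, so it is often useful to filter it with the CLI pipe; the exact strings to match depend on the Junos release, for example –

>show log Flow-Trace-LogFile | match "denied|packet dropped"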

Once we have finished the analysis & inspection of the log files, we should disable traceoptions as follows –

#delete security flow traceoptions
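Alternatively, if you want to keep the traceoptions stanza in the configuration for later re-use, you can deactivate it instead of deleting it –

#deactivate security flow traceoptions
#commit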

Lastly, to clear a log file and to delete log files, use the following commands.

To clear a log file – enter the following command-

>clear log LogFileName

To delete a log file – enter the following command-

>file delete <path>
>file delete /var/log/flow-trace-logs.0.gz

 

MSSQL 2014 AlwaysOn Availability Group Cluster & Gratuitous ARP (GARP) Issue

MSSQL 2014 AlwaysOn cluster running on Windows 2012 R2 doesn’t send Gratuitous ARP (GARP) packets by default!

I have recently come across gratuitous arp (GARP) issues while working on Microsoft SQL 2014 AlwaysOn Availability Group cluster setup. I experienced the following –

  1. The MSSQL 2014 AlwaysOn cluster with AlwaysOn Availability Group (AG) setup was done as per best practices and expert recommendations; all cluster related services were running OK without any issue.
  2. Clients sitting on the same IP network/VLAN were able to connect to the AlwaysOn AG listener Virtual IP (VIP) address immediately after a cluster failover from Node-A to Node-B and vice versa.
  3. However, clients sitting on different IP subnets were NOT able to connect to the VIP immediately after a cluster failover.
  4. Clients on different IP subnets had to wait about 20 minutes before they could connect to the VIP.
  5. These 20 minutes correspond to the ARP entry ageing time on the Ethernet switch (I use Juniper EX-series switches) where the servers are connected (connected to the physical hypervisor).
  6. On the network layer the switch ARP table was still showing the previously learnt MAC address for the AG listener VIP; the switch didn't update the MAC address after a cluster failover was triggered. The switch only flushed out the old MAC and re-learnt the new, correct MAC address after the ARP ageing time (20 min) expired on the switch.

I was looking for a solution and found that "GARP reply" needs to be enabled manually on the Juniper EX switch – I did that, but still NO improvement!

I also looked at Microsoft KB documents and forums – people say GARP needs to be turned on on the network switch, which I had already DONE without any success.

After further digging I found that the Windows 2012 R2 servers were not sending any GARP packets, so the switch was not updating its ARP table even though it was configured to work with GARP.

To get this working, the Windows Server registry value "ArpRetryCount" needs to be added; Microsoft says the following about it –

“Determines how many times TCP sends an Address Request Packet for its own address when the service is installed. This is known as a gratuitous Address Request Packet. TCP sends a gratuitous Address Request Packet to determine whether the IP address to which it is assigned is already in use on the network.”

Add the registry entry as following –

-HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
-REG_DWORD > ArpRetryCount
-Value is between 0-3 (use value 3)

0 – don't send GARP
1 – send GARP once only
2 – send GARP twice
3 – send GARP three times (default value – the entry is actually not present on Windows 2012 R2)
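A quick way to add the value from an elevated command prompt (a reboot is typically required for TCP/IP parameters to take effect) –

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v ArpRetryCount /t REG_DWORD /d 3 /f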

To enable "GARP reply" on the Juniper EX & SRX platforms, use the following command –

#set interfaces interface-name gratuitous-arp-reply

The interface can be a physical interface, logical interface, interface group, SVI or IRB.

To enable GARP on Cisco IOS – use interface command “ip gratuitous-arps“.

References:
https://technet.microsoft.com/en-us/library/cc957526.aspx
http://www.juniper.net/techpubs/en_US/junos13.2/topics/usage-guidelines/interfaces-configuring-gratuitous-arp.html
http://www.cisco.com/web/techdoc/dc/reference/cli/nxos/commands/l3/ip_arp_gratuitous.html

Juniper SRX – replacement of a node in chassis cluster with IDP installed

One of the nodes in an SRX chassis cluster failed. I got an RMA replacement SRX box from Juniper. When I tried to put the new device (a brand new SRX) into the existing cluster by transferring the existing configuration to the new device as suggested by the Juniper KB – it failed!

The reason for the failure was the IDP attack signature database (Juniper calls it the IDP security package) installed on the existing running node of the cluster – whereas the new node had no IDP security package installed on it.

I was hoping for some sort of automatic IDP signature sync on the new device as part of transferring the configuration before joining it to the existing cluster – but couldn't find any solution. So I had to manually download the same IDP security package from the existing running cluster node and install it onto the new SRX, along with the existing configuration.

Here is the total procedure (I am keeping this for my own reference to be used in future):

1. First things first – wipe out all existing configuration on the new RMA SRX & set root authentication. Also make sure the new node is not connected to the cluster.

#delete

#set system root-authentication plain-text-password

#commit

2. Configure chassis cluster on the new node. The cluster ID and node ID must be the same as the failed cluster node.

>set chassis cluster cluster-id 1 node 0; here cluster-id is 1 & node number is 0

>request system reboot

3. Download the IDP security package from the existing cluster node. The download can be done over SSH/SFTP (you can use FileZilla, WinSCP or the Mac/Linux scp command) to connect and download the IDP security package.

The attack signature database is located at "/var/db/idpd/sec-download/*". You can download the whole "sec-download" directory. Once the download is done, copy it to a USB stick (the stick should be formatted with FAT32).
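For example, from a Linux or Mac workstation (the username and management IP below are hypothetical – substitute your own) –

$ scp -r admin@192.168.1.1:/var/db/idpd/sec-download .    # admin@192.168.1.1 is a hypothetical user/IP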

4. Transfer & install IDP security package to the new SRX device.

Plugin the USB to the SRX; mount it and copy the content to the same destination folder “/var/db/idpd/sec-download/“.

>start shell

%mkdir /var/tmp/usb

%mount -t msdosfs /dev/da1 /var/tmp/usb

%cd /var/tmp/usb/sec-download

%cp -R * /var/db/idpd/sec-download/

5. Install the IDP security package on the new SRX device.

>request security idp security-package install node 0

>request security idp security-package install status

>request security idp security-package install policy-templates node 0

>request security idp security-package install status

Confirm the installation completed successfully (you should see something like the following) –

>show security idp security-package-version 

node0:

—————————————————————-

     Attack database version:2660(Tue Mar  1 01:09:02 2016 UTC)

     Detector version :12.6.160151117

     Policy template version :2660

6. Now download the current running configuration from the existing cluster node.

The following command will create a copy of the full configuration –

#save /var/tmp/config-backup-ddmmyy

Connect to the running device using FileZilla or similar over SSH/SFTP; download the "/var/tmp/config-backup-ddmmyy" file. Transfer the file to the USB stick (formatted with FAT32).

You should not make any configuration change to the running device at this point.

7. Load the downloaded configuration to the new SRX device via USB.

Plugin the USB to new SRX box.

>start shell

%mount -t msdosfs /dev/da1 /var/tmp/usb

%exit

>config

#load override /var/tmp/usb/config-backup-ddmmyy

#commit

Now power off the new SRX and get ready to add it to the existing cluster.

>request system power-off

8. Connect all the network cables “same as before”. Power on the new device.

9. Check cluster status – both the nodes should be back online.

>show chassis cluster status
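It is also worth checking that the control and fabric links are up and that both nodes report the same IDP security-package version, e.g. –

>show chassis cluster interfaces
>show security idp security-package-version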

That's all!