Planet Suricata

April 09, 2014

Victor Julien

Detecting OpenSSL Heartbleed with Suricata

The OpenSSL heartbleed vulnerability is a pretty serious weakness in OpenSSL that can lead to information disclosure, in some cases even to private key leakage. Please see this post for more info: http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html

This is a case where an IDS is able to detect the vulnerability, even though we're dealing with TLS.

LUA

I’ve written a quick and dirty LUA script to detect it:

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED malformed heartbeat record"; \
    flow:established,to_server; dsize:>7; \
    content:"|18 03|"; depth:2; lua:tls-heartbleed.lua; \
    classtype:misc-attack; sid:3000001; rev:1;)

The script:

function init (args)
    local needs = {}
    needs["payload"] = tostring(true)
    return needs
end

function match(args)
    local p = args['payload']
    if p == nil then
        --print ("no payload")
        return 0
    end
 
    if #p < 8 then
        --print ("payload too small")
        return 0
    end
    if (p:byte(1) ~= 24) then
        --print ("not a heartbeat")
        return 0
    end
 
    -- message length
    local len = 256 * p:byte(4) + p:byte(5)
    --print (len)
 
    -- heartbeat length
    local hb_len = 256 * p:byte(7) + p:byte(8)

    -- 1+2+16
    if (1+2+16) >= len  then
        print ("invalid length heartbeat")
        return 1
    end

    -- 1 + 2 + payload + 16
    if (1 + 2 + hb_len + 16) > len then
        print ("heartbleed attack detected: " .. (1 + 2 + hb_len + 16) .. " > " .. len)
        return 1
    end
    --print ("no problems")
    return 0
end
return 0
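The length logic of the script can be sketched as standalone Python (an illustrative helper, not part of Suricata; the Lua script's 1-based byte offsets are translated to 0-based indexing):

```python
def is_heartbleed(p: bytes) -> bool:
    """Mimics the Lua script's checks: True when the TLS heartbeat
    record looks like a Heartbleed probe."""
    if len(p) < 8:
        return False               # payload too small
    if p[0] != 0x18:
        return False               # not a heartbeat record (type 24)
    rec_len = (p[3] << 8) | p[4]   # TLS record length field
    hb_len = (p[6] << 8) | p[7]    # claimed heartbeat payload length
    # record must at least hold type (1) + length (2) + padding (16)
    if 1 + 2 + 16 >= rec_len:
        return True                # invalid length heartbeat
    # claimed payload plus overhead must fit inside the record
    return 1 + 2 + hb_len + 16 > rec_len

# A classic exploit probe: record length 3, claimed payload 0x4000
attack = bytes([0x18, 0x03, 0x02, 0x00, 0x03, 0x01, 0x40, 0x00])
# A benign heartbeat: record length 35 = 1 type + 2 len + 16 payload + 16 padding
benign = bytes([0x18, 0x03, 0x02, 0x00, 0x23, 0x01, 0x00, 0x10])
```

The two sample payloads are hypothetical byte sequences constructed to exercise both branches of the check.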

Regular rules

Inspired by the FOX-IT rules from http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/, here are some non-LUA rules:

Detect a large response:

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat suspicious large record"; \
    flow:established,to_client; dsize:>7; \
    content:"|18 03|"; depth:2; \
    byte_test:2,>,200,3,big; classtype:misc-attack; \
    sid:3000002; rev:1;)

Detect a large response following a large request (flow bit is either set by the LUA rule above or by the rule that follows):

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat attack likely successful"; \
    flowbits:isset,TLS.heartbleed; \
    flow:established,to_client; dsize:>7; \
    content:"|18 03|"; depth:2; byte_test:2,>,200,3,big; \
    classtype:misc-attack; \
    sid:3000003; rev:1;)

Detect a large request, set flowbit:

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat suspicious large request"; \
    flow:established,to_server; content:"|18 03|"; depth:2; \
    content:"|01|"; distance:3; within:1; \
    byte_test:2,>,200,0,big,relative; \
    flowbits:set,TLS.heartbleed; \
    classtype:misc-attack; sid:3000004; rev:1;)
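The byte_test:2,>,200,3,big check in the response rules above reads the two bytes at payload offset 3 (the TLS record length field) as a big-endian integer and compares it against 200. A minimal Python sketch of that check (illustrative only, not Suricata code):

```python
def record_length_over_200(payload: bytes) -> bool:
    # byte_test:2,>,200,3,big - two bytes at offset 3, big-endian, > 200
    if len(payload) < 5:
        return False
    return int.from_bytes(payload[3:5], "big") > 200

# a record claiming a 16384-byte body trips the check,
# while a small record does not
big_record = bytes([0x18, 0x03, 0x02, 0x40, 0x00])
small_record = bytes([0x18, 0x03, 0x02, 0x00, 0x20])
```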

Suricata TLS parser

Pierre Chifflier has written detection logic for the Suricata TLS parser. This is in our git master and will be part of 2.0.1. If you run this code, enable these rules:

alert tls any any -> any any ( \
    msg:"SURICATA TLS overflow heartbeat encountered, possible exploit attempt (heartbleed)"; \
    flow:established; app-layer-event:tls.overflow_heartbeat_message; \
    flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; \
    reference:cve,2014-0160; sid:2230012; rev:1;)
alert tls any any -> any any ( \
    msg:"SURICATA TLS invalid heartbeat encountered, possible exploit attempt (heartbleed)"; \
    flow:established; app-layer-event:tls.invalid_heartbeat_message; \
    flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; \
    reference:cve,2014-0160; sid:2230013; rev:1;)

Ticket: https://redmine.openinfosecfoundation.org/issues/1173
Pull Request: https://github.com/inliniac/suricata/pull/924

Other Resources

- My fellow country (wo)men of Fox-IT have Snort rules here: http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/ (these rules detect suspiciously large heartbeat response sizes)
- Oisf-users has a thread: https://lists.openinfosecfoundation.org/pipermail/oisf-users/2014-April/003603.html
- Emerging Threats has a thread: https://lists.emergingthreats.net/pipermail/emerging-sigs/2014-April/024049.html
- Sourcefire has made rules available as well: http://vrt-blog.snort.org/2014/04/heartbleed-memory-disclosure-upgrade.html (these should work on Suricata as well)

Update 1:
- Pierre Chifflier correctly noted that hb_len doesn’t contain the ‘type’ and ‘size’ fields (3 bytes total), while ‘len’ does. So I updated the check.
Update 2:
- Yonathan Klijnsma pointed me at the difference between the request and the response: https://twitter.com/ydklijnsma/status/453514484074962944. I’ve updated the rule so that the script is only run against requests.
Update 3:
- Better rule formatting
- Add non-LUA rules as well
Update 4:
- ET is going to add these rules: https://lists.emergingthreats.net/pipermail/emerging-sigs/2014-April/024056.html
Update 5:
- Updated the LUA script after feedback from Ivan Ristic. The padding issue was ignored.
Update 6:
- Added Pierre Chifflier’s work on detecting this in the Suricata TLS parser.
- Added reference to Sourcefire VRT rules


by inliniac at April 09, 2014 12:03 PM

April 04, 2014

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter I - Preparation



Introduction

This is a series of 4 articles aiming to give a general guideline on how to deploy the Open Source Suricata IDPS on high speed networks (10Gbps) in IDS mode, using AF_PACKET, PF_RING or DNA.

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF and its supporting vendors.


In addition, we will make use of some of Suricata's great features: mainly, we will compile in GeoIP and file extraction support (extracting files on the fly from traffic based on file type, file extension, file size, file name or MD5 hash).

Furthermore, Logstash / Kibana / Elasticsearch configuration and setup will be explored.

The articles in this series are comprised of:

Chapter I - Preparation
This chapter includes a general system description and basic set up tasks execution and tuning.

Chapter II - PF_RING / DNA
     Part One - PF_RING
     Part Two - DNA
This chapter includes two sections - PF_RING and DNA set up and configuration tasks.

Chapter III - AF_PACKET
This chapter includes AF_PACKET set up and configuration tasks.

Chapter IV - Logstash / Kibana / Elasticsearch
This chapter includes Logstash/Kibana/Elasticsearch  set up and configuration tweaks - making use of the JSON log output available in Suricata.



Following these tutorials will not guarantee you zero drops or a perfect setup. Every setup is unique, depending on a number of things including the type of traffic, hardware, the rulesets used and much more.
Instead, these articles are intended as a general guide / reference; you should further adjust settings after you have gone through the initial deployment steps and analyzed your needs and traffic.

For this set of articles it is not mandatory to install Suricata with both AF_PACKET and PF_RING (or DNA) enabled. Whether you choose one, the other or both is entirely up to you. This article series does not aim to produce a performance comparison between AF_PACKET and PF_RING; it is up to you, depending on your needs, environment and hardware, to see which one works better for your setup.

Chapter I  - Preparation


In Chapter I of this series we will get a quick overview and basic analysis of the OS and system level and of the traffic we are about to monitor, and do a quick, basic Suricata installation. We will also do an initial setup and preparation of the system and the network card.

System's HW

CPU: One Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (16 cores counting Hyperthreading)
root@suricata:/# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
stepping        : 7
microcode       : 0x70b
cpu MHz         : 2701.000
cache size      : 20480 KB
physical id     : 0
siblings        : 16
Memory: 64GB - 1600 MHz
root@suricata:~# cat /proc/meminfo
MemTotal:       65951532 kB
MemFree:        22508716 kB
Buffers:            1028 kB
Cached:          2251136 kB
SwapCached:            0 kB
Network Card: Intel 82599EB 10-Gigabit SFI/SFP+
 04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 34
        Region 0: Memory at fbc20000 (64-bit, non-prefetchable) [size=128K]
        Region 2: I/O ports at e020 [size=32]
        Region 4: Memory at fbc44000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3

System's OS

64 bit Ubuntu LTS 12.04.3
root@suricata:/# uname -a
Linux suricata 3.2.0-39-generic #62-Ubuntu SMP Thu Feb 28 00:28:53 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Suricata

We will use the current dev version of Suricata - 2.0dev (rev 92568c3) - at the time of this writing.


Network Traffic


No two networks' traffic is alike.
Make sure you carefully analyze and select your HW (CPU, network cards, RAM, HDD, PCI bus type and speed) and deployment needs (which and what type of rules/rulesets you are going to use).
Make sure you have an idea of how you are going to mirror the traffic. A good article on the subject of using a Network Tap or Port Mirror can be found HERE.
Make sure you know and analyze/investigate/profile what kind of traffic/protocols, network and users/organization you will be doing the deployment for.

It is important to point out that this is a set up for 10Gbps of traffic IDS monitoring of an ISP (Internet Service Provider) type of network backbone traffic.

Some of the tools you can use to get an idea of the traffic, and which are needed for the configuration part:

 apt-get install ethtool bwm-ng iptraf

Type bwm-ng and hit Enter:

then press d:





tcpstat -i eth3  -o  "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\n"  1
(substitute eth3 with your interface):


Or, from the tcpstat man page, that means:

n - %n - the number of packets
avg - %a - the average packet size in bytes
stddev - %d - the standard deviation of the size of each packet in bytes
bps - %b - the number of bits per second
-o - output format
1 - poll every 1 second
About 1.5 mpps (million packets per second).
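As a back-of-the-envelope check, packet rate and average packet size translate into bits per second like this (the 800-byte average packet size is an assumed example value, not a measurement from this setup):

```python
pps = 1_500_000          # ~1.5 mpps, as observed above
avg_pkt_bytes = 800      # assumed average packet size, for illustration
bps = pps * avg_pkt_bytes * 8
gbps = bps / 1e9
# roughly 9.6 Gbps - consistent with a well-loaded 10Gbps link
```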




iptraf - you can have a look around:
"statistical breakdowns" and "detailed interface statistics" -> TCP/UDP port, packet size, then sort









NOTICE: None of the above three tools will work in a DNA mode configuration/installation while Suricata is running on the same interface (explained/described in a later chapter).


Packages installation

General packages needed:
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libcap-ng-dev libcap-ng0 \
make flex bison git git-core subversion libmagic-dev libnuma-dev

For Eve (all JSON output):
apt-get install libjansson-dev libjansson4

For MD5 support (file extraction):
apt-get install libnss3-dev libnspr4-dev

For GeoIP:
apt-get install libgeoip1 libgeoip-dev

 Network and system  tools:
apt-get install ethtool bwm-ng iptraf htop



Installation and configuration 


Suricata

Get the latest Suricata dev branch:
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ &&  git clone https://github.com/ironbee/libhtp.git -b 0.5.x
 Compile and install
 ./autogen.sh &&  ./configure --enable-geoip \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install && sudo ldconfig
NOTICE: If this is your first time installing Suricata, make sure you do some basic setup tasks - rule downloads, directory setup, network range configuration - as described here. (Or you can just use "sudo make install-full" instead of "sudo make install" above.)

We will do a more specific setup later in the articles, but you do need to have the basic setup done first.


To verify everything is in place, you can execute the following commands:
which suricata
suricata --build-info
ldd `which suricata`


 Network card drivers and tuning

Our card is Intel 82599EB 10-Gigabit SFI/SFP+


rmmod ixgbe
sudo modprobe ixgbe FdirPballoc=3
ifconfig eth3 up
Then we disable irqbalance and make sure it does not re-enable itself on reboot:
 killall irqbalance
 service irqbalance stop

 apt-get install chkconfig
 chkconfig irqbalance off
Get the Intel network drivers from here (we will use them in a second) - https://downloadcenter.intel.com/default.aspx

 Download to your directory of choice, then unpack, compile and install:
 tar -zxf ixgbe-3.18.7.tar.gz
 cd /home/pevman/ixgbe-3.18.7/src
 make clean && make && make install
Set IRQ affinity - do not forget to replace eth3 below with the name of the network interface you are using:
 cd ../scripts/
 ./set_irq_affinity  eth3


 You should see something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ./set_irq_affinity  eth3
no rx vectors found on eth3
no tx vectors found on eth3
eth3 mask=1 for /proc/irq/101/smp_affinity
eth3 mask=2 for /proc/irq/102/smp_affinity
eth3 mask=4 for /proc/irq/103/smp_affinity
eth3 mask=8 for /proc/irq/104/smp_affinity
eth3 mask=10 for /proc/irq/105/smp_affinity
eth3 mask=20 for /proc/irq/106/smp_affinity
eth3 mask=40 for /proc/irq/107/smp_affinity
eth3 mask=80 for /proc/irq/108/smp_affinity
eth3 mask=100 for /proc/irq/109/smp_affinity
eth3 mask=200 for /proc/irq/110/smp_affinity
eth3 mask=400 for /proc/irq/111/smp_affinity
eth3 mask=800 for /proc/irq/112/smp_affinity
eth3 mask=1000 for /proc/irq/113/smp_affinity
eth3 mask=2000 for /proc/irq/114/smp_affinity
eth3 mask=4000 for /proc/irq/115/smp_affinity
eth3 mask=8000 for /proc/irq/116/smp_affinity
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#
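The mask=... values printed above are hexadecimal CPU bitmasks written to /proc/irq/<n>/smp_affinity: mask=1 pins an IRQ to CPU 0, mask=8000 to CPU 15, so the 16 vectors are spread one-to-one across the 16 cores. A small sketch to decode such a mask (a hypothetical helper, for illustration only):

```python
def mask_to_cpus(mask_hex: str) -> list[int]:
    """Translate an smp_affinity hex mask into a list of CPU numbers."""
    mask = int(mask_hex, 16)
    return [cpu for cpu in range(mask.bit_length()) if (mask >> cpu) & 1]

# mask=1    -> CPU 0
# mask=10   -> CPU 4  (the mask is hex, not decimal)
# mask=8000 -> CPU 15
```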
Now we have the latest drivers installed (at the time of this writing) and we have run the affinity script:
   *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: eth3
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7 duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff



We need to disable all offloading on the network card so that the IDS sees the traffic as it appears on the wire (without checksum offloading, TCP segmentation offloading and the like). Otherwise your IDPS will not see all the "natural" network traffic the way it is supposed to and will not inspect it properly.

This influences the correctness of ALL outputs, including file extraction. So make sure all offloading features are OFF!



When you first install the drivers and card your offloading settings might look like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#

So we disable all of them, like so (and we load balance the UDP flows for that particular network card):

ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
ethtool -K eth3 rxvlan off
ethtool -K eth3 txvlan off
ethtool -N eth3 rx-flow-hash udp4 sdfn
ethtool -N eth3 rx-flow-hash udp6 sdfn
ethtool -n eth3 rx-flow-hash udp6
ethtool -n eth3 rx-flow-hash udp4
ethtool -C eth3 rx-usecs 1000
ethtool -C eth3 adaptive-rx off

Your output should look something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 lro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 sg off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rxvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 txvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp4 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp6 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp6
UDP over IPV6 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 rx-usecs 1000
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 adaptive-rx off

Now we double-check and run ethtool again to verify that the offloading is OFF:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off

So in general we are done with the preparation of the system. The next chapter will explain PF_RING / DNA specific configuration in the suricata.yaml and the system in general.



by Peter Manev (noreply@blogger.com) at April 04, 2014 05:36 AM

April 01, 2014

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter II - PF_RING / DNA , Part One - PF_RING

 

 

Introduction


This is Chapter II (Part One - PF_RING) of a series of articles about high performance and advanced tuning of the Suricata IDPS.

This chapter consists of two parts, covering the setup and configuration of PF_RING and DNA for monitoring a 10Gbps interface.

PF_RING:
PF_RING™ is a new type of network socket that dramatically improves the packet capture speed...
PF_RING DNA:
PF_RING™ DNA (Direct NIC Access) is a way to map NIC memory and registers to userland so that there is no additional packet copy besides the DMA transfer done by the NIC NPU (Network Process Unit), unlike what happens with NAPI. This results in better performance as CPU cycles are used uniquely for consuming packets and not for moving them off the adapter...


NOTE: PF_RING™ is open source and free; for the DNA part you need a license. However, the DNA license is free for non-profit organizations and educational institutions (universities, colleges, etc.).




Part One - PF_RING


If you have pf_ring already installed, you might want to do:
sudo rmmod pf_ring
If you are not sure whether you have pf_ring installed, you can do:
sudo modinfo pf_ring


Get the latest pf_ring sources:
svn export https://svn.ntop.org/svn/ntop/trunk/PF_RING/ pfring-svn-latest


Compile and install PF_RING


Next, enter the following commands for configuration and installation:
(!!! NOT AS ROOT !!!)

    cd pfring-svn-latest/kernel
    make && sudo make install
    cd ../userland/lib
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    cd ../libpcap-1.1.1-ring
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    cd ../tcpdump-4.1.1
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    sudo ldconfig
  

Then we load the module:
sudo modprobe pf_ring
  
Elevate to root and check that you have everything you need - enter:
modinfo pf_ring && cat /proc/net/pf_ring/info
   
Increase the throttle rate of the ixgbe module:
modprobe ixgbe InterruptThrottleRate=4000



The default pf_ring setup will look something like this:
root@suricata:/var/og/suricata# cat /proc/net/pf_ring/info
PF_RING Version          : 5.6.2 ($Revision: exported$)
Total rings              : 16
Standard (non DNA) Options
Ring slots               : 4096
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0


Notice the ring slots above. We want to increase that value in order to meet the needs of the high speed network we are going to monitor with Suricata.

So we do:
rmmod pf_ring
modprobe pf_ring transparent_mode=0 min_num_slots=65534
root@suricata:/home/pevman/pfring-svn-latest# modprobe pf_ring transparent_mode=0 min_num_slots=65534

root@suricata:/home/pevman/pfring-svn-latest# cat /proc/net/pf_ring/info
PF_RING Version          : 5.6.2 ($Revision: exported$)
Total rings              : 0
Standard (non DNA) Options
Ring slots               : 65534
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0


Notice the difference above  - Ring slots: 65534



Compile and install Suricata with PF_RING enabled


Get the latest Suricata dev branch:
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ &&  git clone https://github.com/ironbee/libhtp.git -b 0.5.x

 Compile and install
./autogen.sh && LIBS="-lrt -lnuma" ./configure --enable-pfring --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install && sudo ldconfig


The LIBS="-lrt -lnuma" in front of "./configure" above is there in case you get the following error without it:
checking for pfring_open in -lpfring... no

ERROR! --enable-pfring was passed but the library was not found or version is >4, go get it
from http://www.ntop.org/PF_RING.html



PF_RING - suricata.yaml tune up and configuration

The following values and variables in the default suricata.yaml need to be changed ->

We make sure we use runmode workers (feel free to try other modes and experiment to find what is best for your specific setup):
#runmode: autofp
runmode: workers


Adjust the packet size:
# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
default-packet-size: 1522
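These numbers come straight from Ethernet framing: 1514 is the 1500-byte MTU plus the 14-byte Ethernet header, and 1522 presumably also leaves room for a 4-byte 802.1Q VLAN tag plus 4 more bytes (e.g. the frame check sequence, if captured). A minimal worked check of that arithmetic:

```python
MTU = 1500
ETH_HEADER = 14   # dst MAC (6) + src MAC (6) + ethertype (2)
VLAN_TAG = 4      # 802.1Q tag
FCS = 4           # frame check sequence

assert MTU + ETH_HEADER == 1514                   # the classic pcap default
assert MTU + ETH_HEADER + VLAN_TAG + FCS == 1522  # the value used above
```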


Use the custom profile in detect-engine with a lot more groups (the "high" profile gives you about 15 groups per variable, but you can customize as needed depending on the network ranges you monitor):
detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 3000


Adjust your defrag settings:
# Defrag settings:
defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep
  prealloc: yes
  timeout: 30
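These settings map directly onto the allocation figures Suricata prints at startup (the 56-byte bucket and 152-byte tracker sizes below are taken from the example startup log later in this article):

```python
hash_size = 65536   # defrag hash buckets, as configured above
trackers = 65535    # preallocated defrag trackers, as configured above

hash_bytes = hash_size * 56      # "65536 buckets of size 56"
tracker_bytes = trackers * 152   # "65535 defrag trackers of size 152"
total = hash_bytes + tracker_bytes
# total == 13631336, matching "defrag memory usage: 13631336 bytes"
# in the log - comfortably below the 512mb memcap (536870912 bytes)
```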



Adjust your flow settings:
flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30


Adjust your per protocol timeout values:
flow-timeouts:

  default:
    new: 3
    established: 30
    closed: 0
    emergency-new: 10
    emergency-established: 10
    emergency-closed: 0
  tcp:
    new: 6
    established: 100
    closed: 12
    emergency-new: 1
    emergency-established: 5
    emergency-closed: 2
  udp:
    new: 3
    established: 30
    emergency-new: 3
    emergency-established: 10
  icmp:
    new: 3
    established: 30
    emergency-new: 1
    emergency-established: 10



Adjust your stream engine settings:
stream:
  memcap: 12gb
  checksum-validation: no      # reject wrong csums
  prealloc-sessions: 500000    # per thread
  midstream: true
  async-oneside: true
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10


Make sure you enable suricata.log for troubleshooting if something goes wrong:
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log



The PF_RING section:
# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html
pfring:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_flow
    # bpf filter for this interface
    #bpf-filter: tcp
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto



We had these rules enabled:
rule-files:
 - md5.rules # 134 000 specially selected file md5s
 - dns.rules
 - malware.rules
 - local.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules


Make sure you adjust your Network and Port variables:
  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[ HOME NET HERE ]"

    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"

    SMTP_SERVERS: "$HOME_NET"

    SQL_SERVERS: "$HOME_NET"

    DNS_SERVERS: "$HOME_NET"

    TELNET_SERVERS: "$HOME_NET"

    AIM_SERVERS: "$EXTERNAL_NET"

    DNP3_SERVER: "$HOME_NET"

    DNP3_CLIENT: "$HOME_NET"

    MODBUS_CLIENT: "$HOME_NET"

    MODBUS_SERVER: "$HOME_NET"

    ENIP_CLIENT: "$HOME_NET"

    ENIP_SERVER: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  # These would be retrieved during the Signature port parsing stage.
  port-groups:

    HTTP_PORTS: "80"

    SHELLCODE_PORTS: "!80"

    ORACLE_PORTS: 1521

    SSH_PORTS: 22

    DNP3_PORTS: 20000


Your app parsers:
# Holds details on the app-layer. The protocols section details each protocol.
# Under each protocol, the default value for detection-enabled and "
# parsed-enabled is yes, unless specified otherwise.
# Each protocol covers enabling/disabling parsers for all ipprotos
# the app-layer protocol runs on.  For example "dcerpc" refers to the tcp
# version of the protocol as well as the udp version of the protocol.
# The option "enabled" takes 3 values - "yes", "no", "detection-only".
# "yes" enables both detection and the parser, "no" disables both, and
# "detection-only" enables detection only(parser disabled).
app-layer:
  protocols:
    tls:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 443

      #no-reassemble: yes
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 139
    # smb2 detection is disabled internally inside the engine.
    #smb2:
    #  enabled: yes
    dnstcp:
       enabled: yes
       detection-ports:
         tcp:
           toserver: 53
    dnsudp:
       enabled: yes
       detection-ports:
         udp:
           toserver: 53
    http:
      enabled: yes


Libhtp body limits:
      libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 12mb
           response-body-limit: 12mb

           # inspection limits
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 32kb
           response-body-inspect-window: 4kb



Run it


With all that done and in place, you can start Suricata like this (change your directory locations and such!):
 LD_LIBRARY_PATH=/usr/local/pfring/lib suricata --pfring-int=eth3 \
 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow \
 -c /etc/suricata/peter-yaml/suricata-pfring.yaml -D -v


This would also work:
suricata --pfring-int=eth3  --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow \
 -c /etc/suricata/peter-yaml/suricata-pfring.yaml -D -v



After you start Suricata with PF_RING, you can use htop and the information in suricata.log to determine whether everything is OK.


EXAMPLE:
 [29966] 30/11/2013 -- 14:29:12 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[29966] 30/11/2013 -- 14:29:12 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[29966] 30/11/2013 -- 14:29:12 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[29966] 30/11/2013 -- 14:29:12 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[29966] 30/11/2013 -- 14:29:12 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[29966] 30/11/2013 -- 14:29:12 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[29967] 30/11/2013 -- 14:29:12 - (tmqh-packetpool.c:141) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 229106864
[29967] 30/11/2013 -- 14:29:12 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[29967] 30/11/2013 -- 14:29:12 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[29967] 30/11/2013 -- 14:29:12 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[29967] 30/11/2013 -- 14:29:12 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[29967] 30/11/2013 -- 14:29:13 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[29967] 30/11/2013 -- 14:29:13 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
.....
[29967] 30/11/2013 -- 14:30:23 - (util-runmodes.c:545) <Info> (RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)
[30000] 30/11/2013 -- 14:30:23 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth31) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30001] 30/11/2013 -- 14:30:23 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth32) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30002] 30/11/2013 -- 14:30:23 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth33) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30003] 30/11/2013 -- 14:30:23 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth34) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30004] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth35) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30005] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth36) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30006] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth37) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30007] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth38) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30008] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth39) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30009] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth310) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30010] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth311) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30011] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth312) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30012] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth313) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30013] 30/11/2013 -- 14:30:24 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth314) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30014] 30/11/2013 -- 14:30:25 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth315) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30015] 30/11/2013 -- 14:30:25 - (source-pfring.c:445) <Info> (ReceivePfringThreadInit) -- (RxPFReth316) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[29967] 30/11/2013 -- 14:30:25 - (runmode-pfring.c:555) <Info> (RunModeIdsPfringWorkers) -- RunModeIdsPfringWorkers initialised

.....
[29967] 30/11/2013 -- 14:30:25 - (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.




After running for about 7 hours:
root@suricata:/var/log/suricata# grep kernel stats.log |tail -32
capture.kernel_packets    | RxPFReth31                | 2313986783
capture.kernel_drops      | RxPFReth31                | 75254447
capture.kernel_packets    | RxPFReth32                | 2420204427
capture.kernel_drops      | RxPFReth32                | 23492323
capture.kernel_packets    | RxPFReth33                | 2412343682
capture.kernel_drops      | RxPFReth33                | 71202459
capture.kernel_packets    | RxPFReth34                | 2249712177
capture.kernel_drops      | RxPFReth34                | 15290216
capture.kernel_packets    | RxPFReth35                | 2272653367
capture.kernel_drops      | RxPFReth35                | 2072826
capture.kernel_packets    | RxPFReth36                | 2281254066
capture.kernel_drops      | RxPFReth36                | 118723669
capture.kernel_packets    | RxPFReth37                | 2430047882
capture.kernel_drops      | RxPFReth37                | 13702511
capture.kernel_packets    | RxPFReth38                | 2474713911
capture.kernel_drops      | RxPFReth38                | 6512062
capture.kernel_packets    | RxPFReth39                | 2299221265
capture.kernel_drops      | RxPFReth39                | 596690
capture.kernel_packets    | RxPFReth310               | 2398183554
capture.kernel_drops      | RxPFReth310               | 15623971
capture.kernel_packets    | RxPFReth311               | 2277348230
capture.kernel_drops      | RxPFReth311               | 62773742
capture.kernel_packets    | RxPFReth312               | 2693710052
capture.kernel_drops      | RxPFReth312               | 40213266
capture.kernel_packets    | RxPFReth313               | 2470037871
capture.kernel_drops      | RxPFReth313               | 406738
capture.kernel_packets    | RxPFReth314               | 2236636480
capture.kernel_drops      | RxPFReth314               | 714360
capture.kernel_packets    | RxPFReth315               | 2314829059
capture.kernel_drops      | RxPFReth315               | 1818726
capture.kernel_packets    | RxPFReth316               | 2271917603
capture.kernel_drops      | RxPFReth316               | 1200009

About 2% drops at 85% CPU usage, with about 3300 rules loaded and traffic inspected for matches on 134,000 file MD5s.
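That drop percentage can be computed directly from the same grep output. A small awk sketch — sample counters are inlined so it runs standalone; in practice pipe `grep kernel /var/log/suricata/stats.log | tail -32` into the awk command instead:

```shell
# Inlined sample of the "grep kernel stats.log" output shown above
stats='capture.kernel_packets    | RxPFReth31                | 1000000
capture.kernel_drops      | RxPFReth31                | 20000
capture.kernel_packets    | RxPFReth32                | 2000000
capture.kernel_drops      | RxPFReth32                | 40000'
rate=$(echo "$stats" | awk -F'|' '
    /kernel_packets/ { pkts  += $3 }   # 3rd pipe-separated column = counter
    /kernel_drops/   { drops += $3 }
    END { printf "%.2f", 100 * drops / pkts }')
echo "${rate}% dropped"
```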


On a side note

You could also use linux-tools to do some more analysis and performance tuning:
apt-get install linux-tools
Then, for example, run perf top to see where CPU time is being spent.




Some more info found HERE and thanks to Regit HERE.

Your tuning task is not yet done. You could also do a dry test run with profiling enabled in Suricata to determine the most "expensive" rules and tune them accordingly.

This is Chapter II (Part One - PF_RING) of a series of articles about high performance and advanced tuning of Suricata IDPS. The next article is Chapter II (Part Two - DNA).


by Peter Manev (noreply@blogger.com) at April 01, 2014 04:02 PM

Suricata (and the grand slam of) Open Source IDPS - Chapter II - PF_RING / DNA , Part Two - DNA

Introduction


This is Chapter II (Part Two - DNA) of a series of articles about high performance and advanced tuning of Suricata IDPS.

This chapter consists of two parts covering the setup and configuration of PF_RING and PF_RING DNA for monitoring a 10Gbps interface.
This is Part Two - DNA, describing the setup and tuning of PF_RING™ DNA (Direct NIC Access).

Many thanks to Luca Deri and Alfredo Cardigliano from ntop for providing a license and support that made possible this article/guide for 10 Gbps deployment scenario.


Part Two - DNA

NOTE: PF_RING is open source and free, but DNA requires a license. However, the DNA license is free for non-profit organizations and educational institutions (universities, colleges etc.)
In general PF_RING DNA is much faster than the usual PF_RING.

If you do not have PF_RING installed on your system, follow all of the Part One - PF_RING guide except the section "Run it", then come back and continue from here.

If you already have PF_RING installed, follow this article to get PF_RING DNA set up and installed.

NOTE: Know your network card. This set up uses Intel 82599EB 10-Gigabit SFI/SFP+

NOTE: While one application is using a DNA interface, no other application can use that same interface. For example, if Suricata is running per this guide, you cannot also run "./pfcount" on the same interface, since the DNA interface is already in use. If you would like multiple applications to share the same DNA interface, you should consider Libzero.

Compile

Once you have acquired your DNA license (instructions are included with the license), cd into the driver source of your latest pfring checkout:

cd /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src
make



Configure

Become root, then edit the script load_dna_driver.sh found in the directory below
(/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src/load_dna_driver.sh).
Make changes in load_dna_driver.sh like so (we use only one DNA interface):
# Configure here the network interfaces to activate
IF[0]=dna0
#IF[1]=dna1
#IF[2]=dna2
#IF[3]=dna3



Leave rmmod like so (default):
# Remove old modules (if loaded)
rmmod ixgbe
rmmod pf_ring


Leave only two insmod lines uncommented:
# We assume that you have compiled PF_RING
insmod ../../../../kernel/pf_ring.ko


Adjust the queues, use your own MAC address, increase the buffers, up the laser on the SFP:
# As many queues as the number of processors
#insmod ./ixgbe.ko RSS=0,0,0,0
insmod ./ixgbe.ko RSS=0 mtu=1522 adapters_to_enable=00:e0:ed:19:e3:e1 num_rx_slots=32768 FdirPballoc=3

Above we have 16 CPUs: RSS=0 creates as many queues as there are cores, so we get 16 queues. We enable only the adapter with this MAC address, bump up the RX slots, and comment out all the other insmod lines (besides the two shown above for pf_ring.ko and ixgbe.ko).

In the case above we enable 16 queues (because we have 16 CPUs) for the first port of the 10Gbps Intel network card.


 +++++ CORNER CASE +++++
(the bonus round! - with the help of Alfredo Cardigliano from ntop)

Question:
What should you do in this scenario: a 32-core system with a 4-port 10Gbps
network card and DNA, the ports receiving 1, 2, 6 and 1 Gbps
of traffic, respectively?

You would like to dedicate 4, 8, 16 and 4 queues/CPUs per
port. In other words:
Gbps of traffic (ports 0,1,2,3)  ->  1,2,6,1
Number of CPUs/queues dedicated  ->  4,8,16,4

Answer:
Simple -> You should use
insmod ./ixgbe.ko RSS=4,8,16,4 ....

instead of :
insmod ./ixgbe.ko RSS=0 ....

+++++ END of the CORNER CASE +++++
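Before loading the driver it is worth sanity-checking such a split — the per-port queue counts must add up to the available core count. A small sketch using the corner case's values:

```shell
RSS="4,8,16,4"   # queues per port, as passed to insmod ./ixgbe.ko RSS=...
CORES=32         # cores available on the example system
total=0
for q in $(printf '%s' "$RSS" | tr ',' ' '); do
    total=$((total + q))     # accumulate the per-port queue counts
done
if [ "$total" -eq "$CORES" ]; then
    echo "RSS=$RSS matches $CORES cores"
else
    echo "RSS=$RSS mismatched: $total queues for $CORES cores"
fi
```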


Execute load_dna_driver.sh from the directory it resides in
(for this tutorial - /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src):
./load_dna_driver.sh

Make sure offloading is disabled (substitute the correct interface name below):
ethtool -K dna0 tso off
ethtool -K dna0 gro off
ethtool -K dna0 lro off
ethtool -K dna0 gso off
ethtool -K dna0 rx off
ethtool -K dna0 tx off
ethtool -K dna0 sg off
ethtool -K dna0 rxvlan off
ethtool -K dna0 txvlan off
ethtool -N dna0 rx-flow-hash udp4 sdfn
ethtool -N dna0 rx-flow-hash udp6 sdfn
ethtool -n dna0 rx-flow-hash udp6
ethtool -n dna0 rx-flow-hash udp4
ethtool -C dna0 rx-usecs 1000
ethtool -C dna0 adaptive-rx off
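The repeated ethtool -K offload settings can also be expressed as a loop. This sketch only builds and prints the commands (pipe the output to sh to execute them; the interface name is this guide's dna0):

```shell
IFACE=dna0   # interface name assumed from this guide
cmds=""
for feature in tso gro lro gso rx tx sg rxvlan txvlan; do
    cmds="${cmds}ethtool -K $IFACE $feature off
"
done
printf '%s' "$cmds"   # pipe into "sh" to actually run the commands
```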



Configuration in suricata.yaml

In suricata.yaml, make sure your pfring section looks like this:

# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html  #dna0@0
pfring:
  - interface: dna0@0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    #threads: 1

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 1

    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_flow
    # bpf filter for this interface
    #bpf-filter: tcp
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto
  # Second interface
  - interface: dna0@1
    threads: 1
  - interface: dna0@2
    threads: 1
  - interface: dna0@3
    threads: 1
  - interface: dna0@4
    threads: 1
  - interface: dna0@5
    threads: 1
  - interface: dna0@6
    threads: 1
  - interface: dna0@7
    threads: 1
  - interface: dna0@8
    threads: 1
  - interface: dna0@9
    threads: 1
  - interface: dna0@10
    threads: 1
  - interface: dna0@11
    threads: 1
  - interface: dna0@12
    threads: 1
  - interface: dna0@13
    threads: 1
  - interface: dna0@14
    threads: 1
  - interface: dna0@15
    threads: 1
  # Put default values here
  #- interface: default
    #threads: 2
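Typing out the fifteen per-queue stanzas above by hand is error-prone; a small sketch that prints the dna0@1 through dna0@15 entries so you can paste the output under the full dna0@0 block:

```shell
# Generate "  - interface: dna0@N" / "    threads: 1" pairs for queues 1-15
yaml=$(for q in $(seq 1 15); do
    printf -- '  - interface: dna0@%d\n    threads: 1\n' "$q"
done)
echo "$yaml"
```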


Rules enabled in suricata.yaml:

default-rule-path: /etc/suricata/et-config/
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - local.rules
 - jonkman.rules
 - worm.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules



For the rest of the suricata.yaml configuration - the Suricata-specific settings such as timeouts, memory settings, fragmentation and reassembly limits and so on - refer to Part One - PF_RING.


Notice the DNA driver loaded:
 lshw -c Network
  *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: dna0
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7-DNA duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff



Start Suricata with DNA

(make sure you adjust the directories in the command below)
suricata --pfring -c /etc/suricata/peter-yaml/suricata-pfring-dna.yaml -v -D


Some stats from suricata.log:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# more /var/log/suricata/suricata.log
[32055] 27/11/2013 -- 13:31:38 - (suricata.c:932) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 77b09fc)
[32055] 27/11/2013 -- 13:31:38 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[32055] 27/11/2013 -- 13:31:38 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:209) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:234) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:241) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[32055] 27/11/2013 -- 13:31:38 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[32056] 27/11/2013 -- 13:31:38 - (tmqh-packetpool.c:141) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 288873872
[32056] 27/11/2013 -- 13:31:38 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[32056] 27/11/2013 -- 13:31:38 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[32056] 27/11/2013 -- 13:31:38 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[32056] 27/11/2013 -- 13:31:38 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[32056] 27/11/2013 -- 13:31:38 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 376
[32056] 27/11/2013 -- 13:31:38 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 469762048 bytes, maximum: 1073741824
[32056] 27/11/2013 -- 13:31:38 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[32056] 27/11/2013 -- 13:31:38 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[32056] 27/11/2013 -- 13:31:38 - (suricata.c:1725) <Info> (SetupDelayedDetect) -- Delayed detect disabled

..... 8010 rules loaded:

[32056] 27/11/2013 -- 13:31:40 - (detect.c:453) <Info> (SigLoadSignatures) -- 9 rule files processed. 8010 rules successfully loaded, 0 rules failed
[32056] 27/11/2013 -- 13:31:40 - (detect.c:2589) <Info> (SigAddressPrepareStage1) -- 8017 signatures processed. 1 are IP-only rules, 2147 are inspecting packet payload, 6625 inspect application lay
er, 0 are decoder event only
[32056] 27/11/2013 -- 13:31:40 - (detect.c:2592) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: adding signatures to signature source addresses... complete
[32056] 27/11/2013 -- 13:31:40 - (detect.c:3218) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[32056] 27/11/2013 -- 13:35:28 - (detect.c:3860) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[32056] 27/11/2013 -- 13:35:28 - (util-threshold-config.c:1186) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[32056] 27/11/2013 -- 13:35:28 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- fast output device (regular) initialized: fast.log
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- http-log output device (regular) initialized: http.log
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- tls-log output device (regular) initialized: tls.log
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@0 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@1 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@2 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@3 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@4 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@5 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@6 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@7 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@8 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@9 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@10 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@11 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@12 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@13 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@14 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@15 from config file
........
......
[32056] 27/11/2013 -- 13:35:28 - (runmode-pfring.c:555) <Info> (RunModeIdsPfringWorkers) -- RunModeIdsPfringWorkers initialised
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:374) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:390) <Info> (StreamTcpInitConfig) -- stream "memcap": 17179869184
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:396) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: enabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:402) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:419) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:441) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:454) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:472) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 25769803776
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:490) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:573) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2509
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:575) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2459
[32056] 27/11/2013 -- 13:35:28 - (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.




Results after 45 minutes of running (and counting) on 10Gbps with 8010 rules (impressive):
root@suricata:/var/log/suricata# grep  kernel /var/log/suricata/stats.log | tail -32
capture.kernel_packets    | RxPFRdna0@01              | 467567844
capture.kernel_drops      | RxPFRdna0@01              | 0
capture.kernel_packets    | RxPFRdna0@11              | 440973548
capture.kernel_drops      | RxPFRdna0@11              | 0
capture.kernel_packets    | RxPFRdna0@21              | 435088258
capture.kernel_drops      | RxPFRdna0@21              | 0
capture.kernel_packets    | RxPFRdna0@31              | 453131090
capture.kernel_drops      | RxPFRdna0@31              | 0
capture.kernel_packets    | RxPFRdna0@41              | 469334903
capture.kernel_drops      | RxPFRdna0@41              | 0
capture.kernel_packets    | RxPFRdna0@51              | 430412652
capture.kernel_drops      | RxPFRdna0@51              | 0
capture.kernel_packets    | RxPFRdna0@61              | 438056484
capture.kernel_drops      | RxPFRdna0@61              | 0
capture.kernel_packets    | RxPFRdna0@71              | 428234219
capture.kernel_drops      | RxPFRdna0@71              | 0
capture.kernel_packets    | RxPFRdna0@81              | 452883734
capture.kernel_drops      | RxPFRdna0@81              | 0
capture.kernel_packets    | RxPFRdna0@91              | 469565553
capture.kernel_drops      | RxPFRdna0@91              | 0
capture.kernel_packets    | RxPFRdna0@101             | 442010263
capture.kernel_drops      | RxPFRdna0@101             | 0
capture.kernel_packets    | RxPFRdna0@111             | 451989862
capture.kernel_drops      | RxPFRdna0@111             | 0
capture.kernel_packets    | RxPFRdna0@121             | 452650397
capture.kernel_drops      | RxPFRdna0@121             | 0
capture.kernel_packets    | RxPFRdna0@131             | 464907229
capture.kernel_drops      | RxPFRdna0@131             | 0
capture.kernel_packets    | RxPFRdna0@141             | 443403243
capture.kernel_drops      | RxPFRdna0@141             | 0
capture.kernel_packets    | RxPFRdna0@151             | 432499371
capture.kernel_drops      | RxPFRdna0@151             | 0

Some htop stats




In the examples directory of your PF_RING sources (/pfring-svn-latest/userland/examples) there are tools you can use to look at packet stats and such, for example:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# ./pfcount_multichannel -i dna0
Capturing from dna0
Found 16 channels
Using PF_RING v.5.6.2
=========================
Absolute Stats: [channel=0][181016 pkts rcvd][0 pkts dropped]
Total Pkts=181016/Dropped=0.0 %
181016 pkts - 151335257 bytes [181011.8 pkt/sec - 1210.65 Mbit/sec]
=========================
Absolute Stats: [channel=1][179532 pkts rcvd][0 pkts dropped]
Total Pkts=179532/Dropped=0.0 %
179532 pkts - 145662057 bytes [179527.9 pkt/sec - 1165.27 Mbit/sec]
=========================
Absolute Stats: [channel=2][165293 pkts rcvd][0 pkts dropped]
Total Pkts=165293/Dropped=0.0 %
165293 pkts - 136544046 bytes [165289.2 pkt/sec - 1092.33 Mbit/sec]
=========================
Absolute Stats: [channel=3][170460 pkts rcvd][0 pkts dropped]
Total Pkts=170460/Dropped=0.0 %
170460 pkts - 140635250 bytes [170456.1 pkt/sec - 1125.06 Mbit/sec]
=========================
Absolute Stats: [channel=4][175195 pkts rcvd][0 pkts dropped]
Total Pkts=175195/Dropped=0.0 %
175274 pkts - 152625282 bytes [175270.0 pkt/sec - 1220.46 Mbit/sec]
=========================
Absolute Stats: [channel=5][183791 pkts rcvd][0 pkts dropped]
Total Pkts=183791/Dropped=0.0 %
183885 pkts - 160108632 bytes [183880.8 pkt/sec - 1280.29 Mbit/sec]
=========================
Absolute Stats: [channel=6][195090 pkts rcvd][0 pkts dropped]
Total Pkts=195090/Dropped=0.0 %
195090 pkts - 151078761 bytes [195085.5 pkt/sec - 1208.60 Mbit/sec]
=========================
Absolute Stats: [channel=7][176625 pkts rcvd][0 pkts dropped]
Total Pkts=176625/Dropped=0.0 %
176625 pkts - 149183724 bytes [176620.9 pkt/sec - 1193.44 Mbit/sec]
=========================
Absolute Stats: [channel=8][226365 pkts rcvd][0 pkts dropped]
Total Pkts=226365/Dropped=0.0 %
226365 pkts - 214464585 bytes [226359.8 pkt/sec - 1715.68 Mbit/sec]
=========================
Absolute Stats: [channel=9][183973 pkts rcvd][0 pkts dropped]
Total Pkts=183973/Dropped=0.0 %
183973 pkts - 154206146 bytes [183968.8 pkt/sec - 1233.62 Mbit/sec]
=========================
Absolute Stats: [channel=10][193904 pkts rcvd][0 pkts dropped]
Total Pkts=193904/Dropped=0.0 %
193904 pkts - 170982720 bytes [193899.5 pkt/sec - 1367.83 Mbit/sec]
=========================
Absolute Stats: [channel=11][159307 pkts rcvd][0 pkts dropped]
Total Pkts=159307/Dropped=0.0 %
159307 pkts - 130492164 bytes [159303.3 pkt/sec - 1043.91 Mbit/sec]
=========================
Absolute Stats: [channel=12][198685 pkts rcvd][0 pkts dropped]
Total Pkts=198685/Dropped=0.0 %
198685 pkts - 173157408 bytes [198680.4 pkt/sec - 1385.23 Mbit/sec]
=========================
Absolute Stats: [channel=13][196712 pkts rcvd][0 pkts dropped]
Total Pkts=196712/Dropped=0.0 %
196712 pkts - 172714889 bytes [196707.5 pkt/sec - 1381.69 Mbit/sec]
=========================
Absolute Stats: [channel=14][180239 pkts rcvd][0 pkts dropped]
Total Pkts=180239/Dropped=0.0 %
180239 pkts - 153796845 bytes [180234.9 pkt/sec - 1230.35 Mbit/sec]
=========================
Absolute Stats: [channel=15][174886 pkts rcvd][0 pkts dropped]
Total Pkts=174886/Dropped=0.0 %
174886 pkts - 149870888 bytes [174882.0 pkt/sec - 1198.94 Mbit/sec]
=========================
Aggregate stats (all channels): [0.0 pkt/sec][0.00 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][280911 pkts rcvd][0 pkts dropped]
Total Pkts=280911/Dropped=0.0 %
280911 pkts - 238246030 bytes [140327.9 pkt/sec - 952.12 Mbit/sec]
=========================
Actual Stats: [channel=0][99895 pkts][1001.8 ms][99715.9 pkt/sec]
=========================
Absolute Stats: [channel=1][271128 pkts rcvd][0 pkts dropped]
Total Pkts=271128/Dropped=0.0 %
271128 pkts - 220184576 bytes [135440.8 pkt/sec - 879.94 Mbit/sec]
=========================
Actual Stats: [channel=1][91540 pkts][1001.8 ms][91375.9 pkt/sec]
=========================
Absolute Stats: [channel=2][251004 pkts rcvd][0 pkts dropped]
Total Pkts=251004/Dropped=0.0 %
251090 pkts - 210457632 bytes [125430.9 pkt/sec - 840.91 Mbit/sec]
=========================
Actual Stats: [channel=2][85799 pkts][1001.8 ms][85645.2 pkt/sec]
=========================
Absolute Stats: [channel=3][256648 pkts rcvd][0 pkts dropped]
Total Pkts=256648/Dropped=0.0 %
256648 pkts - 213116218 bytes [128207.4 pkt/sec - 851.69 Mbit/sec]
=========================
Actual Stats: [channel=3][86188 pkts][1001.8 ms][86033.5 pkt/sec]
=========================
Absolute Stats: [channel=4][261802 pkts rcvd][0 pkts dropped]
Total Pkts=261802/Dropped=0.0 %
261802 pkts - 225272589 bytes [130782.1 pkt/sec - 900.27 Mbit/sec]
=========================
Actual Stats: [channel=4][86528 pkts][1001.8 ms][86372.9 pkt/sec]
=========================
Absolute Stats: [channel=5][275665 pkts rcvd][0 pkts dropped]
Total Pkts=275665/Dropped=0.0 %
275665 pkts - 239259529 bytes [137707.3 pkt/sec - 956.17 Mbit/sec]
=========================
Actual Stats: [channel=5][91780 pkts][1001.8 ms][91615.5 pkt/sec]
=========================
Absolute Stats: [channel=6][295611 pkts rcvd][0 pkts dropped]
Total Pkts=295611/Dropped=0.0 %
295611 pkts - 231543496 bytes [147671.2 pkt/sec - 925.33 Mbit/sec]
=========================
Actual Stats: [channel=6][100521 pkts][1001.8 ms][100340.8 pkt/sec]
=========================
Absolute Stats: [channel=7][268374 pkts rcvd][0 pkts dropped]
Total Pkts=268374/Dropped=0.0 %
268374 pkts - 230010930 bytes [134065.1 pkt/sec - 919.21 Mbit/sec]
=========================
Actual Stats: [channel=7][91749 pkts][1001.8 ms][91584.5 pkt/sec]
=========================
Absolute Stats: [channel=8][312726 pkts rcvd][0 pkts dropped]
Total Pkts=312726/Dropped=0.0 %
312726 pkts - 286419690 bytes [156220.9 pkt/sec - 1144.64 Mbit/sec]
=========================
Actual Stats: [channel=8][86361 pkts][1001.8 ms][86206.2 pkt/sec]
=========================
Absolute Stats: [channel=9][275091 pkts rcvd][0 pkts dropped]
Total Pkts=275091/Dropped=0.0 %
275091 pkts - 229807313 bytes [137420.5 pkt/sec - 918.39 Mbit/sec]
=========================
Actual Stats: [channel=9][91118 pkts][1001.8 ms][90954.6 pkt/sec]
=========================
Absolute Stats: [channel=10][289441 pkts rcvd][0 pkts dropped]
Total Pkts=289441/Dropped=0.0 %
289441 pkts - 254843198 bytes [144589.0 pkt/sec - 1018.45 Mbit/sec]
=========================
Actual Stats: [channel=10][95537 pkts][1001.8 ms][95365.7 pkt/sec]
=========================
Absolute Stats: [channel=11][241318 pkts rcvd][0 pkts dropped]
Total Pkts=241318/Dropped=0.0 %
241318 pkts - 200442927 bytes [120549.4 pkt/sec - 801.04 Mbit/sec]
=========================
Actual Stats: [channel=11][82011 pkts][1001.8 ms][81864.0 pkt/sec]
=========================
Absolute Stats: [channel=12][300209 pkts rcvd][0 pkts dropped]
Total Pkts=300209/Dropped=0.0 %
300209 pkts - 261259342 bytes [149968.1 pkt/sec - 1044.09 Mbit/sec]
=========================
Actual Stats: [channel=12][101524 pkts][1001.8 ms][101342.0 pkt/sec]
=========================
Absolute Stats: [channel=13][293733 pkts rcvd][0 pkts dropped]
Total Pkts=293733/Dropped=0.0 %
293733 pkts - 259477621 bytes [146733.0 pkt/sec - 1036.97 Mbit/sec]
=========================
Actual Stats: [channel=13][97021 pkts][1001.8 ms][96847.1 pkt/sec]
=========================
Absolute Stats: [channel=14][267101 pkts rcvd][0 pkts dropped]
Total Pkts=267101/Dropped=0.0 %
267101 pkts - 226064969 bytes [133429.1 pkt/sec - 903.44 Mbit/sec]
=========================
Actual Stats: [channel=14][86862 pkts][1001.8 ms][86706.3 pkt/sec]
=========================
Absolute Stats: [channel=15][266323 pkts rcvd][0 pkts dropped]
Total Pkts=266323/Dropped=0.0 %
266323 pkts - 232926529 bytes [133040.5 pkt/sec - 930.86 Mbit/sec]
=========================
Actual Stats: [channel=15][91437 pkts][1001.8 ms][91273.1 pkt/sec]
=========================
Aggregate stats (all channels): [1463243.0 pkt/sec][15023.51 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][373933 pkts rcvd][0 pkts dropped]
Total Pkts=373933/Dropped=0.0 %
374021 pkts - 319447715 bytes [124511.0 pkt/sec - 850.55 Mbit/sec]
=========================
Actual Stats: [channel=0][93110 pkts][1002.1 ms][92914.8 pkt/sec]
=========================
Absolute Stats: [channel=1][364673 pkts rcvd][0 pkts dropped]
Total Pkts=364673/Dropped=0.0 %
364673 pkts - 297909054 bytes [121399.0 pkt/sec - 793.39 Mbit/sec]
=========================
Actual Stats: [channel=1][93545 pkts][1002.1 ms][93348.9 pkt/sec]
=========================
Absolute Stats: [channel=2][340006 pkts rcvd][0 pkts dropped]
Total Pkts=340006/Dropped=0.0 %
340006 pkts - 286127223 bytes [113187.4 pkt/sec - 762.01 Mbit/sec]
=========================
Actual Stats: [channel=2][88914 pkts][1002.1 ms][88727.6 pkt/sec]
=========================
Absolute Stats: [channel=3][345742 pkts rcvd][0 pkts dropped]
Total Pkts=345742/Dropped=0.0 %
345744 pkts - 291400583 bytes [115097.6 pkt/sec - 776.05 Mbit/sec]
=========================
Actual Stats: [channel=3][89096 pkts][1002.1 ms][88909.2 pkt/sec]
=========================
Absolute Stats: [channel=4][347349 pkts rcvd][0 pkts dropped]
Total Pkts=347349/Dropped=0.0 %
347349 pkts - 298935146 bytes [115631.9 pkt/sec - 796.12 Mbit/sec]
=========================
Actual Stats: [channel=4][85547 pkts][1002.1 ms][85367.6 pkt/sec]
=========================
Absolute Stats: [channel=5][364298 pkts rcvd][0 pkts dropped]
Total Pkts=364298/Dropped=0.0 %
364298 pkts - 316328192 bytes [121274.2 pkt/sec - 842.44 Mbit/sec]
=========================
Actual Stats: [channel=5][88755 pkts][1002.1 ms][88568.9 pkt/sec]
=========================
Absolute Stats: [channel=6][389332 pkts rcvd][0 pkts dropped]
Total Pkts=389332/Dropped=0.0 %
389332 pkts - 304943539 bytes [129608.0 pkt/sec - 812.12 Mbit/sec]
=========================
Actual Stats: [channel=6][93721 pkts][1002.1 ms][93524.5 pkt/sec]
=========================
Absolute Stats: [channel=7][358297 pkts rcvd][0 pkts dropped]
Total Pkts=358297/Dropped=0.0 %
358297 pkts - 306416899 bytes [119276.5 pkt/sec - 816.05 Mbit/sec]
=========================
Actual Stats: [channel=7][89923 pkts][1002.1 ms][89734.5 pkt/sec]
=========================
Absolute Stats: [channel=8][401267 pkts rcvd][0 pkts dropped]
Total Pkts=401267/Dropped=0.0 %
401267 pkts - 360814291 bytes [133581.1 pkt/sec - 960.92 Mbit/sec]
=========================
Actual Stats: [channel=8][88541 pkts][1002.1 ms][88355.4 pkt/sec]
=========================
Absolute Stats: [channel=9][367106 pkts rcvd][0 pkts dropped]
Total Pkts=367106/Dropped=0.0 %
367106 pkts - 308110795 bytes [122209.0 pkt/sec - 820.56 Mbit/sec]
=========================
Actual Stats: [channel=9][92015 pkts][1002.1 ms][91822.1 pkt/sec]
=========================
Absolute Stats: [channel=10][379460 pkts rcvd][0 pkts dropped]
Total Pkts=379460/Dropped=0.0 %
379460 pkts - 333159086 bytes [126321.6 pkt/sec - 887.26 Mbit/sec]
=========================
Actual Stats: [channel=10][90019 pkts][1002.1 ms][89830.3 pkt/sec]
=========================
Absolute Stats: [channel=11][325694 pkts rcvd][0 pkts dropped]
Total Pkts=325694/Dropped=0.0 %
325694 pkts - 275299638 bytes [108423.0 pkt/sec - 733.17 Mbit/sec]
=========================
Actual Stats: [channel=11][84376 pkts][1002.1 ms][84199.1 pkt/sec]
=========================
Absolute Stats: [channel=12][404043 pkts rcvd][0 pkts dropped]
Total Pkts=404043/Dropped=0.0 %
404043 pkts - 354268267 bytes [134505.2 pkt/sec - 943.48 Mbit/sec]
=========================
Actual Stats: [channel=12][103834 pkts][1002.1 ms][103616.3 pkt/sec]
=========================
Absolute Stats: [channel=13][387853 pkts rcvd][0 pkts dropped]
Total Pkts=387853/Dropped=0.0 %
387853 pkts - 341947698 bytes [129115.6 pkt/sec - 910.67 Mbit/sec]
=========================
Actual Stats: [channel=13][94120 pkts][1002.1 ms][93922.7 pkt/sec]
=========================
Absolute Stats: [channel=14][355203 pkts rcvd][0 pkts dropped]
Total Pkts=355203/Dropped=0.0 %
355203 pkts - 299561170 bytes [118246.5 pkt/sec - 797.79 Mbit/sec]
=========================
Actual Stats: [channel=14][88102 pkts][1002.1 ms][87917.3 pkt/sec]
=========================
Absolute Stats: [channel=15][358170 pkts rcvd][0 pkts dropped]
Total Pkts=358170/Dropped=0.0 %
358170 pkts - 317357718 bytes [119234.2 pkt/sec - 845.18 Mbit/sec]
=========================
Actual Stats: [channel=15][91847 pkts][1002.1 ms][91654.4 pkt/sec]
=========================
Aggregate stats (all channels): [1452413.5 pkt/sec][13347.76 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][468626 pkts rcvd][0 pkts dropped]
Total Pkts=468626/Dropped=0.0 %
468626 pkts - 400765367 bytes [116978.6 pkt/sec - 800.31 Mbit/sec]
=========================
Actual Stats: [channel=0][94605 pkts][1002.2 ms][94400.9 pkt/sec]
=========================
Absolute Stats: [channel=1][459038 pkts rcvd][0 pkts dropped]
Total Pkts=459038/Dropped=0.0 %
459038 pkts - 375498207 bytes [114585.3 pkt/sec - 749.86 Mbit/sec]
=========================
Actual Stats: [channel=1][94365 pkts][1002.2 ms][94161.4 pkt/sec]
=========================
Absolute Stats: [channel=2][427693 pkts rcvd][0 pkts dropped]
Total Pkts=427693/Dropped=0.0 %
427756 pkts - 360740091 bytes [106776.6 pkt/sec - 720.30 Mbit/sec]
=========================
Actual Stats: [channel=2][87750 pkts][1002.2 ms][87560.7 pkt/sec]
=========================
Absolute Stats: [channel=3][430086 pkts rcvd][0 pkts dropped]
Total Pkts=430086/Dropped=0.0 %
430086 pkts - 360783155 bytes [107358.3 pkt/sec - 720.47 Mbit/sec]
=========================
Actual Stats: [channel=3][84342 pkts][1002.2 ms][84160.0 pkt/sec]
=========================
Absolute Stats: [channel=4][441175 pkts rcvd][0 pkts dropped]
Total Pkts=441175/Dropped=0.0 %
441175 pkts - 381517772 bytes [110126.3 pkt/sec - 761.88 Mbit/sec]
=========================
Actual Stats: [channel=4][93826 pkts][1002.2 ms][93623.6 pkt/sec]
=========================
Absolute Stats: [channel=5][452388 pkts rcvd][0 pkts dropped]
Total Pkts=452388/Dropped=0.0 %
452388 pkts - 392565040 bytes [112925.3 pkt/sec - 783.94 Mbit/sec]
=========================
Actual Stats: [channel=5][87966 pkts][1002.2 ms][87776.2 pkt/sec]
=========================
Absolute Stats: [channel=6][484619 pkts rcvd][0 pkts dropped]
Total Pkts=484619/Dropped=0.0 %
484619 pkts - 380513369 bytes [120970.8 pkt/sec - 759.87 Mbit/sec]
=========================
Actual Stats: [channel=6][95287 pkts][1002.2 ms][95081.4 pkt/sec]
=========================
Absolute Stats: [channel=7][444354 pkts rcvd][0 pkts dropped]
Total Pkts=444354/Dropped=0.0 %
444354 pkts - 380437307 bytes [110919.8 pkt/sec - 759.72 Mbit/sec]
=========================
Actual Stats: [channel=7][86057 pkts][1002.2 ms][85871.3 pkt/sec]
=========================
Absolute Stats: [channel=8][492232 pkts rcvd][0 pkts dropped]
Total Pkts=492232/Dropped=0.0 %
492232 pkts - 439080930 bytes [122871.2 pkt/sec - 876.83 Mbit/sec]
=========================
Actual Stats: [channel=8][90965 pkts][1002.2 ms][90768.8 pkt/sec]
=========================
Absolute Stats: [channel=9][456986 pkts rcvd][0 pkts dropped]
Total Pkts=456986/Dropped=0.0 %
456986 pkts - 384635529 bytes [114073.1 pkt/sec - 768.10 Mbit/sec]
=========================
Actual Stats: [channel=9][89880 pkts][1002.2 ms][89686.1 pkt/sec]
=========================
Absolute Stats: [channel=10][465784 pkts rcvd][0 pkts dropped]
Total Pkts=465784/Dropped=0.0 %
465784 pkts - 406987442 bytes [116269.2 pkt/sec - 812.74 Mbit/sec]
=========================
Actual Stats: [channel=10][86324 pkts][1002.2 ms][86137.8 pkt/sec]
=========================
Absolute Stats: [channel=11][414559 pkts rcvd][0 pkts dropped]
Total Pkts=414559/Dropped=0.0 %
414559 pkts - 356117478 bytes [103482.4 pkt/sec - 711.15 Mbit/sec]
=========================
Actual Stats: [channel=11][88865 pkts][1002.2 ms][88673.3 pkt/sec]
=========================
Absolute Stats: [channel=12][505441 pkts rcvd][0 pkts dropped]
Total Pkts=505441/Dropped=0.0 %
505441 pkts - 445085395 bytes [126168.4 pkt/sec - 888.82 Mbit/sec]
=========================
Actual Stats: [channel=12][101398 pkts][1002.2 ms][101179.3 pkt/sec]
=========================
Absolute Stats: [channel=13][484235 pkts rcvd][0 pkts dropped]
Total Pkts=484235/Dropped=0.0 %
484235 pkts - 428890010 bytes [120875.0 pkt/sec - 856.48 Mbit/sec]
=========================
Actual Stats: [channel=13][96382 pkts][1002.2 ms][96174.1 pkt/sec]
=========================
Absolute Stats: [channel=14][441791 pkts rcvd][0 pkts dropped]
Total Pkts=441791/Dropped=0.0 %
441791 pkts - 370987385 bytes [110280.1 pkt/sec - 740.85 Mbit/sec]
=========================
Actual Stats: [channel=14][86588 pkts][1002.2 ms][86401.2 pkt/sec]
=========================
Absolute Stats: [channel=15][447444 pkts rcvd][0 pkts dropped]
Total Pkts=447444/Dropped=0.0 %
447444 pkts - 400157776 bytes [111691.2 pkt/sec - 799.10 Mbit/sec]
=========================
Actual Stats: [channel=15][89274 pkts][1002.2 ms][89081.4 pkt/sec]
=========================
Aggregate stats (all channels): [1450737.5 pkt/sec][12510.42 Mbit/sec][0 pkts dropped]
=========================

^CLeaving...
=========================
Absolute Stats: [channel=0][526704 pkts rcvd][0 pkts dropped]
Total Pkts=526704/Dropped=0.0 %
526704 pkts - 449622006 bytes [112996.9 pkt/sec - 771.68 Mbit/sec]
=========================
Actual Stats: [channel=0][58078 pkts][655.1 ms][88649.3 pkt/sec]
=========================
Absolute Stats: [channel=1][518742 pkts rcvd][0 pkts dropped]
Total Pkts=518742/Dropped=0.0 %
518742 pkts - 423173503 bytes [111288.8 pkt/sec - 726.29 Mbit/sec]
=========================
Actual Stats: [channel=1][59704 pkts][655.1 ms][91131.2 pkt/sec]
=========================
Absolute Stats: [channel=2][482833 pkts rcvd][0 pkts dropped]
Total Pkts=482833/Dropped=0.0 %
482833 pkts - 408272765 bytes [103585.0 pkt/sec - 700.71 Mbit/sec]
=========================
Actual Stats: [channel=2][55077 pkts][655.1 ms][84068.7 pkt/sec]
=========================
Absolute Stats: [channel=3][484505 pkts rcvd][0 pkts dropped]
Total Pkts=484505/Dropped=0.0 %
484505 pkts - 407952853 bytes [103943.7 pkt/sec - 700.16 Mbit/sec]
=========================
Actual Stats: [channel=3][54419 pkts][655.1 ms][83064.3 pkt/sec]
=========================
Absolute Stats: [channel=4][497847 pkts rcvd][0 pkts dropped]
Total Pkts=497847/Dropped=0.0 %
497847 pkts - 430545046 bytes [106806.0 pkt/sec - 738.94 Mbit/sec]
=========================
Actual Stats: [channel=4][56672 pkts][655.1 ms][86503.3 pkt/sec]
=========================
Absolute Stats: [channel=5][509084 pkts rcvd][0 pkts dropped]
Total Pkts=509084/Dropped=0.0 %
509084 pkts - 442684546 bytes [109216.8 pkt/sec - 759.77 Mbit/sec]
=========================
Actual Stats: [channel=5][56696 pkts][655.1 ms][86539.9 pkt/sec]
=========================
Absolute Stats: [channel=6][590352 pkts rcvd][0 pkts dropped]
Total Pkts=590352/Dropped=0.0 %
590352 pkts - 488140796 bytes [126651.7 pkt/sec - 837.79 Mbit/sec]
=========================
Actual Stats: [channel=6][105733 pkts][655.1 ms][161389.2 pkt/sec]
=========================
Absolute Stats: [channel=7][498739 pkts rcvd][0 pkts dropped]
Total Pkts=498739/Dropped=0.0 %
498739 pkts - 426878095 bytes [106997.4 pkt/sec - 732.65 Mbit/sec]
=========================
Actual Stats: [channel=7][54385 pkts][655.1 ms][83012.4 pkt/sec]
=========================
Absolute Stats: [channel=8][545746 pkts rcvd][0 pkts dropped]
Total Pkts=545746/Dropped=0.0 %
545746 pkts - 483616307 bytes [117082.1 pkt/sec - 830.02 Mbit/sec]
=========================
Actual Stats: [channel=8][53514 pkts][655.1 ms][81682.9 pkt/sec]
=========================
Absolute Stats: [channel=9][513518 pkts rcvd][0 pkts dropped]
Total Pkts=513518/Dropped=0.0 %
513518 pkts - 433042435 bytes [110168.0 pkt/sec - 743.23 Mbit/sec]
=========================
Actual Stats: [channel=9][56532 pkts][655.1 ms][86289.6 pkt/sec]
=========================
Absolute Stats: [channel=10][518808 pkts rcvd][0 pkts dropped]
Total Pkts=518808/Dropped=0.0 %
518808 pkts - 451299621 bytes [111302.9 pkt/sec - 774.56 Mbit/sec]
=========================
Actual Stats: [channel=10][53024 pkts][655.1 ms][80935.0 pkt/sec]
=========================
Absolute Stats: [channel=11][463009 pkts rcvd][0 pkts dropped]
Total Pkts=463009/Dropped=0.0 %
463009 pkts - 396962614 bytes [99332.0 pkt/sec - 681.30 Mbit/sec]
=========================
Actual Stats: [channel=11][48372 pkts][655.1 ms][73834.3 pkt/sec]
=========================
Absolute Stats: [channel=12][568457 pkts rcvd][0 pkts dropped]
Total Pkts=568457/Dropped=0.0 %
568457 pkts - 501652006 bytes [121954.4 pkt/sec - 860.98 Mbit/sec]
=========================
Actual Stats: [channel=12][63016 pkts][655.1 ms][96186.6 pkt/sec]
=========================
Absolute Stats: [channel=13][540529 pkts rcvd][0 pkts dropped]
Total Pkts=540529/Dropped=0.0 %
540529 pkts - 477373633 bytes [115962.9 pkt/sec - 819.31 Mbit/sec]
=========================
Actual Stats: [channel=13][56294 pkts][655.1 ms][85926.3 pkt/sec]
=========================
Absolute Stats: [channel=14][493059 pkts rcvd][0 pkts dropped]
Total Pkts=493059/Dropped=0.0 %
493059 pkts - 413762408 bytes [105778.8 pkt/sec - 710.14 Mbit/sec]
=========================
Actual Stats: [channel=14][51268 pkts][655.1 ms][78254.7 pkt/sec]
=========================
Absolute Stats: [channel=15][500543 pkts rcvd][0 pkts dropped]
Total Pkts=500543/Dropped=0.0 %
500543 pkts - 447149624 bytes [107384.4 pkt/sec - 767.44 Mbit/sec]
=========================
Actual Stats: [channel=15][53099 pkts][655.1 ms][81049.5 pkt/sec]
=========================
Aggregate stats (all channels): [1428517.1 pkt/sec][12154.96 Mbit/sec][0 pkts dropped]
=========================

Shutting down sockets...
        0...
        1...
        2...
        3...
        4...
        5...
        6...
        7...
        8...
        9...
        10...
        11...
        12...
        13...
        14...
        15...
root@suricata:/home/pevman/pfring-svn-latest/userland/examples#



That is it for the DNA configuration and setup/installation part.
The next chapter - Chapter III - AF_PACKET - deals with configuration, setup and tuning of AF_PACKET mode usage for the Suricata IDPS.






by Peter Manev (noreply@blogger.com) at April 01, 2014 03:15 PM

March 29, 2014

Victor Julien

Video: Suricata 2.0 installation and quick setup

I’ve made a video on installing Suricata 2.0 on Debian Wheezy. The video does the installation, quick setup, ethtool config and shows a simple way to test the IDS.

It’s the first time I’ve made such a video. Feedback is welcome.


by inliniac at March 29, 2014 10:01 PM

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter IV - Logstash / Kibana / Elasticsearch, Part One - Updated


Introduction 

This is an updated article of the original post - http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source.html

This article covers the new (at the time of this writing) 1.4.0 Logstash release.

This is Chapter IV of a series of 4 articles aiming to give a general guideline on how to deploy the Open Source Suricata IDPS on high speed networks (10Gbps) in IDS mode using AF_PACKET, PF_RING or DNA, together with Logstash / Kibana / Elasticsearch.

This chapter consists of two parts:
Chapter IV Part One - installation and set up of logstash.
Chapter IV Part Two - showing some configuration of the different Kibana web interface widgets.

The end result should be a set of many different widgets for analyzing the Suricata IDPS logs, something like:






This chapter describes a quick and easy set up of Logstash / Kibana / Elasticsearch.
The set up described in this chapter is not intended for a huge deployment, but rather as a proof of concept in a working environment, as pictured below:






We have two Suricata IDS deployed - IDS1 and IDS2
  • IDS2 uses logstash-forwarder (formerly lumberjack) to securely forward (SSL encrypted) its eve.json logs (configured in suricata.yaml) to IDS1, the main Logstash/Kibana deployment.
  • IDS1 has its own logging (eve.json as well) that is also digested by Logstash.

In other words, the logs of both IDS1 and IDS2 are ingested into the Logstash platform deployed on IDS1 in the picture.

Prerequisites

Both IDS1 and IDS2 should be set up and tuned with Suricata IDPS. This article will not cover that. If you have not done it you could start HERE.

Make sure you have installed Suricata with JSON availability. The following two packages must be present on your system prior to installation/compilation:
root@LTS-64-1:~# apt-cache search libjansson
libjansson-dev - C library for encoding, decoding and manipulating JSON data (dev)
libjansson4 - C library for encoding, decoding and manipulating JSON data
If they are not present on the system, install them:
apt-get install libjansson4 libjansson-dev

In both IDS1 and IDS2 you should have in your suricata.yaml:
  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
This tutorial uses /var/log/suricata as a default logging directory.

You can do a few dry runs to confirm log generation on both systems.
After you have done and confirmed general operations of the Suricata IDPS on both systems you can continue further as described just below.
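A quick way to confirm general operations is to check that the logs really are line-delimited JSON. A minimal sketch (the sample record and the /tmp path are hypothetical stand-ins; on a real sensor point it at /var/log/suricata/eve.json, and `python3` is assumed to be available):

```shell
# Hypothetical sample record standing in for a real line of /var/log/suricata/eve.json.
printf '{"timestamp":"2014-03-29T10:00:00","event_type":"alert","src_ip":"10.0.0.1"}\n' > /tmp/eve-sample.json

# Each eve.json line is one standalone JSON object, so the newest line
# can be validated with a plain JSON parser:
tail -n 1 /tmp/eve-sample.json | python3 -m json.tool > /dev/null && echo "eve.json record: valid JSON"
```

If the last line ever fails to parse, Logstash's json codec will complain about the same records later, so this catches problems early.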

Installation

IDS2

For the logstash-forwarder we need Go installed.

cd /opt
apt-get install mercurial
hg clone -u release https://code.google.com/p/go
cd go/src
./all.bash


If everything goes ok you should see at the end:
ALL TESTS PASSED

Update your $PATH variable; make sure it includes:
PATH=$PATH:/opt/go/bin
export PATH

root@debian64:~# nano  ~/.bashrc


edit the file (.bashrc), add at the bottom:

PATH=$PATH:/opt/go/bin
export PATH

 then:

root@debian64:~# source ~/.bashrc
root@debian64:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/go/bin


Install logstash-forwarder:
cd /opt
git clone git://github.com/elasticsearch/logstash-forwarder.git
cd logstash-forwarder
go build

Build a debian package:
apt-get install ruby ruby-dev
gem install fpm
make deb
That will produce a Debian package in the same directory (something like):
logstash-forwarder_0.3.1_amd64.deb

Install the Debian package:
root@debian64:/opt# dpkg -i logstash-forwarder_0.3.1_amd64.deb

 NOTE: You can copy this Debian package and install it (dependency free) on other machines/servers. Once you have the deb package, you can install it on any other server the same way, with no need to rebuild everything (Go and Ruby) again.

Create SSL certificates that will be used to securely encrypt and transport the logs:
cd /opt
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logfor.key -out logfor.crt
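Before copying the pair around, it is worth confirming the certificate parses and checking its validity window. A sketch (written to /tmp here instead of the /etc paths the article uses; `-batch` fills in default subject fields without prompting):

```shell
# Generate the self-signed pair as above, but into /tmp for this check:
openssl req -x509 -batch -nodes -newkey rsa:2048 \
    -keyout /tmp/logfor.key -out /tmp/logfor.crt 2>/dev/null

# Confirm the certificate parses and show its subject and validity window:
openssl x509 -in /tmp/logfor.crt -noout -subject -dates
```

Note that since the forwarder's "ssl ca" option will point at this same .crt, the forwarder trusts exactly this certificate; depending on the forwarder version it may also verify the server name, so keep an eye on the certificate subject if connections are rejected.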

Copy on IDS2:
logfor.key in /etc/ssl/private/
logfor.crt in /etc/ssl/certs/

Copy the same files to IDS1:
logfor.key in /etc/logstash/pki/
logfor.crt in /etc/logstash/pki/


Now you can try to start/restart/stop the logstash-forwarder service:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[FAIL] logstash-forwarder is not running ... failed!
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt#
Good to go.

Create on IDS2 your logstash-forwarder config:
touch /etc/logstash-forwarder
Make sure the file looks like this (in this tutorial - copy/paste):

{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logfor.crt",
    "ssl key": "/etc/ssl/private/logfor.key",
    "ssl ca": "/etc/ssl/certs/logfor.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/suricata/eve.json" ],
      "codec": { "type": "json" }
    }
  ]
}
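Since this config is plain JSON, syntax errors (a stray comma, an unbalanced brace) can be caught with any JSON parser before the service is started. A sketch, using a /tmp path as a stand-in for /etc/logstash-forwarder and assuming `python3` is available:

```shell
# Write the same config used above to a scratch location:
cat > /tmp/logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logfor.crt",
    "ssl key": "/etc/ssl/private/logfor.key",
    "ssl ca": "/etc/ssl/certs/logfor.crt"
  },
  "files": [
    { "paths": [ "/var/log/suricata/eve.json" ], "codec": { "type": "json" } }
  ]
}
EOF

# Any JSON parser will reject malformed syntax before the forwarder does:
python3 -m json.tool < /tmp/logstash-forwarder.conf > /dev/null && echo "config syntax OK"
```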
Some more info:
Usage of ./logstash-forwarder:
  -config="": The config file to load
  -cpuprofile="": write cpu profile to file
  -from-beginning=false: Read new files from the beginning, instead of the end
  -idle-flush-time=5s: Maximum time to wait for a full spool before flushing anyway
  -log-to-syslog=false: Log to syslog instead of stdout
  -spool-size=1024: Maximum number of events to spool before a flush is forced.

  These can be adjusted in:
  /etc/init.d/logstash-forwarder


That is as far as the set up on IDS2 goes.

IDS1 - indexer

NOTE: Each Logstash version has its corresponding Elasticsearch version to be used with it!
http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash


Packages needed:
apt-get install apache2 openjdk-7-jdk openjdk-7-jre-headless

Downloads:
http://www.elasticsearch.org/overview/elkdownloads/

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.deb

wget https://download.elasticsearch.org/logstash/logstash/packages/debian/logstash_1.4.0-1-c82dc09_all.deb

wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz

mkdir /var/log/logstash/
Installation:
dpkg -i elasticsearch-1.1.0.deb
dpkg -i logstash_1.4.0-1-c82dc09_all.deb
tar -C /var/www/ -xzf kibana-3.0.0.tar.gz
update-rc.d elasticsearch defaults 95 10
update-rc.d logstash defaults

elasticsearch configs are located here (nothing needs to be done):
ls /etc/default/elasticsearch
/etc/default/elasticsearch
ls /etc/elasticsearch/
elasticsearch.yml  logging.yml
the elasticsearch data is located here:
/var/lib/elasticsearch/

You should have your logstash config file in /etc/default/logstash:

Make sure it has the config and log directories correct:

###############################
# Default settings for logstash
###############################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
#LS_HOME=/var/lib/logstash

# Arguments to pass to logstash agent
#LS_OPTS=""

# Arguments to pass to java
#LS_HEAP_SIZE="500m"
#LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"

# pidfiles aren't used for upstart; this is for sysv users.
#LS_PIDFILE=/var/run/logstash.pid

# user id to be invoked as; for upstart: edit /etc/init/logstash.conf
#LS_USER=logstash

# logstash logging
LS_LOG_FILE=/var/log/logstash/logstash.log
#LS_USE_GC_LOGGING="true"

# logstash configuration directory
LS_CONF_DIR=/etc/logstash/conf.d

# Open file limit; cannot be overridden in upstart
#LS_OPEN_FILES=16384

# Nice level
#LS_NICE=19


GeoIP Lite is shipped by default with Logstash!
http://logstash.net/docs/1.4.0/filters/geoip

and it is located here (on the system, after installation):
/opt/logstash/vendor/geoip/GeoLiteCity.dat

Create your logstash.conf

touch logstash.conf

make sure it looks like this:

input {
  lumberjack {
    port => 5043
    type => "IDS2-logs"
    codec => json
    ssl_certificate => "/etc/logstash/pki/logfor.crt"
    ssl_key => "/etc/logstash/pki/logfor.key"
  }
 
  file {
    path => ["/var/log/suricata/eve.json"]
    codec =>   json
    type => "IDS1-logs"
  }
 
}

filter {
  if [src_ip]  {
    geoip {
      source => "src_ip"
      target => "geoip"
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  elasticsearch {
    host => localhost
  }
}

The /etc/logstash/pki/logfor.crt  and /etc/logstash/pki/logfor.key are the same ones we created earlier on IDS2 and copied here to IDS1.

The purpose of type => "IDS1-logs" and type => "IDS2-logs" above is that later, when looking at the Kibana widgets, you will be able to differentiate the logs if needed:



Then copy the file we just created into place:
cp logstash.conf /etc/logstash/conf.d/


Kibana:

We have already installed Kibana during the first step :). All that is left to do now is restart Apache:

service apache2 restart


 Rolling it out


On IDS1 and IDS2 - start the Suricata IDPS and generate some logs.
On IDS2:
/etc/init.d/logstash-forwarder start

On IDS1:
service elasticsearch start
service logstash start
You can check whether the logstash-forwarder (on IDS2) is working properly like so:
 tail -f /var/log/syslog



Go to your browser and navigate to (in this case IDS1)
http://192.168.1.158/kibana-3.0.0
NOTE: This is plain HTTP (as this is just a simple tutorial); you should configure it to use HTTPS and a reverse proxy with authentication...

The Kibana web interface should come up.

That is it. From here on it is up to you to configure the web interface with your own widgets.

Chapter IV Part Two will follow with detail on that subject.
However something like this is easily achievable with a few clicks in under 5 min:





Troubleshooting:

You should keep an eye on /var/log/logstash/logstash.log - any troubles should be visible there.

A GREAT article explaining Elasticsearch cluster status (if you deploy a proper Elasticsearch cluster of 2 or more nodes):
http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html

ERR in logstash-indexer.out - too many open files
http://www.elasticsearch.org/tutorials/too-many-open-files/

Set ulimit parameters on Ubuntu (in case you need to increase the number of inodes/open files available on a system; check with "df -ih"):
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/

This is an advanced topic - Cluster status and settings commands:
 curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

 curl -XGET 'http://localhost:9200/_status?pretty=true'

 curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'


Very useful links:

Logstash 1.4.0 GA released:
http://www.elasticsearch.org/blog/logstash-1-4-0-ga-unleashed/

A MUST READ (explaining the usage of ".raw" fields so that terms are not broken up by the space delimiter):
http://www.elasticsearch.org/blog/logstash-1-3-1-released/

Article explaining how to set up a 2 node cluster:
http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html

Installing Logstash Central Server (using rsyslog):
https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server

ElasticSearch cluster setup in 2 minutes:
http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html






by Peter Manev (noreply@blogger.com) at March 29, 2014 07:43 AM

March 27, 2014

suricata-ids.org

Suricata Ubuntu PPA updated to 2.0

We have updated the official Ubuntu PPA to Suricata 2.0. To use this PPA read our docs here.

To install Suricata through this PPA, enter:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

If you’re already using this PPA, updating is as simple as:
sudo apt-get update && sudo apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at March 27, 2014 02:18 PM

Suricata Ubuntu PPA updated to 2.0rc2

We have updated the official Ubuntu PPA to Suricata 2.0rc2. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at March 27, 2014 01:59 PM

March 26, 2014

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter III - AF_PACKET

Introduction


This is Chapter III - AF_PACKET of a series of articles about high performance and advanced tuning of the Suricata IDPS.

This article consists of a series of instructions on setting up and configuring the Suricata IDPS with AF_PACKET for monitoring a 10Gbps traffic interface.



Chapter III - AF_PACKET

AF_PACKET works "out of the box" with Suricata. Please make sure your kernel version is at least 3.2 in order to get the best results.

Once you have followed all the steps in Chapter I - Preparation, the only thing left to do is adjust the suricata.yaml settings.


AF_PACKET - suricata.yaml tune up and configuration




NOTE:
AF_PACKET - Which kernel version not to use with Suricata in AF_PACKET mode
(thanks to Regit)


We make sure we use runmode workers (feel free to try other modes and experiment what is best for your specific set up):
#runmode: autofp
runmode: workers


Adjust the packet size:
# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
default-packet-size: 1520


Use a custom profile in detect-engine with a lot more groups ("high" gives you about 15 groups per variable, but you can customize as needed depending on the network ranges you monitor):
detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 3000


Adjust your defrag settings:
# Defrag settings:
defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep
  prealloc: yes
  timeout: 30



Adjust your flow settings:
flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30
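As a quick sanity check that the preallocation fits the memcap, a back-of-the-envelope sketch (the ~300 bytes per flow record is an assumption for illustration only; the real size varies by Suricata version and build, so verify against the flow memuse counters at runtime):

```shell
# Rough estimate of memory consumed by 'prealloc: 1048576' flow records,
# assuming ~300 bytes per record (hypothetical figure):
FLOWS=1048576
BYTES_PER_FLOW=300
echo "approx. preallocated flow memory: $(( FLOWS * BYTES_PER_FLOW / 1024 / 1024 )) MB of 1024 MB memcap"
```

This lands comfortably inside the 1gb memcap, leaving headroom for the hash table and flows created beyond the preallocated pool.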


Adjust your per protocol timeout values:
flow-timeouts:

  default:
    new: 3
    established: 30
    closed: 0
    emergency-new: 10
    emergency-established: 10
    emergency-closed: 0
  tcp:
    new: 6
    established: 100
    closed: 12
    emergency-new: 1
    emergency-established: 5
    emergency-closed: 2
  udp:
    new: 3
    established: 30
    emergency-new: 3
    emergency-established: 10
  icmp:
    new: 3
    established: 30
    emergency-new: 1
    emergency-established: 10



Adjust your stream engine settings:
stream:
  memcap: 16gb
  checksum-validation: no      # reject wrong csums
  prealloc-sessions: 500000    # per thread
  midstream: true
  async-oneside: true
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10


Make sure you enable suricata.log for troubleshooting if something goes wrong:
outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log
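With the log in place, a quick way to surface problems after startup is to filter on the severity tags Suricata prints. A small helper (illustrative; the path is the one from the config above):

```python
import re

def problem_lines(log_text):
    """Return only the <Error>/<Warning> lines from a suricata.log dump."""
    return [line for line in log_text.splitlines()
            if re.search(r"<(Error|Warning)>", line)]

# usage: problem_lines(open("/var/log/suricata/suricata.log").read())
```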



The AF_PACKET section:
af-packet:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16
    # Default clusterid.  AF_PACKET will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 98
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible values are:
    #  * cluster_round_robin: round robin load balancing
    #  * cluster_flow: all packets of a given flow are sent to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket
    cluster-type: cluster_cpu
    # In some fragmentation cases, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: no
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can manually set the ring size in number of packets with
    # the following value. If you are using the flow cluster-type and have a really
    # network-intensive single flow, you may want to set the ring-size independently
    # of the number of threads:
    ring-size: 200000
    # On a busy system, setting this to yes can help recover from a packet drop
    # phase. This will result in some packets (at most a ring flush) not being treated.
    #use-emergency-flush: yes
    # recv buffer size; increasing this value could improve performance
    # buffer-size: 100000
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - kernel: use indication sent by kernel for each packet (default)
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: kernel
    # BPF filter to apply to this interface. The pcap filter syntax applies here.
    #bpf-filter: port 80 or udp
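With 16 threads and a 200000-packet ring each, the mmap rings alone are a sizeable allocation. Using the per-thread ring parameters the startup log reports later (frame_size=1584, frame_nr=200020), a rough estimate, not Suricata's exact allocator:

```python
# Approximate AF_PACKET mmap ring memory for this setup; frame_size and
# frame_nr are the values Suricata logs for each capture thread.
frame_size = 1584           # bytes per ring slot
frames_per_thread = 200020  # frame_nr from the log
threads = 16

total = frame_size * frames_per_thread * threads
print(round(total / 1024 ** 3, 1))  # roughly 4.7 GiB of ring buffers
```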
   



We had these rules enabled:
rule-files:
   - trojan.rules
   - md5.rules # 134,000 specially selected file MD5s
   - dns.rules
   - malware.rules
   - local.rules
   - current_events.rules
   - mobile_malware.rules
   - user_agents.rules



Make sure you adjust your Network and Port variables:
  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[ HOME NET HERE ]"

    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"

    SMTP_SERVERS: "$HOME_NET"

    SQL_SERVERS: "$HOME_NET"

    DNS_SERVERS: "$HOME_NET"

    TELNET_SERVERS: "$HOME_NET"

    AIM_SERVERS: "$EXTERNAL_NET"

    DNP3_SERVER: "$HOME_NET"

    DNP3_CLIENT: "$HOME_NET"

    MODBUS_CLIENT: "$HOME_NET"

    MODBUS_SERVER: "$HOME_NET"

    ENIP_CLIENT: "$HOME_NET"

    ENIP_SERVER: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  # These would be retrieved during the Signature port parsing stage.
  port-groups:

    HTTP_PORTS: "80"

    SHELLCODE_PORTS: "!80"

    ORACLE_PORTS: 1521

    SSH_PORTS: 22

    DNP3_PORTS: 20000


Your app parsers:
# Holds details on the app-layer. The protocols section details each protocol.
# Under each protocol, the default value for detection-enabled and
# "parsed-enabled" is yes, unless specified otherwise.
# Each protocol covers enabling/disabling parsers for all ipprotos
# the app-layer protocol runs on.  For example "dcerpc" refers to the tcp
# version of the protocol as well as the udp version of the protocol.
# The option "enabled" takes 3 values - "yes", "no", "detection-only".
# "yes" enables both detection and the parser, "no" disables both, and
# "detection-only" enables detection only (parser disabled).
app-layer:
  protocols:
    tls:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 443

      #no-reassemble: yes
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 139
    # smb2 detection is disabled internally inside the engine.
    #smb2:
    #  enabled: yes
    dnstcp:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 53
    dnsudp:
      enabled: yes
      detection-ports:
        udp:
          toserver: 53
    http:
      enabled: yes


Libhtp body limits:
      libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 12mb
           response-body-limit: 12mb

           # inspection limits
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 32kb
           response-body-inspect-window: 4kb



Run it

 /usr/local/bin/suricata -c /etc/suricata/suricata.yaml --af-packet=eth3 -D -v



Results


We take a look at the suricata.log file:
[13915] 4/12/2013 -- 15:38:15 - (suricata.c:962) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev e7f6107)
[13915] 4/12/2013 -- 15:38:15 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[13915] 4/12/2013 -- 15:38:15 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[13915] 4/12/2013 -- 15:38:15 - (util-ioctl.c:99) <Info> (GetIfaceMTU) -- Found an MTU of 1500 for 'eth3'
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[13915] 4/12/2013 -- 15:38:15 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[13916] 4/12/2013 -- 15:38:15 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 2048 packets. Total memory 7151616
[13916] 4/12/2013 -- 15:38:15 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[13916] 4/12/2013 -- 15:38:15 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[13916] 4/12/2013 -- 15:38:15 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[13916] 4/12/2013 -- 15:38:15 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[13916] 4/12/2013 -- 15:38:15 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[13916] 4/12/2013 -- 15:38:15 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
[13916] 4/12/2013 -- 15:38:15 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[13916] 4/12/2013 -- 15:38:15 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[13916] 4/12/2013 -- 15:38:15 - (suricata.c:1769) <Info> (SetupDelayedDetect) -- Delayed detect disabled
[13916] 4/12/2013 -- 15:38:17 - (detect-filemd5.c:275) <Info> (DetectFileMd5Parse) -- MD5 hash size 2143616 bytes


...8 rule files, 7947 rules loaded
[13916] 4/12/2013 -- 15:38:17 - (detect.c:453) <Info> (SigLoadSignatures) -- 8 rule files processed. 7947 rules successfully loaded, 0 rules failed
[13916] 4/12/2013 -- 15:38:17 - (detect.c:2568) <Info> (SigAddressPrepareStage1) -- 7947 signatures processed. 1 are IP-only rules, 1976 are inspecting packet payload, 6714 inspect application layer, 0 are decoder event only
[13916] 4/12/2013 -- 15:38:17 - (detect.c:2571) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: preprocessing rules... complete
[13916] 4/12/2013 -- 15:38:17 - (detect.c:3194) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[13916] 4/12/2013 -- 15:39:51 - (detect.c:3836) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[13916] 4/12/2013 -- 15:39:51 - (util-threshold-config.c:1186) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[13916] 4/12/2013 -- 15:39:51 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- fast output device (regular) initialized: fast.log
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- http-log output device (regular) initialized: http.log
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- tls-log output device (regular) initialized: tls.log
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "management-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "receive-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "decode-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "stream-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "detect-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "verdict-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "reject-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "output-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'medium'
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:200) <Info> (ParseAFPConfig) -- Enabling mmaped capture on iface eth3
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:268) <Info> (ParseAFPConfig) -- Using cpu cluster mode for AF_PACKET (iface eth3)
[13916] 4/12/2013 -- 15:39:51 - (util-runmodes.c:545) <Info>


...going to use 16 threads:
(RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)
[13918] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 0
[13918] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth31" Module to cpu/core 0, thread id 13918
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13919] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 1
[13919] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth32" Module to cpu/core 1, thread id 13919
[13919] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13919] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13920] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 2
[13920] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth33" Module to cpu/core 2, thread id 13920
[13920] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13920] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13921] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 3
[13921] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth34" Module to cpu/core 3, thread id 13921
[13921] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13921] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13922] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 4
[13922] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth35" Module to cpu/core 4, thread id 13922
[13922] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13922] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13923] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 5
[13923] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth36" Module to cpu/core 5, thread id 13923
[13923] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13923] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13924] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 6
[13924] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth37" Module to cpu/core 6, thread id 13924
[13924] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13924] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13925] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 7
[13925] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth38" Module to cpu/core 7, thread id 13925
[13925] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13925] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13926] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 8
[13926] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth39" Module to cpu/core 8, thread id 13926
[13926] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13926] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13927] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 9
[13927] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth310" Module to cpu/core 9, thread id 13927
[13927] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13927] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13928] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 10
[13928] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth311" Module to cpu/core 10, thread id 13928
[13928] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13928] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13929] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 11
[13929] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth312" Module to cpu/core 11, thread id 13929
[13929] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13929] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13930] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 12
[13930] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth313" Module to cpu/core 12, thread id 13930
[13930] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13930] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13931] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 13
[13931] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth314" Module to cpu/core 13, thread id 13931
[13931] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13931] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13932] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 14
[13932] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth315" Module to cpu/core 14, thread id 13932
[13932] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13932] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13933] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 15
[13933] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth316" Module to cpu/core 15, thread id 13933
[13933] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13933] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call


...reading in some memory settings from the yaml:
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:529) <Info> (RunModeIdsAFPWorkers) -- RunModeIdsAFPWorkers initialised
[13934] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "FlowManagerThread" thread , thread id 13934
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:376) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 375000 (per thread)
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:392) <Info> (StreamTcpInitConfig) -- stream "memcap": 17179869184
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:398) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: enabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:404) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:421) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:443) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:456) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:474) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 21474836480
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:492) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:575) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2671
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:577) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2582
[13935] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfWakeupThread" thread , thread id 13935
[13936] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfMgmtThread" thread , thread id 13936
[13916] 4/12/2013 -- 15:39:51 - (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.


...have a look - Suricata detects if offloading (discussed in Chapter I - Preparation) is used on the network interface:
[13918] 4/12/2013 -- 15:39:51 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13918] 4/12/2013 -- 15:39:51 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13918] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 8
[13918] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth31 using socket 8
[13919] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13919] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 9
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth32 using socket 9
[13920] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13920] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 10
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth33 using socket 10
[13921] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13921] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 11
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth34 using socket 11
[13922] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13922] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 12
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth35 using socket 12
[13923] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13923] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 13
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth36 using socket 13
[13924] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13924] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 14
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth37 using socket 14
[13925] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13925] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13925] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13925] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 15
[13925] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth38 using socket 15
[13926] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13926] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 16
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth39 using socket 16
[13927] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13927] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 17
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth310 using socket 17
[13928] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13928] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 18
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth311 using socket 18
[13929] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13929] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 19
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth312 using socket 19
[13930] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13930] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 20
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth313 using socket 20
[13931] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13931] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13931] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13931] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 21
[13931] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth314 using socket 21
[13932] 4/12/2013 -- 15:39:54 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13932] 4/12/2013 -- 15:39:54 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 22
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth315 using socket 22
[13933] 4/12/2013 -- 15:39:54 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13933] 4/12/2013 -- 15:39:54 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 23
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:439) <Info> (AFPPeersListReachedInc) -- All AFP capture threads are running.
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth316 using socket 23
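As a side note, the ring parameters in the log above translate directly into memory use: each capture thread maps block_size × block_nr bytes. A quick back-of-the-envelope check with the logged values:

```shell
# per-thread AF_PACKET ring memory from the values in the log above
block_size=32768
block_nr=10001
echo "$(( block_size * block_nr / 1024 / 1024 )) MB per capture thread"
# prints: 312 MB per capture thread
```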



htop - now that we have been up and running for a while (6-7 hrs) on a 10Gbps link (9.3 Gbps of traffic, to be precise, at the moment of these statistics):
we have about 1-2% drops in total (on 7947 rules):
and then after 13 hrs:
we still have 1-2% drops
(1.897% to be precise: total kernel drops of 1,337,487,757 out of 70,491,114,835 total packets is 1.897%):
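The drop percentage can be recomputed from the capture counters quoted above; a quick sketch:

```shell
# compute the kernel drop rate from the totals above
drops=1337487757
total=70491114835
awk -v d="$drops" -v t="$total" 'BEGIN { printf "kernel drop rate: %.3f%%\n", d / t * 100 }'
# prints: kernel drop rate: 1.897%
```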
And that is just half the job done on Suricata's high performance tuning. Before you arrive at this point there is much more work to be done - pre-study, HW choice, rule selection and tuning, traffic analysis, office/organization needs analysis, network location design and deployment, testing/PoCs and more...

Next - Chapter IV - Logstash / Kibana / Elasticsearch
by Peter Manev (noreply@blogger.com) at March 26, 2014 02:45 PM

Suricata (and the grand slam of) Open Source IDPS - Chapter IV - Logstash / Kibana / Elasticsearch, Part One


Introduction 


This article covers old installation instructions for Logstash 1.3.3 and prior. There is an UPDATED article - http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source_26.html that covers the new (at the time of this writing) 1.4.0 Logstash release.


This is Chapter IV of a series of 4 articles aiming at giving a general guideline on how to deploy the Open Source Suricata IDPS on high speed networks (10Gbps) in IDS mode using AF_PACKET, PF_RING or DNA and Logstash / Kibana / Elasticsearch.

This chapter consists of two parts:
Chapter IV Part One - installation and set up of logstash.
Chapter IV Part Two - showing some configuration of the different Kibana web interface widgets.

The end result should be as many different widgets as you need to analyze the Suricata IDPS logs, something like:
This chapter describes a quick and easy set up of Logstash / Kibana / Elasticsearch.
The set up described in this chapter was not intended for a huge deployment, but rather as a proof of concept in a working environment, as pictured below:
We have two Suricata IDS deployed - IDS1 and IDS2
  • IDS2 uses logstash-forwarder (formerly lumberjack) to securely forward (SSL encrypted) its eve.json logs (configured in suricata.yaml) to IDS1, the main Logstash/Kibana deployment.
  • IDS1 has its own logging (eve.json as well) that is also digested by Logstash.

In other words, the logs of both IDS1 and IDS2 are digested by the Logstash platform deployed on IDS1 in the picture.

Prerequisites

Both IDS1 and IDS2 should be set up and tuned with Suricata IDPS. This article will not cover that. If you have not done it you could start HERE.

Make sure you have installed Suricata with JSON availability. The following two packages must be present on your system prior to installation/compilation:
root@LTS-64-1:~# apt-cache search libjansson
libjansson-dev - C library for encoding, decoding and manipulating JSON data (dev)
libjansson4 - C library for encoding, decoding and manipulating JSON data
If they are not present on the system, install them:
apt-get install libjansson4 libjansson-dev

In both IDS1 and IDS2 you should have in your suricata.yaml:
  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
This tutorial uses /var/log/suricata as a default logging directory.

You can do a few dry runs to confirm log generation on both systems.
Once you have done that and confirmed general operation of the Suricata IDPS on both systems, you can continue as described below.
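Since eve.json is line-delimited JSON, a dry run can be sanity-checked with standard tools; a minimal sketch using a fabricated sample event line (the path is illustrative):

```shell
# write a sample eve-style event and confirm alert events are present
echo '{"timestamp":"2014-03-26T10:00:00.000000","event_type":"alert"}' > /tmp/eve.sample.json
grep -c '"event_type":"alert"' /tmp/eve.sample.json
# prints: 1
```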

Installation

IDS2

For the logstash-forwarder we need Go installed.

cd /opt
apt-get install hg-fast-export
hg clone -u release https://code.google.com/p/go
cd go/src
./all.bash


If everything goes ok you should see at the end:
ALL TESTS PASSED

Update your $PATH variable; make sure it includes:
PATH=$PATH:/opt/go/bin
export PATH

root@debian64:~# nano  ~/.bashrc


edit the file (.bashrc), add at the bottom:

PATH=$PATH:/opt/go/bin
export PATH

 then:

root@debian64:~# source ~/.bashrc
root@debian64:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/go/bin


Install logstash-forwarder:
cd /opt
git clone git://github.com/elasticsearch/logstash-forwarder.git
cd logstash-forwarder
go build

Build a debian package:
apt-get install ruby ruby-dev
gem install fpm
make deb
That will produce a Debian package in the same directory (something like):
logstash-forwarder_0.3.1_amd64.deb

Install the Debian package:
root@debian64:/opt# dpkg -i logstash-forwarder_0.3.1_amd64.deb

 NOTE: You can use the same Debian package to install logstash-forwarder (dependency free) on other machines/servers. Once you have the deb package you can install it on any other server the same way; there is no need to rebuild everything (Go and Ruby) again.

Create SSL certificates that will be used to securely encrypt and transport the logs:
cd /opt
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt
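It can be worth confirming the generated certificate parses before copying it around; a sketch using a throwaway pair (the paths and CN below are examples, not the tutorial's files):

```shell
# generate a disposable self-signed pair the same way as above, then inspect it
openssl req -x509 -batch -nodes -newkey rsa:2048 -subj "/CN=logstash-test" \
  -keyout /tmp/lsf-test.key -out /tmp/lsf-test.crt 2>/dev/null
openssl x509 -in /tmp/lsf-test.crt -noout -subject
```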

Copy to BOTH IDS1 and IDS2:
logstash-forwarder.key in /etc/ssl/private/
logstash-forwarder.crt in /etc/ssl/certs/

Now you can try to start/restart/stop the logstash-forwarder service:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[FAIL] logstash-forwarder is not running ... failed!
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt#
Good to go.

Create on IDS2 your logstash-forwarder config:
touch /etc/logstash-forwarder
Make sure the file looks like this (in this tutorial - copy/paste):

{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logstash-forwarder.crt",
    "ssl key": "/etc/ssl/private/logstash-forwarder.key",
    "ssl ca": "/etc/ssl/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/suricata/eve.json" ],
      "codec": { "type": "json" }
    }
  ]
}
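A malformed config is a common cause of startup failures, so validating the JSON first can save a restart cycle; a sketch, run against a copy so the real path stays untouched:

```shell
# validate a logstash-forwarder style config as JSON before starting the service
cat > /tmp/lsf-config.json <<'EOF'
{
  "network": { "servers": [ "192.168.1.158:5043" ] },
  "files": [ { "paths": [ "/var/log/suricata/eve.json" ] } ]
}
EOF
python3 -m json.tool /tmp/lsf-config.json > /dev/null && echo "config OK"
# prints: config OK
```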

This is as far as the set up on IDS2 goes....

IDS1 - indexer

Download Logstash (change or create directory names to whichever suits you best):
cd /root/Work/tmp/Logstash
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.3-flatjar.jar

Download the GeoIP lite data needed for our geoip location:
wget -N http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz

Create your logstash conf file:
touch /etc/init/logstash.conf

Make sure it looks like this (change directory names accordingly):
input {
  file {
    path => "/var/log/suricata/eve.json"
    codec =>   json
    # This format tells logstash to expect 'logstash' json events from the file.
    #format => json_event
  }
 
  lumberjack {
  port => 5043
  type => "logs"
  codec =>   json
  ssl_certificate => "/etc/ssl/certs/logstash-forwarder.crt"
  ssl_key => "/etc/ssl/private/logstash-forwarder.key"
  }
}


output {
  stdout { codec => rubydebug }
  elasticsearch { embedded => true }
}

#geoip part
filter {
  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
      database => "/root/Work/tmp/Logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}


Create a startup script:
touch /etc/init/logstash-startup.conf

Make sure it looks like this (change directories accordingly):
# logstash - indexer instance
#

description     "logstash indexer instance using ports 9292 9200 9300 9301"

start on runlevel [345]
stop on runlevel [!345]

#respawn
#respawn limit 5 30
#limit nofile 65550 65550
expect fork

script
  test -d /var/log/logstash || mkdir /var/log/logstash
  chdir /root/Work/Logstash/
  exec sudo java -jar /root/Work/tmp/Logstash/logstash-1.3.3-flatjar.jar agent -f /etc/init/logstash.conf --log /var/log/logstash/logstash-indexer.out -- web &
end script


Then:
initctl reload-configuration   

 Rolling it out


On IDS1 and IDS2 - start the Suricata IDPS and generate some logs.
On IDS1:
service logstash-startup start

On IDS2:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
You can check if it is working properly like so -> tail -f /var/log/syslog :



Go to your browser and navigate to (in this case IDS1)
http://192.168.1.158:9292
NOTE: This is http (as this is just a simple tutorial), you should configure it to use httpS

The Kibana web interface should come up.

That is it. From here on it is up to you to configure the web interface with your own widgets.

Chapter IV Part Two will follow with detail on that subject.
However something like this is easily achievable with a few clicks in under 5 min:
Troubleshooting:

You should keep an eye on /var/log/logstash/logstash-indexer.out - any troubles should be visible there.

A GREAT article explaining elastic search cluster status (if you deploy a proper elasticsearch cluster 2 and more nodes)
http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html

ERR in logstash-indexer.out - too many open files
http://www.elasticsearch.org/tutorials/too-many-open-files/

Set ulimit parameters on Ubuntu (this is in case you need to increase the number of inodes (files) available on a system - "df -ih"):
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/
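For a quick look at the limits in play, the shell's own ulimit builtin is enough; a sketch (the value is illustrative, not a recommendation):

```shell
# show the current soft open-files limit, adjust it for this shell, re-check
ulimit -n
ulimit -S -n 1024   # soft limit; must stay at or below the hard limit
ulimit -n
```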

This is an advanced topic - Cluster status and settings commands:
 curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

 curl -XGET 'http://localhost:9200/_status?pretty=true'

 curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'



Very useful links:

A MUST READ (explaining the usage of ".raw" so that terms are not broken by the space delimiter):
http://www.elasticsearch.org/blog/logstash-1-3-1-released/

Article explaining how to set up a 2 node cluster:
http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html

Installing Logstash Central Server (using rsyslog):
https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server

ElasticSearch cluster setup in 2 minutes:
http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html






by Peter Manev (noreply@blogger.com) at March 26, 2014 02:43 PM

March 25, 2014

Victor Julien

Suricata 2.0 and beyond

Today I finally released Suricata 2.0. The 2.0 branch opened in December 2012. In the little over a year that its development lasted, we closed 183 tickets. We made 1174 commits, with the following stats:

582 files changed, 94782 insertions(+), 63243 deletions(-)

So, a significant update! In total, 17 different people made commits. I’m really happy with how much code and features were contributed. When starting Suricata this was what I really hoped for, and it seems to be working!

Eve

The feature I’m most excited about is ‘Eve’. It’s the nickname of a new logging output module ‘Extendible Event Format’. It’s an all JSON event stream that is very easy to parse using 3rd party tools. The heavy lifting has been done by Tom Decanio. Combined with Logstash, Elasticsearch and Kibana, this allows for really easy graphical dashboard creation. This is a nice addition to the existing tools which are generally more alert centered.


Splunk support is easy as well, as Eric Leblond has shown:


Looking forward

While doing releases is important and somewhat nice too, the developer in me is always glad when they are over. Leading up to a release there is a slow down of development, when most time is spent on fixing release critical bugs and doing some polishing. This slow down is a necessary evil, but I’m glad when we can start merging bigger changes again.

In the short term, I'm shooting for a fairly quick 2.0.1 release. There are some known issues that will be addressed in that.

More interesting from a development perspective is the opening of the 2.1 branch. I'll likely open that in a few weeks. There are a number of features in progress for 2.1. I'm working on speeding up pcap recording, which is currently quite inefficient. More interestingly, Lua output scripting. A preview of this work is available here, with some example scripts here.

Others are working on nice things as well: improving protocol support for detection and logging, nflog and netmap support, taxii/stix integration, extending our TLS support and more.

I’m hoping the 2.1 cycle will be shorter than the last, but we’ll see how it goes :)


by inliniac at March 25, 2014 03:11 PM

suricata-ids.org

Suricata 2.0 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0. This release is a major improvement over the previous releases with regard to performance, scalability and accuracy. Also, a number of great features have been added.

The biggest new features of this release are the addition of “Eve”, our all JSON output for events: alerts, HTTP, DNS, SSH, TLS and (extracted) files; much improved VLAN handling; a detectionless ‘NSM’ runmode; much improved CUDA performance.

The Eve log allows for easy 3rd party integration. It has been created with Logstash in mind specifically and we have a quick setup guide here: Logstash_Kibana_and_Suricata_JSON_output


Download

Get the new release here: https://www.openinfosecfoundation.org/download/suricata-2.0.tar.gz

Notable new features, improvements and changes

  • Eve log, all JSON event output for alerts, HTTP, DNS, SSH, TLS and files. Written by Tom Decanio of nPulse Technologies
  • NSM runmode, where detection engine is disabled. Development supported by nPulse Technologies
  • Various scalability improvements, clean ups and fixes by Ken Steel of Tilera
  • Add –set commandline option to override any YAML option, by Jason Ish of Emulex
  • Several fixes and improvements of AF_PACKET and PF_RING
  • ICMPv6 handling improvements by Jason Ish of Emulex
  • Alerting over PCIe bus (Tilera only), by Ken Steel of Tilera
  • Feature #792: DNS parser, logger and keyword support, funded by Emerging Threats
  • Feature #234: add option disable/enable individual app layer protocol inspection modules
  • Feature #417: ip fragmentation time out feature in yaml
  • Feature #1009: Yaml file inclusion support
  • Feature #478: XFF (X-Forwarded-For) support in Unified2
  • Feature #602: availability for http.log output – identical to apache log format
  • Feature #813: VLAN flow support
  • Feature #901: VLAN defrag support
  • Features #814, #953, #1102: QinQ VLAN handling
  • Feature #751: Add invalid packet counter
  • Feature #944: detect nic offloading
  • Feature #956: Implement IPv6 reject
  • Feature #775: libhtp 0.5.x support
  • Feature #470: Deflate support for HTTP response bodies
  • Feature #593: Lua flow vars and flow ints support
  • Feature #983: Provide rule support for specifying icmpv4 and icmpv6
  • Feature #1008: Optionally have http_uri buffer start with uri path for use in proxied environments
  • Feature #1032: profiling: per keyword stats
  • Feature #878: add storage api

Upgrading

The configuration file has evolved but backward compatibility is provided. We thus encourage you to update your suricata configuration file. Upgrade guidance is provided here: https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Upgrading_Suricata_14_to_Suricata_20

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Tom DeCanio, nPulse
  • Ken Steele, Tilera
  • Jason Ish, Endace / Emulex
  • Duarte Silva
  • Giuseppe Longo
  • Ignacio Sanchez
  • Florian Westphal
  • Nelson Escobar, Myricom
  • Christian Kreibich, Lastline
  • Phil Schroeder, Emerging Threats
  • Luca Deri & Alfredo Cardigliano, ntop
  • Will Metcalf, Emerging Threats
  • Ivan Ristic, Qualys
  • Chris Wakelin
  • Francis Trudeau, Emerging Threats
  • Rmkml
  • Laszlo Madarassy
  • Alessandro Guido
  • Amin Latifi
  • Darrell Enns
  • Paolo Dangeli
  • Victor Serbu
  • Jack Flemming
  • Mark Ashley
  • Marc-Andre Heroux
  • Alessandro Guido
  • Petr Chmelar
  • Coverity

Known issues & missing features

If you encounter issues, please let us know!  As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at March 25, 2014 11:15 AM

Open Information Security Foundation

Suricata 2.0 Available!


by Victor Julien (postmaster@inliniac.net) at March 25, 2014 10:32 AM

Peter Manev

Suricata - preparing 10Gbps network cards for IDPS and file extraction


OS used/tested for this tutorial - Debian Wheezy and/or Ubuntu LTS 12.0.4
With kernels 3.2.0 and 3.5.0 respectively, and Suricata 2.0dev at the moment of this writing.



This article consists of the following 3 major sections:
  • Network card drivers and tuning
  • Kernel specific tuning
  • Suricata.yaml configuration (file extraction specific)

Network and system  tools:
apt-get install ethtool bwm-ng iptraf htop

Network card drivers and tuning

Our card is Intel 82599EB 10-Gigabit SFI/SFP+


rmmod ixgbe
sudo modprobe ixgbe FdirPballoc=3
ifconfig eth3 up
then (we disable irqbalance and make sure it does not enable itself during reboot)
 killall irqbalance
 service irqbalance stop

 apt-get install chkconfig
 chkconfig irqbalance off
Get the Intel network drivers from here (we will use them in a second) - https://downloadcenter.intel.com/default.aspx

 Download to your directory of choice, then unpack, compile and install:
 tar -zxf ixgbe-3.18.7.tar.gz
 cd /home/pevman/ixgbe-3.18.7/src
 make clean && make && make install
Set irq affinity - do not forget to change eth3 below with the name of the network interface you are using:
 cd ../scripts/
 ./set_irq_affinity  eth3


 You should see something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ./set_irq_affinity  eth3
no rx vectors found on eth3
no tx vectors found on eth3
eth3 mask=1 for /proc/irq/101/smp_affinity
eth3 mask=2 for /proc/irq/102/smp_affinity
eth3 mask=4 for /proc/irq/103/smp_affinity
eth3 mask=8 for /proc/irq/104/smp_affinity
eth3 mask=10 for /proc/irq/105/smp_affinity
eth3 mask=20 for /proc/irq/106/smp_affinity
eth3 mask=40 for /proc/irq/107/smp_affinity
eth3 mask=80 for /proc/irq/108/smp_affinity
eth3 mask=100 for /proc/irq/109/smp_affinity
eth3 mask=200 for /proc/irq/110/smp_affinity
eth3 mask=400 for /proc/irq/111/smp_affinity
eth3 mask=800 for /proc/irq/112/smp_affinity
eth3 mask=1000 for /proc/irq/113/smp_affinity
eth3 mask=2000 for /proc/irq/114/smp_affinity
eth3 mask=4000 for /proc/irq/115/smp_affinity
eth3 mask=8000 for /proc/irq/116/smp_affinity
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#
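Each mask printed by the script is a hexadecimal CPU bitmask, so one IRQ is pinned per core; decoding one by hand (the mask value is taken from the output above):

```shell
# mask=20 (hex) is binary 100000, i.e. the IRQ is pinned to CPU 5
mask=20
v=$(( 0x$mask )); cpu=0
while [ $(( v >> 1 )) -gt 0 ]; do v=$(( v >> 1 )); cpu=$(( cpu + 1 )); done
echo "CPU $cpu"
# prints: CPU 5
```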
Now we have the latest drivers installed (at the time of this writing) and we have run the affinity script:
   *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: eth3
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7 duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff



We need to disable all offloading on the network card in order for the IDS to see the traffic as it is supposed to be (without checksums, tcp-segmentation-offload and such). Otherwise your IDPS will not see all "natural" network traffic the way it is supposed to, and will not inspect it properly.

This influences the correctness of ALL outputs, including file extraction. So make sure all offloading features are OFF!

When you first install the drivers and card your offloading settings might look like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#

So we disable all of them, like so (and we load balance the UDP flows for that particular network card):

ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
ethtool -K eth3 rxvlan off
ethtool -K eth3 txvlan off
ethtool -N eth3 rx-flow-hash udp4 sdfn
ethtool -N eth3 rx-flow-hash udp6 sdfn
ethtool -n eth3 rx-flow-hash udp6
ethtool -n eth3 rx-flow-hash udp4
ethtool -C eth3 rx-usecs 0 rx-frames 0
ethtool -C eth3 adaptive-rx off

Your output should look something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 lro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 sg off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rxvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 txvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp4 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp6 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp6
UDP over IPV6 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 rx-usecs 0 rx-frames 0
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 adaptive-rx off

Now we double-check and run ethtool again to verify that offloading is OFF:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off

Ring parameters on the network card:

root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:            4096
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             512


We can increase that to the max Pre-set RX:

root@suricata:~# ethtool -G eth3 rx 4096

Then we have a look again:

root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             512

Making network changes permanent across reboots


On Ubuntu for example you can do:
root@suricata:~# crontab -e

Add the following:
# add cronjob at reboot - disable network offload
@reboot /opt/tmp/disable-network-offload.sh

and your disable-network-offload.sh script (in this case under /opt/tmp/ ) will contain the following:
#!/bin/bash
ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
ethtool -K eth3 rxvlan off
ethtool -K eth3 txvlan off
ethtool -N eth3 rx-flow-hash udp4 sdfn
ethtool -N eth3 rx-flow-hash udp6 sdfn
ethtool -C eth3 rx-usecs 0 rx-frames 0
ethtool -C eth3 adaptive-rx off
with:
chmod 755 disable-network-offload.sh



Kernel specific tuning


Certain kernel parameter adjustments can help as well:

sysctl -w net.core.netdev_max_backlog=250000
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=16777216
sysctl -w net.core.optmem_max=16777216


Making kernel changes permanent across reboots


example:
echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf

reload the changes:
sysctl -p

OR for all the above adjustments:

echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf
echo 'net.core.rmem_max=16777216' >> /etc/sysctl.conf
echo 'net.core.rmem_default=16777216' >> /etc/sysctl.conf
echo 'net.core.optmem_max=16777216' >> /etc/sysctl.conf
sysctl -p
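To verify the values actually took effect, you can read them back. A minimal sketch (no root needed, reading /proc directly; the 16777216 target mirrors the values above):

```shell
# Compare the running value of net.core.rmem_max against the intended one.
want=16777216
have=$(cat /proc/sys/net/core/rmem_max)
if [ "$have" -ge "$want" ]; then
    echo "net.core.rmem_max OK ($have)"
else
    echo "net.core.rmem_max is $have, below target $want"
fi
```

The same pattern works for any of the keys above, since every net.core.* sysctl is mirrored under /proc/sys/net/core/.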


Suricata.yaml configuration  (file extraction specific)

As of Suricata 1.2, it is possible to detect and extract/store over 5000 types of files from HTTP sessions.

Specific file extraction instructions can also be found in the official documentation.

The following libraries are needed on the system running Suricata:
apt-get install libnss3-dev libnspr4-dev

Suricata also needs to be compiled with file extraction enabled (not covered here).

In short, these are the sections of suricata.yaml that can be tuned/configured to affect file extraction and logging
(on a busy link, the bigger the memory values the better):


  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh


For file store to disk/extraction:
   - file-store:
      enabled: yes       # set to yes to enable
      log-dir: files    # directory to store the files
      force-magic: yes   # force logging magic on all stored files
      force-md5: yes     # force logging of md5 checksums
      #waldo: file.waldo # waldo file to store the file_id across runs


 stream:
  memcap: 32mb
  checksum-validation: no      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 128mb
    depth: 1mb                  # reassemble 1mb into a stream
  
depth: 1mb would mean that in one reassembled TCP flow, the maximum size of a file that can be extracted is just about 1 MB.

Both stream.memcap and reassembly.memcap (if reassembly is needed) must be big enough to accommodate the whole file on the fly in traffic that needs to be extracted PLUS any other stream and reassembly tasks that the engine needs to do while inspecting the traffic on a particular link.
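As a back-of-the-envelope sizing sketch (the numbers are purely illustrative, not recommendations): to extract files up to 5 MB from up to 100 concurrent flows, the reassembly depth must cover the file size, and the reassembly memcap must cover the concurrent total:

```shell
# depth must cover the largest file; memcap must cover depth * concurrent flows
depth_mb=5
flows=100
echo "depth: at least ${depth_mb}mb"
echo "reassembly memcap: at least $(( depth_mb * flows ))mb"
```

In practice you would add headroom on top of this for the other stream/reassembly work the engine performs.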

 app-layer:
  protocols:
....
....
     http:
      enabled: yes
      # memcap: 64mb

The default memory usage limit for HTTP is 64mb. That could be increased, e.g. memcap: 4gb, since HTTP is present everywhere and a low memcap on a busy HTTP link would limit the inspection and extraction size ability.

       libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 3072
           response-body-limit: 3072

The default values above control how far the HTTP request and response bodies are tracked, and they also limit file inspection. They should be set to a much higher value:

        libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 1gb
           response-body-limit: 1gb

Or to 0, which means unlimited:

       libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 0
           response-body-limit: 0

And then of course you need a rule loaded, for example:
alert http any any -> any any (msg:"PDF file Extracted"; filemagic:"PDF document"; filestore; sid:11; rev:11;)
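Once such a rule fires, stored files land in the configured log-dir as file.<id>, each with a file.<id>.meta companion. A small sketch to cross-check the stored files against the logged MD5s (the /var/log/suricata/files path and the "MD5:" meta line are assumptions based on the defaults above; adjust to your setup):

```shell
# Verify each extracted file against the MD5 recorded in its .meta companion.
dir=/var/log/suricata/files
for f in "$dir"/file.[0-9]*; do
    [ -e "$f" ] || { echo "no extracted files in $dir"; break; }
    case "$f" in *.meta) continue ;; esac
    logged=$(awk -F': ' '/^MD5/ {print $2}' "$f.meta")
    actual=$(md5sum "$f" | cut -d' ' -f1)
    if [ "$logged" = "$actual" ]; then echo "$f OK"; else echo "$f MISMATCH"; fi
done
```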



That's it.

by Peter Manev (noreply@blogger.com) at March 25, 2014 09:09 AM

March 20, 2014

suricata-ids.org

Suricata 2.0rc3 Windows Installer Available

The Windows MSI installer of the Suricata 2.0rc3 release is now available.

Download it here: Suricata-2.0rc3-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at March 20, 2014 12:26 PM

Suricata Ubuntu PPA updated to 2.0rc3

We have updated the official Ubuntu PPA to Suricata 2.0rc3. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at March 20, 2014 12:12 PM

March 18, 2014

suricata-ids.org

Suricata 2.0rc3 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0rc3, the third release candidate for Suricata 2.0.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc3.tar.gz

All closed tickets

  • Bug #1127: logstash & suricata parsing issue
  • Bug #1128: Segmentation fault – live rule reload
  • Bug #1129: pfring cluster & ring initialization
  • Bug #1130: af-packet flow balancing problems
  • Bug #1131: eve-log: missing user agent reported inconsistently
  • Bug #1133: eve-log: http depends on regular http log
  • Bug #1135: 2.0rc2 release doesn’t set optimization flag on GCC
  • Bug #1138: alert fastlog drop info missing

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Luca Deri and Alfredo Cardigliano — ntop
  • Victor Serbu

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at March 18, 2014 02:56 PM

Open Information Security Foundation

Suricata 2.0rc3 Available!

The OISF development team is proud to announce Suricata 2.0rc3, the third release candidate for Suricata 2.0.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc3.tar.gz

All closed tickets

  • Bug #1127: logstash & suricata parsing issue
  • Bug #1128: Segmentation fault – live rule reload
  • Bug #1129: pfring cluster & ring initialization
  • Bug #1130: af-packet flow balancing problems
  • Bug #1131: eve-log: missing user agent reported inconsistently
  • Bug #1133: eve-log: http depends on regular http log
  • Bug #1135: 2.0rc2 release doesn’t set optimization flag on GCC
  • Bug #1138: alert fastlog drop info missing

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Luca Deri and Alfredo Cardigliano — ntop
  • Victor Serbu

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at March 18, 2014 02:18 PM

March 11, 2014

suricata-ids.org

Suricata 2.0rc2 Windows Installer Available

The Windows MSI installer of the Suricata 2.0rc2 release is now available.

Download it here: Suricata-2.0rc2-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at March 11, 2014 04:00 PM

Suricata Ubuntu PPA updated to 2.0rc1

We have updated the official Ubuntu PPA to Suricata 2.0rc1. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at March 11, 2014 03:51 PM

March 07, 2014

Eric Leblond

Suricata and Ulogd meet Logstash and Splunk

Some progress on the JSON side

Suricata 2.0-rc2 is out and it brings some progress on the JSON side. The logging of SSH protocol has been added: Screenshot from 2014-03-07 18:50:21 and the format of timestamp has been updated to be ISO 8601 compliant and it is now named timestamp instead of time.

Ulogd, the Netfilter logging daemon, has seen a similar change, as it is now also using an ISO 8601 compliant timestamp. This feature is available in git and will be part of ulogd 2.0.4.

Thanks to this format change, integration with logstash or splunk is easier and more accurate. It fixes a problem with the timestamp an event gets inside the event and logging manager. At least in logstash, the date used was the time of parsing, which was not really accurate. It could even be a problem when logstash was parsing a file with old entries, because the difference in timestamps could be huge.

It is now possible to update the logstash configuration to parse the timestamp correctly. After doing this, the internal @timestamp and the timestamp of the event are synchronized, as shown in the following screenshot:

timestamp

Logstash configuration

Screenshot from 2014-02-02 13:22:34

To configure logstash, you simply need to tell it that the timestamp field in the JSON message is a date. To do so, you need to add a filter:

      date {
        match => [ "timestamp", "ISO8601" ]
      }
A complete logstash.conf would then look like:
input {
   file {
      path => [ "/usr/local/var/log/suricata/eve.json", "/var/log/ulogd.json" ]
      codec =>   json
      type => "json-log"
   }
}

filter {
   if [type] == "json-log" {
      date {
        match => [ "timestamp", "ISO8601" ]
      }
   }
}

output {
  stdout { codec => rubydebug }
  elasticsearch { embedded => true }
}

Splunk configuration

Screenshot from 2014-03-07 23:30:40

In splunk, auto-detection of the file format fails, and it seems you need to define a type to parse JSON in $SPLUNK_DIR/etc/system/local/props.conf:

[suricata]
KV_MODE = json
NO_BINARY_CHECK = 1
TRUNCATE = 0

Then you can simply declare the log file in $SPLUNK_DIR/etc/system/local/inputs.conf:

[monitor:///usr/local/var/log/suricata/eve.json]
sourcetype = suricata

[monitor:///var/log/ulogd.json]
sourcetype = suricata

You can now search events and build dashboards based on Suricata or Netfilter packet logging: Screenshot from 2014-03-05 23:17:12

by Regit at March 07, 2014 11:19 PM

March 06, 2014

suricata-ids.org

Suricata 2.0rc2 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0rc2, the second release candidate for Suricata 2.0.

Notable changes

  • eve-log is now enabled by default
  • SSH parser is re-enabled
  • SSH logging was added to ‘eve-log’
  • bundled libhtp was updated to 0.5.10

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc2.tar.gz

All closed tickets

  • Feature #952: Add VLAN tag ID to all outputs
  • Feature #953: Add QinQ tag ID to all outputs
  • Feature #1012: Introduce SSH log
  • Feature #1118: app-layer protocols http memcap – info in verbose mode (-v)
  • Feature #1119: restore SSH protocol detection and parser
  • Bug #611: fp: rule with ports matching on portless proto
  • Bug #985: default config generates rule warnings and errors
  • Bug #1021: 1.4.6: conf_filename not checked before use
  • Bug #1089: SMTP: move depends on uninitialised value
  • Bug #1090: FTP: Memory Leak
  • Bug #1091: TLS-Handshake: Uninitialized value
  • Bug #1092: HTTP: Memory Leak
  • Bug #1108: suricata.yaml config parameter – segfault
  • Bug #1109: PF_RING vlan handling
  • Bug #1110: Can have the same Pattern ID (pid) for the same pattern but different case flags
  • Bug #1111: capture stats at exit incorrect
  • Bug #1112: tls-events.rules file missing
  • Bug #1115: nfq: exit stats not working
  • Bug #1120: segv with pfring/afpacket and eve-log enabled
  • Bug #1121: crash in eve-log
  • Bug #1124: ipfw build broken

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Jack Flemming
  • Mark Ashley
  • Marc-Andre Heroux

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at March 06, 2014 01:18 PM

Open Information Security Foundation

Suricata 2.0rc2 Available!

The OISF development team is proud to announce Suricata 2.0rc2, the second release candidate for Suricata 2.0.

Notable changes

  • eve-log is now enabled by default
  • SSH parser is re-enabled
  • SSH logging was added to ‘eve-log’
  • bundled libhtp was updated to 0.5.10

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc2.tar.gz

All closed tickets

  • Feature #952: Add VLAN tag ID to all outputs
  • Feature #953: Add QinQ tag ID to all outputs
  • Feature #1012: Introduce SSH log
  • Feature #1118: app-layer protocols http memcap – info in verbose mode (-v)
  • Feature #1119: restore SSH protocol detection and parser
  • Bug #611: fp: rule with ports matching on portless proto
  • Bug #985: default config generates rule warnings and errors
  • Bug #1021: 1.4.6: conf_filename not checked before use
  • Bug #1089: SMTP: move depends on uninitialised value
  • Bug #1090: FTP: Memory Leak
  • Bug #1091: TLS-Handshake: Uninitialized value
  • Bug #1092: HTTP: Memory Leak
  • Bug #1108: suricata.yaml config parameter – segfault
  • Bug #1109: PF_RING vlan handling
  • Bug #1110: Can have the same Pattern ID (pid) for the same pattern but different case flags
  • Bug #1111: capture stats at exit incorrect
  • Bug #1112: tls-events.rules file missing
  • Bug #1115: nfq: exit stats not working
  • Bug #1120: segv with pfring/afpacket and eve-log enabled
  • Bug #1121: crash in eve-log
  • Bug #1124: ipfw build broken

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Jack Flemming
  • Mark Ashley
  • Marc-Andre Heroux

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at March 06, 2014 01:16 PM

March 04, 2014

Anoop Saldanha

Passing an OpenCL cl_mem device address from host to the device, but not as a kernel argument. Pointers for Suricata OpenCL porters.


This post is not specific to Suricata, but rather a generic one that can help most devs who write OpenCL code, plus the ones who want to implement OpenCL support inside Suricata. I have been seeing quite a few attempts at porting Suricata's CUDA support to use OpenCL. Before we experimented with CUDA, we had given OpenCL a shot back in the early OpenCL days, when the drivers were in their infancy and had a ton of bugs, and we, a ton of segvs, leaving us with no clue as to where the bug was: the driver or the code. The driver might be a lot stabler today, of course.

Either way, supporting OpenCL in Suricata should be a pretty straightforward task, but there's one issue that needs to be kept in mind while carrying out this port. It is something most folks who contacted me during their port got stuck at, and also a question a lot of OpenCL devs have: passing a memory object as part of a byte stream or structure, and not as a kernel argument.

Let's get to the topic at hand.  I will use the example of suricata to explain the issue.

What's the issue?

Suricata buffers a payload and, along with the payload, specifies a GPU memory address (cl_mem) that points to the pattern matching state table the corresponding payload should be matched against. With CUDA the memory address we are buffering is of type "CUdeviceptr", allocated using the call cuMemAlloc(). The value stored inside a CUdeviceptr is basically an address from the GPU address space (not a handle). You can test this by writing a simple program like the one I have below for OpenCL. You can also check this article that confirms the program's findings.

With OpenCL, cl_mem is defined to be a handle against an address in the GPU address space. I would have expected Nvidia's OpenCL implementation to show behaviour similar to its CUDA library, i.e. the handle being nothing but an address in the GPU address space, but that isn't the case (probably something to do with the size of cl_mem?). We can't directly pass the cl_mem handle value as the device address. We need to extract the device address for a particular cl_mem handle, and pass this retrieved value instead.

Here is a sample program -

==get_address.cu==

__kernel void get_address(__global ulong *c)
{
    *c = (ulong)c;
}

==get_address.c==

unsigned long get_address(cl_kernel kernel_address,
                          cl_command_queue command_queue,
                          cl_mem dst_mem)
{
    unsigned long result_address = 0;

    /* pass the buffer handle as the kernel's only argument */
    BUG_ON(clSetKernelArg(kernel_address, 0,
                          sizeof(dst_mem), &dst_mem) < 0);

    /* run the kernel; it writes its own device address into the buffer */
    BUG_ON(clEnqueueNDRangeKernel(command_queue, kernel_address,
                                  1, NULL,
                                  &global_work_size, &local_work_size,
                                  0, NULL, NULL) < 0);

    /* read the device address back to the host */
    BUG_ON(clEnqueueReadBuffer(command_queue, dst_mem, CL_TRUE,
                               0, sizeof(result_address), &result_address,
                               0, NULL, NULL) < 0);

    return result_address;
}

* Untested code, written with 64-bit hardware in mind on both the GPU and the CPU.

Using the above get_address() function should get you the gpu address for a cl_mem instance, and the returned value is what should be passed to the gpu as the address, in place of CUDA's CUdeviceptr.  It's sort of a hack, but it should work.

Another question that pops up in my head is, would the driver change the memory allocated against a handle?  Any AMD/Nvidia driver folks can answer this?

Any alternate solutions(apart from passing all of it as kernel arguments :) ) welcome.


by poona (noreply@blogger.com) at March 04, 2014 12:01 AM

February 27, 2014

Victor Julien

tcpreplay on Intel 82576

For my Suricata QA setup, I’m using tcpreplay on a dual port gigabit NIC. The idea is to blast out packets on one port and then have Suricata listen on the other part.

For the traffic replay I'm using tcpreplay 3.4.4 from the Ubuntu archive. As I have a lot of pcaps to process, I intend to use the --topspeed option to keep runtimes as low as possible. This results in approximately ~500Mbps on this box, as the pcaps come from a NAS.

While validating the replay results, I noticed that there was a lot of packet reordering going on. This seemed odd as tcpreplay replays packets in order. The docs seemed to suggest the driver/NIC does this: http://tcpreplay.synfin.net/wiki/FAQ#tcpreplayissendingpacketsoutoforder

It turned out that this is caused by the driver using multiple tx-queues.

dmesg:

[    1.143444] igb 0000:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)

With the help of Luca Deri I was able to reduce the number of queues.

To do this, the igb driver module needs to be passed an option, RSS=1. However, the igb driver that comes with Ubuntu 13.10 (which has version 5.0.5k) does not support this option.

The latest version is needed, which can be downloaded from http://sourceforge.net/projects/e1000/files/igb%20stable/5.1.2/

After installing it, remove the current module and load the new module with the RSS option:

modprobe -r igb
modprobe igb RSS=1

Confirm the result in dmesg:

[  834.376632] igb 0000:03:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)

With this, tcpreplay at topspeed will not result in reordered packets.
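To make the single-queue setting survive reboots, the module option can go into a modprobe.d file (a sketch; the filename is just a convention, and if igb is loaded from the initramfs you may also need to rebuild it):

```
# /etc/modprobe.d/igb.conf
options igb RSS=1
```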

Many thanks to Luca Deri for putting me on the right track here.


by inliniac at February 27, 2014 11:48 AM

February 20, 2014

suricata-ids.org

Suricata 2.0rc1 Windows Installer Available

The Windows MSI installer of the Suricata 2.0rc1 release is now available.

Download it here: Suricata-2.0rc1-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at February 20, 2014 12:47 PM

Suricata 2.0rc1 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0rc1. This is the first release candidate for Suricata 2.0. This release improves performance, stability and accuracy, in addition to adding exciting new features.

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc1.tar.gz

Notable changes

  • unified JSON output for almost all log types (eve-log). Written by Tom Decanio of nPulse Technologies
  • QinQ VLAN handling
  • Alerting over PCIe bus (Tilera only), by Ken Steel of Tilera
  • Add –set commandline option to override any YAML option, by Jason Ish of Emulex
  • Various scalability improvements, clean ups and fixes by Ken Steel of Tilera
  • ICMPv6 handling improvements by Jason Ish of Emulex
  • memcaps for DNS and HTTP handling were added
  • Several fixes and improvements of AF_PACKET and PF_RING
  • NSM runmode, where detection engine is disabled. Development supported by nPulse Technologies

All closed tickets

  • Feature #424: App layer registration cleanup – Support specifying same alproto names in rules for different ip protocols
  • Feature #542: TLS JSON output
  • Feature #597: case insensitive fileext match
  • Feature #772: JSON output for alerts
  • Feature #814: QinQ tag flow support
  • Feature #894: clean up output
  • Feature #921: Override conf parameters
  • Feature #1007: united output
  • Feature #1040: Suricata should compile with -Werror
  • Feature #1067: memcap for http inside suricata
  • Feature #1086: dns memcap
  • Feature #1093: stream: configurable segment pools
  • Feature #1102: Add a decoder.QinQ stats in stats.log
  • Feature #1105: Detect icmpv6 on ipv4
  • Bug #839: http events alert multiple times
  • Bug #954: VLAN decoder stats with AF Packet get written to the first thread only – stats.log
  • Bug #980: memory leak in http buffers at shutdown
  • Bug #1066: logger API’s for packet based logging and tx based logging
  • Bug #1068: format string issues with size_t + qa not catching them
  • Bug #1072: Segmentation fault in 2.0beta2: Custom HTTP log segmentation fault
  • Bug #1073: radix tree lookups are not thread safe
  • Bug #1075: CUDA 5.5 doesn’t compile with 2.0 beta 2
  • Bug #1079: Err loading rules with variables that contain negated content.
  • Bug #1080: segfault – 2.0dev (rev 6e389a1)
  • Bug #1081: 100% CPU utilization with suricata 2.0 beta2+
  • Bug #1082: af-packet vlan handling is broken
  • Bug #1103: stats.log not incrementing decoder.ipv4/6 stats when reading in QinQ packets
  • Bug #1104: vlan tagged fragmentation
  • Bug #1106: Git compile fails on Ubuntu Lucid
  • Bug #1107: flow timeout causes decoders to run on pseudo packets

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Duarte Silva
  • Alessandro Guido
  • Petr Chmelar

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at February 20, 2014 12:32 PM

February 16, 2014

Peter Manev

Suricata IDPS installation on OpenSUSE


This is a quick tutorial on how to install Suricata IDPS (latest dev edition from git) on OpenSUSE with MD5/file extraction and GeoIP features enabled.

For this tutorial we use OpenSUSE 13.1 (Bottle) x86_64, 64-bit, with kernel 3.11.6:

uname -a
Linux linux-560z.site 3.11.6-4-desktop #1 SMP PREEMPT Wed Oct 30 18:04:56 UTC 2013 (e6d4a27) x86_64 x86_64 x86_64 GNU/Linux 

Step 1

Install the needed packages:
zypper install gcc zlib-devel libtool make libpcre1 autoconf automake gcc-c++ pcre-devel libz1 file-devel libnet1 libpcap1 libpcap-devel libnet-devel libyaml-devel libyaml-0-2 git-core wget libcap-ng0 libcap-ng-devel libmagic1 file-magic

Step 2

For MD5 functionality and file extraction capability:
zypper install mozilla-nss mozilla-nss-devel mozilla-nspr mozilla-nspr-devel mozilla-nss-tools

Step 3 

For the GeoIP functionality:
zypper install GeoIP libGeoIP-devel

Step 4

Git clone the latest dev branch, compile and configure (one-liner, copy/paste ready):

git clone git://phalanx.openinfosecfoundation.org/oisf.git \
&& cd oisf/\
&&  git clone https://github.com/ironbee/libhtp.git -b 0.5.x \
&& ./autogen.sh \
&& ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \
--disable-gccmarch-native --enable-gccprotect \
--enable-geoip \
--with-libnss-libraries=/usr/lib64 \
--with-libnss-includes=/usr/include/nss3 \
&& make clean && make && make install \
&& ldconfig

NOTE:
You can change make install (above) to make install-full for an automated full setup: directory creation, rule download and directory setup in suricata.yaml, everything ready to run!


Step 5

Some commands to confirm everything is in place:
which suricata
suricata --build-info
ldd `which suricata` 


Step 6 

Continue with basic setup of your networks, which rules to enable and other suricata.yaml config options: Basic Setup


After you are done with all the config options, you can start it like so:
suricata -c /etc/suricata/suricata.yaml -i enp0s3
Change your interface name accordingly!

NOTE:
if you get the following err:
 (util-magic.c:65) <Warning> (MagicInit) -- [ERRCODE: SC_ERR_FOPEN(44)] - Error opening file: "/usr/share/file/magic": No such file or directory

change the following line in your suricata.yaml from:
magic-file: /usr/share/file/magic
to
magic-file: /usr/share/misc/magic
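To check up-front which magic database path exists on your system (a small sketch; the two candidate paths are the ones mentioned above, and locations vary per distribution):

```shell
# Print which of the two common magic database locations exist.
for p in /usr/share/file/magic /usr/share/misc/magic; do
    if [ -e "$p" ]; then echo "found: $p"; else echo "missing: $p"; fi
done
```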



That's all.


by Peter Manev (noreply@blogger.com) at February 16, 2014 02:06 AM

February 15, 2014

Peter Manev

Suricata - override config parameters on the command line



With the release of 2.0rc1, Suricata IDPS introduced the ability to override config parameters.
This is a brief article to give you an idea of how to override config parameters when you start Suricata on the command line, at will/on demand, without having to edit and save the suricata.yaml config for that.

This article follows the initial instruction posted HERE....PLUS some extra examples.

There are four sections in the article:
  • First Step
  • Overriding multiple parameters
  • Take it to the next level
  • Where to get the values from

First step

So how does it work? Simple: you use the "--set <parameter=value>" syntax:
suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set threading.detect-thread-ratio=3

So imagine you start Suricata on the command line like so:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  --af-packet -v -S empty.rules
 - (suricata.c:973) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev f791d0f)
 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 2
...
In suricata.yaml, in your af-packet section, you have:
af-packet:
  - interface: eth0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 6


and you get :
.....
 - (stream-tcp-reassemble.c:456) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 250
 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 6 packet processing threads, 3 management threads initialized, engine started.
....

Then you can try the following:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  --af-packet -v -S empty.rules --set  af-packet.0.threads=4
 - (suricata.c:973) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev f791d0f)
 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 2
....


and you would get:
....
 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 4 packet processing threads, 3 management threads initialized, engine started.
...

Simple.

Now let's try to change some memory settings on the fly. If in your suricata.yaml you have:

stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
which are the default settings, then when you start Suricata without overriding any values, you will most likely see something like this:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules
....
- (stream-tcp.c:373) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
- (stream-tcp.c:389) <Info> (StreamTcpInitConfig) -- stream "memcap": 33554432
- (stream-tcp.c:395) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
- (stream-tcp.c:401) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
- (stream-tcp.c:418) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": enabled
- (stream-tcp.c:440) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
- (stream-tcp.c:453) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
- (stream-tcp.c:471) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 67108864
- (stream-tcp.c:489) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576
....

Let's say you want to raise the stream reassembly memcap (here from 64mb to 512mb) because you are seeing a lot of drops and want to determine whether this is the issue. Then you could try:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set stream.reassembly.memcap=512mb
- (stream-tcp.c:373) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
 - (stream-tcp.c:389) <Info> (StreamTcpInitConfig) -- stream "memcap": 33554432
 - (stream-tcp.c:395) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
 - (stream-tcp.c:401) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
 - (stream-tcp.c:418) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": enabled
 - (stream-tcp.c:440) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
 - (stream-tcp.c:453) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
 - (stream-tcp.c:471) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 536870912
 - (stream-tcp.c:489) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576

and there it is 512MB of stream reassembly memcap.

You could override all the variables in suricata.yaml that way. Another example:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set flow-timeouts.tcp.established=720


this would change the established TCP flow timeout to 720 seconds. The corresponding default section for the example above in suricata.yaml is:

flow-timeouts:

  default:
    new: 30
    established: 300
    closed: 0
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
  tcp:
    new: 60
    established: 3600
    closed: 120
    emergency-new: 10
    emergency-established: 300
    emergency-closed: 20



Overriding multiple parameters

Sure, no problem:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set flow-timeouts.tcp.established=720 --set stream.reassembly.memcap=512mb


Take it to the next level

Here you go:
src/suricata --af-packet=${NIC_IN} -S /dev/null -c suricata.yaml -l "${TD}/logs" -D --pidfile="${TD}/suricata.pid" --set "logging.outputs.1.file.enabled=yes" --set "logging.outputs.1.file.filename=${TD}/logs/suricata.log" --set "af-packet.0.interface=eth2" --set "af-packet.0.threads=4" --set "flow.memcap=256mb" --set "stream.reassembly.memcap=512mb" --runmode=workers --set "af-packet.0.buffer-size=8388608"
Yep... a one-liner :) - my favorite, compliments to Victor Julien.
You could use variables too! Handy... very handy, I believe.
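For instance, the one-liner above can be driven from shell variables. A sketch, where NIC_IN, THREADS and REASM_MEMCAP are made-up variable names (the echo only previews the command; drop it to actually run Suricata):

```shell
# Sketch: parameterize the --set overrides with shell variables.
# NIC_IN, THREADS and REASM_MEMCAP are hypothetical names, not Suricata settings.
NIC_IN=eth2
THREADS=4
REASM_MEMCAP=512mb
cmd="suricata -c /etc/suricata/suricata.yaml --af-packet=$NIC_IN \
--set af-packet.0.threads=$THREADS \
--set stream.reassembly.memcap=$REASM_MEMCAP"
echo "$cmd"   # preview only; run "$cmd" when you are ready
```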


Where to get the key/values from

(thanks to a friendly reminder from regit)

So how do you know what the key/value pairs are - i.e., where do you get the key and value for af-packet.0.buffer-size=8388608?

key ->  af-packet.0.buffer-size
value -> 8388608
(the value is the one that you can adjust)

Easy, just issue a "suricata --dump-config" command on the PC/server where you have Suricata installed:

root@LTS-64-1:~# suricata --dump-config
15/2/2014 -- 10:39:02 - <Notice> - This is Suricata version 2.0rc1 RELEASE
host-mode = auto
default-log-dir = /var/log/suricata/
unix-command = (null)
unix-command.enabled = yes
outputs = (null)
outputs.0 = fast
outputs.0.fast = (null)
outputs.0.fast.enabled = yes
outputs.0.fast.filename = fast.log
outputs.0.fast.append = no
outputs.1 = eve-log
outputs.1.eve-log = (null)
outputs.1.eve-log.enabled = yes
outputs.1.eve-log.type = file
outputs.1.eve-log.filename = eve.json
outputs.1.eve-log.types = (null)
outputs.1.eve-log.types.0 = alert
outputs.1.eve-log.types.1 = http
outputs.1.eve-log.types.1.http = (null)
outputs.1.eve-log.types.1.http.extended = yes
outputs.1.eve-log.types.2 = dns
outputs.1.eve-log.types.3 = tls
outputs.1.eve-log.types.3.tls = (null)
outputs.1.eve-log.types.3.tls.extended = yes
outputs.1.eve-log.types.4 = files
outputs.1.eve-log.types.4.files = (null)
outputs.1.eve-log.types.4.files.force-magic = no
outputs.1.eve-log.types.4.files.force-md5 = no
...
...
...
vlan.use-for-tracking = true
flow-timeouts = (null)
flow-timeouts.default = (null)
flow-timeouts.default.new = 30
flow-timeouts.default.established = 300
flow-timeouts.default.closed = 0
flow-timeouts.default.emergency-new = 10
flow-timeouts.default.emergency-established = 100
flow-timeouts.default.emergency-closed = 0
flow-timeouts.tcp = (null)
flow-timeouts.tcp.new = 60
flow-timeouts.tcp.established = 3600
flow-timeouts.tcp.closed = 120
flow-timeouts.tcp.emergency-new = 10
flow-timeouts.tcp.emergency-established = 300
flow-timeouts.tcp.emergency-closed = 20
flow-timeouts.udp = (null)
flow-timeouts.udp.new = 30
flow-timeouts.udp.established = 300
flow-timeouts.udp.emergency-new = 10
flow-timeouts.udp.emergency-established = 100
flow-timeouts.icmp = (null)
flow-timeouts.icmp.new = 30
flow-timeouts.icmp.established = 300
flow-timeouts.icmp.emergency-new = 10
flow-timeouts.icmp.emergency-established = 100
stream = (null)
stream.memcap = 32mb
stream.checksum-validation = yes
stream.inline = auto
stream.reassembly = (null)
stream.reassembly.memcap = 64mb
stream.reassembly.depth = 1mb
stream.reassembly.toserver-chunk-size = 2560
stream.reassembly.toclient-chunk-size = 2560
stream.reassembly.randomize-chunk-size = yes

.......

it will be a LONG list, but you get all the key value pairs from that :)
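Since the list is long, piping it through grep narrows it down to the key you are after. A sketch; the two sample lines below stand in for real "suricata --dump-config" output:

```shell
# Sketch: filter the (long) dump-config output for a specific key.
# Sample lines stand in for real "suricata --dump-config" output.
dump='stream.reassembly.memcap = 64mb
flow-timeouts.tcp.established = 3600'
match=$(printf '%s\n' "$dump" | grep 'stream\.reassembly')
echo "$match"
# on a live box:  suricata --dump-config | grep 'stream\.reassembly'
```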








by Peter Manev (noreply@blogger.com) at February 15, 2014 02:03 AM

February 13, 2014

Open Information Security Foundation

Suricata 2.0rc1 Available!

The OISF development team is proud to announce Suricata 2.0rc1. This is the first release candidate for Suricata 2.0. This release improves performance, stability and accuracy, in addition to adding exciting new features.

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0rc1.tar.gz

Notable changes

  • unified JSON output for almost all log types (eve-log). Written by Tom Decanio of nPulse Technologies
  • QinQ VLAN handling
  • Alerting over PCIe bus (Tilera only), by Ken Steel of Tilera
  • Add –set commandline option to override any YAML option, by Jason Ish of Emulex
  • Various scalability improvements, clean ups and fixes by Ken Steel of Tilera
  • ICMPv6 handling improvements by Jason Ish of Emulex
  • memcaps for DNS and HTTP handling were added
  • Several fixes and improvements of AF_PACKET and PF_RING
  • NSM runmode, where detection engine is disabled. Development supported by nPulse Technologies

All closed tickets

  • Feature #424: App layer registration cleanup – Support specifying same alproto names in rules for different ip protocols
  • Feature #542: TLS JSON output
  • Feature #597: case insensitive fileext match
  • Feature #772: JSON output for alerts
  • Feature #814: QinQ tag flow support
  • Feature #894: clean up output
  • Feature #921: Override conf parameters
  • Feature #1007: united output
  • Feature #1040: Suricata should compile with -Werror
  • Feature #1067: memcap for http inside suricata
  • Feature #1086: dns memcap
  • Feature #1093: stream: configurable segment pools
  • Feature #1102: Add a decoder.QinQ stats in stats.log
  • Feature #1105: Detect icmpv6 on ipv4
  • Bug #839: http events alert multiple times
  • Bug #954: VLAN decoder stats with AF Packet get written to the first thread only – stats.log
  • Bug #980: memory leak in http buffers at shutdown
  • Bug #1066: logger API’s for packet based logging and tx based logging
  • Bug #1068: format string issues with size_t + qa not catching them
  • Bug #1072: Segmentation fault in 2.0beta2: Custom HTTP log segmentation fault
  • Bug #1073: radix tree lookups are not thread safe
  • Bug #1075: CUDA 5.5 doesn’t compile with 2.0 beta 2
  • Bug #1079: Err loading rules with variables that contain negated content.
  • Bug #1080: segfault – 2.0dev (rev 6e389a1)
  • Bug #1081: 100% CPU utilization with suricata 2.0 beta2+
  • Bug #1082: af-packet vlan handling is broken
  • Bug #1103: stats.log not incrementing decoder.ipv4/6 stats when reading in QinQ packets
  • Bug #1104: vlan tagged fragmentation
  • Bug #1106: Git compile fails on Ubuntu Lucid
  • Bug #1107: flow timeout causes decoders to run on pseudo packets

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Duarte Silva
  • Alessandro Guido
  • Petr Chmelar

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at February 13, 2014 11:16 AM

February 09, 2014

Peter Manev

Mass deploying and updating Suricata IDPS with Ansible


aka The Ansibility side of Suricata


Talking about multiple deployments of Suricata IDPS, and how complicated it could be to do it all... from compiling and installing to configuring on multiple servers/locations... actually, it is not complicated at all with Ansible.

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems. http://ansible.com/

If you follow this article you should be able to update/upgrade multiple Suricata deployments at the push of a button. The Ansible playbook scripts set up in this article are available on GitHub HERE, with detailed explanations inside suricata-deploy.yaml (which has nothing to do with Suricata's suricata.yaml).

This article targets Debian/Ubuntu like systems.

Why Ansible

Well, these are my reasons for choosing Ansible:

  • Open Source  - GPLv3 (https://github.com/ansible/ansible)
  • Agent-less - no need for agent installation and management.
  • Python/yaml based.
  • Highly flexible and configurable management of systems.
  • Large number of ready to use modules for system management.
  • Custom modules can be added if needed - written in ANY language
    (Perl/C/Python/Java...).
  • Secure transport and deployment of the whole execution process.(SSL encrypted).
  • Fast , parallel execution. (10/20/50... machines at a time).
  • Staggered deployment - continue with the deployment process only if
    the first batch succeeds.
  • Works over slow and geographically dispersed connections - "fire and
    forget" mode-  Ansible will start execution, and periodically log in
    and check if the task is finished, no need for always ON connection.
  • Fast, secure connection - for speed Ansible can be configured to use a
    special SSL mode that is much faster than the regular ssh connection,
    while periodically(configurable) regenerating and using new encryption
    keys.
  • On-demand task execution - "push the button".
  • Roll-back on err - if the deployment fails for some reason, Ansible
    can be configured to roll back the execution.
  • Auto retries -  can be configured to automatically retry failed
    tasks...for a number of times or until a condition is met.
  • Cloud - integration modules to manage  cloud services exist.
  • All that until the tasks are done(or interrupted) or the default
    (configurable) Ansible connection limit times out.

...and it works (speaking from experience).

What you need: On the central management server

On Debian/Ubuntu systems
sudo apt-get install python-yaml python-jinja2 python-paramiko python-crypto python-keyczar ansible
NOTE: python-keyczar must be installed on both the central AND the remote machines to use accelerated mode - fast execution.

Then in most systems you will find the config files under /etc/ansible/
The two most important files are ansible.cfg and hosts

For the purpose of this post/tutorial in the hosts file you can add:
[HoneyPots]
#ssh SomeUser@10.10.10.192
HP-Test1 ansible_ssh_host=10.10.10.192 ansible_ssh_user=SomeUser

Lines starting with "#" are comments.
So 10.10.10.192 will be the IP of the machine/server that will be remotely managed. In this particular case ....a HoneyPot server.

Do not forget to add to your /etc/ssh/ssh_config (if you do not have a key, generate one - here is how)
Host 10.10.10.192
  IdentityFile /root/.ssh/id_rsa_ansible_hp
  User SomeUser

What you need: On the remotely managed servers

On the devices that are going to be remotely managed (for example 10.10.10.192 in this tutorial), you need to have the following packages installed:
sudo apt-get install  python-crypto python-keyczar
and

1)
Add the public key for the user "SomeUser" (in this case) to the authorized_keys file on that remote machine. An example path would be /home/SomeUser/.ssh/authorized_keys. In other words, password-less (without a passphrase) SSH key authentication.

2)
Make sure "SomeUser" has password-less sudo as well.
Then, on the "central" machine (the one from which you will be managing everything else), make sure you have added the ssh_config entry shown earlier.


Check

So let's see if everything is up and good to go (some commands you can try):

Here we can use Ansible's built-in "ping" module, e.g. "ansible -m ping HP-Test1". Notice our remote machine to be managed - 10.10.10.192, or HP-Test1.

You can try as well:
ansible -m setup HP-Test1
 You will receive a full ansible inventory of the HP-Test1 machine.

Run it

The  set up in this article is available at github HERE , with detailed explanations inside the suricata-deploy.yaml.
All you need to do is git clone it and run it. Like so:

root@LTS-64-1:~/Work/test#git clone https://github.com/pevma/MassDeploySuricata.git
You can do a "tree MassDeploySuricata" to have a quick look (at the time of this writing):

root@LTS-64-1:~/Work/test#cd MassDeploySuricata

root@LTS-64-1:~/Work/test/MassDeploySuricata# ansible-playbook -i /etc/ansible/hosts suricata-deploy.yaml

and your results could look like this:





The red lines in the pics above might be worrying to you; however, they appear because the machines in question are Xen virtual guests, and in this particular case, if you manually run
ethtool -K eth0 rx off
it will return an error:
Cannot set device rx csum settings: Operation not supported

Advice:
Do not use dashes in the "register" keyword -> suricata-process-rc-code
Instead use an underscore => suricata_process_rc_code
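As a hedged illustration of that advice (the task and variable names here are made up, not taken from the playbook), a task registering a result might look like:

```yaml
# Hypothetical task: note the underscores in the registered variable name.
- name: Check whether Suricata is running
  command: pgrep suricata
  register: suricata_process_rc_code   # not suricata-process-rc-code
  ignore_errors: yes
```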


Some useful Ansible commands:
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --limit HoneyPots --list-tasks
or just
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --list-tasks

will list the tasks available for execution - that is, any task that is tagged, i.e. has
tag:deploy-packages
or like pictured below:

ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml  -t deploy-packages
will execute only the tasks tagged "deploy-packages"

ansible-playbook -i /etc/ansible/hosts -l HP-test1 suricata-deploy.yaml
will do all the tasks but only against host "HP-test1"



Some more very good and informative links:
Complete and excellent Ansible tutorial
Ansible Documentation



by Peter Manev (noreply@blogger.com) at February 09, 2014 05:06 AM

Granularity in advance memory tuning for segments and http processing with Suricata


Just recently (at the time of this writing), a few new config options for suricata.yaml were introduced in the dev branch of Suricata IDPS (git):

stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    randomize-chunk-range: 10
    raw: yes
    chunk-prealloc: 250
    segments:
      - size: 4
        prealloc: 256
      - size: 16
        prealloc: 512
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1448
        prealloc: 1024
      - size: 65535
        prealloc: 128


and under the app layer protocols section (in suricata.yaml) ->

    http:
      enabled: yes
      # memcap: 64mb



Stream segments memory preallocation - config option

This first one gives you advanced, granular control over your memory consumption, by preallocating memory for segments of certain sizes that go through the stream reassembly engine.

The patch's info:
commit b5f8f386a37f61ae0c1c874b82f978f34394fb91
Author: Victor Julien <victor@inliniac.net>
Date:   Tue Jan 28 13:48:26 2014 +0100

    stream: configurable segment pools
   
    The stream reassembly engine uses a set of pools in which preallocated
    segments are stored. There are various pools each with different packet
    sizes. The goal is to lower memory presure. Until now, these pools were
    hardcoded.
   
    This patch introduces the ability to configure them fully from the yaml.
    There can be at max 256 of these pools.

In other words, to speed things up in Suricata, you could do some traffic profiling with the iptraf tool (apt-get install iptraf, then select "Statistical breakdowns", then "By packet size", then the appropriate interface):
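If iptraf is not handy, a rough by-size breakdown can also be scripted. A sketch that buckets packet lengths; the sample lengths below stand in for values pulled from a real capture (e.g. via tcpdump):

```shell
# Sketch: bucket packet lengths, similar to iptraf's "By packet size" view.
# The sample lengths stand in for lengths extracted from a real capture.
sizes='60 74 1448 1448 1500 9000'
hist=$(printf '%s\n' $sizes |
    awk '{ b = int($1 / 750) * 750; c[b]++ }
         END { for (b in c) printf "%d-%d: %d pkts\n", b, b + 749, c[b] }' |
    sort -n)
echo "$hist"
```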

So, partly based on the pic above (one should also determine the packet breakdown from a TCP perspective), you could adjust the default config section in suricata.yaml:

segments:
  - size: 4
    prealloc: 256
  - size: 74
    prealloc: 65535
  - size: 112
    prealloc: 512
  - size: 248
    prealloc: 512
  - size: 512
    prealloc: 512
  - size: 768
    prealloc: 1024
  - size: 1276
    prealloc: 65535
  - size: 1425
    prealloc: 262140
  - size: 1448
    prealloc: 262140
  - size: 9216  # some jumbo frames :)
    prealloc: 65535
  - size: 65535
    prealloc: 9216


Make sure you calculate your memory: this all falls under the stream reassembly memcap set in the yaml, so naturally it has to be big enough to accommodate those changes :).
For example, the enlarged pools above (sizes 74, 1276, 1425, 1448, 9216 and 65535) would need about 1955 MB of RAM out of the stream reassembly memcap set in suricata.yaml. So, for example, if the values are set like so:
stream:
  memcap: 2gb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 4gb
it will use about 1955 MB for preallocated segments, and there will be roughly 2 GB left for the other reassembly tasks - for example, allocating segments and chunks that were not preallocated in the settings.
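To double-check such sizing, the per-pool memory can simply be summed as size times prealloc. A quick sketch, with the size/prealloc pairs taken from the example config above:

```shell
# Sketch: sum size * prealloc over the segment pools from the example above.
# Pairs are written as size:prealloc for compactness.
result=$(awk 'BEGIN {
  n = split("4:256 74:65535 112:512 248:512 512:512 768:1024 1276:65535 1425:262140 1448:262140 9216:65535 65535:9216", p, " ")
  for (i = 1; i <= n; i++) { split(p[i], kv, ":"); total += kv[1] * kv[2] }
  printf "%d bytes = %.1f MB", total, total / (1024 * 1024)
}')
echo "$result"   # roughly the ~1955 MB figure quoted above
```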

HTTP memcap option

In suricata.yaml you can set an explicit limit on HTTP memory usage in the inspection engine.

    http:
      enabled: yes
      memcap: 4gb

These two config options add some more powerful ways of fine-tuning the already highly flexible Suricata IDPS.


Of course, when setting memcaps in suricata.yaml, you have to make sure the total fits in the available RAM on your server/machine... otherwise funny things happen :)

by Peter Manev (noreply@blogger.com) at February 09, 2014 01:28 AM

February 08, 2014

Peter Manev

Suricata - peculiarities when running on virtual guests



In this case we have Ubuntu with kernel 3.2 as the virtual guest OS, and the latest Suricata dev edition at the moment of this writing.
[This solution blog-post would have not been possible without the help of Victor Julien - his blog]

This is a situation where Xen virtualization is used and Suricata cannot start unless compiled with "--disable-gccmarch-native" on the particular virtual guest.
There is no other error message (and/or core file, even when compiled with debugging) besides:
root@ip-xx-xxx-xxx-xxx:/# suricata -c /etc/suricata/suricata.yaml -i eth0
[14844] 23/1/2014 -- 10:26:32 - (suricata.c:942) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev a77b9b3)
Illegal instruction (core dumped)

Even when trying it (just for the sake of playing with it) with and without sudo, you can notice the difference between the two commands:
root@ip-xx-xxx-xxx-xxx:/# sudo suricata -c /etc/suricata/suricata.yaml -i eth0 -v
[15562] 23/1/2014 -- 10:58:10 - (suricata.c:942) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev a77b9b3)
[15562] 23/1/2014 -- 10:58:10 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 1
root@ip-xx-xxx-xxx-xxx:/#
root@ip-xx-xxx-xxx-xxx:/#
root@ip-xx-xxx-xxx-xxx:/# suricata -c /etc/suricata/suricata.yaml -i eth0 -v
[15564] 23/1/2014 -- 10:58:15 - (suricata.c:942) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev a77b9b3)
[15564] 23/1/2014 -- 10:58:15 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 1
Illegal instruction (core dumped)
root@ip-xx-xxx-xxx-xxx:/#
root@ip-xx-xxx-xxx-xxx:/# whoami
root
root@ip-xx-xxx-xxx-xxx:/#

Notice how in the first case there is not even an error message. In either case Suricata never starts, and never dumps a core, even when it is compiled with debugging enabled, i.e.:
CFLAGS="-O0 -ggdb"  ./configure

Whether or not the --disable-gccmarch-native option was used during the configure stage can be concluded from the build-info command:
root@ip-xx-xxx-xxx-xxx:/# suricata --build-info
This is Suricata version 2.0dev (rev a77b9b3)
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS
SIMD support: SSE_4_2 SSE_4_1 SSE_3
Atomic intrisics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.9, linked against LibHTP v0.5.9
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         no
  NFQueue support:                         no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     no

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      no
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                yes
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /usr
  Configuration directory (--sysconfdir):  /etc/suricata/
  Log directory (--localstatedir) :        /var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
root@ip-xx-xxx-xxx-xxx:/#

The above is the default behavior for GCC march native during the configure stage.
Having run into the problem described above (basically, Suricata cannot start), I did some investigation, and:
root@ip-xx-xxx-xxx-xxx:/opt/oisf# dmesg |grep virt
[    0.000000] Linux version 3.2.0-54-virtual (buildd@roseapple) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #82-Ubuntu SMP Tue Sep 10 20:31:18 UTC 2013 (Ubuntu 3.2.0-54.82-virtual 3.2.50)
[    0.000000] Booting paravirtualized kernel on Xen
[1960849.933770] Initialising Xen virtual ethernet driver.
root@ip-xx-xxx-xxx-xxx:/opt/oisf#

what do you know ...a virtual machine :)


I suspected it was a virtual server, but I wanted to be 100% sure of that based on command output. I tried all of the commands below to determine whether it is a virtual machine:
root@ip-xx-xxx-xxx-xxx:/opt/oisf# ethtool -i eth0
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
root@ip-xx-xxx-xxx-xxx:/opt/oisf# dmidecode | grep -i vmware
root@ip-xx-xxx-xxx-xxx:/opt/oisf# dmidecode
# dmidecode 2.11
# No SMBIOS nor DMI entry point found, sorry.
root@ip-xx-xxx-xxx-xxx:/opt/oisf# cat /proc/scsi/scsi
root@ip-xx-xxx-xxx-xxx:/opt/oisf# lshw -class system
ip-xx-xxx-xxx-xxx        
    description: Computer
    width: 64 bits
    capabilities: vsyscall32
root@ip-xx-xxx-xxx-xxx:/opt/oisf#
root@ip-xx-xxx-xxx-xxx:/opt/oisf#
root@ip-xx-xxx-xxx-xxx:/opt/oisf#
root@ip-xx-xxx-xxx-xxx:/opt/oisf# lspci | grep -i vmware
root@ip-xx-xxx-xxx-xxx:/opt/oisf# lspci | grep -i virt
root@ip-xx-xxx-xxx-xxx:/opt/oisf# ethtool -i eth0
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
root@ip-xx-xxx-xxx-xxx:/opt/oisf#
root@ip-xx-xxx-xxx-xxx:/opt/oisf#
root@ip-xx-xxx-xxx-xxx:/opt/oisf# dmesg |grep virt
[    0.000000] Linux version 3.2.0-54-virtual (buildd@roseapple) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #82-Ubuntu SMP Tue Sep 10 20:31:18 UTC 2013 (Ubuntu 3.2.0-54.82-virtual 3.2.50)
[    0.000000] Booting paravirtualized kernel on Xen
[1960849.933770] Initialising Xen virtual ethernet driver.
root@ip-xx-xxx-xxx-xxx:/opt/oisf#

Only "dmesg | grep virt" (and hints from "ethtool -i eth0") returned what I was looking for.
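That detection logic can be wrapped in a small sketch; the sample line below stands in for real "dmesg | grep virt" output, and the grep pattern is an assumption that may need tuning for other hypervisors:

```shell
# Sketch: decide on --disable-gccmarch-native from the same dmesg hint used above.
# The sample line stands in for real "dmesg | grep virt" output.
dmesg_line='[    0.000000] Booting paravirtualized kernel on Xen'
if printf '%s\n' "$dmesg_line" | grep -qiE 'xen|paravirt|virtual'; then
    verdict='virtual guest: configure with --disable-gccmarch-native'
else
    verdict='bare metal: -march=native should be fine'
fi
echo "$verdict"
```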

Disabling gcc march native during the configure stage and recompiling did the trick, and I was able to start and run Suricata without a problem.
root@ip-xx-xxx-xxx-xxx:/opt/oisf# suricata --build-info
This is Suricata version 2.0dev (rev a77b9b3)
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS
SIMD support: none
Atomic intrisics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.9, linked against LibHTP v0.5.9
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         no
  NFQueue support:                         no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     no

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      no
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                yes
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /usr
  Configuration directory (--sysconfdir):  /etc/suricata/
  Log directory (--localstatedir) :        /var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                no
  GCC Profile enabled:                     no
root@ip-xx-xxx-xxx-xxx:/opt/oisf#

NOTICE:
GCC march native enabled:                no
You would get the above result when compiling this way (this build is using the latest git dev edition at the moment of this writing):
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && CFLAGS="-O0 -ggdb"  ./configure \
--prefix=/usr --sysconfdir=/etc --localstatedir=/var \
--disable-gccmarch-native \
--enable-geoip \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean \
&& sudo make \
&& sudo make install \
&& sudo ldconfig

as compared with:
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && CFLAGS="-O0 -ggdb"  ./configure \
--prefix=/usr --sysconfdir=/etc --localstatedir=/var \
--enable-geoip \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean \
&& sudo make \
&& sudo make install \
&& sudo ldconfig


Notice that "--disable-gccmarch-native" is missing from the second one.
The most important thing is to configure/compile with --disable-gccmarch-native on a virtual guest, if you run into the same problem.


by Peter Manev (noreply@blogger.com) at February 08, 2014 04:32 AM

February 05, 2014

Eric Leblond

Suricata and Nftables

Iptables and suricata as IPS

Building a Suricata ruleset with iptables has always been a complicated task when trying to combine the rules necessary for the IPS with the firewall rules. Suricata has always relied on advanced Netfilter features, which allow some more or less tricky methods to be used.

For those not familiar with running an IPS on Netfilter, here are a few starting points:

  1. The IPS receives packets from the kernel via rules using the NFQUEUE target
  2. The IPS must receive all packets of a given flow to be able to handle detection cleanly
  3. The NFQUEUE target is a terminal target: when the IPS verdicts a packet, it is either accepted (and leaves the current chain) or dropped

So the ruleset needs to send all packets to the IPS. A basic ruleset for an IPS could thus look like:

iptables -A FORWARD -j NFQUEUE
With such a ruleset, all packets going through the box are sent to the IPS.

If you now want to combine this with your firewall ruleset, your first try is usually to add rules to the filter chain:

iptables -A FORWARD -j NFQUEUE
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED -j ACCEPT
# your firewall rules here
But this will not work because of point 3: all packets sent via NFQUEUE to the IPS are either blocked or, if accepted, leave the FORWARD chain directly and go on for evaluation to the next chain (mangle POSTROUTING in our case). With such a ruleset, the result is an IPS but no firewall.

As mentioned before, there are some existing solutions (see Building a Suricata ruleset for extensive information). The simplest one is to dedicate another chain, such as mangle, to the IPS:

iptables -A FORWARD -t mangle -j NFQUEUE
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED -j ACCEPT
# your firewall rules here
No conflict here, but you have to be sure nothing in your system will use the mangle table, or you will have the same problem as the one seen previously in the filter chain. So there was no universal and simple solution to implement an IPS and a firewall ruleset with iptables.

IPS the easy way with Nftables

In Nftables, chains are defined by the user with the nft command line tool. The user can specify:

  • The hook: the place in the packet's life where the chain will be set. See this diagram for more info.
    • prerouting: chain will be placed before packets are routed
    • input: chain will receive packets going to the box
    • forward: chain will receive packets routed by the box
    • postrouting: chain will receive packets after routing and before they are sent
    • output: chain will receive packets sent by the host
  • The chain type: defines the purpose of the chain
    • filter: chain will filter packets
    • nat: chain will only contain NAT rules
    • route: chain contains rules that may change the route (previously known as mangle)
  • The priority: defines the evaluation order of the different chains of a given hook. It is an integer that can be freely specified, and it also allows placing a chain before or after some internal operations such as connection tracking.

In our case, we want to act on forwarded packets, and we want a chain for filtering followed by a chain for the IPS. So the chain setup is simple:

nft -i
nft> add table filter
nft> add chain filter firewall { type filter hook forward priority 0;}
nft> add chain filter IPS { type filter hook forward priority 10;}
With this setup, a packet will reach the firewall chain first, where it will be filtered. If the packet is blocked, it will be destroyed inside the kernel. If the packet is accepted, it will then jump to the next chain following the order of increasing priority. In our case, the packet reaches the IPS chain.

Now, that we’ve got our chains we can add filtering rules, for example:
nft add rule filter firewall ct state established accept
nft add rule filter firewall tcp dport ssh counter accept
nft add rule filter firewall tcp dport 443 accept
nft add rule filter firewall counter log drop
And for our Suricata IPS, that’s just trivial:
nft add rule filter IPS queue
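For convenience, the chain creation and the rules above can be collected into one script. A minimal sketch (same table and chain names as in the post; run as root on a kernel with nftables support):

```shell
#!/bin/sh
# Sketch of the complete setup: a firewall chain (priority 0)
# evaluated before an IPS chain (priority 10) on the forward hook.
nft add table filter
nft add chain filter firewall '{ type filter hook forward priority 0; }'
nft add chain filter IPS '{ type filter hook forward priority 10; }'
# Firewall rules: accept established flows, ssh and https; log and drop the rest.
nft add rule filter firewall ct state established accept
nft add rule filter firewall tcp dport ssh counter accept
nft add rule filter firewall tcp dport 443 accept
nft add rule filter firewall counter log drop
# Everything the firewall accepted is handed to Suricata.
nft add rule filter IPS queue
```

Accepted packets leave the firewall chain and, thanks to the higher priority value, are evaluated by the IPS chain next.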

A bit more details

The queue target in nftables

The complete support for the queue target will be available in Linux 3.14. The syntax looks as follows:

nft add rule filter output queue num 3 total 2 options fanout
This rule sends matching packets to 2 load-balanced queues (total 2) starting at 3 (num 3). fanout is one of the two queue options:
  • fanout: When used together with total load balancing, this will use the CPU ID as an index to map packets to the queues. The idea is that you can improve performance if there's a queue per CPU. This requires total to be specified with a value greater than 1.
  • bypass: By default, if no userspace program is listening on a Netfilter queue, then all packets that are to be queued are dropped. When this option is used, the queue rule behaves like ACCEPT instead, and the packet will move on to the next rule.

For a complete description of queueing mechanism in Netfilter see Using NFQUEUE and libnetfilter_queue.

If you want to test this before the Linux 3.14 release, you can get the nft sources from the nftables git and use the next-3.14 branch.

Chain priority

For reference, here are the priority values of some important internal operations and of iptables static chains:

  • NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation
  • NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation
  • NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations
  • NF_IP_PRI_CONNTRACK (-200): Connection tracking operations
  • NF_IP_PRI_MANGLE (-150): mangle operation
  • NF_IP_PRI_NAT_DST (-100): destination NAT
  • NF_IP_PRI_FILTER (0): filtering operation, the filter table
  • NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example
  • NF_IP_PRI_NAT_SRC (100): source NAT
  • NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit
  • NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exit
For example, one can create in nftables an equivalent of the raw PREROUTING chain of iptables by doing:
# nft -i
nft> add chain filter pre_raw { type filter hook prerouting priority -300;}

by Regit at February 05, 2014 09:03 AM

February 03, 2014

suricata-ids.org

New OISF board elected

The OISF community is pleased to announce the results of the election for the open board seats for 2014. This election cycle, we were fortunate to have had a wonderfully diverse and highly talented slate of nominees and would like to extend our thanks to those who participated. OISF depends heavily on its volunteer board of directors who are giving of their time and so deeply committed to advancing the open source technologies and communities within OISF. Please join us in congratulating the OISF 2014 Board of Directors:

  • Dr. Jose Nazario
  • Richard Bejtlich
  • Randy Caldejon
  • Luca Deri
  • Ken Steele
  • Alexandre Dulaunoy

Also on the board are members of the OISF leadership team: Matt Jonkman and Kelley Misata

We would also like to thank outgoing board members, Joel Ebrahimi and Stuart Wilson, for their service and time given to OISF in 2013.

by inliniac at February 03, 2014 03:07 PM

Open Information Security Foundation

New OISF board elected

The OISF community is pleased to announce the results of the election for the open board seats for 2014.  This election cycle, we were fortunate to have had a wonderfully diverse and highly talented slate of nominees and would like to extend our thanks to those who participated.  OISF depends heavily on its volunteer board of directors who are giving of their time and so deeply committed to advancing the open source technologies and communities within OISF.  Please join us in congratulating the OISF 2014 Board of Directors:

  • Dr. Jose Nazario
  • Richard Bejtlich
  • Randy Caldejon
  • Luca Deri
  • Ken Steele
  • Alexandre Dulaunoy


Also on the board are members of the OISF leadership team:  Matt Jonkman and Kelley Misata

We would also like to thank outgoing board members, Joel Ebrahimi and Stuart Wilson, for their service and time given to OISF in 2013.

by Victor Julien (postmaster@inliniac.net) at February 03, 2014 02:53 PM

February 02, 2014

Eric Leblond

Investigation on an attack tool used in China

Log analysis experiment

I’ve been playing lately with logstash using data from the ulogd JSON output plugin and the Suricata full JSON output as well as standard system logs.

Screenshot from 2014-02-02 13:22:34

Ulogd gets Netfilter firewall logs from the Linux kernel and writes them in JSON format. Suricata does the same with alerts and other traces. Logstash collects both, as well as system logs. This allows building dashboards with information coming from multiple sources. If you want to know how to configure ulogd for JSON output, check this post. For Suricata, you can have a look at this one.

The ulogd JSON output is really new, and I was experimenting with it in Kibana. When adding some custom graphs, I observed some strange things and decided to investigate.

Displaying TCP window

The TCP window size at the start of a connection is not defined in the RFC, so every OS has chosen its own default value. It thus looked interesting to display the TCP window to be able to spot strange behavior. With the new ulogd JSON plugin, the window size information is available in the tcp.window key. So, after querying tcp.syn:1 to keep only TCP SYN packets, I was able to graph the TCP window size of SYN packets.

Screenshot from 2014-02-02 13:22:58

Most of the TCP window sizes are well-known and correspond to standard operating systems:

  • 65535 is either Mac OS X or some MS Windows versions.
  • 14600 is used by some Linux systems.
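The same breakdown can be reproduced offline from the ulogd JSON log. A minimal sketch, assuming field names tcp.syn and tcp.window as produced by the ulogd JSON plugin (the inline sample below stands in for a real log file):

```shell
# Count TCP window sizes seen on SYN packets in a ulogd JSON log
# (one JSON object per line). The inline sample is illustrative;
# point the script at your real ulogd output instead.
cat > /tmp/ulogd-sample.json <<'EOF'
{"tcp.syn": 1, "tcp.window": 16384, "src_ip": "198.51.100.1"}
{"tcp.syn": 1, "tcp.window": 65535, "src_ip": "198.51.100.2"}
{"tcp.syn": 0, "tcp.window": 29200, "src_ip": "198.51.100.3"}
{"tcp.syn": 1, "tcp.window": 16384, "src_ip": "198.51.100.4"}
EOF
python3 - /tmp/ulogd-sample.json <<'PYEOF'
import collections, json, sys

counts = collections.Counter()
for line in open(sys.argv[1]):
    ev = json.loads(line)
    if ev.get("tcp.syn") == 1:          # keep SYN packets only
        counts[ev.get("tcp.window")] += 1
for win, n in counts.most_common():
    print(win, n)
PYEOF
```

With the sample above, this prints 16384 first (seen twice), then 65535.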

The first uncommon value is 16384. Graphs are clickable in Kibana, so I was one click away from some interesting information.

The first thing visible on the dashboard, after selecting TCP SYN packets with a window size of 16384, was that this traffic was exclusively ssh scanning:

Screenshot from 2014-02-02 13:58:15

The second was that, according to geoip, all the IPs are Chinese:

Screenshot from 2014-02-02 13:57:19

A SSH scanning software

When looking at the details of the attempt made on my IP, there was something interesting: Screenshot from 2014-02-02 14:04:32

For all hosts, all requests are made from the same source port (6000). This is not possible with a standard ssh client, where the source port is by default chosen by the operating system. So either we have custom software that performs a bind operation to port 6000 at socket creation (this is possible, and one advantage would be to be easily authorized through a firewall, if the country had one), or we have software developed with low-level (raw) sockets for performance reasons, which would allow faster scanning of the internet by skipping the OS TCP connection handling. There are a lot of posts about port 6000 being used as a source port for scanning, but I did not find any really interesting information in them.
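As an aside, fixed-source-port probing like this is easy to reproduce with standard tools; nmap, for instance, has a -g/--source-port option and uses raw sockets for its SYN scan. This is purely an illustration of the technique, not the attackers' actual tool:

```shell
# Illustration only: SYN-scan port 22 of a test network while forcing
# source port 6000, mimicking the observed pattern.
# Requires root (raw sockets); 192.0.2.0/24 is a documentation range.
nmap -sS -g 6000 -p 22 192.0.2.0/24
```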

On the Suricata side, most of the source IPs are referenced in the ET compromised rules: Screenshot from 2014-02-02 13:25:03

Analysing my SSH logs, I did not see any trace of ssh bruteforcing coming from source port 6000. But when selecting an IP, I got traces of brute force from at least one of the IPs: Screenshot from 2014-02-02 14:31:02

These attackers seem to really love the root account. In fact, I did not find a single attempt for a user other than root from the IP addresses using source port 6000.

Getting back to my ulogd dashboard, I displayed more info about the scanning sequence: Screenshot from 2014-02-02 14:34:05 The host scans the box with a raw-socket scanner, then attacks a few minutes later with an SSH bruteforce tool. The bruteforce tool has an initial TCP window size of 65535, which indicates that separate software is used for the scanning. So there must be a queueing mechanism between the scanner and the bruteforce tool; this may explain the delay between the scan and the bruteforce. Regarding the TCP window size, 65535 seems to indicate a Windows host (which is consistent with the TTL value).

Looking at the scanner traffic

Capturing sample traffic did not give much information. The scanner sends a SYN and cleanly sends a reset when it gets the SYN/ACK:

14:27:54.982273 IP (tos 0x0, ttl 103, id 256, offset 0, flags [none], proto TCP (6), length 40)
    218.2.22.118.6000 > 192.168.1.19.22: Flags [S], cksum 0xa525 (correct), seq 9764864, win 16384, length 0
14:27:54.982314 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 44)
    192.168.1.19.22 > 218.2.22.118.6000: Flags [S.], cksum 0xeee2 (correct), seq 2707606274, ack 9764865, win 29200, options [mss 1460], length 0
14:27:55.340992 IP (tos 0x0, ttl 111, id 14032, offset 0, flags [none], proto TCP (6), length 40)
    218.2.22.118.6000 > 192.168.1.19.22: Flags [R], cksum 0xe48c (correct), seq 9764865, win 0, length 0

But it seems the RST packet sent after the SYN/ACK is not well crafted: Screenshot from 2014-02-02 16:07:26

More info on SSH bruteforce tool

Knowing that the behavior was scanning from source port 6000, I focused the Suricata dashboard on one IP to see if I had some more information: Screenshot from 2014-02-02 15:21:58

A single IP in the list of scanning hosts is triggering multiple alerts. The event table confirmed this: Screenshot from 2014-02-02 15:16:41

Studying the geographical distribution of the libssh alert, it appears libssh is also used in countries other than China: Screenshot from 2014-02-02 15:24:59 So libssh is not a discriminating element of these attacks.

Conclusion

A custom attack tool has been deployed on some Chinese IPs: a combination of a raw-socket SSH scanner and an SSH bruteforce tool. It tries to gain access to the root account of systems via the ssh service. On an organisational level, it is possible there is a Chinese initiative going after the low-hanging fruit (systems with a password-protected ssh root account), or maybe it is just some organization using compromised Chinese IPs to try to take control of more boxes.

by Regit at February 02, 2014 03:28 PM

Peter Manev

Suricata IDPS and Common Information Model



Short and to the point.
This patch (shown below) by Eric Leblond, available in the latest git master at the moment of this writing, makes the log data generated by Suricata IDPS CIM compliant for data source integration.

In other words, when using the JSON output for logging in Suricata (available in the current git master and expected to reach maturity in Suricata 2.0), you can use Logstash and Kibana to query, filter and present log data in a way that follows the CIM.

The patch's info:
commit 7a9efd74e4d88e39c6671f6a0dda28ac931ffe10
Author: Eric Leblond <eric@regit.org>
Date:   Thu Jan 30 23:33:45 2014 +0100

    json: sync key name with CIM
   
    This patch is synchronizing key name with Common Information Model.
    It updates key name following what is proposed in:
     http://docs.splunk.com/Documentation/PCI/2.0/DataSource/CommonInformationModelFieldReference
    The interest of these modifications is that using the same key name
    as other software will provide an easy to correlate and improve
    data. For example, geoip setting in logstash can be applied on
    all src_ip fields allowing geoip tagging of data.

How? You could try reading the following:
https://home.regit.org/2014/01/a-bit-of-logstash-cooking/

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output

by Peter Manev (noreply@blogger.com) at February 02, 2014 02:41 AM

January 21, 2014

Security Onion

Snort 2.9.5.6 and Suricata 1.4.7 packages now available!

The following software was recently released:

Snort 2.9.5.6
http://blog.snort.org/2013/11/snort-2956-is-now-available-on-snortorg.html

Suricata 1.4.7
http://www.openinfosecfoundation.org/index.php/component/content/article/1-latest-news/184--suricata-147-released

I've packaged these new releases and the new packages have been tested by JP Bourget and David Zawdie.  Thanks, guys!

Upgrading
The new packages are now available in our stable repo.  Please see our Upgrade page for full upgrade instructions:
https://code.google.com/p/security-onion/wiki/Upgrade

These updates will do the following:


  • back up each of your existing snort.conf files to snort.conf.bak
  • update Snort
  • back up each of your existing suricata.yaml files to suricata.yaml.bak
  • update Suricata


You'll then need to do the following:


  • apply your local customizations to the new snort.conf or suricata.yaml files
  • update ruleset and restart Snort/Suricata as follows:
    sudo rule-update

Release Notes
Snort is now compiled with --enable-sourcefire.

Screenshots
"sudo soup" upgrade process
Snort 2.9.5.6 and Suricata 1.4.7

Updating ruleset and restarting Snort/Suricata using "sudo rule-update"
Feedback
If you have any questions or problems, please use our mailing list:
https://code.google.com/p/security-onion/wiki/MailingLists

Help Wanted
If you and/or your organization have found value in Security Onion, please consider giving back to the community by joining one of our teams:
https://code.google.com/p/security-onion/wiki/TeamMembers

We especially need help in answering support questions on the mailing list and IRC channel.  Thanks!

by Doug Burks (noreply@blogger.com) at January 21, 2014 09:12 AM

January 16, 2014

suricata-ids.org

OISF Needs Your Vote! 2014 Board of Directors

OISF_tmThe Open Information Security Foundation (OISF) is conducting its annual online elections to fill 6 open positions on the OISF board of directors. Board members serve a one year term, therefore, current board members along with new nominees are included on this year’s ballot. For 2014, the OISF board will consist of 8 board members in total: 6 elected directors, President of OISF Matt Jonkman and Board Secretary and OISF Director of Outreach, Kelley Misata.

Each nominee has provided a brief summary highlighting their industry experience and their passion for OISF; please take a minute to read about each of our distinguished nominees and to cast your votes NOW!

Simply follow this link: https://www.surveymonkey.com/s/WFZRJSW

Polls will close Wednesday, January 22, 2014 with the 2014 OISF Board announced on Thursday, January 23, 2014.

Questions regarding elections can be sent to info@openinfosecfoundation.org.

Thank you,
The OISF Team

by inliniac at January 16, 2014 09:51 AM

Open Information Security Foundation

OISF Needs Your Vote! 2014 Board of Directors

The Open Information Security Foundation (OISF) is conducting its annual online elections to fill 6 open positions on the OISF board of directors.  Board members serve a one year term, therefore, current board members along with new nominees are included on this year's ballot.  For 2014, the OISF board will consist of 8 board members in total:  6 elected directors, President of OISF Matt Jonkman and Board Secretary and OISF Director of Outreach, Kelley Misata.

Each nominee has provided a brief summary highlighting their industry experience and their passion for OISF; please take a minute to read about each of our distinguished nominees and to cast your votes NOW!

Simply follow this link:  https://www.surveymonkey.com/s/WFZRJSW

Polls will close Wednesday, January 22, 2014 with the 2014 OISF Board announced on Thursday, January 23, 2014.

Questions regarding elections can be sent to info@openinfosecfoundation.org.

Thank you,
The OISF Team

by Victor Julien (postmaster@inliniac.net) at January 16, 2014 09:45 AM

January 13, 2014

Anoop Saldanha

Suricata app layer changes. New keyword - app-layer-protocol introduced

Suricata current master has undergone some major rewrite of its app layer code. This includes the app layer protocol detection and the app layer parsing phase. While doing this, it has also introduced a new keyword - "app-layer-protocol". There are also changes to how we can now specify an app layer protocol in a rule and how it interacts with the ip protocol (whether specified in the rule header or through the ipproto keyword).

App layer rewrite

Let me start by introducing the app layer rewrite. The old app layer code had the protocol detection and parsing phases all jumbled up; there was no proper separation between the two. Also, the internal app layer protocol registration didn't have a hierarchy based on ip protocol, which meant one couldn't register anything against an app layer protocol without modifying the protocol name by appending the ipproto to it.

For example, to specify a signature to match on the udp and tcp variant of dns protocol respectively, one would have to write -

    alert dnsudp ......  /* to match on dns udp */
    alert dnstcp .....   /* to match on dns tcp */

This is now replaced by the cleaner -

     alert dns (ipproto:udp; ); OR 
     alert udp (app-layer-protocol:dns;) OR 
     alert ip (app-layer-protocol:dns; ipproto:udp;)  

New keyword: "app-layer-protocol"

This feature came up with the need to match on negated protocols (feature #727). As an example, we want to match the string "foo" on all app layer streams which are not http -

alert tcp any any -> any any (app-layer-protocol:!"http"; content:"foo"; sid:1;) 

Interaction between app-layer-protocol, ipproto and the protocol specified using alert <protocol>

Let's work with some examples.

- Match on dns protocol against all ip-protocols.

  alert ip (app-layer-protocol:dns;) 
  alert dns ()

- Match on udp version of dns protocol.

  alert udp (app-layer-protocol:dns;) 
  alert dns (ipproto:udp;) 
  alert ip (app-layer-protocol:dns; ipproto:udp;)

- Match on tcp and udp version of dns protocol.

  alert udp (app-layer-protocol:dns; ipproto:tcp; ) /* XXX Nooooooo...... */
  The above is not allowed.  ipproto keyword can be used only with alert ip.

  alert dns (ipproto:tcp; ipproto:udp;)
  alert ip (app-layer-protocol:dns; ipproto:udp; ipproto:tcp;)

What do all these changes mean for the engine

- We have a neater app layer phase. There's a clear separation between the app layer protocol module and app layer parser module.

- Ipproto-based hierarchy in any registration related to an app layer protocol. From a rule-writer/user perspective, this removes the need to tag an ipproto along with the app protocol name. For example, we no longer have dnstcp and dnsudp.

- Introduction of the new "app-layer-protocol" keyword allows for richer specification when used along with other keywords like "ipproto".

- Conf yaml changes

  Previously,
  dnstcp:
      enabled: yes
      detection-ports:
          tcp: toserver: 53
  dnsudp:
      enabled: yes
      detection-ports:
          udp: toserver: 53

  Now,
  dns:
      tcp:
          enabled: yes
          detection-ports:
              toserver: 53
      udp:
          enabled: yes
          detection-ports:
              toserver: 53

by poona (noreply@blogger.com) at January 13, 2014 02:59 AM

January 11, 2014

Peter Manev

Git - merging branches, rebasing and things..




I was wondering... how (if there is a way) can I merge the latest
Suricata dev git master with a pull request branch that has not yet been
merged into git master?

example:
I want to git clone the latest Suricata git master... and then apply to it
Tom Decanio's github branch for ALL JSON output -
(git clone https://github.com/decanio/suricata-np.git -b dev-np-work1.3 )

What is the best way to do that?

....
well (with some invaluable help from Regit):


 git clone git://phalanx.openinfosecfoundation.org/oisf.git
 cd oisf/
 git remote add decanio https://github.com/decanio/suricata-np.git
 git fetch decanio
 git checkout -b my-dev-np-work1.3 decanio/dev-np-work1.3
 git fetch origin
 git rebase origin/master

you're done (if there are no errors during the re-basing phase :) ) !



by Peter Manev (noreply@blogger.com) at January 11, 2014 10:08 AM

January 03, 2014

Peter Manev

Suricata cocktails (handy one-liners)




Some of my favorite cocktails (one-liners) :)

Suricata cocktails with git master. Tested on Ubuntu and Debian.
You can just copy/paste.

Before you start, make sure you have the below packages installed.

General packages needed:
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libcap-ng-dev libcap-ng0 \
make flex bison git git-core subversion libmagic-dev


For MD5 support(file extraction):
apt-get install libnss3-dev libnspr4-dev


For GeoIP:
apt-get install libgeoip1 libgeoip-dev


For the first three (3) cocktails/recipes you would need:
  1. PF_RING as explained HERE 
  2. luajit as explained HERE
and use Suricata's git master - latest dev edition.


Cocktail 1

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5

In case you get the pfring err:
    checking for pfring_open in -lpfring... no

       ERROR! --enable-pfring was passed but the library was not found
       or version is >4, go get it
       from http://www.ntop.org/PF_RING.html

The "LIBS=-lrt" in front of "./configure" below addresses that problem (error message above)


git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && LIBS=-lrt ./configure  --enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig


Cocktail 2

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5
  5. Debugging

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && CFLAGS="-O0 -ggdb"  \
./configure  \
--enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig


Cocktail 3

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5


git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure  \
--enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig

Cocktail 4

Suricata - latest dev edition plus enabled:
  1. GeoIP
  2. Filemagic/MD5

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure --enable-geoip \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig

Cocktail 5

Suricata - latest dev edition plus enabled:
  1. GeoIP

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure --enable-geoip \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig






Cocktail 6

Suricata - latest dev edition plus enabled:
  1. Filemagic/MD5

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig



Cocktail 7

Suricata - latest dev edition - default

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure \
&& sudo make clean \
&& sudo make \
&& sudo make install \
&& sudo ldconfig



Issue - suricata --build-info - to verify after compiling and installing.
You can tweak it any way you want - depending on library locations and features enabled in Suricata.




by Peter Manev (noreply@blogger.com) at January 03, 2014 09:51 AM

December 30, 2013

Peter Manev

Playing with segfaults , core dumps and such.

How to determine when (time-wise) you had a segfault with/from any tool, software or program:

dmesg | gawk -v uptime=$( grep btime /proc/stat | cut -d ' ' -f 2 ) '/^[[ 0-9.]*]/ { print strftime("[%Y/%m/%d %H:%M:%S]", substr($0,2,index($0,".")-2)+uptime) substr($0,index($0,"]")+1) }' |grep segf

Like so:
root@suricata:/root# dmesg | gawk -v uptime=$( grep btime /proc/stat | cut -d ' ' -f 2 ) '/^[[ 0-9.]*]/ { print strftime("[%Y/%m/%d %H:%M:%S]", substr($0,2,index($0,".")-2)+uptime) substr($0,index($0,"]")+1) }' |grep segf
[2013/12/19 02:21:49] AFPacketeth38[8874]: segfault at 17c ip 00007f1831f919a0 sp 00007f181c85b5d0 error 4 in libhtp-0.5.8.so.1.0.0[7f1831f83000+1d000]
root@suricata:/root#


The command above gives you the exact time when Suricata segfaulted -
[2013/12/19 02:21:49] AFPacketeth38[8874]: segfault...
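What the gawk one-liner does, step by step: dmesg timestamps are seconds since boot, and the btime field in /proc/stat is the boot time as a Unix epoch, so adding the two and formatting the sum gives wall-clock time. A sketch for a single timestamp (assumes GNU date):

```shell
# Convert one dmesg timestamp (seconds since boot) to wall-clock time.
secs_since_boot=12345.678                         # value from the dmesg "[ ... ]" prefix
btime=$(grep btime /proc/stat | cut -d ' ' -f 2)  # boot time as a Unix epoch
date -d "@$(( btime + ${secs_since_boot%.*} ))" '+%Y/%m/%d %H:%M:%S'
```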


You could also speed up things :)
[force Suricata to core dump/crash (or many other software products) ...part of my job description :) ]
If this is what you want -

1) Start Suricata.
2) Kill it with an abort signal.
sudo kill -n ABRT `pidof suricata`


Suricata will now abort and dump core. Then issue the following command:
gdb /usr/bin/suricata /var/data/peter/crashes/suricata/core

/usr/bin/suricata - the location of the suricata binary (if unsure, run: which suricata)
/var/data/peter/crashes/suricata/core - this is the location/name of the core file


The location of the core dump file could be specified in suricata.yaml:
# Daemon working directory
# Suricata will change directory to this one if provided
# Default: "/"
daemon-directory: "/var/data/peter/crashes/suricata"


Once in  gdb:
thread apply all bt


NOTE: To get useful information out of the core dump file, you should compile Suricata with debug CFLAGS, like so:
CFLAGS="-O0 -ggdb"  ./configure

instead of just
./configure
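Putting the pieces together, a full debug cycle might look like the sketch below. It assumes the binary lives at /usr/bin/suricata and the core lands in the daemon-directory configured above; adjust both paths to your setup.

```shell
# Rebuild with debug symbols and no optimization, then reinstall
CFLAGS="-O0 -ggdb" ./configure && make && sudo make install

# With Suricata running, force it to abort and dump core
sudo kill -s ABRT "$(pidof suricata)"

# Print a backtrace of all threads non-interactively and exit
gdb --batch -ex 'thread apply all bt' \
    /usr/bin/suricata /var/data/peter/crashes/suricata/core
```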



by Peter Manev (noreply@blogger.com) at December 30, 2013 02:16 AM

Suricata - setting up flows





So looking at the suricata.log file (after starting suricata):


root@suricata:/var/data/log/suricata# more suricata.log                                                       
 [1372] 17/12/2013 -- 17:47:35 - (suricata.c:962) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev e7f6107)
[1372] 17/12/2013 -- 17:47:35 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[1372] 17/12/2013 -- 17:47:35 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[1372] 17/12/2013 -- 17:47:35 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[1373] 17/12/2013 -- 17:47:35 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 229106864
[1373] 17/12/2013 -- 17:47:35 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[1373] 17/12/2013 -- 17:47:35 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[1373] 17/12/2013 -- 17:47:35 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

[1373] 17/12/2013 -- 17:47:37 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 3310632960 bytes, maximum: 6442450944
[1373] 17/12/2013 -- 17:47:37 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[1373] 17/12/2013 -- 17:47:37 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[1373] 17/12/2013 -- 17:47:37 - (suricata.c:1769) <Info> (SetupDelayedDetect) -- Delayed detect disabled



We see:

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

-> This is approximately 3GB of RAM.
How did we get to this number? I have custom-defined it in suricata.yaml under the flow section:
  hash-size: 15728640
  prealloc: 8000000

So we need to sum up->
15728640 x 64 ("15728640 buckets of size 64" = 1006632960 bytes)
+
8000000 x 280 ("preallocated 8000000 flows of size 280" = 2240000000 bytes )
=
total of 3246632960 bytes which is 3096.23MB

(15728640 x 64) + (8000000 x 280) =  3246632960 bytes
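As a sanity check, the same arithmetic can be done in the shell, using the bucket and flow sizes reported in the log output above:

```shell
# Flow hash: 15728640 buckets of 64 bytes; prealloc: 8000000 flows of 280 bytes
hash_size=15728640
bucket_bytes=64
prealloc=8000000
flow_bytes=280

total=$(( hash_size * bucket_bytes + prealloc * flow_bytes ))
echo "$total bytes"                    # 3246632960 bytes
echo "$(( total / 1024 / 1024 )) MB"   # 3096 MB
```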


That would define our flow memcap value in suricata.yaml.
So this would work like this ->

flow:
  memcap: 4gb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


This would work as well ->

flow:
  memcap: 3200mb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


That's it :)

by Peter Manev (noreply@blogger.com) at December 30, 2013 02:14 AM

December 21, 2013

Victor Julien

Suricata Development Update

With the holidays approaching and the 1.4.7 and 2.0beta2 releases out, I thought it was a good moment for some reflection on how development is going.

I feel things are going very well. It’s great to work with a group that approaches this project from different angles. OISF has budget to have people work on overall features, quality and support. Next to that, our consortium supporters help develop the project: Tilera’s Ken Steele is working on the Tile hardware support, doing lots of optimizations. Many of these benefit performance and overall quality for the whole project. Tom Decanio of Npulse is doing great work on the output side, unifying the outputs to be machine readable. Jason Ish of Emulex/Endace is helping out with the configuration API, defrag, etc. Others, both from the larger community and our consortium, are helping as well.

QA

At our last meetup in Luxembourg, we’ve spent quite a bit of time discussing how we can improve the quality of Suricata. Since then, we’ve been working hard to add better and more regression and quality testing.

We’ve been using a Buildbot setup for some time now, where on a number of platforms we do basic build testing. First, this was done only against the git master(s). Eric then created a new method using a script called prscript. Its purpose is to push a git branch to our buildbot _before_ it’s even considered for inclusion.

Recently, with cooperation of Emerging Threats, we’ve been extending this setup to include a large set of rule+pcap matches that are checked against each commit. This too is part of the pre-include QA process.

There are many more plans to extend this setup further. I’ve set up a private buildbot instance to serve as a staging area. Things we’ll be adding soon:
- valgrind testing
- DrMemory testing
- clang/scan-build
- cppcheck

Ideally, each of those tools would report 0 issues, but that’s hard in practice. Sometimes there are false positives. Most tools support some form of suppression, so one of the tasks is to create those.

We’ve spent some time updating our documents on contributing to the code base. Please take a moment to read the general contribution page, aimed at devs new to the project.

Next to this, this document describes quality requirements for our code, commits and pull requests.

Suricata 2.0

Our roadmap shows a late January 2.0 final release. It might slip a little bit, as we have a few larger changes to make:
- a logging API rewrite is in progress
- “united” output, an all JSON log method written by Tom Decanio of Npulse [5]
- app-layer API cleanup and update that Anoop is working on [6]

Wrapping up, I think 2013 was a very good year for Suricata. 2014 will hopefully be even better. We will be announcing some new support soon, are improving our training curriculum and will just be working hard to make Suricata better.

But first, the holidays. Cheers!


by inliniac at December 21, 2013 11:53 AM

Peter Manev

Suri 2.0beta2 very informative - when you need it

With the release of Suricata 2.0beta2, one can notice a few of the many changes right away.

 root@LTS-64-1:~# suricata -c /etc/suricata/suricata.yaml -i eth0 -v
19/12/2013 -- 08:57:48 - <Notice> - This is Suricata version 2.0beta2 RELEASE
19/12/2013 -- 08:57:48 - <Info> - CPUs/cores online: 2
19/12/2013 -- 08:57:48 - <Info> - 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'apache' server has 'request-body-minimal-inspect-size' set to 34116 and 'request-body-inspect-window' set to 3973 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'apache' server has 'response-body-minimal-inspect-size' set to 32229 and 'response-body-inspect-window' set to 4205 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'iis7' server has 'request-body-minimal-inspect-size' set to 32040 and 'request-body-inspect-window' set to 4118 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'iis7' server has 'response-body-minimal-inspect-size' set to 32694 and 'response-body-inspect-window' set to 4148 after randomization.
19/12/2013 -- 08:57:48 - <Info> - DNS request flood protection level: 500
19/12/2013 -- 08:57:48 - <Info> - Found an MTU of 1500 for 'eth0'
19/12/2013 -- 08:57:48 - <Info> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
19/12/2013 -- 08:57:48 - <Info> - preallocated 65535 defrag trackers of size 152
19/12/2013 -- 08:57:48 - <Info> - defrag memory usage: 13631336 bytes, maximum: 33554432
19/12/2013 -- 08:57:48 - <Info> - AutoFP mode using default "Active Packets" flow load balancer
19/12/2013 -- 08:57:48 - <Info> - preallocated 1024 packets. Total memory 3567616
19/12/2013 -- 08:57:48 - <Info> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
19/12/2013 -- 08:57:48 - <Info> - preallocated 1000 hosts of size 112
19/12/2013 -- 08:57:48 - <Info> - host memory usage: 390144 bytes, maximum: 16777216
19/12/2013 -- 08:57:48 - <Info> - allocated 4194304 bytes of memory for the flow hash... 65536 buckets of size 64
19/12/2013 -- 08:57:48 - <Info> - preallocated 10000 flows of size 280
19/12/2013 -- 08:57:48 - <Info> - flow memory usage: 7074304 bytes, maximum: 134217728
19/12/2013 -- 08:57:48 - <Info> - IP reputation disabled
19/12/2013 -- 08:57:48 - <Info> - using magic-file /usr/share/file/magic
19/12/2013 -- 08:57:48 - <Info> - Delayed detect disabled
19/12/2013 -- 08:57:53 - <Info> - 48 rule files processed. 14045 rules successfully loaded, 0 rules failed
19/12/2013 -- 08:57:53 - <Info> - 14053 signatures processed. 1136 are IP-only rules, 4310 are inspecting packet payload, 10513 inspect application layer, 72 are decoder event only
19/12/2013 -- 08:57:53 - <Info> - building signature grouping structure, stage 1: preprocessing rules... complete
19/12/2013 -- 08:57:54 - <Info> - building signature grouping structure, stage 2: building source address list... complete
19/12/2013 -- 08:58:00 - <Info> - building signature grouping structure, stage 3: building destination address lists... complete
19/12/2013 -- 08:58:03 - <Info> - Threshold config parsed: 0 rule(s) found
19/12/2013 -- 08:58:03 - <Info> - Core dump size set to unlimited.
19/12/2013 -- 08:58:03 - <Info> - fast output device (regular) initialized: fast.log
19/12/2013 -- 08:58:03 - <Info> - http-log output device (regular) initialized: http.log
19/12/2013 -- 08:58:03 - <Info> - dns-log output device (regular) initialized: dns.log
19/12/2013 -- 08:58:03 - <Info> - file-log output device (regular) initialized: files-json.log
19/12/2013 -- 08:58:03 - <Info> - forcing magic lookup for logged files
19/12/2013 -- 08:58:03 - <Info> - forcing md5 calculation for logged files
19/12/2013 -- 08:58:03 - <Info> - Using 1 live device(s).
19/12/2013 -- 08:58:03 - <Info> - using interface eth0
19/12/2013 -- 08:58:03 - <Info> - Running in 'auto' checksum mode. Detection of interface state will require 1000 packets.
19/12/2013 -- 08:58:03 - <Info> - Found an MTU of 1500 for 'eth0'
19/12/2013 -- 08:58:03 - <Info> - Set snaplen to 1516 for 'eth0'
19/12/2013 -- 08:58:03 - <Info> - Generic Receive Offload is set on eth0
19/12/2013 -- 08:58:03 - <Info> - Large Receive Offload is unset on eth0
19/12/2013 -- 08:58:03 - <Warning> - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.
19/12/2013 -- 08:58:03 - <Info> - RunModeIdsPcapAutoFp initialised
19/12/2013 -- 08:58:03 - <Info> - stream "prealloc-sessions": 2048 (per thread)
19/12/2013 -- 08:58:03 - <Info> - stream "memcap": 536870912
19/12/2013 -- 08:58:03 - <Info> - stream "midstream" session pickups: disabled
19/12/2013 -- 08:58:03 - <Info> - stream "async-oneside": disabled
19/12/2013 -- 08:58:03 - <Info> - stream "checksum-validation": disabled
19/12/2013 -- 08:58:03 - <Info> - stream."inline": disabled
19/12/2013 -- 08:58:03 - <Info> - stream "max-synack-queued": 5
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "memcap": 1073741824
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "depth": 8388608
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "toserver-chunk-size": 2447
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "toclient-chunk-size": 2489
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly.raw: enabled
19/12/2013 -- 08:58:03 - <Notice> - all 4 packet processing threads, 3 management threads initialized, engine started.
19/12/2013 -- 08:58:32 - <Info> - No packets with invalid checksum, assuming checksum offloading is NOT used




Note 1)
19/12/2013 -- 08:58:03 - <Info> - Generic Receive Offload is set on eth0
19/12/2013 -- 08:58:03 - <Info> - Large Receive Offload is unset on eth0
19/12/2013 -- 08:58:03 - <Warning> - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.

Note 2 ...after some packets)
19/12/2013 -- 08:58:32 - <Info> - No packets with invalid checksum, assuming checksum offloading is NOT used

So for Note 1), we check our interface using ethtool (if you do not have it, run
apt-get install ethtool on Ubuntu/Debian-like systems):

root@LTS-64-1:~# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: off
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: off
root@LTS-64-1:~#

We see that :
generic-receive-offload: on
large-receive-offload: off

exactly as Suricata reports.
(
do not forget to run it with the -v option ! :
suricata -c /etc/suricata/suricata.yaml -i eth0 -v
)
Do not forget: all offloading and checksumming features should be OFF (disabled) on the network interface, so that Suricata correctly processes all the traffic!
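A quick way to disable them with ethtool is sketched below; eth0 and the feature list are examples, and the supported flags differ per NIC driver and ethtool version.

```shell
# Turn off checksum and segmentation/receive offloads on eth0;
# feature names and availability vary per driver and ethtool version
ethtool -K eth0 rx off tx off sg off tso off gso off gro off lro off
```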
 

by Peter Manev (noreply@blogger.com) at December 21, 2013 01:01 AM

December 20, 2013

suricata-ids.org

Suricata Ubuntu PPA updated to 2.0beta2

We have updated the official Ubuntu PPA to Suricata 2.0beta2. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at December 20, 2013 04:23 PM

Call for Nominations – OISF 2014 Board of Directors

The Open Information Security Foundation is preparing to hold the annual Board of Directors elections and is putting out a call for nominations.  We are anticipating 2014 to be one of the most important and exciting years for OISF and Suricata.  Therefore, we are looking for candidates in the security and open source community willing to serve as advisors on our board of directors.  The call for nominations begins today and runs until January 14, 2014.
 
Please consider joining our Board of Directors or nominating someone else who would be a great asset to the foundation.  Online elections will begin January 15, 2014.
But you may be asking yourself a few important questions:
 
1. As a board member what will I be asked to do?
The OISF Board of Directors meets quarterly to review foundation activities, upcoming events, financial status and strategic objectives.  Meetings are held via conference call and pre-scheduled to respect the busy schedules of our board members.  Board members are also expected to actively provide expertise, advice and professional connections necessary to help OISF make great strides both technologically and financially.
 
2. How large is the OISF board?
The 2013 OISF board currently consists of 5 members – Joel Ebrahimi, Jose Nazario, Richard Bejtlich, Matt Jonkman and Kelley Misata.  It is a 1-year term; therefore, all board members are up for re-election each calendar year.  For 2014 we are adding 2 additional seats – expanding to 7 members.  This will allow all board members to have a voice on the strategic direction of OISF over the course of this very exciting year.
 
3. What is in it for me if I become an OISF board member?

As a board member you will have the opportunity to steer an innovative and cutting edge open source technology, to be an integral part of the decision making process for OISF and have a beneficiary priority status in all OISF and Suricata related public or private events.  Board members will also be publicly thanked on the OISF website with professional details on the contacts pages.  

 
4. I’m interested in nominating myself or someone I know – how do I do it?
It’s simple - submit your name, employer and a brief statement outlining your experience and reasons for running to be on the OISF board to info@openinfosecfoundation.org by 5 pm EST Tuesday, January 14, 2014.  Please note, the information provided in the nomination will be included on the election ballots so please be brief.
 
Elections will begin Wednesday, January 15th and conclude on Monday, January 24th.  The 2014 OISF Board Members will then be announced on Tuesday, January 25th.
 
If you have any questions please do not hesitate to reach out to us directly at info@openinfosecurityfoundation.org OR reply to list to start a conversation with the community about this process.

by inliniac at December 20, 2013 04:21 PM


suricata-ids.org

Suricata 2.0beta2 Windows Installer Available

The Windows MSI installer of the Suricata 2.0beta2 release is now available.

Download it here: suricata-2.0beta2-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at December 20, 2013 04:13 PM

December 18, 2013

Open Information Security Foundation

Suricata 2.0beta2 Available!

The OISF development team is proud to announce Suricata 2.0beta2.  This big update is the second beta release for the upcoming 2.0 version.

Some notable improvements are:

- This release overhauls the protocol detection feature. It now considers both sides of the connection, and will raise events on mismatches.
- The DNS parser and logger were much improved.
- Tilera support was greatly improved.
- Lots of performance and code quality improvements.

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0beta2.tar.gz

New features

  • Feature #234: add option disable/enable individual app layer protocol inspection modules
  • Feature #417: ip fragmentation time out feature in yaml
  • Feature #478: XFF (X-Forwarded-For) support in Unified2
  • Feature #602: availability for http.log output – identical to apache log format
  • Feature #751: Add invalid packet counter
  • Feature #813: VLAN flow support
  • Feature #901: VLAN defrag support
  • Feature #878: add storage api
  • Feature #944: detect nic offloading
  • Feature #956: Implement IPv6 reject
  • Feature #983: Provide rule support for specifying icmpv4 and icmpv6
  • Feature #1008: Optionally have http_uri buffer start with uri path for use in proxied environments
  • Feature #1009: Yaml file inclusion support
  • Feature #1032: profiling: per keyword stats

Improvements and Fixes

  • Bug #463: Suricata not fire on http reply detect if request are not http
  • Feature #986: set htp request and response size limits
  • Bug #895: response: rst packet bug
  • Feature #940: randomize http body chunks sizes
  • Feature #904: store tx id when generating an alert
  • Feature #752: Improve checksum detection algorithm
  • Feature #746: Decoding API modification
  • Optimization #1018: clean up counters api
  • Bug #907: icmp_seq and icmp_id keywords broken with icmpv6 traffic
  • Bug #967: threshold rule clobbers suppress rules
  • Bug #968: unified2 not logging tagged packets
  • Bug #995: tag keyword: tagging sessions per time is broken

Many more issues were fixed, please see: https://redmine.openinfosecfoundation.org/versions/51

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Duarte Silva
  • Giuseppe Longo
  • Ignacio Sanchez
  • Nelson Escobar — Myricom
  • Chris Wakelin
  • Emerging Threats
  • Coverity
  • Alessandro Guido
  • Amin Latifi
  • Darrell Enns
  • Ignacio Sanchez
  • Mark Ashley
  • Paolo Dangeli
  • rmkml
  • Will Metcalf

Known issues & missing features

In a beta release like this, things may not be fully polished yet, so please handle with care. That said, if you encounter issues, please let us know!

As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at December 18, 2013 02:37 PM


December 17, 2013

suricata-ids.org

Suricata 1.4.7 Windows Installer Available

The Windows MSI installer of the Suricata 1.4.7 release is now available. The installer is now signed.

Download it here: Suricata-1.4.7-1-32bit.msi

After downloading, double click the file to launch the installer.

If you have a previous version installed, please remove that first.

by fleurixx at December 17, 2013 01:29 PM

Suricata Ubuntu PPA updated to 1.4.7

We have updated the official Ubuntu PPA to Suricata 1.4.7. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at December 17, 2013 11:54 AM

December 16, 2013

Open Information Security Foundation

Suricata 1.4.7 released!

The OISF development team is pleased to announce Suricata 1.4.7. This is a small update over the 1.4.6 release.

Get the new release here: suricata-1.4.7.tar.gz

Fixes

  • Bug #996: tag keyword: tagging sessions per time is broken
  • Bug #1000: delayed detect inits thresholds before de_ctx
  • Bug #1001: ip_rep loading problem with multiple values for a single ip
  • Bug #1022: StreamTcpPseudoPacketSetupHeader : port swap logic isn’t consistent
  • Bug #1047: detect-engine.profile – custom value parsing broken
  • Bug #1063: rule ordering with multiple vars

Special thanks

  • Duane Howard
  • Mark Ashley
  • Amin Latifi

Known issues & missing features

As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at December 16, 2013 11:06 AM


December 14, 2013

Peter Manev

Suricata - per host/network fragmentation timeouts


Suricata's IP fragmentation timeout values are configurable on a per-network/host basis in suricata.yaml.

Some quick but time-consuming research shows that the fragmentation timeout differs between operating systems. It does not matter whether the system is 32 or 64 bit, but it does matter whether the traffic is IPv4 or IPv6. On most Linux/Unix systems the IPv4 value can be found under /proc/sys/net/ipv4/ipfrag_time (or using sysctl -a): it is the amount of time a fragment is kept in memory, after which it is discarded.
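As a quick sketch of checking these values on a Linux host (the IPv6 key name is assumed here to be net.ipv6.ip6frag_time, as on recent kernels; it may differ on yours):

```shell
# IPv4 fragment reassembly timeout, in seconds
cat /proc/sys/net/ipv4/ipfrag_time

# The same values via sysctl; errors are silenced in case a key is absent
sysctl net.ipv4.ipfrag_time 2>/dev/null
sysctl net.ipv6.ip6frag_time 2>/dev/null
```
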

All (default) values are in seconds:
IPv4:
Suse - 20
CentOS - 30
Ubuntu - 30
Debian - 30
Fedora - 30
Windows (all) - hardcoded, cannot be changed - 60

IPv6:
Suse - 60
CentOS - 60
Ubuntu - 60
Debian - 60
Fedora - 60
Windows (all) - hardcoded, cannot be changed - 60

There are other IP fragmentation settings that differ between operating systems as well, and on a given host a value may deviate from the default because of network, OS or application specific tuning.

However, for those hosts and networks where you know for sure what the timeouts are in seconds, you can set the defrag timeout values in the suricata.yaml section accordingly. That way Suricata will inspect IP fragments with the same timeouts as the receiving hosts.


Set up defrag timeouts on a per network/host basis:
# Enable defrag per host settings
  host-config:

    - dmz:
        timeout: 30
        address: [192.168.1.0/24, 127.0.0.0/8, 1.1.1.0/24, 2.2.2.0/24, "1.1.1.1", "2.2.2.2", "::1"]

    - lan:
        timeout: 45
        address:
          - 192.168.0.0/24
          - 192.168.10.0/24
          - 172.16.14.0/24
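For context, this host-config block sits inside the defrag: section of suricata.yaml. A minimal sketch with the surrounding settings (the values shown are the shipped defaults, not requirements):

```
defrag:
  memcap: 32mb
  hash-size: 65536
  trackers: 65535  # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep
  prealloc: yes
  timeout: 60      # global default, used when no host-config entry matches

  # Enable defrag per host settings
  host-config:
    - lan:
        timeout: 30
        address: [192.168.0.0/24]
```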





by Peter Manev (noreply@blogger.com) at December 14, 2013 07:30 AM

November 18, 2013

Eric Leblond

Using linux perf tools for Suricata performance analysis

Introduction

Perf is a great tool to analyse performances on Linux boxes. For example, perf top will give you this type of output on a box running Suricata on a high speed network:

Events: 32K cycles                                                                                                                                                                                                                            
 28.41%  suricata            [.] SCACSearch
 19.86%  libc-2.15.so        [.] tolower
 17.83%  suricata            [.] SigMatchSignaturesBuildMatchArray
  6.11%  suricata            [.] SigMatchSignaturesBuildMatchArrayAddSignature
  2.06%  suricata            [.] tolower@plt
  1.70%  libpthread-2.15.so  [.] pthread_mutex_trylock
  1.17%  suricata            [.] StreamTcpGetFlowState
  1.10%  libc-2.15.so        [.] __memcpy_ssse3_back
  0.90%  libpthread-2.15.so  [.] pthread_mutex_lock

The functions are sorted by CPU consumption. Using the arrow keys it is possible to jump into the annotated code to see where most CPU cycles are used.

This is really useful but in the case of a function like pthread_mutex_trylock, the interesting part is to be able to find where this function is called.

Getting function call graph in perf

This Stack Overflow question led me to the solution.

I started by building Suricata with the -fno-omit-frame-pointer option:

./configure --enable-pfring --enable-luajit CFLAGS="-fno-omit-frame-pointer"
make
make install

Once suricata was restarted (with pid being 9366), I was then able to record the data:

sudo perf record -a --call-graph -p 9366

Extracting the call graph was then possible by running:

sudo perf report --call-graph --stdio

The result is a huge, detailed report. For example, here’s the part on pthread_mutex_lock:
     0.94%  Suricata-Main  libpthread-2.15.so     [.] pthread_mutex_lock
            |
            --- pthread_mutex_lock
               |
               |--48.69%-- FlowHandlePacket
               |          |
               |          |--53.04%-- DecodeUDP
               |          |          |
               |          |          |--95.84%-- DecodeIPV4
               |          |          |          |
               |          |          |          |--99.97%-- DecodeVLAN
               |          |          |          |          DecodeEthernet
               |          |          |          |          DecodePfring
               |          |          |          |          TmThreadsSlotVarRun
               |          |          |          |          TmThreadsSlotProcessPkt
               |          |          |          |          ReceivePfringLoop
               |          |          |          |          TmThreadsSlotPktAcqLoop
               |          |          |          |          start_thread
               |          |          |           --0.03%-- [...]
               |          |          |
               |          |           --4.16%-- DecodeIPV6
               |          |                     |
               |          |                     |--97.59%-- DecodeTunnel
               |          |                     |          |
               |          |                     |          |--99.18%-- DecodeTeredo
               |          |                     |          |          DecodeUDP
               |          |                     |          |          DecodeIPV4
               |          |                     |          |          DecodeVLAN
               |          |                     |          |          DecodeEthernet
               |          |                     |          |          DecodePfring
               |          |                     |          |          TmThreadsSlotVarRun
               |          |                     |          |          TmThreadsSlotProcessPkt
               |          |                     |          |          ReceivePfringLoop
               |          |                     |          |          TmThreadsSlotPktAcqLoop
               |          |                     |          |          start_thread
               |          |                     |          |
               |          |                     |           --0.82%-- DecodeIPV4
               |          |                     |                     DecodeVLAN
               |          |                     |                     DecodeEthernet
               |          |                     |                     DecodePfring
               |          |                     |                     TmThreadsSlotVarRun
               |          |                     |                     TmThreadsSlotProcessPkt
               |          |                     |                     ReceivePfringLoop
               |          |                     |                     TmThreadsSlotPktAcqLoop
               |          |                     |                     start_thread
               |          |                     |
               |          |                      --2.41%-- DecodeIPV6
               |          |                                DecodeTunnel
               |          |                                DecodeTeredo
               |          |                                DecodeUDP
               |          |                                DecodeIPV4
               |          |                                DecodeVLAN
               |          |                                DecodeEthernet
               |          |                                DecodePfring
               |          |                                TmThreadsSlotVarRun
               |          |                                TmThreadsSlotProcessPkt
               |          |                                ReceivePfringLoop
               |          |                                TmThreadsSlotPktAcqLoop
               |          |                                start_thread

by Regit at November 18, 2013 12:59 PM