Planet Suricata

October 03, 2014

suricata-ids.org

Get Trained January 26 and 27 in San Jose, CA!

Join us for this dynamic, hands-on, 2-day Suricata training event! Developers and security professionals will not only walk away with greater proficiency in Suricata’s core technology, but will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s development team.

This training session will take place on January 26 and 27 at the Tilera HQ in San Jose, CA. It will be given by Suricata expert Peter Manev, and OISF president and Emerging Threats CTO Matt Jonkman.

Some of the topics that will be covered over the course of the 2 days include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Advanced Tuning
  • Integration with Other Tools

You can register through eventbrite here. More info on the Suricata Training Program can be found here.

This event is generously hosted by our long-time supporter Tilera.


We hope to see you there!

by inliniac at October 03, 2014 09:10 AM

September 30, 2014

Security Onion

Suricata 2.0.4

Suricata 2.0.4 was recently released:
http://www.openinfosecfoundation.org/index.php/component/content/article/1-latest-news/198-suricata-204-available

I've packaged Suricata 2.0.4 and it has been tested by David Zawdie (thanks!).

The new package version is:
securityonion-suricata - 2.0.4-0ubuntu0securityonion1

Issues Resolved

Issue 600: Suricata 2.0.4
https://code.google.com/p/security-onion/issues/detail?id=600

Updating
The new packages are now available in our stable repo.  Please see the following page for full update instructions:
https://code.google.com/p/security-onion/wiki/Upgrade

This update will back up each of your existing suricata.yaml files to suricata.yaml.bak.  You'll then need to do the following (a short sketch of these steps appears after the list):

  • re-apply any local customizations to suricata.yaml
  • update ruleset and restart Suricata as follows:
    sudo rule-update
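
For example, a minimal sketch of those steps (the suricata.yaml path below is an assumption; adjust it to your sensor directory under /etc/nsm):

# see what differs between the backup and the freshly installed config
sudo diff /etc/nsm/<sensorname>/suricata.yaml.bak /etc/nsm/<sensorname>/suricata.yaml
# re-apply your local customizations to suricata.yaml, then update the ruleset
# and restart Suricata:
sudo rule-update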


Screenshots

Update Process
sudo rule-update

rule-update restarts Suricata

Feedback
If you have any questions or problems, please use our security-onion mailing list:
https://code.google.com/p/security-onion/wiki/MailingLists

Training
Only 16 seats left for the 3-day Security Onion class in Richmond, VA!
https://security-onion-class-20141020.eventbrite.com/

Commercial Support
Need commercial support?  Please see:
http://securityonionsolutions.com

Help Wanted
If you and/or your organization have found value in Security Onion, please consider giving back to the community by joining one of our teams:
https://code.google.com/p/security-onion/wiki/TeamMembers

We especially need help in answering support questions on the mailing list:
http://groups.google.com/group/security-onion

We also need help testing new packages:
http://groups.google.com/group/security-onion-testing

Thanks!

by Doug Burks (noreply@blogger.com) at September 30, 2014 07:54 AM

September 29, 2014

Victor Julien

Suricata Training Tour

After a lot of preparations, it’s finally going to happen: official Suricata trainings!

In the next couple of months I’ll be doing at least 3 sessions: a home match (Amsterdam), a workshop in Luxembourg and a session at DeepSec. In addition, we’re planning various US-based sessions on the East Coast and West Coast.

I’m really looking forward to doing these sessions. Other than the official content, there will be plenty of room for questions and discussions.

Hope to see you soon! :)


by inliniac at September 29, 2014 09:25 AM

suricata-ids.org

Get Trained at DeepSec in Vienna

Join us for this dynamic, hands-on, 2-day training session. Developers and security professionals will not only walk away with greater proficiency in Suricata’s core technology, but will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s lead developers.

This training session will take place on November 18 and 19 at the DeepSec conference in Vienna. It will be given by Suricata lead developer Victor Julien, OISF president and Emerging Threats CTO Matt Jonkman, Suricata developer Eric Leblond and Suricata expert Peter Manev.

Some of the topics that will be covered in this course include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Integration with Other Tools

You can register at the DeepSec conference registration page here.

More info on the Suricata Training Program can be found here.

We hope to see you there!

by inliniac at September 29, 2014 09:16 AM

Open Information Security Foundation

Announcing the Suricata Training Program

The OISF team is proud to announce the start of the Suricata training program. In this program, we’ll be delivering 1- and 2-day user trainings for Suricata.

Some of the topics that will be covered over the course of the 2 days include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Advanced Tuning
  • Integration with Other Tools


This dynamic, hands-on, 2-day Suricata training will be delivered by the OISF development and support team.  So apart from the great content on how to install, use and troubleshoot Suricata, you will also have the great opportunity to talk in depth about Suricata with its creators.

Proceeds from the trainings go straight into supporting Suricata’s development, so not only will you learn a great deal, you’ll also be directly supporting the project by taking this training.


We’re kicking off with 3 training sessions in Europe in the last quarter of 2014. For early 2015, we’re planning to do a number of US trainings. Keep an eye on this space for updates. Also, dedicated on-site training options are available.




Amsterdam, October 13 and 14: 2 day training

This training session will take place on October 13 and 14 in downtown Amsterdam. It will be given by Suricata lead developer Victor Julien, and OISF president and Emerging Threats CTO Matt Jonkman. Also in the room: master rule writer William Metcalf.

You can register through eventbrite here: https://www.eventbrite.com/e/suricata-training-event-tickets-13264631871

This event is generously hosted by our friends from Intelworks.

Luxembourg, October 20: 1 day workshop

This workshop will take place on October 20 in the conference hotel of the excellent Hack.lu conference. It will be given by Suricata lead developer Victor Julien, Suricata developer Eric Leblond and Suricata expert Peter Manev.

This event is generously hosted by our friends from Hack.lu. You can register through eventbrite here:
https://www.eventbrite.com/e/suricata-workshop-hacklu-tickets-13329929177

A registration / ticket for the Hack.lu conference is NOT required for this event. Of course, we do highly recommend the conference!


DeepSec - Vienna, November 18 and 19: 2 day training event

This training session will take place on November 18 and 19 at the DeepSec conference. It will be given by Victor Julien, Eric Leblond, Peter Manev and Matt Jonkman.

The event is part of the DeepSec conference, so registrations/bookings go through: https://deepsec.net/register.html

See also http://blog.deepsec.net/?p=1893


Trainings are tracked on their own page here: http://suricata-ids.org/training/. For questions or more info, please contact us at oisf-team@openinfosecfoundation.org!

by Victor Julien (postmaster@inliniac.net) at September 29, 2014 08:43 AM

September 25, 2014

suricata-ids.org

Get Trained at Hack.lu in Luxembourg

Join us for this dynamic, hands-on, full-day Suricata workshop! Developers and security professionals will not only walk away with greater proficiency in Suricata’s core technology, but will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s lead developers.

This workshop will take place on October 20 in the conference hotel of the excellent Hack.lu conference. It will be given by Suricata lead developer Victor Julien, Suricata developer Eric Leblond and Suricata expert Peter Manev.

Some of the topics that will be covered in this course include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Integration with Other Tools

You can register through eventbrite here: https://www.eventbrite.com/e/suricata-workshop-hacklu-tickets-13329929177.

More info on the Suricata Training Program can be found here.

This event is generously hosted by our friends from Hack.lu.

A registration / ticket for the Hack.lu conference is NOT required for this event. Of course, we do highly recommend the conference!

We hope to see you there!

by inliniac at September 25, 2014 03:45 PM

September 23, 2014

suricata-ids.org

Get Trained in Amsterdam!

Join us for this dynamic, hands-on, 2-day Suricata training event! Developers and security professionals will not only walk away with greater proficiency in Suricata’s core technology, but will also have the unique opportunity to bring questions, challenges, and new ideas directly to Suricata’s lead developers.

This training session will take place on October 13 and 14 in downtown Amsterdam. It will be given by Suricata lead developer Victor Julien, and OISF president and Emerging Threats CTO Matt Jonkman.

Some of the topics that will be covered over the course of the 2 days include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Advanced Tuning
  • Integration with Other Tools

You can register through eventbrite here: https://www.eventbrite.com/e/suricata-training-event-tickets-13264631871. More info on the Suricata Training Program can be found here.

This event is generously hosted by our friends from Intelworks.

We hope to see you there!

by inliniac at September 23, 2014 06:05 PM

Announcing the Suricata Training Program

The OISF team is proud to announce the start of the Suricata training program. In this program, we’ll be delivering 1- and 2-day user trainings for Suricata.

Some of the topics that will be covered over the course of the 2 days include:

  • Compiling, Installing, and Configuring Suricata
  • Performance Factors, Rules and Rulesets
  • Capture Methods and Performance
  • Event / Data Outputs and Capture Hardware
  • Troubleshooting Common Problems
  • Advanced Tuning
  • Integration with Other Tools

This dynamic, hands-on, 2-day Suricata training will be delivered by the OISF development and support team.  So apart from the great content on how to install, use and troubleshoot Suricata, you will also have the great opportunity to talk in depth about Suricata with its creators.

Proceeds from the trainings go straight into supporting Suricata’s development, so not only will you learn a great deal, you’ll also be directly supporting the project by taking this training.

We’re kicking off with 3 training sessions in Europe in the last quarter of 2014. For early 2015, we’re planning to do a number of US trainings. Keep an eye on this space for updates. Also, dedicated on-site training options are available.

Trainings are tracked on their own page here. For questions or more info, please contact us!

by inliniac at September 23, 2014 05:04 PM

Open Information Security Foundation

Suricata 2.0.4 Available!

The OISF development team is pleased to announce Suricata 2.0.4. This release fixes a number of important issues in the 2.0 series.

This update fixes a bug in the SSH parser, where a malformed banner could lead to evasion of SSH rules and missing log entries. In some cases it may also lead to a crash. Bug discovered and reported by Steffen Bauch.

Additionally, this release also addresses a new IPv6 issue that can lead to evasion. Bug discovered by Rafael Schaefer working with ERNW GmbH.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.4.tar.gz

Changes

  • Bug #1276: ipv6 defrag issue with routing headers
  • Bug #1278: ssh banner parser issue
  • Bug #1254: sig parsing crash on malformed rev keyword
  • Bug #1267: issue with ipv6 logging
  • Bug #1273: Lua – http.request_line not working
  • Bug #1284: AF_PACKET IPS mode not logging drops and stream inline issue

Security

  • CVE-2014-6603

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at September 23, 2014 11:21 AM

suricata-ids.org

Suricata 2.0.4 Available!

Photo by Eric Leblond

The OISF development team is pleased to announce Suricata 2.0.4. This release fixes a number of important issues in the 2.0 series.

This update fixes a bug in the SSH parser, where a malformed banner could lead to evasion of SSH rules and missing log entries. In some cases it may also lead to a crash. Bug discovered and reported by Steffen Bauch.

Additionally, this release also addresses a new IPv6 issue that can lead to evasion. Bug discovered by Rafael Schaefer working with ERNW GmbH.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.4.tar.gz

Changes

  • Bug #1276: ipv6 defrag issue with routing headers
  • Bug #1278: ssh banner parser issue
  • Bug #1254: sig parsing crash on malformed rev keyword
  • Bug #1267: issue with ipv6 logging
  • Bug #1273: Lua – http.request_line not working
  • Bug #1284: AF_PACKET IPS mode not logging drops and stream inline issue

Security

  • CVE-2014-6603

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at September 23, 2014 11:20 AM

August 26, 2014

Security Onion

New PF_RING, Snort, Suricata, Bro packages

New versions of our PF_RING, Snort, Suricata, and Bro packages are now available!  The new package versions are as follows:

securityonion-bro - 2.3-0ubuntu0securityonion10
securityonion-bro-scripts - 20121004-0ubuntu0securityonion26
securityonion-daq - 2.0.2-0ubuntu0securityonion5
securityonion-elsa-extras - 20131117-1ubuntu0securityonion43
securityonion-pfring-daq - 20121107-0ubuntu0securityonion7
securityonion-pfring-devel - 20121107-0ubuntu0securityonion7
securityonion-pfring-ld - 20120827-0ubuntu0securityonion7
securityonion-pfring-module - 20121107-0ubuntu0securityonion23
securityonion-pfring-userland - 20140805-0ubuntu0securityonion3
securityonion-snort - 2.9.6.2-0ubuntu0securityonion7
securityonion-suricata - 2.0.3-0ubuntu0securityonion2

These new packages have been tested by the following (thanks!):
Ronny Vaningh
Andrea De Pasquale
Pete Nelson
Pietro Delsante
David Zawdie
Heine Lysemose
Eddy Simons

Issues Resolved

Issue 535: PF_RING 6.0.2 SVN
https://code.google.com/p/security-onion/issues/detail?id=535

Issue 462: Snort 2.9.6.2
https://code.google.com/p/security-onion/issues/detail?id=462

Issue 567: Snort Daq 2.0.2
https://code.google.com/p/security-onion/issues/detail?id=567

Issue 465: Suricata 2.0.3
https://code.google.com/p/security-onion/issues/detail?id=465

Issue 445: Bro 2.3
https://code.google.com/p/security-onion/issues/detail?id=445

Issue 484: securityonion-bro-scripts: update APT1 scripts with Seth's changes for certificate matching
https://code.google.com/p/security-onion/issues/detail?id=484

Issue 414: Bro script should lookup interface in /etc/nsm/sensortab to obtain sensorname
https://code.google.com/p/security-onion/issues/detail?id=414

Issue 577: ELSA: update parsers for Bro 2.3 log changes
https://code.google.com/p/security-onion/issues/detail?id=577

Updating
The new packages are now available in our stable repo.  Please see the following page for full update instructions:
https://code.google.com/p/security-onion/wiki/Upgrade

These updates will do the following:

  • back up your Bro configuration
  • back up each of your existing snort.conf files to snort.conf.bak
  • back up each of your existing suricata.yaml files to suricata.yaml.bak

You'll then need to do the following:
  • re-apply any local customizations to the Bro/Snort/Suricata config
  • restart Bro as follows:
sudo nsm_sensor_ps-restart --only-bro
  • update ruleset and restart Snort/Suricata as follows:
sudo rule-update

Screenshots
Run "sudo soup" which first installs the new PF_RING kernel module

DKMS compiles the new kernel module

Soup then installs the remaining packages

Bro, Snort, and Suricata notify you that config files have been updated and you'll need to add back any local customizations

After adding back any local Bro customizations, restart Bro using "sudo nsm_sensor_ps-restart --only-bro"

After adding back any local snort.conf or suricata.yaml customizations, run "sudo rule-update" to download the latest ruleset for the new IDS engine

rule-update then restarts Barnyard2 and the IDS engine



Feedback
If you have any questions or problems, please use our security-onion mailing list:
https://code.google.com/p/security-onion/wiki/MailingLists

Conference
Less than 30 seats left for the Security Onion conference in Augusta GA! Reserve your seat today!
https://securityonionconference2014.eventbrite.com

Commercial Support/Training
Need training and/or commercial support?  Please see:
http://securityonionsolutions.com

Help Wanted
If you and/or your organization have found value in Security Onion, please consider giving back to the community by joining one of our teams:
https://code.google.com/p/security-onion/wiki/TeamMembers

We especially need help in answering support questions on the mailing list:
http://groups.google.com/group/security-onion

We also need help testing new packages:
http://groups.google.com/group/security-onion-testing

Thanks!

by Doug Burks (noreply@blogger.com) at August 26, 2014 02:02 PM

August 24, 2014

Peter Manev

Suricata - more data for your alerts


As of Suricata 2.1beta1, Suricata IDS/IPS can include packet data and additional information in its standard JSON output, further supplementing the alert logging output.

This guide makes use of Suricata and ELK - Elasticsearch, Logstash, Kibana.
You can install all of them following the guide HERE
...or you can download SELKS and try it out directly.


After everything is in place, we need to open suricata.yaml and make the following edits in the eve.json section:

 # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert:

            payload: yes           # enable dumping payload in Base64
            payload-printable: yes # enable dumping payload in printable (lossy) format
            packet: yes            # enable dumping of packet (without stream segments)
            http: yes              # enable dumping of http fields
       
You can start Suricata and let it inspect traffic for some time in order to generate alert log data.
Then navigate to your Kibana web interface, find an alert record and you can see the usefulness of the extra data for yourself.

Some examples though :) :
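
For illustration, here is roughly what an alert record looks like with these options enabled (the values are made up and long fields are trimmed; the exact field set depends on your Suricata version):

{"timestamp": "2014-08-24T10:15:32.123456", "event_type": "alert", "src_ip": "192.168.1.15", "src_port": 56733, "dest_ip": "10.0.0.8", "dest_port": 80, "proto": "TCP", "alert": {"action": "allowed", "gid": 1, "signature_id": 2100498, "rev": 7, "signature": "GPL ATTACK_RESPONSE id check returned root", "category": "Potentially Bad Traffic", "severity": 2}, "payload": "dWlkPTAocm9vdCkgZ2lkPTAocm9vdCkK", "payload_printable": "uid=0(root) gid=0(root)\n", "packet": "...", "http": {"hostname": "example.com", "url": "/cgi-bin/id", "http_user_agent": "curl/7.34.0", "http_method": "GET", "protocol": "HTTP/1.1", "status": 200, "length": 25}}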






Let’s kick it up a notch...

We want to search through:
  1. all the generated alerts
  2. that have printable payload data
  3. containing the following string: uid=0(root)
Easy, here is the query:
 
payload_printable:"uid=0\(root\)"
You should enter it like this in Kibana:



Well what do you know - we got what we were looking for:




Some more useful reading on the Lucene Query Syntax (you should at least have a look :) ):
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

http://www.solrtutorial.com/solr-query-syntax.html

http://lucene.apache.org/core/2_9_4/queryparsersyntax.html






by Peter Manev (noreply@blogger.com) at August 24, 2014 04:03 AM

Suricata - Flows, Flow Managers and effect on performance



As of Suricata 2.1beta1, Suricata IDS/IPS allows custom thread configuration for the IDS/IPS engine's management threads, which opens up advanced, high-performance tuning.

Aka... these:
[27521] 20/7/2014 -- 01:46:19 - (tm-threads.c:2206) <Notice>  (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.


The 3 management threads initialized above are the flow manager (1) and the counter/stats-related threads (2).

So ... in the default suricata.yaml setting we have:

flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  #managers: 1 # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread

and we can choose how many threads we would like to dedicate to the management tasks within the engine itself.
The recycler threads offload part of the flow manager’s work and, if enabled, do the flow/netflow logging.
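
For example, to dedicate two threads to each task you would uncomment and set (a minimal sketch showing only the relevant keys):

flow:
  managers: 2   # flow manager threads
  recyclers: 2  # flow recycler threads (these also do flow/netflow logging when enabled)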

Good !
What does this have to do with performance?

Suricata IDS/IPS is powerful, flexible and scalable - so be careful what you wish for.
The examples below demonstrate the effect on a 10Gbps Suricata IDS sensor.

Example 1


suricata.yaml config ->
flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30
  prune-flows: 50000
  managers: 2 # default is 1

CPU usage ->

 2 flow management threads use 8% CPU each

 Example 2



suricata.yaml config ->
flow:
  memcap: 4gb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30
  managers: 2 # default is 1

 CPU usage ->

2 flow management threads use 39% CPU each as compared to Example 1 !!


So a 4-fold increase in memcap, an 8-fold increase in prealloc and a 15-fold increase in hash-size leads to about a 3-fold increase in RAM consumption and a 5-fold increase in CPU consumption - in terms of flow management thread usage.

It would be very rare that you would need the settings in Example 2 - you would need huge traffic for that...

So how would you know when to tune/adjust those settings in suricata.yaml? It is recommended that you always keep an eye on your stats.log and make sure you do not enter emergency clean-up mode: the emergency mode counter should always be 0.
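
A quick way to keep an eye on that from the command line (assuming the default stats.log location; adjust the path to your setup):

grep -i emerg /var/log/suricata/stats.log | tail -n 5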

Some additional reading on flows and flow managers -
http://blog.inliniac.net/2014/07/28/suricata-flow-logging/




by Peter Manev (noreply@blogger.com) at August 24, 2014 04:01 AM

Suricata - filtering tricks for the fileinfo output with eve.json


As of Suricata 2.0, Suricata IDS/IPS provides a standard JSON output logging capability. This guide makes use of Suricata and ELK - Elasticsearch, Logstash, Kibana.

You can install all of them following the guide HERE
...or you can download SELKS and try it out directly.

Once you have the installation in place and the Kibana web interface up and running, you can make use of the following fileinfo filters (tricks :).
You can enter the queries like so:



 fileinfo.magic:"PE32" -fileinfo.filename:*exe
will show you all "PE32 executable" executables that were seen transferred that have no exe extension in their file name:




 Alternatively
fileinfo.magic:"pdf" -fileinfo.filename:*pdf


will show you all "PDF document version......" files that were transferred that have no PDF extension in their file name.

You can explore further :)






by Peter Manev (noreply@blogger.com) at August 24, 2014 03:47 AM

August 23, 2014

Peter Manev

Suricata IDS/IPS - HTTP custom header logging


As a continuation of the article HERE, here are some more screenshots from the ready-to-use template.

For the Elasticsearch/Logstash/Kibana users there is a ready-to-use template, "HTTP-Extended-Custom", that you can download from here -
https://github.com/pevma/Suricata-Logstash-Templates














by Peter Manev (noreply@blogger.com) at August 23, 2014 06:11 AM

August 20, 2014

suricata-ids.org

Suricata Ubuntu PPA updated to 2.1beta1

We have updated the official Ubuntu PPA to Suricata 2.1beta1. To use this PPA read our docs here.

If you’re using this PPA, updating is as simple as:

apt-get update && apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at August 20, 2014 12:31 PM

Suricata 2.1beta1 Windows Installer Available

The Windows MSI installer of the Suricata 2.1beta1 release is now available.

Download it here: suricata-2.1beta1-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at August 20, 2014 12:15 PM

Suricata Ubuntu PPA updated to 2.0.3

We have updated the official Ubuntu PPA to Suricata 2.0.3. To use this PPA read our docs here.

To install Suricata through this PPA, enter:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

If you’re already using this PPA, updating is as simple as:
sudo apt-get update && sudo apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at August 20, 2014 12:05 PM

Suricata 2.0.3 Windows Installer Available

The Windows MSI installer of the Suricata 2.0.3 release is now available.

Download it here: Suricata-2.0.3-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at August 20, 2014 12:01 PM

August 12, 2014

Open Information Security Foundation

Suricata 2.1beta1 Available!

The OISF development team is proud to announce Suricata 2.1beta1. This is the first beta release for the upcoming 2.1 version. It should be considered a development snapshot for the 2.1 branch.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.1beta1.tar.gz

New Features

  • Feature #1248: flow/connection logging
  • Feature #1155 & #1208: Log packet payloads in eve alerts

Improvements

  • Optimization #1039: Packetpool should be a stack
  • Optimization #1241: pcap recording: record per thread
  • Feature #1258: json: include HTTP info with Alert output
  • AC matcher start up optimizations
  • BM matcher runtime optimizations by Ken Steele

Removals

  • ‘pcapinfo’ output was removed. Suriwire now works with the JSON ‘eve’ output

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Matt Carothers
  • Alexander Gozman
  • Giuseppe Longo
  • Duarte Silva
  • sxhlinux

Known issues & missing features

In a beta release like this, things may not be as polished yet, so please handle with care. That said, if you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at August 12, 2014 02:54 PM

suricata-ids.org

Suricata 2.1beta1 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.1beta1. This is the first beta release for the upcoming 2.1 version. It should be considered a development snapshot for the 2.1 branch.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.1beta1.tar.gz

New Features

  • Feature #1248: flow/connection logging
  • Feature #1155 & #1208: Log packet payloads in eve alerts

Improvements

  • Optimization #1039: Packetpool should be a stack
  • Optimization #1241: pcap recording: record per thread
  • Feature #1258: json: include HTTP info with Alert output
  • AC matcher start up optimizations
  • BM matcher runtime optimizations by Ken Steele

Removals

  • ‘pcapinfo’ output was removed. Suriwire now works with the JSON ‘eve’ output

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Matt Carothers
  • Alexander Gozman
  • Giuseppe Longo
  • Duarte Silva
  • sxhlinux

Known issues & missing features

In a beta release like this, things may not be as polished yet, so please handle with care. That said, if you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at August 12, 2014 02:33 PM

August 10, 2014

Peter Manev

HTTP Header fields extended logging with Suricata IDPS


With the release of Suricata 2.0.1 it is possible to do extended, custom HTTP header field logging through the JSON output module.

For the Elasticsearch/Logstash/Kibana users there is a ready-to-use template that you can download from here -
https://github.com/pevma/Suricata-Logstash-Templates

So what does this mean?

Well, besides the standard HTTP logging in eve.json, you also get 47 additional HTTP header fields logged, namely these:
accept
accept-charset
accept-encoding
accept-language
accept-datetime
authorization
cache-control
cookie
from
max-forwards
origin
pragma
proxy-authorization
range
te
via
x-requested-with
dnt
x-forwarded-proto
accept-range
age
allow
connection
content-encoding
content-language
content-length
content-location
content-md5
content-range
content-type
date
etags
last-modified
link
location
proxy-authenticate
referrer
refresh
retry-after
server
set-cookie
trailer
transfer-encoding
upgrade
vary
warning
www-authenticate

You can read more about what they are and what they mean/affect here:
http://en.wikipedia.org/wiki/List_of_HTTP_header_fields

You can choose any combination of the fields above, or all of them. You simply need to add them to the existing logging in suricata.yaml's eve section. To add all of them, whenever found in the HTTP traffic, you could do it like so:
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
            # custom allows additional http fields to be included in eve-log
            # the example below adds three additional fields when uncommented
            #custom: [Accept-Encoding, Accept-Language, Authorization]
            custom: [accept, accept-charset, accept-encoding, accept-language,
            accept-datetime, authorization, cache-control, cookie, from,
            max-forwards, origin, pragma, proxy-authorization, range, te, via,
            x-requested-with, dnt, x-forwarded-proto, accept-range, age,
            allow, connection, content-encoding, content-language,
            content-length, content-location, content-md5, content-range,
            content-type, date, etags, last-modified, link, location,
            proxy-authenticate, referrer, refresh, retry-after, server,
            set-cookie, trailer, transfer-encoding, upgrade, vary, warning,
            www-authenticate]
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
Then you just start Suricata.
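
For example (the config path and interface name here are assumptions; adjust them to your setup):

suricata -c /etc/suricata/suricata.yaml -i eth0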

What is the benefit?

You can log and search/filter/select on any or all of those 60 or so HTTP fields (the standard ones plus the custom headers above). JSON is a standard format, so depending on what you use as a database and/or search engine, you can very easily produce interesting and helpful statistics for your security teams.
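
For example, a hypothetical Kibana/Lucene query in the style of the filters shown elsewhere on this blog (exact field names depend on your Logstash template):

http.http_method:"POST" AND http.hostname:"example.com"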

Some possible stats using Elasticsearch and Kibana
(how to set up Elasticsearch, Logstash and Kibana with Suricata)-











by Peter Manev (noreply@blogger.com) at August 10, 2014 06:36 AM

August 08, 2014

Open Information Security Foundation

Suricata 2.0.3 Available!

The OISF development team is proud to announce Suricata 2.0.3. This release fixes a number of issues in the 2.0 series. Most importantly, this release addresses a number of IPv6 issues that can lead to evasion. Bugs discovered by Rafael Schaefer working with ERNW GmbH.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.3.tar.gz

Changes

  • Bug #1236: fix potential crash in http parsing
  • Bug #1244: ipv6 defrag issue
  • Bug #1238: Possible evasion in stream-tcp-reassemble.c
  • Bug #1221: lowercase conversion table missing last value
  • Support #1207: Cannot compile on CentOS 5 x64 with --enable-profiling
  • Updated bundled libhtp to 0.5.15

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Rafael Schaefer working with ERNW GmbH
  • Antonios Atlasis working with ERNW GmbH
  • Alexander Gozman
  • sxhlinux
  • Ivan Ristic

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at August 08, 2014 09:40 AM

suricata-ids.org

Suricata 2.0.3 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0.3. This release fixes a number of issues in the 2.0 series. Most importantly, this release addresses a number of IPv6 issues that can lead to evasion. Bugs discovered by Rafael Schaefer working with ERNW GmbH.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.3.tar.gz

Changes

  • Bug #1236: fix potential crash in http parsing
  • Bug #1244: ipv6 defrag issue
  • Bug #1238: Possible evasion in stream-tcp-reassemble.c
  • Bug #1221: lowercase conversion table missing last value
  • Support #1207: Cannot compile on CentOS 5 x64 with --enable-profiling
  • Updated bundled libhtp to 0.5.15

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Rafael Schaefer working with ERNW GmbH
  • Antonios Atlasis working with ERNW GmbH
  • Alexander Gozman
  • sxhlinux
  • Ivan Ristic

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please take note of the list of known items we are working on.  See issues for an up-to-date list and to report new issues. See Known_issues for a discussion and timeline for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at August 08, 2014 09:39 AM

August 01, 2014

Security Onion

PF_RING, Snort, and Suricata packages have reached Release Candidate status!

Our new PF_RING/Snort/Suricata packages have reached Release Candidate status!  Since these packages are critical components, I'd like to do one final phase of testing before promoting to stable.  If at all possible, please try installing on some of your production sensors so that we can get some real world testing before promoting to stable.

Join the discussion here:
https://groups.google.com/d/topic/security-onion-testing/mKVn-GAPaIg/discussion

by Doug Burks (noreply@blogger.com) at August 01, 2014 11:42 AM

July 28, 2014

Victor Julien

Suricata Flow Logging

Pretty much from the start of the project, Suricata has been able to track flows. In Suricata the term ‘flow’ means the bidirectional flow of packets with the same 5-tuple. Or 7-tuple when VLAN tags are counted as well.

Such a flow is created when the first packet comes in and is stored in the flow hash. Each new packet does a hash look-up and attaches the flow to the packet. Through the packet’s flow reference we can access all that is stored in the flow: TCP session, flowbits, app layer state data, protocol info, etc.

When a flow hasn’t seen any packets in a while, a separate thread times it out. This ‘Flow Manager’ thread constantly walks the hash table and looks for flows that are timed out. The time a flow is considered ‘active’ depends on the protocol, its state and the configuration settings.

In Suricata 2.1, flows will optionally be logged when they time out. This logging is available through a new API, with an implementation for ‘Eve’ JSON output already developed. Actually, 2 implementations:

  1. flow — logs bidirectional records
  2. netflow — logs unidirectional records

As the flow logging had to be done at flow timeout, the Flow Manager had to drive it. Suricata 2.0 and earlier had a single Flow Manager thread. This was hard coded, and in some cases it was clearly a bottleneck. It wasn’t uncommon to see this thread using more CPU than the packet workers.

So adding more tasks to the Flow Manager, especially something as expensive as output, was likely going to make things worse. To address this, 2 things are now done:

  1. multiple flow manager support
  2. offloading of part of the flow managers tasks to a new class of management threads

The multiple flow managers simply divide up the hash table. Each thread manages its own part of it. The new class of threads is called ‘Flow Recycler’. It takes care of the actual flow cleanup and recycling. This means it’s taking over a part of the old Flow Manager’s tasks. In addition, if enabled, these threads are tasked with performing the actual flow logging.

As the flow logging follows the ‘eve’ format, passing it into Elasticsearch, Logstash and Kibana (ELK) is trivial. If you already run such a setup, the only thing that is needed is enabling the feature in your suricata.yaml.
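
A minimal sketch of what that looks like in the eve-log section of suricata.yaml (a sketch only; see the example config shipped with 2.1 for the exact syntax):

  - eve-log:
      enabled: yes
      filename: eve.json
      types:
        - alert
        - flow      # bidirectional records
        #- netflow  # unidirectional records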

[Kibana flow and netflow dashboard screenshots]

The black netflow dashboard is available here: http://www.inliniac.net/files/NetFlow.json

Many thanks to the FireEye Forensics Group (formerly nPulse Technologies) for funding this work.


by inliniac at July 28, 2014 10:05 PM

Peter Manev

Granularity in advance memory tuning for segments and http processing with Suricata


Just recently (at the time of this writing), a few new config options for suricata.yaml were introduced in the dev branch of Suricata IDPS (git):

stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    randomize-chunk-range: 10
    raw: yes
    chunk-prealloc: 250 #size 4KB
    segments:
      - size: 4
        prealloc: 256
      - size: 16
        prealloc: 512
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1448
        prealloc: 1024
      - size: 65535
        prealloc: 128


and under the app layer protocols section (in suricata.yaml) ->

    http:
      enabled: yes
      # memcap: 64mb



Stream segments memory preallocation - config option

This first one gives you advanced, granular control over your memory consumption, in terms of preallocating memory for segments of certain sizes that go through the stream reassembly engine.

The patch's info:
commit b5f8f386a37f61ae0c1c874b82f978f34394fb91
Author: Victor Julien <victor@inliniac.net>
Date:   Tue Jan 28 13:48:26 2014 +0100

    stream: configurable segment pools
   
    The stream reassembly engine uses a set of pools in which preallocated
    segments are stored. There are various pools each with different packet
    sizes. The goal is to lower memory presure. Until now, these pools were
    hardcoded.
   
    This patch introduces the ability to configure them fully from the yaml.
    There can be at max 256 of these pools.

In other words, to speed things up in Suricata, you could do some traffic profiling with the iptraf tool (apt-get install iptraf, then select "Statistical breakdowns", then select "By packet size", then the appropriate interface):

So, partly based on that packet-size breakdown (one should also determine the breakdown from a TCP perspective), you could make some adjustments to the default config section in suricata.yaml:

segments:
      - size: 4
        prealloc: 256
      - size: 74
        prealloc: 65535
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1276
        prealloc: 65535
      - size: 1425
        prealloc: 262140
      - size: 1448
        prealloc: 262140
      - size: 9216 
        prealloc: 65535 
      - size: 65535
        prealloc: 9216


Make sure you calculate your memory; this all falls under the stream reassembly memcap set in the yaml, so naturally it has to be big enough to accommodate those changes :).
For example, the enlarged pools above would need about 1955 MB of RAM out of the stream reassembly memcap set in suricata.yaml. So, for example, if the values are set like so:
stream:
  memcap: 2gb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 4gb
it will use about 1955 MB for the preallocated segments and there will be roughly 2 GB left for the other reassembly tasks - like, for example, allocating segments and chunks that were not preallocated in the settings.
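
To check that number yourself, sum size x prealloc over the pools above:

(4 x 256) + (74 x 65535) + (112 x 512) + (248 x 512) + (512 x 512) + (768 x 1024)
+ (1276 x 65535) + (1425 x 262140) + (1448 x 262140) + (9216 x 65535) + (65535 x 9216)
= 2050775510 bytes, which is roughly the 1955 MB mentioned above.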

If you would like to be exact, you can run Suricata with the -v switch to enable verbosity, which gives you an exact picture of what your segment numbers are (for example: run it for 24 hrs and then stop it with kill -15 pid_of_suricata):

(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 96199 segments, more than the prealloc setting of 256
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 28743 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 96774 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 25833 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 24354 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 30954 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 139742 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 139775 segments, more than the prealloc setting of 128
(stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 21676 chunks, more than the prealloc setting of 250
So then you can adjust the values accordingly for all segments.
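
For instance, based on the peak-use numbers above, a re-tuned reassembly section could look something like this (a sketch only - round the peaks up, and re-check that the resulting memory still fits under your reassembly memcap):

    chunk-prealloc: 22000    # peak use was 21676 chunks
    segments:
      - size: 4
        prealloc: 100000     # peak use was 96199
      - size: 112
        prealloc: 100000     # peak use was 96774
      - size: 1448
        prealloc: 140000     # peak use was 139742
      - size: 65535
        prealloc: 140000     # peak use was 139775 - careful, this alone is ~8.5 GB
      # ...and similarly for the remaining pools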


HTTP memcap option

In suricata.yaml you can set an explicit limit for the HTTP memory usage of the inspection engine.

    http:
      enabled: yes
      memcap: 4gb

These two config options add some more powerful ways of fine-tuning the already highly flexible Suricata IDPS capabilities.


Of course, when setting memcaps in suricata.yaml you have to make sure they fit within the total available RAM on your server/machine... otherwise funny things happen :)

by Peter Manev (noreply@blogger.com) at July 28, 2014 01:46 AM

Suricata - setting up flows





So looking at the suricata.log file (after starting suricata):


root@suricata:/var/data/log/suricata# more suricata.log                                                       
 [1372] 17/12/2013 -- 17:47:35 - (suricata.c:962) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev e7f6107)
[1372] 17/12/2013 -- 17:47:35 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[1372] 17/12/2013 -- 17:47:35 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[1372] 17/12/2013 -- 17:47:35 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[1373] 17/12/2013 -- 17:47:35 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 229106864
[1373] 17/12/2013 -- 17:47:35 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[1373] 17/12/2013 -- 17:47:35 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[1373] 17/12/2013 -- 17:47:35 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

[1373] 17/12/2013 -- 17:47:37 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 3310632960 bytes, maximum: 6442450944
[1373] 17/12/2013 -- 17:47:37 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[1373] 17/12/2013 -- 17:47:37 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[1373] 17/12/2013 -- 17:47:37 - (suricata.c:1769) <Info> (SetupDelayedDetect) -- Delayed detect disabled



We see:

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

-> This is approximately 3GB of RAM
How did we get to this number... well... I have custom-defined it in suricata.yaml under the flow section:
  hash-size: 15728640
  prealloc: 8000000

So we need to sum up->
15728640 x 64 ("15728640 buckets of size 64" = 1006632960 bytes)
+
8000000 x 280 ("preallocated 8000000 flows of size 280" = 2240000000 bytes )
=
total of 3246632960 bytes which is 3096.23MB

(15728640 x 64) + (8000000 x 280) =  3246632960 bytes


That would define our flow memcap value in suricata.yaml.
So this would work like this ->

flow:
  memcap: 4gb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


This would work as well ->

flow:
  memcap: 3500mb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


That's it :)

by Peter Manev (noreply@blogger.com) at July 28, 2014 01:22 AM

July 08, 2014

suricata-ids.org

Suricata 2.0.2 Windows Installer Available

The Windows MSI installer of the Suricata 2.0.2 release is now available.

Download it here: Suricata-2.0.2-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at July 08, 2014 03:32 PM

Suricata Ubuntu PPA updated to 2.0.2

We have updated the official Ubuntu PPA to Suricata 2.0.2. To use this PPA read our docs here.

To install Suricata through this PPA, enter:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

If you’re already using this PPA, updating is as simple as:
sudo apt-get update && sudo apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at July 08, 2014 03:19 PM

June 30, 2014

Anoop Saldanha

Passing an OpenCL cl_mem device address from host to the device, but not as a kernel argument. Pointers for Suricata OpenCL porters.


This post is not specific to Suricata, but rather a generic one that can help most devs who write OpenCL code, plus the ones who want to implement OpenCL support inside Suricata.  I have been seeing quite a few attempts at porting Suricata's CUDA support to use OpenCL.  Before we experimented with CUDA, we had given OpenCL a shot back in the early OpenCL days, when the drivers were in their infancy and had a ton of bugs, and we had a ton of segvs, leaving us with no clue as to where the bug was - the driver or the code.  The driver might be a lot stabler today, of course.

Either way, supporting OpenCL in Suricata should be a pretty straightforward task, but there's one issue that needs to be kept in mind while carrying out this port.  It is something most folks who contacted me during their port got stuck at, and also a question a lot of OpenCL devs have: how to pass a memory object as part of a byte stream or structure, and not as a kernel argument.

Let's get to the topic at hand.  I will use the example of suricata to explain the issue.

What's the issue?

Suricata buffers a payload and, along with the payload, specifies a gpu memory address (cl_mem) that points to the pattern matching state table that the corresponding payload should be matched against.  With CUDA the memory address we are buffering is of type "CUdeviceptr", which is allocated using the call cuMemAlloc().  The value stored inside a CUdeviceptr is basically an address from the gpu address space (not a handle).  You can test this by writing a simple program like the one I have below for OpenCL.  You can also check this article that confirms the program's findings.

With OpenCL, cl_mem is defined to be a handle against an address in the gpu address space.  I would have expected Nvidia's OpenCL implementation to show a behaviour that was similar to its cuda library, i.e. the handle being nothing but an address in the gpu address space, but it isn't the case (probably has something to do with the size of cl_mem?).  We can't directly pass the cl_mem handle value as the device address.  We will need to extract the device address out for a particular cl_mem handle, and pass this retrieved value instead.

Here is a sample program -

==get_address.cl==

__kernel void get_address(__global ulong *c)
{
    *c = (ulong)c;
}

==get_address.c==

unsigned long get_address(cl_kernel kernel_address,
                          cl_command_queue command_queue,
                          cl_mem dst_mem)
{
    unsigned long result_address = 0;
    /* a single work-item is enough to read the buffer's device address back */
    size_t global_work_size = 1;
    size_t local_work_size = 1;

    /* pass the cl_mem handle as the kernel's only argument */
    BUG_ON(clSetKernelArg(kernel_address, 0,
                          sizeof(dst_mem), &dst_mem) < 0);

    /* run the kernel; it writes its own argument's device address into the buffer */
    BUG_ON(clEnqueueNDRangeKernel(command_queue, kernel_address,
                                  1, NULL,
                                  &global_work_size, &local_work_size,
                                  0, NULL, NULL) < 0);

    /* read the device address back to the host */
    BUG_ON(clEnqueueReadBuffer(command_queue, dst_mem, CL_TRUE, 0,
                               sizeof(result_address), &result_address,
                               0, NULL, NULL) < 0);

    return result_address;
}

* Untested code.  Written assuming 64-bit hardware on both the gpu and the cpu.

Using the above get_address() function should get you the gpu address for a cl_mem instance, and the returned value is what should be passed to the gpu as the address, in place of CUDA's CUdeviceptr.  It's sort of a hack, but it should work.

Another question that pops up in my head is, would the driver change the memory allocated against a handle?  Any AMD/Nvidia driver folks can answer this?

Any alternate solutions (apart from passing all of it as kernel arguments :) ) are welcome.


by poona (noreply@blogger.com) at June 30, 2014 12:03 AM

June 26, 2014

Eric Leblond

pshitt: collect passwords used in SSH bruteforce

Introduction

I’ve been playing lately with analyzing and characterizing SSH bruteforce attacks. I was a bit frustrated at only getting partial information:

  • ulogd can give information about scanner settings
  • suricata can give me information about software version
  • sshd server logs show the username
But having the username without the password is really frustrating.

So I decided to try to get them. Looking for an SSH server honeypot, I did find kippo, but it was going too far for me by providing fake shell access. So I decided to build my own based on paramiko.

pshitt, Passwords of SSH Intruders Transferred to Text, was born. It is a lightweight fake SSH server that collects authentication data sent by intruders. It basically collects the username and password and writes the extracted data to a file in JSON format. For each authentication attempt, pshitt dumps a JSON formatted entry:

{"username": "admin", "src_ip": "116.10.191.236", "password": "passw0rd", "src_port": 36221, "timestamp": "2014-06-26T10:48:05.799316"}
The data can then be easily imported in Logstash (see pshitt README) or Splunk.

The setup

As I still want to be able to connect to the box running SSH with a regular client, I needed a setup that automatically redirects the offenders, and only them, to the pshitt server. A simple solution was to use DOM. DOM parses the Suricata EVE JSON log file, in which Suricata gives us the SSH software version of each IP connecting to the SSH server. If DOM sees a software version containing libssh, it adds the originating IP to an ipset set. So the idea of our honeypot setup is simple:
  • Suricata outputs SSH software version to EVE
  • DOM adds IP using libssh to the ipset set
  • Netfilter NAT redirects all IPs of the set to pshitt when they try to connect to our SSH server
Getting the setup in place is really easy. We first create the set:
ipset create libssh hash:ip
then we start DOM so it adds all offenders to the set named libssh:
cd DOM
./dom -f /usr/local/var/log/suricata/eve.json -s libssh
A more accurate setup for dom can be the following. If you know that your legitimate clients are only based on OpenSSH, then you can run dom to put in the list all IPs that do not (-i) use an OpenSSH client (-m OpenSSH):
./dom -f /usr/local/var/log/suricata/eve.json -s libssh -vvv -i -m OpenSSH
If we want to list the elements of the set, we can use:
ipset list libssh
Now, we can start pshitt:
cd pshitt
./pshitt
And finally we redirect the connection coming from IP of the libssh set to the port 2200:
iptables -A PREROUTING -m set --match-set libssh src -t nat -i eth0 -p tcp -m tcp --dport 22 -j REDIRECT --to-ports 2200

Some results

Here’s an extract of the most used passwords when trying to get access to the root account, and the same thing for the admin account attempts [charts: root and admin password frequencies]. Both datasets show around 24 hours of attempts on an anonymous box.

Conclusion

Thanks to paramiko, it was really fast to code pshitt. I’m now collecting data and I think that they will help to improve the categorization of SSH bruteforce tools.

by Regit at June 26, 2014 08:41 AM

June 25, 2014

Open Information Security Foundation

Suricata 2.0.2 Available!

The OISF development team is proud to announce Suricata 2.0.2. This release fixes a number of issues in the 2.0 series.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.2.tar.gz

Notable changes

  • IP defrag issue leading to evasion. Bug discovered by Antonios Atlasis working with ERNW GmbH
  • Support for NFLOG as a capture method. Nice work by Giuseppe Longo
  • DNS TXT parsing and logging. Funded by Emerging Threats
  • Log rotation through SIGHUP. Created by Jason Ish of Endace/Emulex

All closed tickets

  • Feature #781: IDS using NFLOG iptables target
  • Feature #1158: Parser DNS TXT data parsing and logging
  • Feature #1197: liblua support
  • Feature #1200: sighup for log rotation
  • Bug #1098: http_raw_uri with relative pcre parsing issue
  • Bug #1175: unix socket: valgrind warning
  • Bug #1189: abort() in 2.0dev (rev 6fbb955) with pf_ring 5.6.3
  • Bug #1195: nflog: cppcheck reports memleaks
  • Bug #1206: ZC pf_ring not working with Suricata 2.0.1 (or latest git)
  • Bug #1211: defrag issue
  • Bug #1212: core dump (after a while) when app-layer.protocols.http.enabled = yes
  • Bug #1214: Global Thresholds (sig_id 0, gid_id 0) not applied correctly if a signature has event vars
  • Bug #1217: Segfault in unix-manager.c line 529 when using –unix-socket and sending pcap files to be analized via socket

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Antonios Atlasis working with ERNW GmbH
  • Alessandro Guido
  • Mats Klepsland
  • @rmkml
  • Luigi Sandon
  • Christie Bunlon
  • @42wim
  • Jeka Pats
  • Noam Meltzer
  • Ivan Ristic

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please notice the list we have included of known items we are working on.  See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at June 25, 2014 04:27 PM


June 24, 2014

Peter Manev

Suricata IDPS - getting the best out of undersized servers with BPFs on heavy traffic links


How to inspect 3-4Gbps with Suricata IDPS, with 20K rules loaded, on a server with 4 CPUs (2.5GHz) and 16GB RAM, while having minimal drops - less than 1%.

Impossible? ... definitely not...
Improbable? ...not really

Setup


  • 3.2-4Gbps of mirrored traffic, between 400-700K pps
  • 4 x CPU - E5420 @ 2.50GHz (4, NOT 8 with hyper-threading, just 4)
  • 16GB RAM
  • Kernel - 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
  • Network Card - 82599EB 10-Gigabit SFI/SFP+ Network Connection with driver=ixgbe driverversion=3.17.3
  • Suricata version 2.0dev (rev 896b614) - some commits above 2.0.1
  • 20K rules - ETPro ruleset

If you want to run Suricata on that HW with about 20,000 rules inspecting 3-4Gbps of traffic with minimal drops - it is just not possible. There are not enough CPUs, not enough RAM...

Sometimes funding is tough, convincing management to buy new/more HW could be difficult for a particular endeavor/test and a number of other reasons...

So what can you do?

BPF


Suricata can utilize BPF (Berkeley Packet Filter) when running inspection. It allows you to select and filter the type of traffic you want Suricata to inspect.


There are three ways you can use a BPF filter with Suricata:

  • On the command line
suricata -c /etc/suricata/suricata.yaml -i eth0 -v dst port 80

  • From suricata.yaml
Under each respective runmode in suricata.yaml (afpacket,pfring,pcap) - bpf-filter: port 80 or udp

  • From a file
suricata -c /etc/suricata/suricata.yaml -i eth0 -v -F bpf.file

Inside the bpf.file you would have your BPF filter.


The examples above would filter only the traffic that has destination port 80 and pass it to Suricata for inspection.

BPF - The tricky part



It DOES make a difference when using BPF if you have VLANs in the mirrored traffic.

Please read here before you continue further - http://taosecurity.blogspot.se/2008/12/bpf-for-ip-or-vlan-traffic.html


The magic


If you want to extract all client data, TCP SYN|FIN flags (to preserve session state) and server response headers,
the BPF filter (thanks to Cooper Nelson (UCSD) who shared it on our (OISF) mailing list) would look like this:


(port 53 or 443 or 6667) or (tcp dst port 80 or (tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or tcp[((tcp[12:1] & 0xf0) >>
2):4] = 0x48545450)))



That would inspect traffic on ports 53 (DNS), 443 (HTTPS), 6667 (IRC) and 80 (HTTP).

NOTE: the filter above is for NON-VLAN traffic!

Now the same filter for VLAN present traffic would look like this below:

((ip and port 53 or 443 or 6667) or ( ip and tcp dst port 80 or (ip
and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))
or
((vlan and port 53 or 443 or 6667) or ( vlan and tcp dst port 80 or
(vlan and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0
or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))

BPF - my particular case

I did some traffic profiling on the sensor (using iptraf), and it can be summed up like this: the traffic on port 53 (DNS) is just as much as the traffic on HTTP. I was facing some tough choices...

The bpf filter that I made for this particular case was:

(
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667)
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))
or
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667)
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))
)

That would filter MIXED (both VLAN and NON VLAN) traffic on ports
  • 20/21 (FTP)
  • 25 (SMTP) 
  • 80 (HTTP) 
  • 110 (POP3)
  • 161 (SNMP)
  • 443(HTTPS)
  • 445 (Microsoft-DS Active Directory, Windows shares)
  • 587 (SMTP submission - MSA)
  • 6667 (IRC) 

and pass it to Suricata for inspection.
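As a side note, before handing a long filter like this to Suricata with -F, it can be sanity-checked with libpcap. The sketch below is a small stand-alone C program (not part of Suricata, compile with -lpcap); the shortened filter string is only an illustration:

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    /* shortened, illustrative version of the filter above */
    const char *filter =
        "(ip and port 20 or 21 or 25 or 443 or 6667) "
        "or (ip and tcp dst port 80)";
    pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);
    struct bpf_program prog;

    if (pcap_compile(p, &prog, filter, 1, PCAP_NETMASK_UNKNOWN) == -1) {
        fprintf(stderr, "bad filter: %s\n", pcap_geterr(p));
        return 1;
    }
    printf("filter compiles to %u BPF instructions\n", prog.bf_len);
    pcap_freecode(&prog);
    pcap_close(p);
    return 0;
}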

I had to drop DNS - I am not saying this is the right thing to do, but tough times call for tough measures. I had a seriously undersized server (4 CPUs @ 2.5GHz, 16GB RAM) and traffic between 3-4Gbps.

How it is actually done


Suricata
root@snif01:/home/pmanev# suricata --build-info
This is Suricata version 2.0dev (rev 896b614)
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON
SIMD support: SSE_4_1 SSE_3
Atomic intrisics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.11, linked against LibHTP v0.5.11
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         yes
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  LUA support:                             no
  libluajit:                               no
  libgeoip:                                yes
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          /usr/local
  Configuration directory (--sysconfdir):  /usr/local/etc/suricata/
  Log directory (--localstatedir) :        /usr/local/var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

In suricata .yaml

#max-pending-packets: 1024
max-pending-packets: 65534
...
# Runmode the engine should use. Please check --list-runmodes to get the available
# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned
# load balancing).
#runmode: autofp
runmode: workers
.....
.....
af-packet:
  - interface: eth2
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 4
    # Default clusterid.  AF_PACKET will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 98
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible value are:
    #  * cluster_round_robin: round robin load balancing
    #  * cluster_flow: all packets of a given flow are send to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
    cluster-type: cluster_cpu
    # In some fragmentation case, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: yes
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can set manually the ring size in number of packets by setting
    # the following value. If you are using flow cluster-type and have really network
    # intensive single-flow you could want to set the ring-size independantly of the number
    # of threads:
    ring-size: 200000
    # On busy system, this could help to set it to yes to recover from a packet drop
    # phase. This will result in some packets (at max a ring flush) being non treated.
    #use-emergency-flush: yes
    # recv buffer size, increase value could improve performance
    # buffer-size: 100000
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - kernel: use indication sent by kernel for each packet (default)
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: kernel
    # BPF filter to apply to this interface. The pcap filter syntax apply here.
    #bpf-filter: port 80 or udp
....
....  
detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 2
      toclient-dst-groups: 2
      toclient-sp-groups: 2
      toclient-dp-groups: 3
      toserver-src-groups: 2
      toserver-dst-groups: 4
      toserver-sp-groups: 2
      toserver-dp-groups: 25
  - sgh-mpm-context: auto
 ...
flow-timeouts:

  default:
    new: 5 #30
    established: 30 #300
    closed: 0
    emergency-new: 1 #10
    emergency-established: 2 #100
    emergency-closed: 0
  tcp:
    new: 5 #60
    established: 60 # 3600
    closed: 1 #30
    emergency-new: 1 # 10
    emergency-established: 5 # 300
    emergency-closed: 0 #20
  udp:
    new: 5 #30
    established: 60 # 300
    emergency-new: 5 #10
    emergency-established: 5 # 100
  icmp:
    new: 5 #30
    established: 60 # 300
    emergency-new: 5 #10
    emergency-established: 5 # 100
....
....
stream:
  memcap: 4gb
  checksum-validation: no      # reject wrong csums
  midstream: false
  prealloc-sessions: 50000
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 8gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
...
...
default-rule-path: /etc/suricata/et-config/
rule-files:
 - trojan.rules  
 - malware.rules
 - local.rules
 - activex.rules
 - attack_response.rules
 - botcc.rules
 - chat.rules
 - ciarmy.rules
 - compromised.rules
 - current_events.rules
 - dos.rules
 - dshield.rules
 - exploit.rules
 - ftp.rules
 - games.rules
 - icmp_info.rules
 - icmp.rules
 - imap.rules
 - inappropriate.rules
 - info.rules
 - misc.rules
 - mobile_malware.rules ##
 - netbios.rules
 - p2p.rules
 - policy.rules
 - pop3.rules
 - rbn-malvertisers.rules
 - rbn.rules
 - rpc.rules
 - scada.rules
 - scada_special.rules
 - scan.rules
 - shellcode.rules
 - smtp.rules
 - snmp.rules

...
....
libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 12mb
           response-body-limit: 12mb

           # inspection limits
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 32kb
           response-body-inspect-window: 4kb

           # decoding
           double-decode-path: no
           double-decode-query: no


Create the BFP file (you can put it anywhere)
touch /home/pmanev/test/bpf-filter


The  bpf-filter should look like this:


root@snif01:/var/log/suricata# cat /home/pmanev/test/bpf-filter
(
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667)
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))
or
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667)
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))
)
root@snif01:/var/log/suricata#


Start Suricata like this:
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v -F /home/pmanev/test/bpf-filter

Like this I was able to achieve inspection of 3.2-4Gbps with about 20K rules with around 1% drops.




In the suricata.log:
root@snif01:/var/log/suricata# more suricata.log
[1274] 21/6/2014 -- 19:36:35 - (suricata.c:1034) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 896b614)
[1274] 21/6/2014 -- 19:36:35 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 4
......
[1275] 21/6/2014 -- 19:36:46 - (detect.c:452) <Info> (SigLoadSignatures) -- 46 rule files processed. 20591 rules successfully loaded, 8 rules failed
[1275] 21/6/2014 -- 19:36:47 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- 20599 signatures processed. 827 are IP-only rules, 6510 are inspecting packet payload, 15650 inspect ap
plication layer, 0 are decoder event only
.....
.....
[1275] 21/6/2014 -- 19:37:17 - (runmode-af-packet.c:150) <Info> (ParseAFPConfig) -- Going to use command-line provided bpf filter '( (ip and port 20 or 21 or 22 or 25 or 110 or 161 or 44
3 or 445 or 587 or 6667)  or ( ip and tcp dst port 80 or (ip and tcp src port 80 and  (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))) or ((vl
an and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667)  or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and  (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))) ) '

.....
.....
[1275] 22/6/2014 -- 01:45:34 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 6674 chunks, more than the prealloc setting of 250
[1275] 22/6/2014 -- 01:45:34 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 825856 bytes, maximum: 16777216
[1275] 22/6/2014 -- 01:45:35 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
[1275] 22/6/2014 -- 01:45:35 - (util-device.c:190) <Notice> (LiveDeviceListClean) -- Stats for 'eth2':  pkts: 2820563520, drop: 244696588 (8.68%), invalid chksum: 0

That gave me about 9% drops... I further adjusted the filter (after realizing I could, for the moment, drop port 445 - Windows shares - from inspection).

The new filter was like so:

root@snif01:/var/log/suricata# cat /home/pmanev/test/bpf-filter
(
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 587 or 6667)
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))
or
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 587 or 6667)
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))
)
root@snif01:/var/log/suricata#
Notice - I removed port 445.

So with that filter I was able to do 0.95% drops with 20K rules:

[16494] 22/6/2014 -- 10:13:10 - (suricata.c:1034) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 896b614)
[16494] 22/6/2014 -- 10:13:10 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 4
...
...
[16495] 22/6/2014 -- 10:13:20 - (detect.c:452) <Info> (SigLoadSignatures) -- 46 rule files processed. 20591 rules successfully loaded, 8 rules failed
[16495] 22/6/2014 -- 10:13:21 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- 20599 signatures processed. 827 are IP-only rules, 6510 are inspecting packet payload, 15650 inspect application layer, 0 are decoder event only
...
...
[16495] 23/6/2014 -- 01:45:32 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 1035520 bytes, maximum: 16777216
[16495] 23/6/2014 -- 01:45:32 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
[16495] 23/6/2014 -- 01:45:32 - (util-device.c:190) <Notice> (LiveDeviceListClean) -- Stats for 'eth2':  pkts: 6550734692, drop: 62158315 (0.95%), invalid chksum: 0




So with that BPF filter we have:

Pros 


I was able to inspect a lot of traffic (4Gbps peak) with a lot of rules (20K) on undersized, minimal HW (4 CPUs, 16GB RAM), sustained with less than 1% drops

Cons

  • Not inspecting DNS
  • Making an assumption that all HTTP traffic is using port 80. (Though in my case 99.9% of the http traffic was on port 80)
  • This is an advanced BPF filter; it requires a good chunk of knowledge to understand/implement/re-edit


 Simple and efficient


In the case where you have a network or a device that generates a lot of false positives and you are sure you can disregard any traffic from that device  - you could do a filter like this:

(ip and not host 1.1.1.1 ) or (vlan and not host 1.1.1.1)

for mixed VLAN and non-VLAN traffic. If you are sure there is no VLAN traffic, you could just do this:
ip and not host 1.1.1.1

Then  you can simply start Suricata like so:
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v \(ip and not host 1.1.1.1 \) or \(vlan and not host 1.1.1.1\)

or like this respectively (to the two examples above):
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v ip and not host 1.1.1.1






by Peter Manev (noreply@blogger.com) at June 24, 2014 11:26 AM

June 14, 2014

Peter Manev

Suricata IDS/IPS - TCP segment pool size preallocation


In the default suricata.yaml stream section we have:
stream:
  memcap: 32mb
  checksum-validation: no      # reject wrong csums
  async-oneside: true
  midstream: true
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
    #raw: yes
    #chunk-prealloc: 250
    #segments:
    #  - size: 4
    #    prealloc: 256
    #  - size: 16
    #    prealloc: 512
    #  - size: 112
    #    prealloc: 512
    #  - size: 248
    #    prealloc: 512
    #  - size: 512
    #    prealloc: 512
    #  - size: 768
    #    prealloc: 1024
    #  - size: 1448
    #    prealloc: 1024
    #  - size: 65535
    #    prealloc: 128


So what are these segment preallocations for?
Let's have a look. When Suricata exits (or kill -15 PidOfSuricata) it produces a lot of useful statistics in the suricata.log file (you can enable that from the suricata.yaml and use the "-v" switch (verbose) when starting Suricata):
The example below is for exit stats.
   
tail -20 StatsByDate/suricata-2014-06-01.log
[24344] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Packets 7317661624, bytes 6132661347126
[24344] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3382528539 TCP packets
[24345] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Kernel: Packets 8049357450, dropped 352658715
[24345] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Packets 7696486934, bytes 6666577738944
[24345] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3357321803 TCP packets
[24346] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Kernel: Packets 7573051188, dropped 292897219
[24346] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Packets 7279948375, bytes 6046562324948
[24346] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3454330660 TCP packets
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 60778 segments, more than the prealloc setting of 256
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 314953 segments, more than the prealloc setting of 512
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 113739 segments, more than the prealloc setting of 512
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 17893 segments, more than the prealloc setting of 512
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 31787 segments, more than the prealloc setting of 512
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 30769 segments, more than the prealloc setting of 1024
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 89446 segments, more than the prealloc setting of 1024
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 81214 segments, more than the prealloc setting of 128
[24329] 1/6/2014 -- 01:45:53 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 20306 chunks, more than the prealloc setting of 250
[24329] 1/6/2014 -- 01:45:53 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 390144 bytes, maximum: 16777216
[24329] 1/6/2014 -- 01:45:55 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
[24329] 1/6/2014 -- 01:45:55 - (util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3':  pkts: 124068935209, drop: 5245430626 (4.23%), invalid chksum: 0


Notice all the "TCP segment pool" messages. These are the actual TCP segment pool reassembly stats for the period of time that Suricata was running. We can adjust suricata.yaml accordingly (compare with the default settings above):
   
stream:
  memcap: 14gb
  checksum-validation: no      # reject wrong csums
  midstream: false
  prealloc-sessions: 375000
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
    raw: yes
    chunk-prealloc: 20556
    segments:
      - size: 4
        prealloc: 61034
      - size: 16
        prealloc: 315465
      - size: 112
        prealloc: 114251
      - size: 248
        prealloc: 18405
      - size: 512
        prealloc: 30769
      - size: 768
        prealloc: 31793
      - size: 1448
        prealloc: 90470
      - size: 65535
        prealloc: 81342
   


   
The total RAM (reserved) consumption for these preallocations (from the stream.reassembly.memcap value) would be:

4*61034 + 16*315465 + 112*114251 + 248*18405 + 512*30769 + 768*31793 + 1448*90470 + 65535*81342 
= 5524571410 bytes
= 5.14 GB of RAM

So we could preallocate the tcp segments and take the Suricata tuning even a step further and improve performance as well.
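As a sanity check, the arithmetic above can be reproduced with a few lines of C; the segment sizes and prealloc counts are simply the values from the yaml snippet above:

#include <stdio.h>

int main(void)
{
    /* {segment size, prealloc count} pairs from the stream.reassembly
     * section above */
    unsigned long pools[][2] = {
        {4, 61034}, {16, 315465}, {112, 114251}, {248, 18405},
        {512, 30769}, {768, 31793}, {1448, 90470}, {65535, 81342}
    };
    unsigned long total = 0;

    for (int i = 0; i < 8; i++)
        total += pools[i][0] * pools[i][1];

    /* prints 5524571410 bytes, i.e. the ~5.14 GB calculated above */
    printf("%lu bytes = %.2f GB\n", total,
           total / (1024.0 * 1024 * 1024));
    return 0;
}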

So now, when you start Suricata with the "-v" switch, with the specific set up described above, you should see something like this in your suricata.log:
...
...
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 4, prealloc 61034
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 16, prealloc 315465
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 112, prealloc 114251
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 248, prealloc 18405
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 512, prealloc 30769
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 768, prealloc 31793
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 1448, prealloc 90470
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 65535, prealloc 81342
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:461) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 20556
...
...

NOTE:
Those 5.14 GB of RAM in the example here will be preallocated (taken) from the stream.reassembly.memcap value. In other words, it will not consume an additional 5.14 GB of RAM.

So be careful when setting up preallocation, in order not to preallocate more than what you have.
In my 10Gbps suricata.yaml config I had:

stream:
  memcap: 14gb
  checksum-validation: no      # reject wrong csums
  midstream: false
  prealloc-sessions: 375000
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream


What this helps with is that it lowers CPU usage/contention for TCP segment allocation during reassembly - the memory is already preallocated and Suricata just uses it instead of creating it every time it needs it. It also helps minimize the initial drops during startup.

Highly adaptable and  flexible.








by Peter Manev (noreply@blogger.com) at June 14, 2014 02:49 AM

June 10, 2014

Peter Manev

Coalesce parameters and RX ring size


Please read through this very useful article :
http://netoptimizer.blogspot.dk/2014/06/pktgen-for-network-overload-testing.html

Coalesce parameters and RX ring size can have an impact on your IDS.
To see the coalesce parameters of the currently sniffing interface:

root@suricata:/var/log/suricata# ethtool -c eth3
Coalesce parameters for eth3:
Adaptive RX: off  TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0

rx-usecs: 1000
rx-frames: 0
rx-usecs-irq: 0
rx-frames-irq: 0

tx-usecs: 0
tx-frames: 0
tx-usecs-irq: 0
tx-frames-irq: 0

rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0

rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0

To change (try with different values) the coalesce parameter:

root@suricata:/var/log/suricata# ethtool -C eth3 rx-usecs 1
root@suricata:/var/log/suricata# ethtool -c eth3
Coalesce parameters for eth3:
Adaptive RX: off  TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0

rx-usecs: 1
rx-frames: 0
rx-usecs-irq: 0
rx-frames-irq: 0

tx-usecs: 0
tx-frames: 0
tx-usecs-irq: 0
tx-frames-irq: 0

rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0

rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0

The RX ring parameters on the network card play a role too:


root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:            4096
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             512


To increase that to the pre-set maximum RX:

root@suricata:~# ethtool -G eth3 rx 4096

To confirm:

root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             512

The suggested approach - the one that worked best in my particular set up - for a Suricata IDS/IPS deployment is to set the coalesce parameter (rx-usecs) to 1 and increase the RX ring size to the maximum available for that particular interface/card.

It is suggested that you try a few different scenarios with regards to the coalesce parameters in order to find the best combination that suits your needs.





by Peter Manev (noreply@blogger.com) at June 10, 2014 10:55 AM

June 07, 2014

Peter Manev

Suricata - Counting enabled rules in the rules directory



One liner:
grep -c ^alert /etc/suricata/rules/*.rules

root@LTS-64-1:~/Downloads/oisf# grep -c ^alert /etc/suricata/rules/*.rules
/etc/suricata/rules/botcc.portgrouped.rules:69
/etc/suricata/rules/botcc.rules:108
/etc/suricata/rules/ciarmy.rules:34
/etc/suricata/rules/compromised.rules:44
/etc/suricata/rules/decoder-events.rules:83
/etc/suricata/rules/dns-events.rules:8
/etc/suricata/rules/drop.rules:26
/etc/suricata/rules/dshield.rules:1
/etc/suricata/rules/emerging-activex.rules:218
/etc/suricata/rules/emerging-attack_response.rules:52
/etc/suricata/rules/emerging-chat.rules:80
/etc/suricata/rules/emerging-current_events.rules:1736
/etc/suricata/rules/emerging-deleted.rules:0
/etc/suricata/rules/emerging-dns.rules:56
/etc/suricata/rules/emerging-dos.rules:37
/etc/suricata/rules/emerging-exploit.rules:218
/etc/suricata/rules/emerging-ftp.rules:60
/etc/suricata/rules/emerging-games.rules:73
/etc/suricata/rules/emerging-icmp_info.rules:14
/etc/suricata/rules/emerging-icmp.rules:0
/etc/suricata/rules/emerging-imap.rules:17
/etc/suricata/rules/emerging-inappropriate.rules:1
/etc/suricata/rules/emerging-info.rules:232
/etc/suricata/rules/emerging-malware.rules:909
/etc/suricata/rules/emerging-misc.rules:26
/etc/suricata/rules/emerging-mobile_malware.rules:98
/etc/suricata/rules/emerging-netbios.rules:421
/etc/suricata/rules/emerging-p2p.rules:117
/etc/suricata/rules/emerging-policy.rules:307
/etc/suricata/rules/emerging-pop3.rules:9
/etc/suricata/rules/emerging-rpc.rules:83
/etc/suricata/rules/emerging-scada.rules:14
/etc/suricata/rules/emerging-scan.rules:196
/etc/suricata/rules/emerging-shellcode.rules:71
/etc/suricata/rules/emerging-smtp.rules:12
/etc/suricata/rules/emerging-snmp.rules:24
/etc/suricata/rules/emerging-sql.rules:191
/etc/suricata/rules/emerging-telnet.rules:5
/etc/suricata/rules/emerging-tftp.rules:13
/etc/suricata/rules/emerging-trojan.rules:2305
/etc/suricata/rules/emerging-user_agents.rules:61
/etc/suricata/rules/emerging-voip.rules:17
/etc/suricata/rules/emerging-web_client.rules:164
/etc/suricata/rules/emerging-web_server.rules:418
/etc/suricata/rules/emerging-web_specific_apps.rules:5406
/etc/suricata/rules/emerging-worm.rules:14
/etc/suricata/rules/files.rules:0
/etc/suricata/rules/http-events.rules:19
/etc/suricata/rules/rbn-malvertisers.rules:0
/etc/suricata/rules/rbn.rules:0
/etc/suricata/rules/smtp-events.rules:6
/etc/suricata/rules/stream-events.rules:45
/etc/suricata/rules/tls-events.rules:10
/etc/suricata/rules/tor.rules:590
root@LTS-64-1:~/Downloads/oisf#



Total rules enabled:
root@LTS-64-1:~/Downloads/oisf# grep ^alert /etc/suricata/rules/*.rules |  wc -l
14718
root@LTS-64-1:~/Downloads/oisf#

by Peter Manev (noreply@blogger.com) at June 07, 2014 05:28 AM

June 04, 2014

Peter Manev

24 hr full log run with Suricata IDPS on a 10Gbps ISP line



This is going to be quick :)

  • 9K rules (standard ET-Pro, not changed or edited)
  • Suricata 2.0.1 with AF_PACKET, 16 threads
  • number of hosts in HOME_NET - /21 /19 /19 /18 = about 34K hosts 
  • 24 hour run eve.json with all outputs enabled.



I used that command (it took a while on a 54 GB log file :) )  - as suggested by @Packet Inspector (Twitter):
cat eve.json-20140604 | perl -ne 'print "$1\n" if /\"event_type\":\"(.*?)\"/' | sort | uniq -c

root@suricata:/var/log/suricata/tmp# cat eve.json-20140604 | perl -ne 'print "$1\n" if /\"event_type\":\"(.*?)\"/' | sort | uniq -c
 384426 alert
219594091 dns
1384214 fileinfo
3460078 http
  10304 ssh
 280184 tls
root@suricata:/var/log/suricata/tmp# ls -lh
total 54G
-rw-r----- 1 root root 54G Jun  4 16:49 eve.json-20140604
root@suricata:/var/log/suricata/tmp#

 So basically we got (descending order) :
  • 219 594 091 - DNS
  •     3 460 078 - HTTP
  •     1 384 214 - FILEINFO
  •        384 426 - ALERTS
  •        280 184 - TLS
  •          10 304 - SSH
about 2600 logs per second on that particular day for that particular test run - yesterday.
Tomorrow .... who knows :)
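For reference, the ~2600 figure is just the sum of the six event counts above divided by the number of seconds in a day; here is a small C sketch of that arithmetic (values copied from the output above):

#include <stdio.h>

int main(void)
{
    /* event counts from the 24h eve.json run above */
    unsigned long counts[] = {219594091, 3460078, 1384214,
                              384426, 280184, 10304};
    unsigned long total = 0;

    for (int i = 0; i < 6; i++)
        total += counts[i];

    /* prints: total: 225113297 events, average: 2605 events/sec */
    printf("total: %lu events, average: %lu events/sec\n",
           total, total / (24 * 3600));
    return 0;
}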

With these 8 rule files enabled:
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - md5.rules
 - local.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules




by Peter Manev (noreply@blogger.com) at June 04, 2014 12:51 PM

May 31, 2014

Peter Manev

Logs per second on eve.json - the good and the bad news on a 10Gbps IDPS line inspection



I found this one liner which gives the amount of logs per second logged in eve.json
tail -f  eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
I take no credit for it  - I got it from commandlinefu


tail -f  eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
1
193
3301
3402
3862
3411
3719
3467
3522
3127
3354
^C

Bearing in mind this is Saturday lunch time... 3-3.5K logs per second translates to a minimum of 4-4.5K logs per second on a working day.
I had "only"  these logs enabled in suricata.yaml in the eve log section - dns,http,alert and ssh on a 10Gbps Suricata 2.0.1 IDS sensor:

  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        #- tls:
            #extended: yes     # enable this for extended logging information
        #- files:
            #force-magic: yes   # force logging magic on all logged files
            #force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
      append: yes

If you enable "files" and "tls" it will probably increase to about 5-6K logs per second (maybe even more, depending on the type of traffic) with that set up.

The good news:

eve.json logs are in standard JSON format (JavaScript Object Notation), so there are A LOT of log analysis solutions - open source, free and/or commercial - that can digest and run analysis on JSON logs.

The bad news:

How many log analysis solutions can "really" handle 5K logs per second -
  • indexing, 
  • query, 
  • search, 
  • report generation, 
  • log correlation, 
  • filter searches by key fields,
  • nice graphs - "eye candy" for the management and/or customer , 
all that while being fast?
(and like that on at least 20 days of data from a 10Gbps IDPS Suricata sensor)

...aka 18 mil per hour ...or 432 mil log records per day

Another perspective -> 54-70GB of logs a day...


Conclusion

Deploying and tuning Suricata IDS/IPS is  the first important step. Then you need to  handle all the data that comes out of the sensor.
You should very carefully consider your goals, requirements and design and do Proof of Concept and test runs before you end up in a production situation in which you can't handle what you asked for :)




by Peter Manev (noreply@blogger.com) at May 31, 2014 06:32 AM

May 28, 2014

suricata-ids.org

Suricata 2.0.1 Windows Installer Available

The Windows MSI installer of the Suricata 2.0.1 release is now available.

Download it here: Suricata-2.0.1-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at May 28, 2014 04:05 PM

Suricata Ubuntu PPA updated to 2.0.1

We have updated the official Ubuntu PPA to Suricata 2.0.1. To use this PPA read our docs here.

To install Suricata through this PPA, enter:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

If you’re already using this PPA, updating is as simple as:
sudo apt-get update && sudo apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at May 28, 2014 03:58 PM

Fedora 20 gets Suricata 2.0

Fedora maintainer Steve Grubb updated the Suricata package in Fedora 20 to 2.0.

If you are running Fedora, updating is as simple as:
yum update

Installing is as simple as:
yum install suricata

The Fedora package has IPS mode through NFQUEUE enabled.

by fleurixx at May 28, 2014 03:51 PM

May 25, 2014

Peter Manev

Playing with memory consumption, algorithms and af_packet ring-size in Suricata IDPS




How selecting the correct memory algorithm can make the difference between 40%  and 4% drops of packets on 10Gbps traffic line inspection.

In this article I describe some specifics through which I was able to tune Suricata in IDS mode to get only 4.04% drops on a 10Gbps mirror port (ISP traffic) with 9K rules.

At the bottom of the post you will find the relevant configuration with suricata.log. It is highly inadvisable to just copy/paste, since every set up is unique. You should try and test what best suits your needs.

Set up


  • Suricata (from git, but basically 2.0.1) with AF_PACKET, 16 threads
  • 16 (2.7 GhZ) cores with Hyper-threading enabled
  • 64G RAM
  • Ubuntu LTS Precise (with upgraded kernel 3.14) -> Linux suricata 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  • Intel 82599EB 10-Gigabit SFI/SFP+ with CPU affinity (as described here)
  • 9K rules (standard ET-Pro, not changed or edited)
  • number of hosts in HOME_NET - /21 /19 /19 /18 = about 34K hosts
  • MTU 1522 on the IDS/listening  interface

Bummer  ... why is the MTU mentioned here  ... for a good reason!!
Bear with me for a sec and you will see why.

Tuning Stage I - af_packet and ring-size


Let's start with af_packet's section in suricata.yaml -> the ring-size variable.
With it you can define how many packets can be buffered on a per-thread basis.

Example:
ring-size: 100000
would mean that Suricata will create a buffer of 100K packets per thread.

In other words if you have (in your suricata.yaml's af-packet section)
threads: 16
ring-size: 100000
that would mean 16 x 100K buffers = 1.6 mil packets in total.

So what does this mean for memory consumption?
Well here is where the MTU comes into play.

MTU size * ring_size * 16 threads

or

1522 * 100 000 * 16 = 2435200000 bytes = roughly 2.3 GB
So with that set up, Suricata will reserve about 2.3 GB of RAM right away at start up.
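The same back-of-the-envelope calculation in C, using the MTU, ring-size and thread count from this example (the values are illustrative):

#include <stdio.h>

int main(void)
{
    unsigned long mtu = 1522;         /* MTU of the sniffing interface */
    unsigned long ring_size = 100000; /* af-packet ring-size (per thread) */
    unsigned long threads = 16;       /* af-packet threads */
    unsigned long bytes = mtu * ring_size * threads;

    /* prints 2435200000 bytes, i.e. the ~2.3 GB mentioned above */
    printf("%lu bytes = %.2f GB\n", bytes,
           bytes / (1024.0 * 1024 * 1024));
    return 0;
}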

FYI - with the current set up we have about 1.5 mil incoming pps (packets per second).





Tuning Stage II - memory algorithm (mpm-algo)


The mpm-algo variable in suricata.yaml selects which pattern matching algorithm Suricata will use, while the detect-engine settings control the distribution of mpm contexts over signature (rule) groups. This is very important, with a huge performance impact, depending on how you combine the following:

sgh-mpm-context: single
sgh-mpm-context: full
profile: custom
profile: low
profile: medium
profile: high

More on this you can find HERE,
where profile: custom means you can specify the group values yourself.

The algorithm selected through this article is:
mpm-algo: ac

Below you will find some test cases for memory consumption at Suricata start up time.
(Just the values in the particular Cases are changed, the rest of suricata.yaml config
is the same and not touched or changed during these test cases)

Case 1

24GB RAM at start up

detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: single

af_packet ring-size: 1000000
16 threads


Notice: 1 mil ring size with sgh-mpm-context: single, that gave me 19% drops:
(util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3':  pkts: 4997993133, drop: 949059741 (18.99%), invalid chksum: 0

Case 2

10GB RAM at start up

detect-engine:
  - profile: low
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 50000
16 threads


Case 3

26GB RAM at start up

detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 50000
16 threads


Case 4

38GB RAM at start up

detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 500000
16 threads

Notice: 500K ring size as compared to 50K in Case 3 and Case 2


The best config that worked for me was Case 4 !!
4.04% drops

NOTE: depending on the number of rules, sgh-mpm-context: full can push Suricata's start-up time to a few minutes...


I also tested a different algorithm -> ac-gfbs
detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

....
mpm-algo: ac-gfbs

with af-packet 200K  ring size but that gave me 45% drops...
 Stats for 'eth3':  pkts: 496407325, drop: 227155539 (45.76%), invalid chksum: 0


Bottom line:
testing/trying and selecting the correct mpm-algo and ring-size buffers can have a huge performance impact on your configuration!

Below you will find the specifics of the suricata.yaml configuration alongside the output and evidence of suricata.log


Configuration


suricata --build-info
This is Suricata version 2.0dev (rev 7e8f80b)
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON
SIMD support: SSE_4_2 SSE_4_1 SSE_3
Atomic intrisics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.11, linked against LibHTP v0.5.11
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         yes
  NFQueue support:                         no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                no
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     yes

Generic build parameters:
  Installation prefix (--prefix):          /usr/local
  Configuration directory (--sysconfdir):  /usr/local/etc/suricata/
  Log directory (--localstatedir) :        /usr/local/var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no


In suricata .yaml:


# If you are using the CUDA pattern matcher (mpm-algo: ac-cuda), different rules
# apply. In that case try something like 60000 or more. This is because the CUDA
# pattern matcher buffers and scans as many packets as possible in parallel.
#max-pending-packets: 1024
max-pending-packets: 65534

# Runmode the engine should use. Please check --list-runmodes to get the available
# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned
# load balancing).
#runmode: autofp
runmode: workers

...
...

# af-packet support
# Set threads to > 1 to use PACKET_FANOUT support
af-packet:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16
    # Default clusterid.  AF_PACKET will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 98
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible value are:
    #  * cluster_round_robin: round robin load balancing
    #  * cluster_flow: all packets of a given flow are send to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
    cluster-type: cluster_cpu
    # In some fragmentation case, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: no
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can set manually the ring size in number of packets by setting
    # the following value. If you are using flow cluster-type and have really network
    # intensive single-flow you could want to set the ring-size independantly of the number
    # of threads:
    ring-size: 500000
    # On busy system, this could help to set it to yes to recover from a packet drop
    # phase. This will result in some packets (at max a ring flush) being non treated.
    #use-emergency-flush: yes
    # recv buffer size, increase value could improve performance
    # buffer-size: 100000
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - kernel: use indication sent by kernel for each packet (default)
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: kernel
    # BPF filter to apply to this interface. The pcap filter syntax applies here.
    #bpf-filter: port 80 or udp
    # You can use the following variables to activate AF_PACKET tap or IPS mode.
    # If copy-mode is set to ips or tap, the traffic coming to the current
    # interface will be copied to the copy-iface interface. If 'tap' is set, the
    # copy is complete. If 'ips' is set, the packet matching a 'drop' action
    # will not be copied.
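
As a rough sanity check, the per-thread AF_PACKET ring that Suricata ends up allocating can be compared with the ring-size value above. The AFPComputeRingParams lines in the run log further down report block_size=32768, frame_size=1584 and block_nr=25001; a minimal shell sketch with those numbers:

echo $((32768 / 1584))   # frames per block (integer division) -> 20
echo $((25001 * 20))     # total frames in the ring -> 500020, just above the configured 500000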

...
...   
   
detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 1500
   
...

....
mpm-algo: ac
....
....
   
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - md5.rules
 - local.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules

And the suricata.log from a 24-hour run inspecting a 10Gbps line with 9K rules
(at the bottom you will find the final stats:
Stats for 'eth3':  pkts: 125740002178, drop: 5075326318 (4.04%)):

cat StatsByDate/suricata-2014-05-25.log
[26428] 24/5/2014 -- 01:46:01 - (suricata.c:1003) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 7e8f80b)
[26428] 24/5/2014 -- 01:46:01 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp-mem.c:59) <Info> (HTPParseMemcap) -- HTTP memcap: 6442450944
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'request-body-minimal-inspect-size' set to 34116 and 'request-body-inspect-window' set to 3973 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'response-body-minimal-inspect-size' set to 32229 and 'response-body-inspect-window' set to 4205 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'request-body-minimal-inspect-size' set to 32040 and 'request-body-inspect-window' set to 4118 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'response-body-minimal-inspect-size' set to 32694 and 'response-body-inspect-window' set to 4148 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:324) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:336) <Info> (DNSUDPConfigure) -- DNS per flow memcap (state-memcap): 524288
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:348) <Info> (DNSUDPConfigure) -- DNS global memcap: 4294967296
[26428] 24/5/2014 -- 01:46:01 - (util-ioctl.c:99) <Info> (GetIfaceMTU) -- Found an MTU of 1500 for 'eth3'
[26428] 24/5/2014 -- 01:46:01 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[26428] 24/5/2014 -- 01:46:02 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[26429] 24/5/2014 -- 01:46:02 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 228320456
[26429] 24/5/2014 -- 01:46:02 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[26429] 24/5/2014 -- 01:46:02 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[26429] 24/5/2014 -- 01:46:02 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[26429] 24/5/2014 -- 01:46:02 - (flow.c:391) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[26429] 24/5/2014 -- 01:46:02 - (flow.c:415) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[26429] 24/5/2014 -- 01:46:02 - (flow.c:417) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
[26429] 24/5/2014 -- 01:46:02 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[26429] 24/5/2014 -- 01:46:02 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[26429] 24/5/2014 -- 01:46:02 - (suricata.c:1835) <Info> (SetupDelayedDetect) -- Delayed detect disabled
[26429] 24/5/2014 -- 01:46:04 - (detect-filemd5.c:275) <Info> (DetectFileMd5Parse) -- MD5 hash size 2143616 bytes
[26429] 24/5/2014 -- 01:46:05 - (detect.c:452) <Info> (SigLoadSignatures) -- 8 rule files processed. 9055 rules successfully loaded, 0 rules failed
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- 9055 signatures processed. 1 are IP-only rules, 2299 are inspecting packet payload, 7541 inspect application layer, 0 are decoder event only
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2594) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: preprocessing rules... complete
[26429] 24/5/2014 -- 01:46:05 - (detect.c:3217) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[26429] 24/5/2014 -- 01:48:35 - (detect.c:3859) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[26429] 24/5/2014 -- 01:48:35 - (util-threshold-config.c:1202) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[26429] 24/5/2014 -- 01:48:35 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[26429] 24/5/2014 -- 01:48:35 - (util-logopenfile.c:209) <Info> (SCConfLogOpenGeneric) -- eve-log output device (regular) initialized: eve.json
[26429] 24/5/2014 -- 01:48:35 - (output-json.c:471) <Info> (OutputJsonInitCtx) -- returning output_ctx 0x5b418d90
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'alert'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'http'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'dns'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'ssh'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "management-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "receive-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "decode-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "stream-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "detect-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "verdict-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "reject-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "output-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'medium'
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:198) <Info> (ParseAFPConfig) -- Enabling mmaped capture on iface eth3
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:266) <Info> (ParseAFPConfig) -- Using cpu cluster mode for AF_PACKET (iface eth3)
[26429] 24/5/2014 -- 01:48:35 - (util-runmodes.c:558) <Info> (RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)
[26431] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 0
[26431] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth31" Module to cpu/core 0, thread id 26431
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26432] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 1
[26432] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth32" Module to cpu/core 1, thread id 26432
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26433] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 2
[26433] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth33" Module to cpu/core 2, thread id 26433
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26434] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 3
[26434] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth34" Module to cpu/core 3, thread id 26434
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26435] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 4
[26435] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth35" Module to cpu/core 4, thread id 26435
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26436] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 5
[26436] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth36" Module to cpu/core 5, thread id 26436
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26437] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 6
[26437] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth37" Module to cpu/core 6, thread id 26437
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26438] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 7
[26438] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth38" Module to cpu/core 7, thread id 26438
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26439] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 8
[26439] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth39" Module to cpu/core 8, thread id 26439
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26440] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 9
[26440] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth310" Module to cpu/core 9, thread id 26440
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26441] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 10
[26441] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth311" Module to cpu/core 10, thread id 26441
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26442] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 11
[26442] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth312" Module to cpu/core 11, thread id 26442
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26443] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 12
[26443] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth313" Module to cpu/core 12, thread id 26443
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26444] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 13
[26444] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth314" Module to cpu/core 13, thread id 26444
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26445] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 14
[26445] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth315" Module to cpu/core 14, thread id 26445
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26446] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 15
[26446] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth316" Module to cpu/core 15, thread id 26446
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:527) <Info> (RunModeIdsAFPWorkers) -- RunModeIdsAFPWorkers initialised
[26447] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "FlowManagerThread" thread , thread id 26447
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:371) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 375000 (per thread)
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:387) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:399) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:416) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:438) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:451) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:469) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 32212254720
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:487) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:570) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2585
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:572) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2680
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:585) <Info> (StreamTcpInitConfig) -- stream.reassembly.raw: enabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 4, prealloc 256
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 16, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 112, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 248, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 512, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 768, prealloc 1024
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 1448, prealloc 1024
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 65535, prealloc 128
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:461) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 250
[26448] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfWakeupThread" thread , thread id 26448
[26449] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfMgmtThread" thread , thread id 26449
[26429] 24/5/2014 -- 01:48:35 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 6
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth31 using socket 6
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 7
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth32 using socket 7
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 8
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth33 using socket 8
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 9
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth34 using socket 9
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 10
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth35 using socket 10
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 11
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth36 using socket 11
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26437] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 12
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth37 using socket 12
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 13
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth38 using socket 13
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 14
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth39 using socket 14
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 15
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth310 using socket 15
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 16
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth311 using socket 16
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 17
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth312 using socket 17
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 18
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth313 using socket 18
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26444] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 19
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth314 using socket 19
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 20
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth315 using socket 20
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 21
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:452) <Info> (AFPPeersListReachedInc) -- All AFP capture threads are running.
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth316 using socket 21
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth314
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth315
[26437] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth37
[26432] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth32
[26440] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth310
[26434] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth34
[26435] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth35
[26443] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth313
[26431] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth31
[26441] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth311
[26433] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth33
[26442] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth312
[26438] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth38
[26436] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth36
[26439] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth39
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth316
[26429] 25/5/2014 -- 01:45:29 - (suricata.c:2300) <Notice> (main) -- Signal Received.  Stopping engine.
[26447] 25/5/2014 -- 01:45:30 - (flow-manager.c:561) <Info> (FlowManagerThread) -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
[26429] 25/5/2014 -- 01:45:30 - (suricata.c:1025) <Info> (SCPrintElapsedTime) -- time elapsed 86215.055s
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Kernel: Packets 8091169139, dropped 548918377
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Packets 7541009393, bytes 5856264226024
[26431] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3174701772 TCP packets
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Kernel: Packets 7523006674, dropped 129092719
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Packets 7392869856, bytes 6039480366879
[26432] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3273049553 TCP packets
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Kernel: Packets 7857365876, dropped 457724034
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Packets 7398849607, bytes 6186600745188
[26433] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3254753683 TCP packets
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Kernel: Packets 7939368989, dropped 328011859
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Packets 7610498359, bytes 6023159311914
[26434] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3294782895 TCP packets
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Kernel: Packets 7886105626, dropped 424755524
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Packets 7460672617, bytes 6304951058805
[26435] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3301812001 TCP packets
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Kernel: Packets 7807382993, dropped 258291463
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Packets 7548467033, bytes 6347986611584
[26436] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3359138126 TCP packets
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Kernel: Packets 7898330279, dropped 305037112
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Packets 7592601391, bytes 6136634057356
[26437] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3263120334 TCP packets
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Kernel: Packets 7653871283, dropped 193628126
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Packets 7459608346, bytes 6164536552610
[26438] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337037621 TCP packets
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Kernel: Packets 7717771534, dropped 302582507
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Packets 7414991895, bytes 6068675614996
[26439] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3256006501 TCP packets
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Kernel: Packets 7955692240, dropped 339489700
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Packets 7616019954, bytes 6170760218068
[26440] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3309626387 TCP packets
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Kernel: Packets 8004841803, dropped 416027860
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Packets 7588633565, bytes 6152477758719
[26441] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3229276967 TCP packets
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Kernel: Packets 7908991181, dropped 282658592
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Packets 7626056429, bytes 6374830613882
[26442] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3289082310 TCP packets
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Kernel: Packets 7823655146, dropped 277468333
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Packets 7546046278, bytes 6174538196484
[26443] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3264661076 TCP packets
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Kernel: Packets 7661949338, dropped 161041160
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Packets 7500367073, bytes 6191365130344
[26444] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3299756326 TCP packets
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Kernel: Packets 8203393412, dropped 272996993
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Packets 7930265587, bytes 6802539594416
[26445] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3258257071 TCP packets
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Kernel: Packets 7807106665, dropped 377601959
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Packets 7428994197, bytes 6140231305309
[26446] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337023147 TCP packets
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 11396 segments, more than the prealloc setting of 256
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 17178 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 45436 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 12049 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 26386 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 23371 segments, more than the prealloc setting of 1024
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 67781 segments, more than the prealloc setting of 1024
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 67333 segments, more than the prealloc setting of 128
[26429] 25/5/2014 -- 01:45:31 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 13327 chunks, more than the prealloc setting of 250
[26429] 25/5/2014 -- 01:45:31 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 390144 bytes, maximum: 16777216
[26429] 25/5/2014 -- 01:45:44 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
[26429] 25/5/2014 -- 01:45:44 - (util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3':  pkts: 125740002178, drop: 5075326318 (4.04%), invalid chksum: 0
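
For a long run like this, the per-thread capture counters and the final summary can be pulled straight out of the log, and the reported drop percentage can be reproduced from the raw counters. A minimal sketch with standard grep/awk, using the log path from above:

grep "Kernel: Packets" StatsByDate/suricata-2014-05-25.log
grep "Stats for" StatsByDate/suricata-2014-05-25.log
awk 'BEGIN { printf "%.2f%%\n", 5075326318 / 125740002178 * 100 }'   # -> 4.04%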






by Peter Manev (noreply@blogger.com) at May 25, 2014 08:34 AM

May 21, 2014

Open Information Security Foundation

Suricata 2.0.1 Available!

The OISF development team is proud to announce Suricata 2.0.1. This release brings TLS Heartbleed detection and fixes a number of issues in the 2.0 release.  There were no changes since 2.0.1rc1.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.1.tar.gz

Notable changes

  • OpenSSL Heartbleed detection. Thanks to Pierre Chifflier and Will Metcalf
  • Fixed Unix Socket runmode
  • Fixed AF_PACKET IPS support

All closed tickets

  • Feature #1157: Always create pid file if --pidfile command line option is provided
  • Feature #1173: tls: OpenSSL heartbleed detection
  • Bug #978: clean up app layer parser thread local storage
  • Bug #1064: Lack of Thread Deinitialization For Decoder Modules
  • Bug #1101: Segmentation in AppLayerParserGetTxCnt
  • Bug #1136: negated app-layer-protocol FP on multi-TX flows
  • Bug #1141: dns response parsing issue
  • Bug #1142: dns tcp toclient protocol detection
  • Bug #1143: tls protocol detection in case of tls-alert
  • Bug #1144: icmpv6: unknown type events for MLD_* types
  • Bug #1145: ipv6: support PAD1 in DST/HOP extension hdr
  • Bug #1146: tls: event on ‘new session ticket’ in handshake
  • Bug #1159: Possible memory exhaustion when an invalid bpf-filter is used with AF_PACKET
  • Bug #1160: Pcaps submitted via Unix Socket do not finish processing in Suricata 2
  • Bug #1161: eve: src and dst mixed up in some cases
  • Bug #1162: proto-detect: make sure probing parsers for all registered ports are run
  • Bug #1163: HTP Segfault
  • Bug #1165: af_packet – one thread consistently not working
  • Bug #1170: rohash: CID 1197756: Bad bit shift operation (BAD_SHIFT)
  • Bug #1176: AF_PACKET IPS mode is broken in 2.0
  • Bug #1177: eve log do not show action ‘dropped’ just ‘allowed’
  • Bug #1180: Possible problem in stream tracking

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Pierre Chifflier
  • Will Metcalf
  • Duarte Silva
  • Brad Roether
  • Christophe Vandeplas
  • Jason Jones
  • Jorgen Bohnsdalen
  • Fábio Depin
  • Gines Lopez
  • Ivan Ristic
  • Coverity

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please notice the list we have included of known items we are working on.  See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at May 21, 2014 09:59 AM

suricata-ids.org

Suricata 2.0.1 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0.1. This release brings TLS Heartbleed detection and fixes a number of issues in the 2.0 release.  There were no changes since 2.0.1rc1.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.1.tar.gz

Notable changes

  • OpenSSL Heartbleed detection. Thanks to Pierre Chifflier and Will Metcalf
  • Fixed Unix Socket runmode
  • Fixed AF_PACKET IPS support

All closed tickets

  • Feature #1157: Always create pid file if --pidfile command line option is provided
  • Feature #1173: tls: OpenSSL heartbleed detection
  • Bug #978: clean up app layer parser thread local storage
  • Bug #1064: Lack of Thread Deinitialization For Decoder Modules
  • Bug #1101: Segmentation in AppLayerParserGetTxCnt
  • Bug #1136: negated app-layer-protocol FP on multi-TX flows
  • Bug #1141: dns response parsing issue
  • Bug #1142: dns tcp toclient protocol detection
  • Bug #1143: tls protocol detection in case of tls-alert
  • Bug #1144: icmpv6: unknown type events for MLD_* types
  • Bug #1145: ipv6: support PAD1 in DST/HOP extension hdr
  • Bug #1146: tls: event on ‘new session ticket’ in handshake
  • Bug #1159: Possible memory exhaustion when an invalid bpf-filter is used with AF_PACKET
  • Bug #1160: Pcaps submitted via Unix Socket do not finish processing in Suricata 2
  • Bug #1161: eve: src and dst mixed up in some cases
  • Bug #1162: proto-detect: make sure probing parsers for all registered ports are run
  • Bug #1163: HTP Segfault
  • Bug #1165: af_packet – one thread consistently not working
  • Bug #1170: rohash: CID 1197756: Bad bit shift operation (BAD_SHIFT)
  • Bug #1176: AF_PACKET IPS mode is broken in 2.0
  • Bug #1177: eve log do not show action ‘dropped’ just ‘allowed’
  • Bug #1180: Possible problem in stream tracking

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Pierre Chifflier
  • Will Metcalf
  • Duarte Silva
  • Brad Roether
  • Christophe Vandeplas
  • Jason Jones
  • Jorgen Bohnsdalen
  • Fábio Depin
  • Gines Lopez
  • Ivan Ristic
  • Coverity

Known issues & missing features

If you encounter issues, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal. With this in mind, please notice the list we have included of known items we are working on.  See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at May 21, 2014 09:55 AM

May 12, 2014

suricata-ids.org

Suricata 2.0.1rc1 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0.1rc1, the first (and hopefully only) release candidate for Suricata 2.0.1. This brings TLS Heartbleed detection and fixes a number of issues in the 2.0 release.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.1rc1.tar.gz

Notable changes

  • OpenSSL Heartbleed detection. Thanks to Pierre Chifflier and Will Metcalf
  • Fixed Unix Socket runmode
  • Fixed AF_PACKET IPS support

All closed tickets

  • Feature #1157: Always create pid file if --pidfile command line option is provided
  • Feature #1173: tls: OpenSSL heartbleed detection
  • Bug #978: clean up app layer parser thread local storage
  • Bug #1064: Lack of Thread Deinitialization For Decoder Modules
  • Bug #1101: Segmentation in AppLayerParserGetTxCnt
  • Bug #1136: negated app-layer-protocol FP on multi-TX flows
  • Bug #1141: dns response parsing issue
  • Bug #1142: dns tcp toclient protocol detection
  • Bug #1143: tls protocol detection in case of tls-alert
  • Bug #1144: icmpv6: unknown type events for MLD_* types
  • Bug #1145: ipv6: support PAD1 in DST/HOP extension hdr
  • Bug #1146: tls: event on ‘new session ticket’ in handshake
  • Bug #1159: Possible memory exhaustion when an invalid bpf-filter is used with AF_PACKET
  • Bug #1160: Pcaps submitted via Unix Socket do not finish processing in Suricata 2
  • Bug #1161: eve: src and dst mixed up in some cases
  • Bug #1162: proto-detect: make sure probing parsers for all registered ports are run
  • Bug #1163: HTP Segfault
  • Bug #1165: af_packet – one thread consistently not working
  • Bug #1170: rohash: CID 1197756: Bad bit shift operation (BAD_SHIFT)
  • Bug #1176: AF_PACKET IPS mode is broken in 2.0
  • Bug #1177: eve log do not show action ‘dropped’ just ‘allowed’
  • Bug #1180: Possible problem in stream tracking

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Pierre Chifflier
  • Will Metcalf
  • Duarte Silva
  • Brad Roether
  • Christophe Vandeplas
  • Jason Jones
  • Jorgen Bohnsdalen
  • Fábio Depin
  • Gines Lopez
  • Ivan Ristic
  • Coverity

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at May 12, 2014 02:00 PM

Open Information Security Foundation

Suricata 2.0.1rc1 Available!

The OISF development team is proud to announce Suricata 2.0.1rc1, the first (and hopefully only) release candidate for Suricata 2.0.1. This brings TLS Heartbleed detection and fixes a number of issues in the 2.0 release.

Download

Get the new release here: http://www.openinfosecfoundation.org/download/suricata-2.0.1rc1.tar.gz

Notable changes

  • OpenSSL Heartbleed detection. Thanks to Pierre Chifflier and Will Metcalf
  • Fixed Unix Socket runmode
  • Fixed AF_PACKET IPS support

All closed tickets

  • Feature #1157: Always create pid file if --pidfile command line option is provided
  • Feature #1173: tls: OpenSSL heartbleed detection
  • Bug #978: clean up app layer parser thread local storage
  • Bug #1064: Lack of Thread Deinitialization For Decoder Modules
  • Bug #1101: Segmentation in AppLayerParserGetTxCnt
  • Bug #1136: negated app-layer-protocol FP on multi-TX flows
  • Bug #1141: dns response parsing issue
  • Bug #1142: dns tcp toclient protocol detection
  • Bug #1143: tls protocol detection in case of tls-alert
  • Bug #1144: icmpv6: unknown type events for MLD_* types
  • Bug #1145: ipv6: support PAD1 in DST/HOP extension hdr
  • Bug #1146: tls: event on ‘new session ticket’ in handshake
  • Bug #1159: Possible memory exhaustion when an invalid bpf-filter is used with AF_PACKET
  • Bug #1160: Pcaps submitted via Unix Socket do not finish processing in Suricata 2
  • Bug #1161: eve: src and dst mixed up in some cases
  • Bug #1162: proto-detect: make sure probing parsers for all registered ports are run
  • Bug #1163: HTP Segfault
  • Bug #1165: af_packet – one thread consistently not working
  • Bug #1170: rohash: CID 1197756: Bad bit shift operation (BAD_SHIFT)
  • Bug #1176: AF_PACKET IPS mode is broken in 2.0
  • Bug #1177: eve log do not show action ‘dropped’ just ‘allowed’
  • Bug #1180: Possible problem in stream tracking

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Ken Steele — Tilera
  • Jason Ish — Endace/Emulex
  • Tom Decanio — nPulse
  • Pierre Chifflier
  • Will Metcalf
  • Duarte Silva
  • Brad Roether
  • Christophe Vandeplas
  • Jason Jones
  • Jorgen Bohnsdalen
  • Fábio Depin
  • Gines Lopez
  • Ivan Ristic
  • Coverity

Known issues & missing features

This is a “release candidate”-quality release so the stability should be good although unexpected corner cases might happen. If you encounter one, please let us know! As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by Victor Julien (postmaster@inliniac.net) at May 12, 2014 01:52 PM

May 04, 2014

Peter Manev

Elasticsearch - err failed to connect to master - when changing/using a different IP address



It is a general rule of thumb to first check your
/var/log/elasticsearch/elasticsearch.log
and
/var/log/logstash/logstash.log
when you experience any form of issue when using Kibana.

I stumbled upon this when I changed the IP/network of the interface of my test virtual machine holding an ELK (Elasticsearch/Logstash/Kibana) installation used for log analysis of Suricata IDPS.

I managed to solve the issue based on those two sources:
https://github.com/elasticsearch/elasticsearch/issues/4194
http://www.concept47.com/austin_web_developer_blog/errors/elasticsearch-error-failed-to-connect-to-master/

The new IP is 192.168.1.166 and the old one was 10.0.2.15
(notice the errors in the logs below: Elasticsearch was still trying to connect to the old address):

root@debian64:~/Work/# more /var/log/elasticsearch/elasticsearch.log
[2014-05-04 07:17:24,960][INFO ][node                     ] [Jamal Afari] version[1.1.0], pid[7178], build[2181e11/2014-03-25T15:59:51Z]
[2014-05-04 07:17:24,960][INFO ][node                     ] [Jamal Afari] initializing ...
[2014-05-04 07:17:24,964][INFO ][plugins                  ] [Jamal Afari] loaded [], sites []
[2014-05-04 07:17:27,828][INFO ][node                     ] [Jamal Afari] initialized
[2014-05-04 07:17:27,828][INFO ][node                     ] [Jamal Afari] starting ...
[2014-05-04 07:17:27,959][INFO ][transport                ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.166:9300]}
[2014-05-04 07:17:57,977][WARN ][discovery                ] [Jamal Afari] waited for 30s and no initial state was set by the discovery
[2014-05-04 07:17:57,978][INFO ][discovery                ] [Jamal Afari] elasticsearch/F9HgSmYJQcS6bxdgdeurAA
[2014-05-04 07:17:57,986][INFO ][http                     ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.166:9200]}
[2014-05-04 07:17:58,017][INFO ][node                     ] [Jamal Afari] started
[2014-05-04 07:18:01,026][WARN ][discovery.zen            ] [Jamal Afari] failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]], retrying...
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)
    at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:701)
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
.......
.......
.......
[2014-05-04 07:37:05,783][WARN ][discovery.zen            ] [Vivisector] failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]], retrying...
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)
    at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:701)
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    ... 3 more
   
   
That was giving me all sorts of weird errors and failed queries in Kibana. The root of the problem was that I had changed the IP address on the ELK server.

The solution is simple.
Find the Discovery section in /etc/elasticsearch/elasticsearch.yml
and change this line from:
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false

to

# 1. Disable multicast discovery (enabled by default):
#
 discovery.zen.ping.multicast.enabled: false

Only remove the " # " in front of "discovery.zen.ping.multicast.enabled: false".
Save and restart the service.
service elasticsearch restart

Then everything went back to normal.
In /var/log/elasticsearch/elasticsearch.log:
   
[2014-05-04 07:37:07,936][INFO ][node                     ] [Vivisector] stopping ...
[2014-05-04 07:37:07,970][INFO ][node                     ] [Vivisector] stopped
[2014-05-04 07:37:07,971][INFO ][node                     ] [Vivisector] closing ...
[2014-05-04 07:37:07,979][INFO ][node                     ] [Vivisector] closed
[2014-05-04 07:37:09,685][INFO ][node                     ] [Vibraxas] version[1.1.0], pid[5291], build[2181e11/2014-03-25T15:59:51Z]
[2014-05-04 07:37:09,686][INFO ][node                     ] [Vibraxas] initializing ...
[2014-05-04 07:37:09,689][INFO ][plugins                  ] [Vibraxas] loaded [], sites []
[2014-05-04 07:37:12,597][INFO ][node                     ] [Vibraxas] initialized
[2014-05-04 07:37:12,597][INFO ][node                     ] [Vibraxas] starting ...
[2014-05-04 07:37:12,751][INFO ][transport                ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.166:9300]}
[2014-05-04 07:37:15,777][INFO ][cluster.service          ] [Vibraxas] new_master [Vibraxas][esQHE1EtTuWVK9MVNiQ5jA][debian64][inet[/192.168.1.166:9300]], reason: zen-disco-join (elected_as_master)
[2014-05-04 07:37:15,806][INFO ][discovery                ] [Vibraxas] elasticsearch/esQHE1EtTuWVK9MVNiQ5jA
[2014-05-04 07:37:15,877][INFO ][http                     ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.166:9200]}
[2014-05-04 07:37:16,893][INFO ][gateway                  ] [Vibraxas] recovered [16] indices into cluster_state
[2014-05-04 07:37:16,898][INFO ][node                     ] [Vibraxas] started
[2014-05-04 07:37:17,547][INFO ][cluster.service          ] [Vibraxas] added {[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false}])
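
If you want an extra confirmation that the node is reachable on its new address, a quick curl check against the HTTP API works too (a minimal sketch; adjust the IP and the default port 9200 to your own setup):

curl -XGET 'http://192.168.1.166:9200/'
curl -XGET 'http://192.168.1.166:9200/_cluster/health?pretty=true'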

It is also highly recommended that you read the whole Discovery section in your elasticsearch.yml:
############################# Discovery #############################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

.....


by Peter Manev (noreply@blogger.com) at May 04, 2014 07:54 AM

April 23, 2014

suricata-ids.org

Eric Leblond speaking about Suricata 2.0 at HES

200xNxEric_Leblond-199x300Suricata developer Eric Leblond will present Suricata and the new features available in the 2.0 version at Hackito Ergo Sum, a security conference which take place in Paris, France. Entitled “Suricata 2.0, Netfilter and the PRC”, the talk will focus on features such as the new full JSON output or TLS protocol handling. The talk is scheduled at 10:30am April 26th. More information: http://2014.hackitoergosum.org/speakers/#leblond

by inliniac at April 23, 2014 11:54 AM

April 09, 2014

Victor Julien

Detecting OpenSSL Heartbleed with Suricata

The OpenSSL heartbleed vulnerability is a pretty serious weakness in OpenSSL that can lead to information disclosure, in some cases even to private key leaking. Please see this post here http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html for more info.

This is a case where an IDS is able to detect the vuln, even though we’re talking about TLS.

LUA

I’ve written a quick and dirty LUA script to detect it:

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED malformed heartbeat record"; \
    flow:established,to_server; dsize:>7; \
    content:"|18 03|"; depth:2; lua:tls-heartbleed.lua; \
    classtype:misc-attack; sid:3000001; rev:1;)

The script:

function init (args)
    local needs = {}
    needs["payload"] = tostring(true)
    return needs
end

function match(args)
    local p = args['payload']
    if p == nil then
        --print ("no payload")
        return 0
    end
 
    if #p < 8 then
        --print ("payload too small")
        -- payload too short to contain a full heartbeat header, nothing to check
        return 0
    end
    if (p:byte(1) ~= 24) then
        --print ("not a heartbeat")
        return 0
    end
 
    -- message length
    len = 256 * p:byte(4) + p:byte(5)
    --print (len)
 
    -- heartbeat length
    hb_len = 256 * p:byte(7) + p:byte(8)

    -- 1+2+16
    if (1+2+16) >= len  then
        print ("invalid length heartbeat")
        return 1
    end

    -- 1 + 2 + payload + 16
    if (1 + 2 + hb_len + 16) > len then
        print ("heartbleed attack detected: " .. (1 + 2 + hb_len + 16) .. " > " .. len)
        return 1
    end
    --print ("no problems")
    return 0
end
return 0
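
To try the rule and script out, one option is to run Suricata against a capture of heartbleed traffic and check fast.log (a hedged sketch; the rules file, pcap name and output directory are placeholders, and tls-heartbleed.lua must be placed where Suricata can find it, typically next to the rules file):

mkdir -p /tmp/heartbleed-test
suricata -c /etc/suricata/suricata.yaml -S heartbleed.rules -r heartbleed-test.pcap -l /tmp/heartbleed-test
grep -i heartbleed /tmp/heartbleed-test/fast.log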

Regular rules

Inspired by the FOX-IT rules from http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/, here are some non-LUA rules:

Detect a large response.

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat suspiciuous large record"; \
    flow:established,to_client; dsize:>7; \
    content:"|18 03|"; depth:2; \
    byte_test:2,>,200,3,big; classtype:misc-attack; \
    sid:3000002; rev:1;)

Detect a large response following a large request (flow bit is either set by the LUA rule above or by the rule that follows):

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat attack likely succesful"; \
    flowbits:isset,TLS.heartbleed; \
    flow:established,to_client; dsize:>7; \
    content:"|18 03|"; depth:2; byte_test:2,>,200,3,big; \
    classtype:misc-attack; \
    sid:3000003; rev:1;)

Detect a large request, set flowbit:

alert tls any any -> any any ( \
    msg:"TLS HEARTBLEED heartbeat suspiciuous large request"; \
    flow:established,to_server; content:"|18 03|"; depth:2; \
    content:"|01|"; distance:3; within:1; \
    byte_test:2,>,200,0,big,relative; \
    flowbits:set,TLS.heartbleed; \
    classtype:misc-attack; sid:3000004; rev:1;)

Suricata TLS parser

Pierre Chifflier has written detection logic for the Suricata TLS parser. This is in our git master and will be part of 2.0.1. If you run this code, enable these rules:

alert tls any any -> any any ( \
    msg:"SURICATA TLS overflow heartbeat encountered, possible exploit attempt (heartbleed)"; \
    flow:established; app-layer-event:tls.overflow_heartbeat_message; \
    flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; \
    reference:cve,2014-0160; sid:2230012; rev:1;)
alert tls any any -> any any ( \
    msg:"SURICATA TLS invalid heartbeat encountered, possible exploit attempt (heartbleed)"; \
    flow:established; app-layer-event:tls.invalid_heartbeat_message; \
    flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; \
    reference:cve,2014-0160; sid:2230013; rev:1;)

Ticket: https://redmine.openinfosecfoundation.org/issues/1173
Pull Request: https://github.com/inliniac/suricata/pull/924

Other Resources

– My fellow country (wo)men of Fox-IT have Snort rules here: http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/ These rules detect suspiciously large heartbeat response sizes
– Oisf-users has a thread: https://lists.openinfosecfoundation.org/pipermail/oisf-users/2014-April/003603.html
– Emerging Threats has a thread: https://lists.emergingthreats.net/pipermail/emerging-sigs/2014-April/024049.html
– Sourcefire has made rules available as well http://vrt-blog.snort.org/2014/04/heartbleed-memory-disclosure-upgrade.html These should work on Suricata as well.

Update 1:
– Pierre Chifflier correctly noted that hb_len doesn’t contain the ‘type’ and ‘size’ fields (3 bytes total), while ‘len’ does. So I updated the check.
Update 2:
– Yonathan Klijnsma pointed me at the difference between the request and the response: https://twitter.com/ydklijnsma/status/453514484074962944. I’ve updated the rule to only run the script against requests.
Update 3:
– Better rule formatting
– Add non-LUA rules as well
Update 4:
– ET is going to add these rules: https://lists.emergingthreats.net/pipermail/emerging-sigs/2014-April/024056.html
Update 5:
– Updated the LUA script after feedback from Ivan Ristic. The padding issue was ignored.
Update 6:
– Added Pierre Chifflier’s work on detecting this in the Suricata TLS parser.
– Added reference to Sourcefire VRT rules


by inliniac at April 09, 2014 12:03 PM

March 29, 2014

Victor Julien

Video: Suricata 2.0 installation and quick setup

I’ve made a video on installing Suricata 2.0 on Debian Wheezy. The video covers the installation, quick setup and ethtool configuration, and shows a simple way to test the IDS.

It’s the first time I’ve made such a video. Feedback is welcome.


by inliniac at March 29, 2014 10:01 PM

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter IV - Logstash / Kibana / Elasticsearch, Part One - Updated


Introduction 

This is an updated article of the original post - http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source.html

This article covers the new (at the time of this writing) 1.4.0 Logstash release.

This is Chapter IV of a series of 4 articles aiming to give a general guideline on how to deploy the Open Source Suricata IDPS on high speed networks (10Gbps) in IDS mode using AF_PACKET, PF_RING or DNA and Logstash / Kibana / Elasticsearch.

This chapter consists of two parts:
Chapter IV Part One - installation and set up of logstash.
Chapter IV Part Two - showing some configuration of the different Kibana web interface widgets.

The end result should be a rich variety of widgets for analyzing the Suricata IDPS logs, something like:






This chapter describes a quick and easy set up of Logstash / Kibana / Elasticsearch.
The set up described in this chapter is not intended for a huge deployment, but rather serves as a proof of concept in a working environment, as pictured below:






We have two Suricata IDS deployed - IDS1 and IDS2
  • IDS2 uses logstash-forwarder (formerly lumberjack) to securely forward (SSL encrypted) its eve.json logs (configured in suricata.yaml) to IDS1, the main Logstash/Kibana deployment.
  • IDS1 has its own logging (eve.json as well) that is also digested by Logstash.

In other words, the IDS1 and IDS2 logs are both ingested into the Logstash platform deployed on IDS1 in the picture.

Prerequisites

Both IDS1 and IDS2 should be set up and tuned with Suricata IDPS. This article will not cover that. If you have not done it you could start HERE.

Make sure you have installed Suricata with JSON output enabled. The following two packages must be present on your system prior to installation/compilation:
root@LTS-64-1:~# apt-cache search libjansson
libjansson-dev - C library for encoding, decoding and manipulating JSON data (dev)
libjansson4 - C library for encoding, decoding and manipulating JSON data
If they are not present on the system, install them:
apt-get install libjansson4 libjansson-dev

In both IDS1 and IDS2 you should have in your suricata.yaml:
  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
This tutorial uses /var/log/suricata as a default logging directory.

You can do a few dry runs to confirm log generation on both systems.
Once you have confirmed general operation of the Suricata IDPS on both systems, you can continue as described below.
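
A quick sanity check during such a dry run is to watch eve.json grow while Suricata is running (a minimal sketch; the config path and capture interface eth0 are assumptions, and the log path is the one used in this tutorial):

suricata -c /etc/suricata/suricata.yaml -i eth0 -D
tail -f /var/log/suricata/eve.json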

Installation

IDS2

For the logstash-forwarder we need Go installed.

cd /opt
apt-get install hg-fast-export
hg clone -u release https://code.google.com/p/go
cd go/src
./all.bash


If everything goes ok you should see at the end:
ALL TESTS PASSED

Update your $PATH variable; make sure it has:
PATH=$PATH:/opt/go/bin
export PATH

root@debian64:~# nano  ~/.bashrc


edit the file (.bashrc), add at the bottom:

PATH=$PATH:/opt/go/bin
export PATH

 then:

root@debian64:~# source ~/.bashrc
root@debian64:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/go/bin


Install logstash-forwarder:
cd /opt
git clone git://github.com/elasticsearch/logstash-forwarder.git
cd logstash-forwarder
go build

Build a debian package:
apt-get install ruby ruby-dev
gem install fpm
make deb
That will produce a Debian package in the same directory (something like):
logstash-forwarder_0.3.1_amd64.deb

Install the Debian package:
root@debian64:/opt# dpkg -i logstash-forwarder_0.3.1_amd64.deb

NOTE: You can copy this Debian package and install it (dependency free) on other machines/servers. Once you have the deb package, you can install it on any other server the same way; there is no need to rebuild everything (Go and Ruby) again.

Create SSL certificates that will be used to securely encrypt and transport the logs:
cd /opt
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logfor.key -out logfor.crt

Copy on IDS2:
logfor.key in /etc/ssl/private/
logfor.crt in /etc/ssl/certs/

Copy the same files to IDS1:
logfor.key in /etc/logstash/pki/
logfor.crt in /etc/logstash/pki/
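
For example, the copy from IDS2 to IDS1 could be done with scp (a sketch; 192.168.1.158 is the IDS1 address used later in this tutorial, and /etc/logstash/pki/ must exist on IDS1 first):

mkdir -p /etc/logstash/pki/                                                   # run on IDS1
scp /opt/logfor.key /opt/logfor.crt root@192.168.1.158:/etc/logstash/pki/    # run from IDS2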


Now you can try to start/restart/stop the logstash-forwarder service:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[FAIL] logstash-forwarder is not running ... failed!
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt#
Good to go.

Create on IDS2 your logstash-forwarder config:
touch /etc/logstash-forwarder
Make sure the file looks like this (in this tutorial - copy/paste):

{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logfor.crt",
    "ssl key": "/etc/ssl/private/logfor.key",
    "ssl ca": "/etc/ssl/certs/logfor.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/suricata/eve.json" ],
      "codec": { "type": "json" }
    }
  ]
}
Some more info:
Usage of ./logstash-forwarder:
  -config="": The config file to load
  -cpuprofile="": write cpu profile to file
  -from-beginning=false: Read new files from the beginning, instead of the end
  -idle-flush-time=5s: Maximum time to wait for a full spool before flushing anyway
  -log-to-syslog=false: Log to syslog instead of stdout
  -spool-size=1024: Maximum number of events to spool before a flush is forced.

  These can be adjusted in:
  /etc/init.d/logstash-forwarder
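
If the service misbehaves, you can also run the forwarder in the foreground with the options listed above to see what it is doing (a minimal sketch; the config path is the file created above):

cd /opt/logstash-forwarder
./logstash-forwarder -config /etc/logstash-forwarder -log-to-syslog=false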


This is as far as the set up on IDS2 goes....

IDS1 - indexer

NOTE: Each Logstash version has a corresponding Elasticsearch version that it must be used with!
http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash


Packages needed:
apt-get install apache2 openjdk-7-jdk openjdk-7-jre-headless

Downloads:
http://www.elasticsearch.org/overview/elkdownloads/

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.deb

wget https://download.elasticsearch.org/logstash/logstash/packages/debian/logstash_1.4.0-1-c82dc09_all.deb

wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz

mkdir /var/log/logstash/
Installation:
dpkg -i elasticsearch-1.1.0.deb
dpkg -i logstash_1.4.0-1-c82dc09_all.deb
tar -C /var/www/ -xzf kibana-3.0.0.tar.gz
update-rc.d elasticsearch defaults 95 10
update-rc.d logstash defaults

elasticsearch configs are located here (nothing needs to be done):
ls /etc/default/elasticsearch
/etc/default/elasticsearch
ls /etc/elasticsearch/
elasticsearch.yml  logging.yml
the elasticsearch data is located here:
/var/lib/elasticsearch/

You should have your logstash config file in /etc/default/logstash:

Make sure it has the config and log directories correct:

###############################
# Default settings for logstash
###############################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
#LS_HOME=/var/lib/logstash

# Arguments to pass to logstash agent
#LS_OPTS=""

# Arguments to pass to java
#LS_HEAP_SIZE="500m"
#LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"

# pidfiles aren't used for upstart; this is for sysv users.
#LS_PIDFILE=/var/run/logstash.pid

# user id to be invoked as; for upstart: edit /etc/init/logstash.conf
#LS_USER=logstash

# logstash logging
LS_LOG_FILE=/var/log/logstash/logstash.log
#LS_USE_GC_LOGGING="true"

# logstash configuration directory
LS_CONF_DIR=/etc/logstash/conf.d

# Open file limit; cannot be overridden in upstart
#LS_OPEN_FILES=16384

# Nice level
#LS_NICE=19


GeoIPLite is shipped by default with Logstash!
http://logstash.net/docs/1.4.0/filters/geoip

and it is located here (on the system after installation):
/opt/logstash/vendor/geoip/GeoLiteCity.dat

Create your logstash.conf

touch logstash.conf

make sure it looks like this:

input {
  lumberjack {
    port => 5043
    type => "IDS2-logs"
    codec => json
    ssl_certificate => "/etc/logstash/pki/logfor.crt"
    ssl_key => "/etc/logstash/pki/logfor.key"
  }
 
  file {
    path => ["/var/log/suricata/eve.json"]
    codec =>   json
    type => "IDS1-logs"
  }
 
}

filter {
  if [src_ip]  {
    geoip {
      source => "src_ip"
      target => "geoip"
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  elasticsearch {
    host => localhost
  }
}

The /etc/logstash/pki/logfor.crt  and /etc/logstash/pki/logfor.key are the same ones we created earlier on IDS2 and copied here to IDS1.

The purpose of type => "IDS1-logs" and type => "IDS2-logs" above is so that later, when looking at the Kibana widgets, you can differentiate between the logs if needed:



Then copy the file we just created into place:
cp logstash.conf /etc/logstash/conf.d/
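
Before starting the service you may want to syntax-check the configuration; recent Logstash 1.x versions support a --configtest flag (a hedged sketch; the binary path assumes the layout of the Debian package installed above):

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/logstash.conf --configtest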


Kibana:

We already installed Kibana during the first step :). All that is left to do now is restart apache:

service apache2 restart


 Rolling it out


On IDS1 and IDS2 - start the Suricata IDPS and generate some logs.
On IDS2:
/etc/init.d/logstash-forwarder start

On IDS1:
service elasticsearch start
service logstash start
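
On IDS1 you can then verify that the lumberjack input is listening and that Elasticsearch answers (a minimal sketch; port 5043 and localhost:9200 follow from the configuration above):

netstat -tlnp | grep 5043
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'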
You can check whether the logstash-forwarder (on IDS2) is working properly like so:
 tail -f /var/log/syslog



Go to your browser and navigate to (in this case IDS1)
http://192.168.1.158/kibana-3.0.0
NOTE: This is plain http (as this is just a simple tutorial); you should configure it to use https and a reverse proxy with authentication...

The Kibana web interface should come up.

That is it. From here on it is up to you to configure the web interface with your own widgets.

Chapter IV Part Two will follow with detail on that subject.
However, something like this is easily achievable with a few clicks in under 5 minutes:





Troubleshooting:

You should keep an eye on /var/log/logstash/logstash.log - any troubles should be visible there.

A GREAT article explaining elastic search cluster status (if you deploy a proper elasticsearch cluster 2 and more nodes)
http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html

ERR in logstash-indexer.out - too many open files
http://www.elasticsearch.org/tutorials/too-many-open-files/

Set ulimit parameters on Ubuntu (this is in case you need to increase the number of inodes (files) available on a system, "df -ih"):
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/

This is an advanced topic - Cluster status and settings commands:
 curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

 curl -XGET 'http://localhost:9200/_status?pretty=true'

 curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'


Very useful links:

Logstash 1.4.0 GA released:
http://www.elasticsearch.org/blog/logstash-1-4-0-ga-unleashed/

A MUST READ (explaining the usage of ".raw" in terms panels, so that the terms are not broken by the space delimiter):
http://www.elasticsearch.org/blog/logstash-1-3-1-released/

Article explaining how to set up a 2 node cluster:
http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html

Installing Logstash Central Server (using rsyslog):
https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server

ElasticSearch cluster setup in 2 minutes:
http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html






by Peter Manev (noreply@blogger.com) at March 29, 2014 07:43 AM

March 27, 2014

suricata-ids.org

Suricata Ubuntu PPA updated to 2.0

We have updated the official Ubuntu PPA to Suricata 2.0. To use this PPA read our docs here.

To install Suricata through this PPA, enter:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

If you’re already using this PPA, updating is as simple as:
sudo apt-get update && sudo apt-get upgrade

The PPA Ubuntu packages have IPS mode through NFQUEUE enabled.

by fleurixx at March 27, 2014 02:18 PM

March 26, 2014

Peter Manev

Suricata (and the grand slam of) Open Source IDPS - Chapter IV - Logstash / Kibana / Elasticsearch, Part One


Introduction 


This article covers old installation instructions for Logstash 1.3.3 and prior. There is an UPDATED article - http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source_26.html that covers the new (at the time of this writing) 1.4.0 Logstash release.


This is Chapter IV of a series of 4 articles aiming to give a general guideline on how to deploy the Open Source Suricata IDPS on high speed networks (10Gbps) in IDS mode using AF_PACKET, PF_RING or DNA and Logstash / Kibana / Elasticsearch.

This chapter consists of two parts:
Chapter IV Part One - installation and set up of logstash.
Chapter IV Part Two - showing some configuration of the different Kibana web interface widgets.

The end result should be a rich variety of widgets for analyzing the Suricata IDPS logs, something like:






This chapter describes a quick and easy set up of Logstash / Kibana / Elasticsearch.
The set up described in this chapter is not intended for a huge deployment, but rather serves as a proof of concept in a working environment, as pictured below:






We have two Suricata IDS deployed - IDS1 and IDS2
  • IDS2 uses logstash-forwarder (formerly lumberjack) to securely forward (SSL encrypted) its eve.json logs (configured in suricata.yaml) to IDS1, the main Logstash/Kibana deployment.
  • IDS1 has its own logging (eve.json as well) that is also digested by Logstash.

In other words, the IDS1 and IDS2 logs are both ingested into the Logstash platform deployed on IDS1 in the picture.

Prerequisites

Both IDS1 and IDS2 should be set up and tuned with Suricata IDPS. This article will not cover that. If you have not done it you could start HERE.

Make sure you have installed Suricata with JSON output enabled. The following two packages must be present on your system prior to installation/compilation:
root@LTS-64-1:~# apt-cache search libjansson
libjansson-dev - C library for encoding, decoding and manipulating JSON data (dev)
libjansson4 - C library for encoding, decoding and manipulating JSON data
If they are not present on the system, install them:
apt-get install libjansson4 libjansson-dev

In both IDS1 and IDS2 you should have in your suricata.yaml:
  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
This tutorial uses /var/log/suricata as a default logging directory.

You can do a few dry runs to confirm log generation on both systems.
Once you have confirmed general operation of the Suricata IDPS on both systems, you can continue as described below.

Installation

IDS2

For the logstash-forwarder we need Go installed.

cd /opt
apt-get install hg-fast-export
hg clone -u release https://code.google.com/p/go
cd go/src
./all.bash


If everything goes ok you should see at the end:
ALL TESTS PASSED

Update your $PATH variable; make sure it has:
PATH=$PATH:/opt/go/bin
export PATH

root@debian64:~# nano  ~/.bashrc


edit the file (.bashrc), add at the bottom:

PATH=$PATH:/opt/go/bin
export PATH

 then:

root@debian64:~# source ~/.bashrc
root@debian64:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/go/bin


Install logstash-forwarder:
cd /opt
git clone git://github.com/elasticsearch/logstash-forwarder.git
cd logstash-forwarder
go build

Build a debian package:
apt-get install ruby ruby-dev
gem install fpm
make deb
That will produce a Debian package in the same directory (something like):
logstash-forwarder_0.3.1_amd64.deb

Install the Debian package:
root@debian64:/opt# dpkg -i logstash-forwarder_0.3.1_amd64.deb

NOTE: You can copy this Debian package and install it (dependency free) on other machines/servers. Once you have the deb package, you can install it on any other server the same way; there is no need to rebuild everything (Go and Ruby) again.

Create SSL certificates that will be used to securely encrypt and transport the logs:
cd /opt
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

Copy to BOTH IDS1 and IDS2:
logstash-forwarder.key in /etc/ssl/private/
logstash-forwarder.crt in /etc/ssl/certs/

Now you can try to start/restart/stop the logstash-forwarder service:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[FAIL] logstash-forwarder is not running ... failed!
root@debian64:/opt# /etc/init.d/logstash-forwarder start
root@debian64:/opt# /etc/init.d/logstash-forwarder status
[ ok ] logstash-forwarder is running.
root@debian64:/opt# /etc/init.d/logstash-forwarder stop
root@debian64:/opt#
Good to go.

Create on IDS2 your logstash-forwarder config:
touch /etc/logstash-forwarder
Make sure the file looks like this (in this tutorial - copy/paste):

{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logstash-forwarder.crt",
    "ssl key": "/etc/ssl/private/logstash-forwarder.key",
    "ssl ca": "/etc/ssl/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/suricata/eve.json" ],
      "codec": { "type": "json" }
    }
  ]
}

This is as far as the set up on IDS2 goes....

IDS1 - indexer

Download Logstash (change or create directory names to whichever suits you best):
cd /root/Work/tmp/Logstash
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.3-flatjar.jar

Download the GeoIP lite data needed for our geoip location:
wget -N http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz

Create your logstash config file (the startup script below expects it in /etc/init/):
touch /etc/init/logstash.conf

Make sure it looks like this (change directory names accordingly):
input {
  file {
    path => "/var/log/suricata/eve.json"
    codec =>   json
    # This format tells logstash to expect 'logstash' json events from the file.
    #format => json_event
  }
 
  lumberjack {
  port => 5043
  type => "logs"
  codec =>   json
  ssl_certificate => "/etc/ssl/certs/logstash-forwarder.crt"
  ssl_key => "/etc/ssl/private/logstash-forwarder.key"
  }
}


output {
  stdout { codec => rubydebug }
  elasticsearch { embedded => true }
}

#geoip part
filter {
  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
      database => "/root/Work/tmp/Logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}


Create a startup script (an upstart job, so it goes under /etc/init/):
touch /etc/init/logstash-startup.conf

Make sure it looks like this (change directories accordingly):
# logstash - indexer instance
#

description     "logstash indexer instance using ports 9292 9200 9300 9301"

start on runlevel [345]
stop on runlevel [!345]

#respawn
#respawn limit 5 30
#limit nofile 65550 65550
expect fork

script
  test -d /var/log/logstash || mkdir /var/log/logstash
  chdir /root/Work/Logstash/
  exec sudo java -jar /root/Work/tmp/Logstash/logstash-1.3.3-flatjar.jar agent -f /etc/init/logstash.conf --log /var/log/logstash/logstash-indexer.out -- web &
end script


Then:
initctl reload-configuration   
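
Before rolling things out you can check that upstart has registered the job (a minimal sketch; the job name follows from the file name used above):

initctl list | grep logstash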

 Rolling it out


On IDS1 and IDS2 - start the Suricata IDPS and generate some logs.
On IDS1:
service logstash-startup start

On IDS2:
root@debian64:/opt# /etc/init.d/logstash-forwarder start
You can check if it is working properly like so: tail -f /var/log/syslog



Go to your browser and navigate to (in this case IDS1)
http://192.168.1.158:9292
NOTE: This is plain http (as this is just a simple tutorial); you should configure it to use https

The Kibana web interface should come up.

That is it. From here on it is up to you to configure the web interface with your own widgets.

Chapter IV Part Two will follow with detail on that subject.
However, something like this is easily achievable with a few clicks in under 5 minutes:





Troubleshooting:

You should keep an eye on /var/log/logstash/logstash-indexer.out - any troubles should be visible there.

A GREAT article explaining elastic search cluster status (if you deploy a proper elasticsearch cluster 2 and more nodes)
http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html

ERR in logstash-indexer.out - too many open files
http://www.elasticsearch.org/tutorials/too-many-open-files/

Set ulimit parameters on Ubuntu (this is in case you need to increase the number of inodes (files) available on a system, "df -ih"):
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/

This is an advanced topic - Cluster status and settings commands:
 curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

 curl -XGET 'http://localhost:9200/_status?pretty=true'

 curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'



Very useful links:

A MUST READ (explaining the usage of ".raw" in terms panels, so that the terms are not broken by the space delimiter):
http://www.elasticsearch.org/blog/logstash-1-3-1-released/

Article explaining how to set up a 2 node cluster:
http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html

Installing Logstash Central Server (using rsyslog):
https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server

ElasticSearch cluster setup in 2 minutes:
http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html






by Peter Manev (noreply@blogger.com) at March 26, 2014 02:43 PM

March 25, 2014

Victor Julien

Suricata 2.0 and beyond

Today I finally released Suricata 2.0. The 2.0 branch opened in December 2012. In the little over a year that its development lasted, we have closed 183 tickets. We made 1174 commits, with the following stats:

582 files changed, 94782 insertions(+), 63243 deletions(-)

So, a significant update! In total, 17 different people made commits. I’m really happy with how much code and features were contributed. When starting Suricata this was what I really hoped for, and it seems to be working!

Eve

The feature I’m most excited about is ‘Eve’. It’s the nickname of a new logging output module ‘Extendible Event Format’. It’s an all JSON event stream that is very easy to parse using 3rd party tools. The heavy lifting has been done by Tom Decanio. Combined with Logstash, Elasticsearch and Kibana, this allows for really easy graphical dashboard creation. This is a nice addition to the existing tools which are generally more alert centered.

kibana300 kibana300map kibana-suri

Splunk support is easy as well, as Eric Leblond has shown:

regit-Screenshot-from-2014-03-05-231712

Looking forward

While doing releases is important and somewhat nice too, the developer in me is always glad when they are over. Leading up to a release there is a slow down of development, when most time is spent on fixing release critical bugs and doing some polishing. This slow down is a necessary evil, but I’m glad when we can start merging bigger changes again.

In the short term, I’m shooting for a fairly quick 2.0.1 release. There are some known issues that will be addressed in it.

More interesting from a development perspective is the opening of the 2.1 branch, which I’ll likely do in a few weeks. There are a number of features in progress for 2.1. I’m working on speeding up pcap recording, which is currently quite inefficient, and, more interestingly, on Lua output scripting. A preview of this work is available here, with some example scripts here.

Others are working on nice things as well: improving protocol support for detection and logging, nflog and netmap support, taxii/stix integration, extending our TLS support and more.

I’m hoping the 2.1 cycle will be shorter than the last, but we’ll see how it goes :)


by inliniac at March 25, 2014 03:11 PM

suricata-ids.org

Suricata 2.0 Available!

Photo by Eric Leblond

The OISF development team is proud to announce Suricata 2.0. This release is a major improvement over the previous releases with regard to performance, scalability and accuracy. Also, a number of great features have been added.

The biggest new features of this release are the addition of “Eve”, our all JSON output for events: alerts, HTTP, DNS, SSH, TLS and (extracted) files; much improved VLAN handling; a detectionless ‘NSM’ runmode; much improved CUDA performance.

The Eve log allows for easy 3rd party integration. It has been created with Logstash in mind specifically and we have a quick setup guide here: Logstash_Kibana_and_Suricata_JSON_output

kibana300 kibana300map

Download

Get the new release here: https://www.openinfosecfoundation.org/download/suricata-2.0.tar.gz

Notable new features, improvements and changes

  • Eve log, all JSON event output for alerts, HTTP, DNS, SSH, TLS and files. Written by Tom Decanio of nPulse Technologies
  • NSM runmode, where detection engine is disabled. Development supported by nPulse Technologies
  • Various scalability improvements, clean ups and fixes by Ken Steele of Tilera
  • Add --set commandline option to override any YAML option, by Jason Ish of Emulex
  • Several fixes and improvements of AF_PACKET and PF_RING
  • ICMPv6 handling improvements by Jason Ish of Emulex
  • Alerting over PCIe bus (Tilera only), by Ken Steele of Tilera
  • Feature #792: DNS parser, logger and keyword support, funded by Emerging Threats
  • Feature #234: add option disable/enable individual app layer protocol inspection modules
  • Feature #417: ip fragmentation time out feature in yaml
  • Feature #1009: Yaml file inclusion support
  • Feature #478: XFF (X-Forwarded-For) support in Unified2
  • Feature #602: availability for http.log output – identical to apache log format
  • Feature #813: VLAN flow support
  • Feature #901: VLAN defrag support
  • Features #814, #953, #1102: QinQ VLAN handling
  • Feature #751: Add invalid packet counter
  • Feature #944: detect nic offloading
  • Feature #956: Implement IPv6 reject
  • Feature #775: libhtp 0.5.x support
  • Feature #470: Deflate support for HTTP response bodies
  • Feature #593: Lua flow vars and flow ints support
  • Feature #983: Provide rule support for specifying icmpv4 and icmpv6
  • Feature #1008: Optionally have http_uri buffer start with uri path for use in proxied environments
  • Feature #1032: profiling: per keyword stats
  • Feature #878: add storage api

Upgrading

The configuration file has evolved but backward compatibility is provided. We thus encourage you to update your suricata configuration file. Upgrade guidance is provided here: https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Upgrading_Suricata_14_to_Suricata_20

Special thanks

We’d like to thank the following people and corporations for their contributions and feedback:

  • Tom DeCanio, nPulse
  • Ken Steele, Tilera
  • Jason Ish, Endace / Emulex
  • Duarte Silva
  • Giuseppe Longo
  • Ignacio Sanchez
  • Florian Westphal
  • Nelson Escobar, Myricom
  • Christian Kreibich, Lastline
  • Phil Schroeder, Emerging Threats
  • Luca Deri & Alfredo Cardigliano, ntop
  • Will Metcalf, Emerging Threats
  • Ivan Ristic, Qualys
  • Chris Wakelin
  • Francis Trudeau, Emerging Threats
  • Rmkml
  • Laszlo Madarassy
  • Alessandro Guido
  • Amin Latifi
  • Darrell Enns
  • Paolo Dangeli
  • Victor Serbu
  • Jack Flemming
  • Mark Ashley
  • Marc-Andre Heroux
  • Alessandro Guido
  • Petr Chmelar
  • Coverity

Known issues & missing features

If you encounter issues, please let us know!  As always, we are doing our best to make you aware of continuing development and items within the engine that are not yet complete or optimal.  With this in mind, please notice the list we have included of known items we are working on.

See issues for an up to date list and to report new issues. See Known_issues for a discussion and time line for the major issues.

About Suricata

Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF, its supporting vendors and the community.

by inliniac at March 25, 2014 11:15 AM


Peter Manev

Suricata - preparing 10Gbps network cards for IDPS and file extraction


OS used/tested for this tutorial: Debian Wheezy and/or Ubuntu 12.04 LTS, with kernel 3.2.0 and 3.5.0 respectively, and Suricata 2.0dev at the moment of this writing.



This article consists of the following three major sections:
  • Network card drivers and tuning
  • Kernel specific tuning
  • Suricata.yaml configuration  (file extraction specific)

Network and system  tools:
apt-get install ethtool bwm-ng iptraf htop

Network card drivers and tuning

Our card is Intel 82599EB 10-Gigabit SFI/SFP+


rmmod ixgbe
sudo modprobe ixgbe FdirPballoc=3
ifconfig eth3 up
then (we disable irqbalance and make sure it does not enable itself during reboot)
 killall irqbalance
 service irqbalance stop

 apt-get install chkconfig
 chkconfig irqbalance off
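
A quick sanity check that irqbalance really is stopped and disabled (a minimal sketch; output wording differs between distributions):

 service irqbalance status
 ps aux | grep [i]rqbalance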
Get the Intel network drivers from here (we will use them in a second) - https://downloadcenter.intel.com/default.aspx

 Download to your directory of choice, then extract, compile and install:
 tar -zxf ixgbe-3.18.7.tar.gz
 cd /home/pevman/ixgbe-3.18.7/src
 make clean && make && make install
Set IRQ affinity - do not forget to replace eth3 below with the name of the network interface you are using:
 cd ../scripts/
 ./set_irq_affinity  eth3


 You should see something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ./set_irq_affinity  eth3
no rx vectors found on eth3
no tx vectors found on eth3
eth3 mask=1 for /proc/irq/101/smp_affinity
eth3 mask=2 for /proc/irq/102/smp_affinity
eth3 mask=4 for /proc/irq/103/smp_affinity
eth3 mask=8 for /proc/irq/104/smp_affinity
eth3 mask=10 for /proc/irq/105/smp_affinity
eth3 mask=20 for /proc/irq/106/smp_affinity
eth3 mask=40 for /proc/irq/107/smp_affinity
eth3 mask=80 for /proc/irq/108/smp_affinity
eth3 mask=100 for /proc/irq/109/smp_affinity
eth3 mask=200 for /proc/irq/110/smp_affinity
eth3 mask=400 for /proc/irq/111/smp_affinity
eth3 mask=800 for /proc/irq/112/smp_affinity
eth3 mask=1000 for /proc/irq/113/smp_affinity
eth3 mask=2000 for /proc/irq/114/smp_affinity
eth3 mask=4000 for /proc/irq/115/smp_affinity
eth3 mask=8000 for /proc/irq/116/smp_affinity
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#
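
To confirm the affinity settings took effect, you can watch the per-queue interrupt counters for the interface (a minimal sketch; replace eth3 with your interface, and the IRQ number 101 is taken from the output above):

grep eth3 /proc/interrupts
cat /proc/irq/101/smp_affinity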
Now we have the latest drivers installed (at the time of this writing) and we have run the affinity script:
   *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: eth3
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7 duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff



We need to disable all offloading on the network card so that the IDS can see the traffic as it appears on the wire (without checksum offloading, TCP segmentation offloading and the like). Otherwise your IDPS would not see all of the "natural" network traffic the way it is supposed to and would not inspect it properly.

This influences the correctness of ALL outputs, including file extraction. So make sure all offloading features are OFF!

When you first install the drivers and card your offloading settings might look like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#

So we disable all of them, like so (and we load balance the UDP flows for that particular network card):

ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
ethtool -K eth3 rxvlan off
ethtool -K eth3 txvlan off
ethtool -N eth3 rx-flow-hash udp4 sdfn
ethtool -N eth3 rx-flow-hash udp6 sdfn
ethtool -n eth3 rx-flow-hash udp6
ethtool -n eth3 rx-flow-hash udp4
ethtool -C eth3 rx-usecs 0 rx-frames 0
ethtool -C eth3 adaptive-rx off

Your output should look something like this:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 lro off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gso off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tx off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 sg off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rxvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 txvlan off
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp4 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp6 sdfn
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp6
UDP over IPV6 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 rx-usecs 0 rx-frames 0
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 adaptive-rx off

Now we double-check by running ethtool again to verify that offloading is OFF:
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -k eth3
Offload parameters for eth3:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off

Ring parameters on the network card:

root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:            4096
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             512


We can increase that to the pre-set maximum RX:

root@suricata:~# ethtool -G eth3 rx 4096

Then we have a look again:

root@suricata:~# ethtool -g eth3
Ring parameters for eth3:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             512

Making network changes permanent across reboots


On Ubuntu for example you can do:
root@suricata:~# crontab -e

Add the following:
# add cronjob at reboot - disable network offload
@reboot /opt/tmp/disable-network-offload.sh

and your disable-network-offload.sh script (in this case under /opt/tmp/ ) will contain the following:
#!/bin/bash
ethtool -K eth3 tso off
ethtool -K eth3 gro off
ethtool -K eth3 lro off
ethtool -K eth3 gso off
ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 sg off
ethtool -K eth3 rxvlan off
ethtool -K eth3 txvlan off
ethtool -N eth3 rx-flow-hash udp4 sdfn
ethtool -N eth3 rx-flow-hash udp6 sdfn
ethtool -C eth3 rx-usecs 0 rx-frames 0
ethtool -C eth3 adaptive-rx off
Make the script executable with:
chmod 755 disable-network-offload.sh
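
Alternatively (a sketch, assuming the classic ifupdown setup on Ubuntu, i.e. eth3 is brought up via /etc/network/interfaces), the same script can be hooked into the interface definition so it runs every time the interface comes up:

# /etc/network/interfaces (excerpt)
auto eth3
iface eth3 inet manual
    post-up /opt/tmp/disable-network-offload.sh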



Kernel-specific tuning


Certain adjustments to kernel parameters can help as well:

sysctl -w net.core.netdev_max_backlog=250000
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=16777216
sysctl -w net.core.optmem_max=16777216


Making kernel changes permanent across reboots


Example:
echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf

Reload the changes:
sysctl -p

OR, for all of the above adjustments at once:

echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf
echo 'net.core.rmem_max=16777216' >> /etc/sysctl.conf
echo 'net.core.rmem_default=16777216' >> /etc/sysctl.conf
echo 'net.core.optmem_max=16777216' >> /etc/sysctl.conf
sysctl -p


Suricata.yaml configuration (file extraction specific)

As of Suricata 1.2, it is possible to detect and extract/store over 5000 types of files from HTTP sessions.

Specific file extraction instructions can also be found in the official documentation.

The following libraries are needed on the system running Suricata:
apt-get install libnss3-dev libnspr4-dev

Suricata also needs to be compiled with file extraction enabled (not covered here).
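
A minimal build sketch for reference, assuming the Suricata source tree is the current directory and the NSS/NSPR development packages above are already installed so that configure can detect them (check the configure summary to confirm that file extraction / MD5 support ends up enabled):

./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
sudo make install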

In short, these are the sections of suricata.yaml that can be tuned/configured and that affect file extraction and logging
(the bigger the mem values, the better on a busy link):


  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
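
Once Suricata is running with this eve-log configuration, file events can be spot-checked in eve.json. A quick sketch, assuming jq is installed and the default log directory is used (the "fileinfo" event type name matches what recent 2.0.x builds emit; verify against your own output):

jq 'select(.event_type == "fileinfo")' /var/log/suricata/eve.json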


For storing files to disk (extraction):
  - file-store:
      enabled: yes       # set to yes to enable
      log-dir: files    # directory to store the files
      force-magic: yes   # force logging magic on all stored files
      force-md5: yes     # force logging of md5 checksums
      #waldo: file.waldo # waldo file to store the file_id across runs
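
With file-store enabled, extracted files end up under the configured log-dir (relative to the default log directory, so typically /var/log/suricata/files/). As a sketch of what the on-disk store usually looks like, each extracted file is written as file.<id> with an accompanying file.<id>.meta describing it:

ls /var/log/suricata/files/
file.1  file.1.meta  file.2  file.2.meta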


 stream:
  memcap: 32mb
  checksum-validation: no      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 128mb
    depth: 1mb                  # reassemble 1mb into a stream
  
depth: 1mb would mean that within one reassembled TCP flow, the maximum size of a file that can be extracted is roughly 1mb.

Both stream.memcap and reassembly.memcap (if reassembly is needed) must be big enough to accommodate, on the fly, the whole file that is to be extracted, plus any other stream and reassembly tasks the engine needs to perform while inspecting the traffic on that particular link.
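
For illustration, a sketch of stream settings sized for extracting larger files; the numbers are examples only and have to fit your traffic volume and the RAM available on the box:

 stream:
  memcap: 64mb
  checksum-validation: no      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 4gb
    depth: 200mb                # allows extracting files of up to roughly 200mb from a single flow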

 app-layer:
  protocols:
....
....
     http:
      enabled: yes
      # memcap: 64mb

The default memory usage limit for HTTP is 64mb. It can be increased (for example, memcap: 4gb), since HTTP is present everywhere and a low memcap on a busy HTTP link would limit both inspection and the size of the files that can be extracted.
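
As a sketch, the raised memcap would sit in the app-layer section like this (4gb is just an example value):

 app-layer:
  protocols:
     http:
      enabled: yes
      memcap: 4gb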

       libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 3072
           response-body-limit: 3072

The default values above control how far into the HTTP request and response bodies the engine tracks, and thus also limit file inspection. They should be set to a much higher value:

        libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 1gb
           response-body-limit: 1gb

 or 0 (which would mean unlimited):

       libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 0
           response-body-limit: 0

And then, of course, you need a rule loaded (example):
alert http any any -> any any (msg:"PDF file Extracted"; filemagic:"PDF document"; filestore; sid:11; rev:11;)
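
To verify the whole chain end to end, a minimal sketch; test.pcap is a hypothetical capture containing a PDF download, and the paths assume the default configuration and log directory:

suricata -c /etc/suricata/suricata.yaml -r test.pcap
ls /var/log/suricata/files/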



That's it.

by Peter Manev (noreply@blogger.com) at March 25, 2014 09:09 AM

March 20, 2014

suricata-ids.org

Suricata 2.0rc3 Windows Installer Available

The Windows MSI installer of the Suricata 2.0rc3 release is now available.

Download it here: Suricata-2.0rc3-1-32bit.msi

After downloading, double click the file to launch the installer. The installer is now signed.

If you have a previous version installed, please remove that first.

by fleurixx at March 20, 2014 12:26 PM