Posts Tagged ‘snort’
Pushing closer to a 1.0 release for OpenFPC, one of the major components has now been updated – the GUI.
To introduce this new release I’ve put together a short screen-cast of OpenFPC showing the installation, setup procedure, and a bit of general usage. So if you’re tasked with rolling together your own full packet capture/network traffic recorder/forensics system, you may want to take a look below.
For those who don’t want to sit through five minutes of video to see what the new GUI looks like, here are a few screenshots of the system in action.
Version 0.6 is now available at http://code.google.com/p/openfpc/downloads/list. Expect a few bugs, and if you report them, I’ll own the task of fixing them.
It’s been a while since my last post, but it’s because I’ve been busy working on ofpc. To rectify that, I thought I would share some of the concepts that are behind how OpenFPC should be able to grow rapidly into a distributed system.
One of the more useful features of ofpc is its self-referencing method for scaling out master/master/slave devices. This concept generates interest when I explain it to people, but it’s not really documented anywhere. So let me introduce it here with a working example…
There are a few common situations where the master/slave relationship can provide real value via clustering.
- Geographically separated network links with guaranteed or possible asymmetric traffic paths
- Multi-link trunks
- High(er) speed links where you need to spread traffic load over multiple slaves
Firstly, please forgive my terrible retro-diagram skills.
So here’s the situation:
There are two pipes between network “A” and network “B”, and for whatever reason, you don’t know whether the traffic you want to grab from the buffer is in the archive of SLAVE1 or SLAVE2. You do know, however, that it’s going to be in one or more of them. Combined they become one *logical* network link.
By requesting the data from the Master queue daemon responsible for these two devices (MASTER in the diagram here), without specifying which slave you want to route your request to, it will search/extract from all of the slaves below it. The master ofpc-queued doesn’t need to be on a separate bit of hardware, it’s just represented in the diagram that way.
Here’s an example of it functioning in my test environment.
lward@UbuntuDesktop:~/code/openfpc$ ./ofpc-client.pl -a fetch \
    --src-addr=192.168.222.1 --dst-port=22
* ofpc-client.pl 0.1 *
Part of the OpenFPC project
Username: master
Password for user master : #####################################
Filename: /tmp/extracted-ofpc-1284615954.pcap
Size : 7.0M
MD5 : a495c1f38dce3dc9dff50ead47a415ab
lward@UbuntuDesktop:~/code/openfpc$
This ofpc request provided me with a 7MB pcap file made up from the traffic seen by “slave1” and “slave2”. It’s all merged together, so I can inspect the traffic as the logical link carries it rather than only what was captured on one physical leg of the link. This isn’t limited to a maximum of two slaves; it can of course be many more.
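To make the fan-out/merge idea concrete, here is a minimal sketch of the logic in Python. This is illustrative pseudologic only, not OpenFPC’s actual implementation: the slave names and per-slave buffers are hypothetical, and a real deployment returns pcap files per slave (merged the way mergecap would), not tuples.

```python
# Illustrative sketch of a master's fan-out/merge behaviour (NOT the real
# ofpc-queued code): ask every slave for matching traffic, then merge the
# results into a single time-ordered stream for the logical link.

def fetch_from_slave(slave, query):
    """Stand-in for a request to one slave's traffic buffer.

    Returns (timestamp, packet_summary) tuples; a real slave would
    extract and return a pcap file instead.
    """
    # Hypothetical buffers for the two legs of an asymmetric link.
    buffers = {
        "slave1": [(1.0, "A->B syn"), (3.0, "A->B data")],
        "slave2": [(2.0, "B->A syn-ack"), (4.0, "B->A data")],
    }
    return [p for p in buffers[slave] if query == "*" or query in p[1]]

def fan_out_fetch(slaves, query):
    """Query all slaves below the master and merge by timestamp."""
    merged = []
    for slave in slaves:
        merged.extend(fetch_from_slave(slave, query))
    return sorted(merged, key=lambda p: p[0])  # one logical timeline

if __name__ == "__main__":
    for ts, pkt in fan_out_fetch(["slave1", "slave2"], "*"):
        print(ts, pkt)
```

The point is simply that the caller never needs to know which physical leg carried a given packet: both directions of the conversation come back interleaved in time order.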
If for any reason I would still prefer to only look at the traffic on one slave, I can either:
- Make an ofpc request directly to one of the ofpc-slave devices
- Specify to the master which device to focus on
lward@UbuntuDesktop:~/code/openfpc$ ./ofpc-client.pl -a fetch \
    --src-addr=192.168.222.1 --dst-port=22 -o 4240 --device slave2
* ofpc-client.pl 0.1 *
Part of the OpenFPC project
Username: master
Password for user master : #####################################
Filename: /tmp/extracted-ofpc-1284616271.pcap
Size : 6.0M
MD5 : 68132e2e12c16665913cb1e7f36336f3
lward@UbuntuDesktop:~/code/openfpc$
If you want to test this feature out, make sure you’re using the latest OpenFPC code from svn.
It’s been a couple of months since I first posted about the OpenFPC project, so I thought it’s time that I provided a little update.
Firstly, I need to throw some karma over to Edward Fjellskål (http://gamelinux.org), so… Edward++.
Edward and I have merged the OpenFPC and FPCGUI projects; it makes far more sense to combine our efforts, since our goals are similar even though we’ve approached them from different angles. We both see a need to unify all of the home-brew full-packet-capture/network forensics tools we see out there in the wild.
OpenFPC now has a new home, http://www.openfpc.org. So, if you’re looking for a distributed wrapper for your daemonlogger instances, or if you’re still trying to get tcpdump to log to a ring buffer and share access across multiple analysts, devices, and tools, head on over to www.openfpc.org to read all about it. Here are a couple of quick links for those who want to jump right in:
I’m looking for people to help test and provide feedback now so I can fix problems and tweak things ahead of a full release.
Good luck, and please let me know your feedback.
It’s that time of year again. Infosecurity Europe is upon us. If you’re going to be there and have a strong interest in the inner workings of intrusion prevention engines, have I got a treat for you lucky, lucky people 🙂
Sourcefire’s own Lurene Grenier (@pusscat) and Matthew Olney (@kpyke) are running a workshop!
Here’s the official blurb:
Sourcefire VRT Workshop
Register here – 11:00 – 12:00, Wednesday 28th April 2010 – Mayfair Room, Earls Court.
The VRT team will demonstrate the power and flexibility of the engine by unveiling a new multi-faceted, scalable detection methodology targeted at addressing the most difficult detection problems facing security professionals today.
So if that gets your interest going, make sure you register here.
As a bonus for you all, Sourcefire are also running an “Intelligent Network Security” Workshop.
Sourcefire Intelligent Network Security Workshop
Register here | 15:00 – 16:00, Wednesday 28th April 2010 | Mayfair Room, Earls Court
In the age of the advanced persistent threat, of cloud computing, and of new economic realities, how can companies ensure their networks are monitored and protected securely and cost-effectively? Find out how Sourcefire, the leader in context-aware Intrusion Prevention Systems, has addressed the limitations of current-generation IPS to provide truly intelligent network security solutions.
And I’ll be stuck on the Sourcefire stand for most of the three days, so pop by if you want to say “oh hai”.
Update: You can download tweetyard here.
There has been some discussion of late on the snort-* lists regarding unified alerting vs direct DB access.
I stopped storing events in a DB years back when I stopped using ACID (and yes, that was back-in-the-day, before BASE came into being). My personal Snort requirements are pretty simple, and fast output has always worked well for me, linked with a load of swatch-foo and custom perl scripts. After hanging my head in shame for not converting to unified yet (the cobbler’s children clearly have no shoes over here), I thought it would be wise to put some effort in.
I used to receive all of my Snort IDS events via email, but email is *so* web 1.0. So I thought I would hook into Twitter for real-time alerting 🙂
So far, so good, and it only took about an hour to build. Kudos to Jason Brvenik for his snort-unified.pm and sample barnyard replacement; it was a good base for what I wanted to hack together. Because I put this together for fun more than anything else, feel free to follow a censored Twitter feed of my IPS events (if you didn’t have enough to deal with already). I have blanked the IPs of my protected systems in an attempt to raise the smarts-to-abuse bar up 0.2 inches above short-skiddie tall.
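The guts of the idea are simple. Here is a minimal sketch (not the actual tweetyard code, which reads unified output via snort-unified.pm): parse one line of Snort’s “fast” alert format and turn it into a short status message with the protected destination address blanked out, ready to post. The sample signature and addresses are made up for illustration.

```python
import re

# Sketch only: pull the message, protocol, and endpoints out of a Snort
# fast-alert line, then censor the destination (protected) address.
FAST_RE = re.compile(
    r"\[\*\*\]\s+\[[\d:]+\]\s+(?P<msg>.+?)\s+\[\*\*\].*?"
    r"\{(?P<proto>\w+)\}\s+"
    r"(?P<src>[\d.]+)(?::\d+)?\s+->\s+(?P<dst>[\d.]+)(?::\d+)?"
)

def alert_to_status(line):
    """Turn one fast-alert line into a censored, tweet-sized string."""
    m = FAST_RE.search(line)
    if not m:
        return None  # not a fast-format alert line
    # The destination IP is deliberately dropped and replaced.
    return "%s %s -> x.x.x.x (%s)" % (
        m.group("msg"), m.group("src"), m.group("proto"))

if __name__ == "__main__":
    sample = ('09/01-12:00:00.000000  [**] [1:2003068:6] '
              'ET SCAN Potential SSH Scan [**] [Priority: 2] '
              '{TCP} 203.0.113.9:51515 -> 192.168.222.10:22')
    print(alert_to_status(sample))
```

From there it’s just a matter of tailing the alert file and handing each status string to whatever Twitter-posting client you prefer.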
I will upload the code when I get a spare couple of minutes, but as I will be attached to the Sourcefire booth @ Infosecurity London for the next three days it may take a while. Hooking it into Sourcefire’s Estreamer is also on the cards the next time I get some down-time.
If anyone is at the show, feel free to drop by the Sourcefire booth and say hi (and to bring me a coffee at the same time).
I am in the process of uploading a load of pcap files to openpacket.org from my “example.com” collection. Because openpacket doesn’t provide an interface to include supporting data, below is a network map that should help anyone who needs to use these pcaps. They were sniffed from a test network I built and should contain a good mix of systems and protocols.
Expect to see:
While I fight with Openpacket’s upload limits, a complete archive of example.com can be found here.
The below tool and information has been superseded by Snoge. More info about Snoge is available here.
Original text included for historical completeness.
Rather than answer each person separately, I thought I would upload some instructions here.
- A Snort fast alert file OR a Sourcefire 3D Intrusion events CSV report (just the table view, with no hidden columns)
- A working perl environment. I run this script on OSX and Debian Linux; I have no idea if it works on Windows (if you have this working, please let me know)
- The perl module Geo::IP::PurePerl
- A text editor
1) Install Geo::IP::PurePerl. It’s available via CPAN, so I recommend installing it that way.
[13:03:55]lward@drax~$ cpan
cpan shell -- CPAN exploration and modules installation (v1.7602)
ReadLine support available (try 'install Bundle::CPAN')
cpan> install Geo::IP::PurePerl
<snip>
2) Grab the free GeoLiteCity database, gunzip it, and save it to /usr/local/share/GeoIP/GeoLiteCity.dat
3) Download the mk_sfkml.pl script from here and untar it (tar -zxvf ./filename.tgz)
4) You’re good to go.
mk_sfkml.pl <options>
  -m or --mode <plot | attack>   Draw attack lines, or plot sources - Default=plot
  -i or --input                  Input filename
  -t or --tool <3D | snort>      Source tool (Default = 3D)
  -h or --help                   This message
  -o or --output                 KML output file. Defaults to /tmp/sfire.kml
  -s or --snort                  Place a snort instance at the location of this IP address
  -3 or --sensor <ip.add.re.ss>  Place a 3D sensor at the location of this IP address
  -d or --dupes                  Do not show multiple events from a single source location
./mk_sfkml.pl -t snort -m attack -i alert.sql -w /tmp/foo -s rm-rf.co.uk
[*] Reading from alert.sql: Creating /tmp/sfire.kml for google earth
[*] Adding a Sensor in York
[*] Working on a snort alert file
|- Start point 188.8.131.52 in Beijing
 + Destination point 184.108.40.206 in York
|- Start point 220.127.116.11 in Chengdu
 + Destination point 18.104.22.168 in York
|- Start point 22.214.171.124 in Changzhou
 + Destination point 126.96.36.199 in York
|- Start point 188.8.131.52 in Hefei
<snip>
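If you’re curious what the script is doing under the hood, the core is just: geolocate each source address, then emit a KML Placemark per point for Google Earth to render. Here’s a rough sketch of that step in Python; the hard-coded lookup table is a stand-in for the GeoLiteCity database, and the addresses are made up for illustration.

```python
# Sketch of the geolocate-and-emit-KML step (not the actual mk_sfkml.pl
# code). GEO stands in for a GeoLiteCity city/lat/lon lookup.

GEO = {  # hypothetical geolocations, for illustration only
    "203.0.113.9": ("Beijing", 39.9, 116.4),
    "198.51.100.7": ("York", 53.96, -1.08),
}

def to_kml(addresses):
    """Build a minimal KML document with one Placemark per known address."""
    placemarks = []
    for addr in addresses:
        if addr not in GEO:
            continue  # skip addresses the database can't locate
        city, lat, lon = GEO[addr]
        placemarks.append(
            "<Placemark><name>%s (%s)</name>"
            "<Point><coordinates>%f,%f</coordinates></Point>"
            "</Placemark>" % (addr, city, lon, lat))  # KML wants lon,lat
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>%s'
            '</Document></kml>' % "".join(placemarks))

if __name__ == "__main__":
    print(to_kml(["203.0.113.9", "198.51.100.7"]))
```

The “attack” mode simply adds a LineString between each source Placemark and the sensor’s location on top of this.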
And that’s it. Simple eh?