Getting started with
Network Forensics
$ git clone https://github.com/hillar/vagrant_moloch_bro_suricata.git
$ cd vagrant_moloch_bro_suricata
$ vagrant up
$ open http://localhost:8005
$ open http://localhost:3003
$ open http://localhost:9200/_plugin/head
$ vagrant ssh suricata
see also http://slides.com/hillar/vagrant#/
National Institute of Standards and Technology
Guide to Integrating Forensic Techniques into Incident Response
http://csrc.nist.gov/publications/nistpubs/800-86/SP800-86.pdf
International Association of Computer Investigative Specialists
NETWORK FORENSIC ANALYSIS
http://www.iacis.com/SiteAssets/Documents/NFA%20Program%20Description%20and%20Syllabus%202014.pdf
The SANS Institute
FOR572: Advanced Network Forensics and Analysis
http://www.sans.org/course/advanced-network-forensics-analysis
NETWORK FORENSICS: BLACK HAT RELEASE
https://www.blackhat.com/us-14/training/network-forensics-black-hat-release.html
Network forensics is the capture, recording, and analysis of network events
in order to discover the source of security attacks or other problem incidents.
This workshop is about using existing, well-known monitoring tools for forensics. It will guide you through building and configuring BRO and Suricata, sending their output to Elasticsearch+Kibana, setting up a Moloch installation, and looking with these tools into the depths of captured network data.
This workshop is NOT for decision makers, leaders, or administrative personnel.
It is a highly technical workshop designed for technicians who actively engage in hands-on digital forensic activities as part of their duties.
Given the potential complexities of the analysis process and the extensive knowledge of networking and information security required for analyzing network traffic data effectively, a full description of techniques needed for analyzing data and drawing conclusions in complex situations is beyond the scope of this workshop.
Organizations typically have many different sources of network traffic data. Because the information collected by these sources varies, the sources may have different value to the analyst, both in general and for specific cases.
Analysts should have reasonably comprehensive technical knowledge. Because current tools have rather limited analysis abilities, analysts should be well-trained, experienced, and knowledgeable in networking principles, common network and application protocols, network and application security products, and network-based threats and attack methods.
When an event of interest has been identified, analysts assess, extract, and analyze network traffic data with the goal of determining what has happened and how the organization's systems and networks have been affected. This process might be as simple as reviewing a few log entries on a single data source and determining that the event was a false alarm, or as complex as sequentially examining and analyzing dozens of sources (most of which might contain no relevant data), manually correlating data among several sources, then analyzing the collective data to determine the probable intent and significance of the event. However, even the relatively simple case of validating a few log entries can be surprisingly involved and time-consuming.
In addition to understanding the tools, analysts should also have reasonably comprehensive knowledge of:
- networking principles,
- common network and application protocols,
- network and application security products,
- network-based threats and attack methods,
- the organization's environment, such as its
  - network architecture,
  - IP addresses used by critical assets (e.g., firewalls, publicly accessible servers),
  - the applications and OSs in use.
If analysts understand the organization's normal computing baseline, such as typical patterns of usage on systems and networks across the enterprise, they should be able to perform their work more easily and quickly. Analysts should also have a firm understanding of each of the network traffic data sources, as well as access to supporting materials, such as intrusion detection signature documentation. Analysts should understand the characteristics and relative value of each data source so that they can locate the relevant data quickly.
The first step
in the examination process is the identification of
an event of interest.
Typically, this identification is made through one of two methods:
- Someone within the organization (e.g., help desk agent, system administrator, security administrator) receives an indication, such as an automated alert or a user complaint, that there is a security or operational-related issue. The analyst is asked to research the corresponding activity.
- During a review of security event data (e.g., IDS monitoring, network monitoring, firewall log review), which is part of the analyst's regular duties, the analyst identifies an event of interest and determines that it should be researched further.
The analyst needs to know some basic information about the event as a basis for research.
In most cases, the event will have been detected through a network traffic data source, such as an IDS sensor or a firewall, so the analyst can simply be pointed to that data source for more information.
However, in some cases, such as a user complaint, it might not be apparent which data sources (if any) contain relevant information or which hosts or networks might be involved. Therefore, analysts might need to rely on more general information, for example, reports that several systems on the fourth floor have been rebooting themselves.
Although data examination is easier if the event information is specific (e.g., IP addresses of affected systems), even general information provides the analyst with a starting point for finding the relevant data sources.
A single event of interest could be noted by many data sources, but it may be inefficient or impractical to check each source individually.
For initial event data examination, analysts typically rely on a few primary data sources. Not only is this an efficient solution, but also in most cases the event of interest will be identified by an alert from one of these primary data sources.
For each data source examined, analysts should consider its fidelity.
Analysts should have more confidence in original data sources than in data sources that receive normalized (modified) data from other sources.
Validation should be based on an analysis of additional data (e.g., raw packets, supporting information captured by other sources), a review of available information on alert validity (e.g., vendor comments on known false positives), and past experience with the tool in question.
In many cases, an experienced analyst can quickly examine the supporting data and determine that an alert is a false positive and does not need further investigation.
Of all the network traffic data sources, packet sniffers can collect the most information on network activity.
However, sniffers might capture huge volumes of benign data as well (millions or billions of packets) and typically provide no indication as to which packets might contain malicious activity. In most cases, packet sniffers are best used to provide more data on events that other devices or software have identified as possibly malicious.
Some organizations record most or all packets for some period of time so that when an incident occurs, the raw network data is available for examination and analysis.
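A common way to do this is a tcpdump ring buffer; a minimal sketch (interface name, file size, and file count are assumptions to adjust for your environment):
$ sudo tcpdump -i eth0 -s 0 -C 1000 -W 48 -w /data/pcap/ring
# keeps 48 files of ~1000 MB each; the oldest file is overwritten first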
Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information.
Network traffic is transmitted and then lost.
Not many are lucky enough to have a PCAP
We do ;)
PCAP -> Suricata / BRO / Moloch* -> timelined data -> ElasticSearch -> Kibana
* Moloch has its own VIEWER
https://github.com/hillar/vagrant_moloch_bro_suricata/blob/master/Vagrantfile
Moloch
Moloch is an open source, large scale IPv4 packet capturing (PCAP), indexing and database system.
A simple web interface is provided for PCAP browsing, searching, and exporting.
APIs are exposed that allow PCAP data and JSON-formatted session data to be downloaded directly.
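For example, assuming the Vagrant setup above (viewer on port 8005, digest auth, a user created with Moloch's addUser script), session metadata can be fetched from the sessions.json endpoint:
$ curl -s --digest -u admin:admin 'http://localhost:8005/sessions.json?date=1&expression=protocols%3D%3Dhttp'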
Moloch is not an IDS.
Simply put, Moloch is a tool to search PCAP repositories.
The Moloch system consists of three components:
Capture
A C application that sniffs the network interface, parses the traffic and creates the Session Profile Information (aka SPI-Data) and writes the packets to disk
Database
Elasticsearch is used for storing and searching through the SPI-Data generated by the capture component
Viewer
A web interface that allows for GUI and API access from remote hosts to browse/query SPI-Data and retrieve stored PCAP
[Diagram: Capture writes raw pcap files to disk and SPI-Data to ElasticSearch; the Viewer queries ElasticSearch and retrieves the stored PCAP.]
Moloch parses various protocols to create SPI-Data:
- IP
- HTTP
- DNS
  - IP Address
  - Hostname
- IRC
  - Channel Names
- SSH
  - Client Name
  - Public Key
- SSL/TLS
  - Certificate elements of various types (common names, serial, etc.)
This is not an all-inclusive list.
Suricata
Suricata is an open source, free, high-performance Network IDS, IPS, and Network Security Monitoring engine.
Suricata implements a complete signature language to match on known threats, policy violations and malicious behaviour. Suricata will also detect many anomalies in the traffic it inspects.
Suricata will automatically detect protocols such as HTTP on any port and apply the proper detection and logging logic.
Suricata can log HTTP requests, log and store TLS certificates, extract files from flows and store them to disk.
With 2.0 we introduced “Eve”, our all JSON event and alert output. This allows for easy integration with Logstash and similar tools.
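Even without such a pipeline, eve.json can be sliced directly on the command line; a sketch assuming jq is installed and the default log path:
$ jq -c 'select(.event_type == "alert") | {timestamp, src_ip, dest_ip, sig: .alert.signature}' /var/log/suricata/eve.json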
# suricata --list-app-layer-protos
=========Supported App Layer Protocols=========
http
ftp
smtp
tls
ssh
imap
msn
smb
dcerpc
dns
BRO
Bro is a powerful network analysis framework that is much different from the typical IDS you may know.
"We emphasize in particular that Bro is not a classic signature-based intrusion detection system (IDS)."
https://www.bro.org/sphinx/intro/index.html
Bro provides a comprehensive platform for network traffic analysis, with a particular focus on semantic security monitoring at scale. While often compared to classic intrusion detection/prevention systems, Bro takes a quite different approach by providing users with a flexible framework that facilitates customized, in-depth monitoring far beyond the capabilities of traditional systems.
The most immediate benefit that a site gains from deploying Bro is an extensive set of log files that record a network’s activity in high-level terms. These logs include not only a comprehensive record of every connection seen on the wire, but also application-layer transcripts such as, e.g., all HTTP sessions with their requested URIs, key headers, MIME types, and server responses; DNS requests with replies; SSL certificates; key content of SMTP sessions; and much more. By default, Bro writes all this information into well-structured tab-separated log files suitable for post-processing with external software.
Users can, however, also choose from a set of alternative output formats and backends to interface directly with, e.g., JSON or external databases.
In addition to the logs, Bro comes with built-in functionality for a range of analysis and detection tasks, including extracting files from HTTP sessions, detecting malware by interfacing to external registries, reporting vulnerable versions of software seen on the network, identifying popular web applications, detecting SSH brute-forcing, validating SSL certificate chains, and much more.
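For example, file extraction can be switched on from the command line with a stock policy script that ships with Bro 2.x (extracted files land in an extract_files/ directory next to the logs):
$ /usr/local/bro/bin/bro -r some.pcap frameworks/files/extract-all-files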
ElasticSearch
Elasticsearch is a search server based on Lucene.
It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.
$ curl -XPOST 'http://localhost:9200/pcaps/dns/' -d '{
    "hostname" : "www.kimchy.org",
    "timestamp" : "2009-11-15T14:12:12",
    "ip" : "192.168.1.2"
}'
$ curl -XGET 'http://localhost:9200/pcaps/dns/_search' -d '{
    "query" : {
        "wildcard" : { "hostname" : "*.kimchy.*" }
    }
}'
(a term query matches exact tokens only, so a pattern like this needs a wildcard query)
{
"_shards":{
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits":{
"total" : 1,
"hits" : [
{
"_index" : "pcaps",
"_type" : "dns",
"_id" : "1",
"_source" : {
"hostname" : "www.kimchy.org",
"timestamp" : "2009-11-15T14:12:12",
"ip" : "192.168.1.2"
}
}
]
}
}
Kibana
Visualize time-stamped data.
Elasticsearch works seamlessly with Kibana to let you see and interact with your data.
Kibana 3
Kibana 3 talks directly to Elasticsearch from the browser.
This means that your browser communicates directly with Elasticsearch, not via an intermediary. You may wish to configure a reverse proxy to restrict access to Elasticsearch.
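A minimal nginx sketch of such a proxy (server layout and paths are illustrative assumptions, not part of the workshop setup):

server {
    listen 80;
    location / {
        root /usr/share/kibana3;          # Kibana 3 static files
    }
    location ~ ^/(_aliases|_nodes|.+/_search|.+/_mapping|kibana-int/.+)$ {
        proxy_pass http://localhost:9200; # expose only the endpoints Kibana needs
    }
}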
Kibana 4
This is beta software. It needs Java and Elasticsearch version 1.4.0 or later, and it has its own backend server, written in Ruby using Sinatra and the Puma rack server.
The power of Elasticsearch’s nested aggregations on the click of a mouse.
Facets: Removal from master. #7337
https://github.com/elasticsearch/elasticsearch/pull/7337
0. Setting output to
ElasticSearch
Moloch
done
BRO
This writer plugin is still in testing and is not yet recommended for production use!
https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html
Aug 14, Removing ElasticSearch from configure script.
Suricata
Lua scripting
https://gist.github.com/hillar/aeae0b6d12de4ccd8ced#file-suricata_flow2ela-lua
unified2 -> moloch
json -> moloch
https://gist.github.com/hillar/409a18e1604c70bb3804#file-suricata_tagger-js
1. Setting output to JSON
Suricata
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/EveJSONOutput
outputs:
- eve-log:
enabled: yes
type: file #file|syslog|unix_dgram|unix_stream
filename: eve.json
types:
- alert
- http:
extended: yes # enable this for extended logging information
- dns
- tls:
extended: yes # enable this for extended logging information
- files:
force-magic: no # force logging magic on all logged files
force-md5: no # force logging of md5 checksums
#- drop
- ssh
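After restarting Suricata, events appear one JSON object per line (config path and interface are the usual defaults, adjust as needed):
$ sudo suricata -c /etc/suricata/suricata.yaml -i eth0 -D
$ tail -f /var/log/suricata/eve.json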
BRO
https://www.bro.org/sphinx-git/scripts/policy/tuning/json-logs.bro.html
$ echo "@load tuning/json-logs" >> /usr/local/bro/share/bro/site/local.bro
By default, BRO's timestamp output format is epoch;
redef it to ISO 8601:
$ echo "redef LogAscii::json_timestamps = JSON::TS_ISO8601;" >> /usr/local/bro/share/bro/site/local.bro
2. Getting Data
From JSON
Into Elasticsearch
FluentD
http://docs.fluentd.org/recipe/json/elasticsearch
<source>
type tail
format json
path /var/log/suricata/eve.json #...or where you placed your log
tag suricata.events
</source>
<match **>
type elasticsearch
logstash_format true
host <hostname> #(optional; default="localhost")
port <port> #(optional; default=9200)
index_name suricata #(optional; default=fluentd)
type_name events #(optional; default=fluentd)
</match>
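The elasticsearch output plugin must be installed before this config will load; a sketch (config file name is an assumption):
$ gem install fluentd fluent-plugin-elasticsearch
$ fluentd -c fluent.conf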
Logstash
http://logstash.net/docs/1.4.2/inputs/file
http://logstash.net/docs/1.4.2/codecs/json
input {
file {
path => ["/var/log/suricata/eve.json"]
codec => json
type => "SuricataIDPS-logs"
}
}
filter {
if [type] == "SuricataIDPS-logs" {
date {
match => [ "timestamp", "ISO8601" ]
}
}
}
output {
elasticsearch {
host => localhost
}
}
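To run it, point Logstash at the config (file name is an assumption):
$ bin/logstash -f suricata.conf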
Custom scripts..
https://gist.github.com/hillar/4b014ba3abcc07a8c5c9
shell
$ while read line; do curl -XPOST 'http://localhost:9200/indice/type/' -d "$line"; done <some.json
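The loop above issues one HTTP request per document, which is slow for big files; Elasticsearch's bulk API takes pairs of action and source lines instead (same placeholder index and type as above, one JSON document per input line assumed):
$ awk '{print "{\"index\":{}}"; print}' some.json | curl -s -XPOST 'http://localhost:9200/indice/type/_bulk' --data-binary @-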
The point is not to figure out which one is the best, but rather to see which one would be a better fit for your environment.
~$ /usr/local/bin/suricata -h
Suricata 2.1dev (rev 9a5bf82)
USAGE: suricata [OPTIONS] [BPF FILTER]
-r <path> : run in pcap file/offline mode
~$ /usr/local/bro/bin/bro --help
bro version 2.3-230
usage: /usr/local/bro/bin/bro [options] [file ...]
-r|--readfile <readfile> | read from given tcpdump file
~$ /usr/local/moloch/bin/moloch-capture --help
Usage:
moloch-capture [OPTION...] - capture
Application Options:
-r, --pcapfile Offline pcap file
-R, --pcapdir Offline pcap directory, all *.pcap files will be processed
READING PCAP
moloch-capture -c ../etc/config.ini -r some.pcap
suricata -r some.pcap -l ./log
bro -r some.pcap /usr/local/bro/share/bro/site/local.bro
Your job is to determine if one of the students in the class was responsible for the harassing email and to provide clear, conclusive evidence to support your conclusion.
http://digitalcorpora.org/corp/nps/packets/2008-nitroba/scenario.txt
$ cd /tmp/
$ wget http://digitalcorpora.org/corp/nps/packets/2008-nitroba/nitroba.pcap
$ /usr/local/moloch/bin/moloch-capture -c /usr/local/moloch/etc/config.ini -r /tmp/nitroba.pcap --tag=nitroba
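Once indexing finishes, the imported sessions can be isolated in the Moloch viewer (http://localhost:8005) with the search expression:
tags == nitroba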
Discover which web server used the weak authentication scheme.
What was the user’s username and password?
$ cd /tmp
$ wget https://www.bro.org/static/workshop-11/traces/illauth.pcap
$ /usr/local/bro/bin/bro -r illauth.pcap /usr/local/bro/share/bro/site/local.bro
$ less http.log
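If http.log is still in Bro's default TSV format (i.e., without the json-logs tuning from step 1), bro-cut pulls out the interesting columns; note the password field stays empty unless HTTP::default_capture_password is redefined to T:
$ cat http.log | /usr/local/bro/bin/bro-cut ts host uri username password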
Someone carelessly forwarded a sensitive document using unencrypted email.
Who sent the email?
https://www.bro.org/bro-workshop-2011/exercises/incident-response/index.html#part-3-emailleakage
$ cd /tmp
$ wget https://www.bro.org/static/workshop-11/traces/email.pcap
$ sudo suricata -r email.pcap -l log
$ less log/eve.json
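For a quick overview of what Suricata extracted from the trace (assuming jq is installed):
$ jq -r .event_type log/eve.json | sort | uniq -c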
network-forensics
By Hillar Aarelaid