Wednesday, April 29, 2009

Sguil: Issue with Reverse DNS

I had an interesting issue with Sguil. Interesting in the sense that the developer had no idea why it was happening and could offer no insight, and I ended up with a rather “square peg, round hole” solution to fix it.

In version 0.7 of Sguil, the Reverse DNS option would not work on my machine, whether the External DNS option was checked or not. In both cases I would get the following error:

The odd thing is that this function worked just fine in previous versions of Sguil. So, to fix it, I went back to the IP address lookup used in those previous versions. I edited the /sguilRoot/client/lib/extdata.tcl file to look like so:


#
# GetHostbyAddr: uses extended tcl (wishx) to get an ip's hostname
# May move to a server func in the future
#
proc GetHostbyAddr { ip } {

    # global EXT_DNS EXT_DNS_SERVER HOME_NET
    #
    # if { $EXT_DNS } {
    #
    #     if { ![info exists EXT_DNS_SERVER] } {
    #
    #         ErrorMessage "An external name server has not been configured in sguil.conf. Resolution aborted."
    #         return
    #
    #     } else {
    #
    #         set nameserver $EXT_DNS_SERVER
    #
    #         if { [info exists HOME_NET] } {
    #
    #             # Loop thru HOME_NET. If ip matches any networks then use the
    #             # locally configured name server
    #             foreach homeNet $HOME_NET {
    #
    #                 set netMask [ip::mask $homeNet]
    #                 if { [ip::equal ${ip}/${netMask} $homeNet] } { set nameserver local }
    #
    #             }
    #
    #         }
    #
    #     }
    #
    # } else {
    #
    #     set nameserver local
    #
    # }
    #
    # if { $nameserver == "local" } {
    #
    #     set tok [dns::resolve $ip]
    #
    # } else {
    #
    #     set tok [dns::resolve $ip -nameserver $nameserver]
    #
    # }
    #
    # set hostname [dns::name $tok]
    # dns::cleanup $tok
    # if { $hostname == "" } { set hostname "Unknown" }
    # return $hostname

    # Fall back to the TclX host_info lookup used by older Sguil versions
    if { [catch {host_info official_name $ip} hostname] } {
        set hostname "Unknown"
    }
    return $hostname
}


This took care of my problem. My only guess is that something in the ActiveTcl implementation of the DNS library on Windows prevented this from working. I am not sure what the advantage of using the DNS library over the TclX host_info command is, due to my lack of experience in Tcl/Tk.

But this does illustrate an important point. Since Sguil is written in an interpreted scripting language (Tcl/Tk), making a change to my instance was trivial: edit a file, and my issue was resolved. Had this been a compiled language, I would have had to either compile the source myself, which would have taken more time, or go back to the developer, submit a bug report, and wait. In this case, it is fortunate that the tool was not developed that way.

Thursday, April 23, 2009

Security: NSM Illustrated

I haven’t posted anything security related in a while. Richard Bejtlich recently wrote an article illustrating a general event model of an analyst’s response using just an IDS versus an NSM-based operation. I thought it would be a great opportunity to illustrate that process with an example to go along with the "Elvis" slides. The following is based on a real incident, and will illustrate the NSM investigative process: using data to respond to the incident and make a business decision to modify network policy.

I’m cleaning out old events that have accumulated, and setting our sensor back up after an OS update deprecated some of the shared libraries used by Snort and SANCP. Once I got everything up and running, I saw a whole slew of SSH connection-attempt alerts kicked off by Snort. This is a fairly common event, and based on experience I knew this was just an automated scan looking for open SSH ports and trying to brute-force them.

Now, if this were only an IDS setup, I would see nothing beyond an alert that an attack was underway. The only information available to me would be the alert and any logs. However, because Sguil was designed around the NSM principles, I had more information than that.

The first thing I did was query the event table for the potential attacker's IP, and I could see that the connection attempts started at around 6:00 AM and went on for 23 minutes. At this point I had a pretty good idea that this was some sort of brute-force attack, but not enough information to be sure. I needed to investigate further.

The next thing I do is query the SANCP table for this IP address to see all the sessions the attacker generated. More hits come back than alerts, which indicates there were connection attempts that did not trigger alerts because they did not match any of Snort's signatures. This is an important illustration of why IDS and IPS systems are not foolproof and shouldn't be seen as security silver bullets. There were no sessions prior to this series of events, so this is our attacker's first visit. In the screenshot below you can see there is a single session before 6:00 AM; this is probably the scan looking for live targets to attack. After this single session, with no data sent or received, the connection attempts begin. I can tell by the source and destination data sizes that nothing more than the initial key exchange and login attempt ever gets done before the connection closes.

I investigate further and pull a transcript, which is the full content of the session. As expected, the data is encrypted because it is SSH traffic. But that doesn't dead-end the investigation; I still have log data to analyze. I grep the auth.log file on the system in question for the IP address of the attacker. Sure enough, I see that this attacker is indeed trying to brute-force our machine with different user and password combinations. The first log entry for the attacker's IP shows that they did not send anything, solidifying my original theory that the first session in the SANCP query was a check for live targets.
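A sketch of that log search. The attacker's real IP isn't shown in the post, so the documentation address 203.0.113.50 and the sample log lines below are placeholders; on the actual system you would grep /var/log/auth.log directly.

```shell
# Build a small sample of sshd log lines (placeholder data) and search it
# for the attacker's address, exactly as you would with the real auth.log.
cat > /tmp/auth.log.sample <<'EOF'
Apr 29 06:01:12 host sshd[4211]: Failed password for invalid user admin from 203.0.113.50 port 52114 ssh2
Apr 29 06:05:09 host sshd[4215]: Accepted publickey for alice from 192.0.2.7 port 40022 ssh2
EOF
grep '203.0.113.50' /tmp/auth.log.sample
# On the real sensor: grep '<attacker ip>' /var/log/auth.log
```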

This illustrates an important point: no one tool can handle everything or give a complete picture. As I understand it (and this may be a little off, since I learned this stuff almost 8 years ago now), NSM means using all the tools at your disposal to get a big-picture view of what's going on in your network, and being able to gather evidence to make informed decisions. Somewhat like Business Intelligence, but from a security perspective.

So at this point, being the decision maker, I decide to block this IP from accessing the system further. I implement an iptables rule to block any further connection attempts. To further reduce exposure, I may also disable password authentication and add whitelist rules to allow only known machines to connect. I could have blocked this at the firewall, but I didn't, which leads to the next scenario.
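The hardening steps above might look something like the following. This is a sketch only: the attacker address 203.0.113.50 and the trusted network 192.0.2.0/24 are placeholders, and the commands require root on the target system.

```shell
# Drop any further SSH connection attempts from the attacker (placeholder IP).
iptables -A INPUT -s 203.0.113.50 -p tcp --dport 22 -j DROP

# Whitelist approach: accept SSH only from known machines, drop everyone else.
iptables -A INPUT -s 192.0.2.0/24 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

# In /etc/ssh/sshd_config, disable password logins so brute forcing is moot:
#   PasswordAuthentication no
# then restart sshd for the change to take effect.
```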

Now, this would have been the end of it, except I started to see alerts trigger from another one of our systems. Apparently this other system also had its SSH port exposed for remote logins. At this point I am aware of a policy violation on the network. So, while this was the exact same attacker just moving on to the next IP address in their scan, I have an entirely different scenario: I now have information that leads me to modify the firewall rules and close the hole exposing this second machine.

Monday, April 13, 2009

Programming: Using the Google Java App Engine to Build my Time Warner Cable Petition Site

Google recently announced the Java version of its Google App Engine. Interesting timing, because I just so happened to need an app server to host a small application I was looking to build. Despite criticism that it only supports a subset of the Java standard library, it turned out to do everything I needed it to do.

The project I was looking to build was an online petition against Time Warner Cable's recent announcement of tiered pricing based on consumption. Despite their attempts to justify such a move, I wasn't buying it and saw through the BS. I won't get into the details; they're covered much more in depth elsewhere on the net, as well as on the petition site.

What made the Google App Engine so attractive is that it supports GWT natively right out of the box, and it provides its own data store, so I can let visitors sign up and then report on the number and type of sign-ups.

So the first thing I did was download the SDK and install it into Eclipse. The thing I noticed immediately with the Eclipse plugin is that it offers two SDKs: the GWT SDK and the App Engine SDK. The plugin is really nice because it lets you create these kinds of projects and deploy directly to the App Engine from within Eclipse. The GWT project creation is on par with the Instantiations GWT Designer minus one glaring detail: the UI designer. And I have found the two don't exactly play nicely with each other. No big deal, however. I created a separate project to design the layout of my forms in GWT Designer, then copied and pasted the result into my GWT/App Engine project. While this was a bit of a pain, it allowed me to keep my GUI designer.

The rest was simple. I had a few static content pages, and by setting my RootPanel using the RootPanel.get(elementId) call, I was able to integrate my GWT components right into my static HTML pages. What I did here was something like this: I had a DIV tag in my HTML page with the id "content". Instead of using a call like:

RootPanel rootPanel = RootPanel.get();

I did something like:

RootPanel rootPanel = RootPanel.get("content");

And my GWT-based form is integrated into my static HTML page. And by commenting out the theme inherits line in the project.gwt.xml file, I kept GWT from forcing its styles onto my form; it uses the styles available in the static HTML files instead, keeping the form consistent with the rest of the site.
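For illustration, the module file might look something like this. The module structure and theme name shown are typical GWT defaults, and the entry-point class is a made-up placeholder, not taken from the actual project.

```xml
<!-- project.gwt.xml (illustrative). Commenting out the theme inherits line
     keeps GWT's stylesheet from overriding the page's own CSS. -->
<module>
  <inherits name="com.google.gwt.user.User"/>
  <!-- <inherits name="com.google.gwt.user.theme.standard.Standard"/> -->
  <!-- Hypothetical entry-point class name for this example: -->
  <entry-point class="com.example.petition.client.PetitionEntryPoint"/>
</module>
```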

One other difference between how GWT Designer and the Google plugin handle projects is that with the Google plugin, all static content is stored in the project/war directory, while with GWT Designer you get a public folder under the module directory. That threw me for a loop for a second, but I digress.

My next experiment will include building a BIRT ODA to report off the data store using the Google Remote_API, and trying to integrate the BIRT engine directly into an App Engine based project.