Hi Community Members,

I recently found two methods for detecting database intrusions in a research paper, and I was wondering how to implement them on ESM or Logger.

Please suggest ways to implement the following two intrusion detection methods. Currently I am forwarding logs from databases such as Oracle, SQL Server, and DB2.

For database intrusion detection:

a. Start

b. Get path of the application server log file

c. Read the contents of the log file

d. Copy the contents to a new text file with a specified path

e. Read the content and store in a string vector

f. Get the string object

g. Get the transaction trace from the log object and put it in TDSV

h. Store the database content in the master object vector, i.e. MV

i. Get TDSV size as n

j. For i=0 to n-1

k. Search for TDSV(i) in MV with tid

l. If transaction id (tid) found then

m. Compare TDSV(i) with MV object

n. If equal then no intrusion

o. Else alert intrusion and go for forensic analysis

p. Fire a query using DMV to get when (date and time) the intrusion occurred and who (by user credentials) performed it

q. Compare TDSV(i) and MV to get which field has been tampered

r. Prepare a report in text file and mail to the owner

s. End
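The comparison loop in steps i-q can be sketched as a small standalone script. This is only an illustration, not the paper's implementation: the record layout, the field names (`tid`, `fields`) and the alert format are my assumptions, and in practice TDSV would be parsed from the application server log and MV read from the database.

```python
def detect_db_intrusion(tdsv, mv):
    """Compare each traced transaction against the master database copy.

    tdsv: list of dicts, each {"tid": ..., "fields": {...}} from the log trace
    mv:   dict mapping tid -> {"fields": {...}} read from the database
    Returns a list of alert strings (an empty list means no intrusion found).
    """
    alerts = []
    for record in tdsv:                        # j. for i = 0 to n-1
        tid = record["tid"]
        master = mv.get(tid)                   # k. search MV by tid
        if master is None:
            continue                           # tid not found, nothing to compare
        if record["fields"] == master["fields"]:
            continue                           # n. equal -> no intrusion
        # o./q. alert and identify which fields were tampered with
        tampered = [
            name for name, value in record["fields"].items()
            if master["fields"].get(name) != value
        ]
        alerts.append(f"tid {tid}: tampered fields {tampered}")
    return alerts

# Example: one clean transaction and one whose master copy was changed
trace = [
    {"tid": 1, "fields": {"amount": 100}},
    {"tid": 2, "fields": {"amount": 100}},
]
master = {
    1: {"fields": {"amount": 100}},
    2: {"fields": {"amount": 999}},   # modified after the fact
}
print(detect_db_intrusion(trace, master))  # one alert, for tid 2
```

Steps p and r (querying DMV for the who/when and mailing the report) are environment-specific and omitted here.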


For file intrusion detection:

a. Start

b. Get file name, path, modified date and time (DT1) while uploading the file

c. Apply MD5 and get hash key (HK1)

d. Store step b and c parameters in database

e. While opting for file intrusion detection, get the current modified date and time (DT2)

f. If (DT1=DT2)

g. No intrusion

h. Else apply the MD5 algorithm to the file and get the current hash key (HK2)

i. If (HK1 = HK2), conclude that the file was intruded (accessed) but not modified

j. Else file has been modified

k. Generate a report in text file and mail it to the owner

l. End
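The file check (steps b-k) maps almost directly onto Python's standard library. A minimal sketch, with the baseline kept in a dict rather than the paper's database table, and the mail/report step omitted:

```python
import hashlib
import os

def md5_of(path):
    """Steps c/h: MD5 of the file contents, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register_file(path):
    """Steps b-d: record the modified time (DT1) and hash key (HK1) at upload."""
    return {"path": path, "dt1": os.path.getmtime(path), "hk1": md5_of(path)}

def check_file(baseline):
    """Steps e-j: re-check the file against the stored baseline."""
    dt2 = os.path.getmtime(baseline["path"])
    if dt2 == baseline["dt1"]:
        return "no intrusion"                 # f/g: timestamps match
    if md5_of(baseline["path"]) == baseline["hk1"]:
        return "intruded but not modified"    # i: touched, but same content
    return "modified"                         # j: content changed
```

Note the timestamp comparison is only a fast pre-check: mtime can be forged with tools like `touch`, so the hash comparison is what actually decides whether content changed.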

The complete research paper can be found at\Database-And-File-Intrusion-Detection-System.pdf

  • I don't want to be dismissive, but this is a little too complex for a correlation engine to deal with. For this type of specific intrusion / attack vector, it's better to use some sort of DAM tool / solution or a WAF. They are purpose-designed and built to track activity, changes and situations like this - and most will have pre-built rules and methods for identifying these things.

    Any SIEM solution is going to be looking at generic situations and scenarios and building up a picture of an incident based around a number of indicators from different and varied log sources. For example, when we look at solving perimeter use cases, we are looking at logs from firewalls, proxies, maybe switches / routers and some security solutions like IPS / IDS. We get a really good picture of who, what, where and how - so we can draw some conclusions across each of these and understand the priority and criticality.

    The difference in a perimeter security use case is when you have a specific type of network attack - could the SIEM address this? Maybe, by focusing on the specific logs from say your firewall, it could be addressed. However, the easier path would be to use something like the advanced features of the firewall or the IPS tool to address this specifically - both are purpose-built to address this and are going to do a faster and more effective job than a SIEM - and both offer the one thing that a SIEM will struggle with here - active blocking.

    The same is true here with the database intrusion - the SIEM might be able to piece together the events from the audit trace to address this, but it's then just a trigger and nothing else. You really need a WAF / DAM tool, as it sits in between and can stop and block this from progressing - hence a better level of protection.

    However, if you are determined to address this with a SIEM (and please note I am being generic here - they are all in the same boat; ArcSight has more sophisticated rules and state tracking than say IBM, but generically, they are no better or worse here) - then you could do something like this:

    1) Look for the specific trigger or action needed to make this an intrusion - this paper talks about tracking a master list of hash values that are computed - while this is not a threat on its own, you should be identifying activities that either generate this, trigger this or do comparisons on the files / database entries.

    2) When you identify the specific trigger actions for the activity, create a rule that generates a tracking state for this - so store the username (if available), the IP address and the target of the attempts - while not triggering specifically, we are tracking this on an active list.

    3) Use a correlation rule to match other suspicious activities to and from the server in question (target) around the time of the tracked hash / trigger actions. This way you can map the activity between the audit logs for the user activity and other suspicious activity that is occurring - trigger a notification as a result.
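    As a rough illustration of steps 2 and 3 outside any SIEM product (this is not ArcSight's active list feature or API, just a toy model of the idea): track trigger actions per target on an in-memory "active list", then check whether later suspicious events land within a correlation window of a tracked entry. The event shape and the 300-second window are assumptions for the sketch.

```python
from collections import defaultdict

WINDOW = 300  # seconds around a tracked trigger within which to correlate

class ActiveList:
    def __init__(self):
        # target host -> list of (user, source IP, timestamp) tracked entries
        self.entries = defaultdict(list)

    def track(self, event):
        """Step 2: record trigger actions (hash / compare activity) per target."""
        self.entries[event["target"]].append(
            (event.get("user"), event["src_ip"], event["ts"])
        )

    def correlate(self, event):
        """Step 3: return tracked entries on the same target that fall
        within the correlation window of this suspicious event."""
        return [
            e for e in self.entries[event["target"]]
            if abs(e[2] - event["ts"]) <= WINDOW
        ]

al = ActiveList()
al.track({"target": "db01", "user": "scott", "src_ip": "10.0.0.5", "ts": 1000})
# A suspicious event 100 s later on the same server matches the tracked
# trigger -> this is where the rule would fire a notification.
hits = al.correlate({"target": "db01", "src_ip": "10.0.0.9", "ts": 1100})
```

    In ESM itself the same pattern would be a lightweight rule writing to an active list plus a correlation rule that joins against it.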

    This is all generic, I know, but it is what you should be looking for. Also, personally, I would focus on doing the good governance steps first and getting them working well. You should be using rules to identify access to critical servers, from which locations, and for authorized users only. From there you can start to build out activity monitoring based on who does what and when (time of day, critical server, etc.), use this basic security monitoring as a guide to highlight suspicious activity on and around the servers in question, and then build up a more comprehensive set of rules focused on specific attack scenarios.

    Do the basic stuff well and you will get a really good set of views into what is happening.

    7 Database Security Best Practices - eSecurity Planet