McAfee SIEM/Nitro SharePoint High-Volume Auditing

A custom solution to audit a highly active SharePoint farm.
Note: This post outlines getting SharePoint audit events into the McAfee SIEM product.  Since the solution exports SharePoint audit events to flat text log files as an intermediate step, it could easily be amended to work with any log analysis tool.

Summary:

I needed to audit the activity of a heavily used SharePoint farm, one that served primarily as a document management system for a firm.  Auditing was required on all document access, not just administrative activity, which resulted in nearly 1 million records per 24-hour period on average.  This greatly outstripped the capability of SharePoint auditing exports via PowerShell or SharePoint tasks, and it caused reliability issues with LOGbinder for SharePoint (an otherwise excellent product).  Additionally, querying the SharePoint database directly using the Windows 32-bit McAfee SQL collector against my custom "Auditing" SQL view crashed the collector on every attempt, I suspect from what appeared to be a memory leak.  So the traditional route of defining a SQL query and using McAfee's SQL collector was not a viable option, though in many environments I suspect it would be just fine and would therefore not require the custom database and SQL Agent job.

The SharePoint audit log collection solution I ultimately came up with comprises a number of custom integration points and introduces a custom database, located on the SharePoint database server, that contains a custom table for each SharePoint content or administration site collection data source.  See the "SharePoint Audit Log Collection Process Flow" diagram for the high-level log collection logic, and the rest of this post for how the custom process works and its caveats.  This solution has worked incredibly well and reliably in my environment; no custom recovery or "fixes" have ever been required, even when SharePoint services have gone down.  In over 6 months of heavy production use, the only changes required have been the addition of a few ASP parser rules to capture more SharePoint events.

SharePoint Audit Collection Process Summary

When auditing is turned on, SharePoint maintains audit log information in a table named dbo.AuditData within the content database.  A custom query against this table, with an INNER JOIN on the UserInfo table, is used to extract meaningful audit log data (the full view definition appears below).  Microsoft does not provide a mechanism to extract this log data other than a GUI or PowerShell Excel export, which breaks on our large site collections and results in SQL locks and site performance/access problems.

A SQL Agent job on the SharePoint database server is executed at scheduled intervals and:

  1. Queries a custom database containing a bookmark table that records, for each audit export run: the datetime of the last exported record, the time range of the last export, the number of records returned, and the time taken to execute the export query.
  2. Executes a SELECT statement against the SharePoint database for audit records from the last timestamp to the last timestamp plus a time increment (+30 min seems to work well). The export job quits immediately if this audit log range exceeds the current time minus 15 minutes, to prevent missing events that have not yet occurred or have not yet been written to the SharePoint table.
    Note: This direct access is not supported by Microsoft and has the potential to impact SharePoint access or performance. This is mitigated through the use of NOLOCK hints and by requesting small blocks of audit records at a time. Some risk remains, but it has not caused any detected issues in over 6 months of use in a high-transaction-volume SharePoint 2010 farm.
  3. Writes these audit records to a flat file located on the SQL server; my location is S:\SQLAuditLogs. Initially the export is made to a file named RBS1_Temp.wrk, then the contents of the .wrk file are appended to the actual log file, with a naming convention of yearmonthday_hour.log (e.g. 20140621_5.log for the hour beginning 2014-06-21 05:00 UTC).
  4. Writes the time bookmark and export statistics to the custom database table.
  5. A McAfee Receiver data source periodically polls the \\Databaseserver\SQLAuditLogs directory for *.log files to ingest, then deletes each file once ingested.

Other job notes: the "TimeIncrement" used to collect logs needs to be selected to:

    1. Minimize the SharePoint audit query time and the amount of time the query needs to run over the course of a day.
    2. Keep security audit logs in the SIEM timely, i.e. not lagging too far into the past.
    3. Allow catch-up to occur should the SQL Agent job be disabled for a period of time (i.e. the audit log collection time period should be greater than the SQL Agent job's scheduled frequency, which in turn should be greater than the time taken for the audit export query to run). A monitoring sketch for checking this lag follows below.
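
To watch the lag and catch-up behavior, a query along these lines against the bookmark table can help. This is a minimal sketch of my own (not part of the original job), assuming the BAL_AuditBookmark schema created later in this post and bookmarks stored in UTC:

USE [BAL_SharepointAudit]
GO

-- Show the most recent export run and how far collection trails "now".
SELECT TOP (1)
    Bookmark,                                              -- end of the last exported time range
    Increment,                                             -- minutes covered per run
    RecordsReturned,
    ExportTimeInSec,
    DATEDIFF(minute, Bookmark, GETUTCDATE()) AS LagMinutes -- catch-up needed, in minutes
FROM dbo.BAL_AuditBookmark
ORDER BY Bookmark DESC
GO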

SharePoint Content Database AuditData Custom View:

USE [SharePoint Content DB name]
GO

/****** Object:  View [dbo].[BAL_AuditView]    Script Date: 08/16/2014 09:40:38 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE VIEW [dbo].[BAL_AuditView]
AS
SELECT TOP (100) PERCENT a.Occurred, a.SiteId, a.ItemId, a.ItemType, a.UserId, a.MachineName, a.MachineIp, a.DocLocation, a.LocationType, a.Event, a.EventName,
a.EventSource, a.SourceName, a.EventData, b.tp_Login, b.tp_Title, b.tp_Email, b.tp_Notes
FROM dbo.AuditData AS a WITH (NOLOCK) INNER JOIN
dbo.UserInfo AS b WITH (NOLOCK) ON a.UserId = b.tp_ID
WHERE (a.Occurred > '2014-06-21 05:33:22.000') AND (b.tp_Login <> 'SHAREPOINT\system')
ORDER BY a.Occurred

GO
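
Note that the view pins a hardcoded start date, and the SQL Agent job below actually inlines an equivalent query rather than calling the view; the view is mostly handy for ad-hoc verification. A quick illustrative check, a sketch of my own where the one-hour window is just an example value:

USE [SharePoint Content DB name]
GO

-- Spot-check the last hour of audit activity (Occurred is stored in UTC).
SELECT TOP (50) Occurred, tp_Login, EventName, DocLocation
FROM dbo.BAL_AuditView
WHERE Occurred > DATEADD(hour, -1, GETUTCDATE())
ORDER BY Occurred DESC
GO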


SQL Agent Job custom code:

USE [BAL_SharepointAudit]
GO

DECLARE @BookmarkDate datetime
DECLARE @MinuteIncrement int = 60
DECLARE @NewDate datetime
DECLARE @CutoffDate datetime
DECLARE @RecordsReturned int
DECLARE @StartTime datetime
DECLARE @TimeTakenToExport int
DECLARE @BCPQuery varchar(2000)
DECLARE @BCPCommand varchar(2000)
DECLARE @LogFilePath varchar(100) = 'S:\SharepointAuditLogs\'
DECLARE @LogFileTemp varchar(100) = 'S:\SharepointAuditLogs\RBS1_Temp.wrk'

-- Fetch the most recent bookmark (the end of the last exported time range)
SELECT TOP 1 @BookmarkDate = Bookmark
FROM [BAL_AuditBookmark] ORDER BY Bookmark DESC

SET @NewDate = DATEADD(minute, @MinuteIncrement, @BookmarkDate)
SET @CutoffDate = DATEADD(minute, -15, GETDATE())   -- Make sure the log export lags the export attempt by 15 min. to give the latest events time to be written to the SharePoint DB
SET @CutoffDate = DATEADD(hour, +8, @CutoffDate)    -- Add 8 hours to the cutoff time as logs are in UTC and the server is in Pacific time (note: a fixed offset ignores daylight saving; GETUTCDATE() would sidestep that)

IF @NewDate <= @CutoffDate
BEGIN

SET @StartTime = GETDATE()

/* This query extracts the relevant audit data out of the SharePoint database */
SET @BCPQuery = 'SELECT TOP (100) PERCENT a.Occurred, a.SiteId, a.ItemId, a.ItemType, a.UserId, a.MachineName, a.MachineIp, a.DocLocation, a.LocationType, a.Event, a.EventName, a.EventSource, a.SourceName, a.EventData, b.tp_Login, b.tp_Title, b.tp_Email, b.tp_Notes FROM WSS_Content_RBS1.dbo.AuditData AS a WITH (NOLOCK) INNER JOIN WSS_Content_RBS1.dbo.UserInfo AS b WITH (NOLOCK) ON a.UserId = b.tp_ID WHERE (a.Occurred > '+''''+convert(varchar(20),@BookmarkDate)+''''+') AND (a.Occurred < '+''''+convert(varchar(20),@NewDate)+''''+') AND tp_Login <> '+''''+'SHAREPOINT\system'+''''+' ORDER BY a.Occurred'

SET @BCPCommand = 'bcp "' + @BCPQuery + '" queryout "'
SET @BCPCommand = @BCPCommand + @LogFileTemp + '" -T -c -t"|"'

EXEC master..xp_cmdshell @BCPCommand

-- Append the work file to the log file named with a date and hour timestamp --
SET @BCPCommand = 'type ' + @LogFileTemp + ' >> ' + @LogFilePath + convert(varchar(10), @BookmarkDate, 112) + '_' + convert(varchar(2), DATEPART(hh, @BookmarkDate)) + '.log'

EXEC master..xp_cmdshell @BCPCommand

SET @TimeTakenToExport = DATEDIFF(SECOND, @StartTime, GETDATE())
SET @RecordsReturned = 1   -- Placeholder: the bcp row count is not captured back into this script

INSERT INTO [BAL_AuditBookmark] VALUES (@NewDate, @MinuteIncrement, @RecordsReturned, @TimeTakenToExport);

END
GO
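
For completeness, here is a minimal sketch of registering the batch above as a scheduled SQL Agent job via msdb. The job, step, and schedule names are hypothetical, the 20-minute frequency is just one value satisfying the catch-up guideline in the job notes (schedule frequency less than the collection increment), and the step command would be the full export batch:

USE msdb
GO

-- Hypothetical job name; adjust to your conventions.
EXEC dbo.sp_add_job @job_name = N'BAL_SharepointAuditExport';

EXEC dbo.sp_add_jobstep
    @job_name = N'BAL_SharepointAuditExport',
    @step_name = N'Export audit block',
    @subsystem = N'TSQL',
    @database_name = N'BAL_SharepointAudit',
    @command = N'/* paste the export batch shown above here */';

-- Run every 20 minutes, which is less than the 30 to 60 minute
-- collection increment, so catch-up can occur per the job notes.
EXEC dbo.sp_add_jobschedule
    @job_name = N'BAL_SharepointAuditExport',
    @name = N'Every20Minutes',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,     -- units of minutes
    @freq_subday_interval = 20;

EXEC dbo.sp_add_jobserver @job_name = N'BAL_SharepointAuditExport';
GO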

Custom Database Creation Code:

/****** Object:  Database [BAL_SharepointAudit]    Script Date: 01/08/2015 11:46:28 ******/
CREATE DATABASE [BAL_SharepointAudit] ON PRIMARY
( NAME = N'BAL_SharepointAudit', FILENAME = N'F:\SQL2008Default\Data\BAL_SharepointAudit.mdf' , SIZE = 1029120KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'BAL_SharepointAudit_log', FILENAME = N'F:\SQL2008Default\Data\BAL_SharepointAudit_log.ldf' , SIZE = 1746688KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

ALTER DATABASE [BAL_SharepointAudit] SET COMPATIBILITY_LEVEL = 100
GO
USE [BAL_SharepointAudit]
GO

/****** Object:  Table [dbo].[BAL_AuditBookmark]    Script Date: 01/08/2015 11:47:26 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[BAL_AuditBookmark](
[Bookmark] [datetime] NOT NULL,
[Increment] [smallint] NOT NULL,
[RecordsReturned] [int] NOT NULL,
[ExportTimeInSec] [int] NOT NULL
) ON [PRIMARY]

GO
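
One bootstrap step the creation script does not include: the agent job reads its starting point from BAL_AuditBookmark, so the table needs a seed row before the first run. A minimal sketch, where the start datetime (in UTC, matching SharePoint's Occurred column) is an example value you would choose for your environment:

USE [BAL_SharepointAudit]
GO

-- Seed the bookmark so the first export has a starting point.
INSERT INTO dbo.BAL_AuditBookmark (Bookmark, Increment, RecordsReturned, ExportTimeInSec)
VALUES ('2014-06-21 05:33:22', 60, 0, 0)
GO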

McAfee SIEM/Nitro ASP Parsing Rules

Link to SIEM SharePoint ASP parser rules

Replacement Symantec Endpoint ASP Parser rules for McAfee/Nitro SIEM

The built-in Symantec ASP parser rules are OK, but like most rules that come "out of the box," I'm finding they fall short of extracting enough metadata from the logs to build useful reports, actionable alerts, and dashboards!

So once again you can find a link below to my Symantec Endpoint Protection rules, written for SEP 12, though you should find they work for version 11 as well.  Continue reading for a bit more detail.

DISCLAIMER & WARNING:

There are issues with deploying my solution in an environment that is already capturing Symantec events using McAfee's built-in Symantec ASP parsers; you will need to understand them and ensure you are comfortable with them before attempting to deploy, especially in your production environment.

To implement successfully you will need to "delete all auto-learned rules for this data source" for the Symantec data source.  This is so that my parser rules can create data source rules for your Symantec data source that do a better job (in my opinion) of logging your Symantec events.  Make sure this is acceptable in your environment prior to doing so.

Version 1 of these rules does not pull the Agent Traffic Log or Agent Proactive Log.  It is not very hard to incorporate, but two weeks have gone by and I still haven't done it; when I finally do, I'll post.  I'm OK with this being missing in my environment for the time being.

Why create my own Symantec ASP parser rules?

Creating the custom Symantec ASP parser rules was a substantial effort and probably consumed 20 hours of my time.  The main reasons for doing so are below:

  1. I found it very difficult to incorporate the default Symantec events into "global threat dashboards" because much of the event information needed to build queries was simply not to be found in metadata fields, i.e. required event information was often not extracted from the raw events into normalized McAfee fields.  Many events had very little data extracted even when there was a lot of obviously useful information available, and I couldn't create any useful reports either.
  2. Reviewing the Symantec rules was arduous since there are 40+ of them.  After reviewing a few and seeing the limited amount of data being extracted, I quickly decided to scrap them and write my own.  My strategy was to create a single rule for each type of SEP log: scm_system.log, scm_policy.log, scm_agent_act.log, scm_admin.log, agt_traffic.log, agt_system.log, agt_security.log, agt_scan, agt_risk, agt_proactive, agt_behavior.log.  (Note: the proactive and traffic logs do not have parsers written yet; sorry, I'll get to it.)
  3. Events have a description starting with SEP, followed by the log type and then the threat.  The application field is recorded as the log type, e.g. "Agent Risk", for easy filtering and reporting, and I have extracted as much data as possible into the relevant McAfee fields.  See this link for an example of an Agent Risk event.  Hopefully you will agree the information extracted is richer and far more useful.

My Experience thus far:

I’ve been using my custom Symantec rules for 3 weeks now and have been able to successfully build Global “Threat” and “High Threat” dashboards, reports and alerts.  Once again I find myself not missing any of the McAfee ASP rules, just the 20 hours or so of my life it took to create another set of rules.  This seems to be a recurring trend; existing rules don’t bother extracting some or even worse most of the rich log data available and the customer has to decide whether there is enough business justification to rewrite and test new parsers.

Instructions for installing:

  1. If you have been capturing SEP events, you'll need to delete the existing automatically created data source rules for your SEP data source, as they will likely conflict with the ones my ASP rules will generate.
  2. Disable all McAfee SEP rules on your Symantec server data source.
  3. Import my ASP rule set.  Review the NRF file and edit the <nitro id> values to reflect rule IDs available in your deployment, or ensure that you DO NOT OVERWRITE any existing rule IDs on import but instead CREATE NEW Signature IDs ON ANY CONFLICTS; otherwise you will overwrite any custom rules you have already created.
  4. Enable these rules for your Symantec data source, and change data aggregation settings and/or exceptions for these rules as appropriate in your environment.
  5. Follow McAfee and/or Symantec instructions on how to forward logs from the SEP Management server using its syslog forwarding feature.  Do not use the McAfee agent to parse the actual SEP log files; this is unlikely to work.

Link to Symantec Custom ASP parser rules

Replacement Checkpoint ASP Parser rules for McAfee/Nitro SIEM

The built-in Checkpoint ASP parser rules are actually an example of rules created by McAfee that are quite good, but they fell short in a few key areas, at least in my opinion, that prevented them from being usable in my environment.  So find attached my Checkpoint parser rules, and continue reading for a brief synopsis of how my rules deviate from McAfee's.

DISCLAIMER & WARNING:

There are issues with deploying my solution in an environment that is already capturing Checkpoint events using McAfee's built-in Checkpoint ASP parsers; you will need to understand them and ensure you are comfortable with them before attempting to deploy, especially in your production environment.

To implement successfully you will need to "delete all auto-learned rules for this data source" for the Checkpoint data source.  This is so that my parser rules can create data source rules for your Checkpoint data source that do a better job (in my opinion) of logging your Checkpoint events.  Make sure this is acceptable in your environment prior to doing so.

Why create my own Checkpoint ASP parser rules?

Creating the custom Checkpoint ASP parser rules was a substantial effort and probably consumed 20-30 hours of my time.  The main reasons for doing so are below:

  1. I found it very difficult to incorporate the default Checkpoint events into "global threat dashboards" because much of the event information needed to build queries was simply not to be found in metadata fields, i.e. required event information was often not extracted from the raw events into normalized McAfee fields.  Many events had very little data extracted even when there was a lot of obviously useful information available.
  2. Where possible, I tried to consistently extract event data into the same metadata fields in McAfee (the McAfee Nitro database has an odd behaviour of sharing database fields among different event fields, so this was not always possible).  The original rules did not do a good job in this regard.
  3. Events are broken out by Checkpoint "Blade", and the "Application" field records the originating blade.  For example, if you are interested in mobile blade events, search on Application: "Connectra" (Checkpoint's legacy name for mobile).  If you are interested in IPS-type events, you are likely interested in the "Anti Malware" or "SmartDefense" blades.  I stuck to Checkpoint's nomenclature as tightly as possible.
  4. Firewall ACL events are reported under the "VPN & Firewall:1" application, which directly correlates to the Checkpoint blade that handles these events.  I also added the firewall ACL rule number to the event description, which McAfee does not do; this allows for useful dashboard views when you did not provide a friendly ACL name in Checkpoint, and it better differentiates similarly or identically named rules.
  5. Checkpoint events now have much more information extracted from the actual log entry into metadata fields.  IPS events now have the threat category, threat name, CVE threat references, etc. that McAfee never bothered extracting from the underlying log entry.

Caveats:

There is a rule called Checkpoint 99 - Unknown Event.  This is in place to give visibility into events that would not otherwise be captured by my ASP rules.  It will also sometimes capture an event that would indeed be caught by one of the other Checkpoint rules in place, as I have been unable to determine how McAfee Nitro determines order of precedence in rule processing (it doesn't seem to be based on Signature ID or name).  In my environment, for every 3 million events, 50 or fewer fall into the "Unknown" rule.  I know I have a few administrative rules still to create, for instance to capture actions taken in the "SmartReporter" or "SmartLog" Checkpoint applications, which will come out in subsequent versions of this ASP rule collection.  If you find the events captured by this rule irrelevant, you can delete this particular Unknown rule from your deployment.

My Experience thus far:

I’ve been using my custom Checkpoint rules for 3-4 weeks now and have been able to successfully build Global “Threat” and “High Threat” dashboards incorporating multiple products including Symantec Endpoint Protection(once again I created my own rules for similar reasons) and Checkpoint due to meta-data now existing in the events and being able to be queried in reports and dashboards.

Instructions for installing:

  1. If you have been capturing Checkpoint events, you'll need to delete the existing automatically created data source rules for your Checkpoint data source, as they will likely conflict with the ones my ASP rules will generate.
  2. Disable all McAfee Checkpoint rules on your Checkpoint data source.
  3. Import my ASP rule set.  Review the XML file and edit the data between the <rule><id>5000112</id> tags to reflect rule IDs available in your deployment, or ensure that you DO NOT OVERWRITE any existing rule IDs on import but instead CREATE NEW Signature IDs ON ANY CONFLICTS; otherwise you will overwrite any custom rules you have already created.  You should also edit the following tag data: <tag origin="1">Checkpoint</tag> to reflect how you want these imported rules to be tagged so they are more easily visible in the policy tree.
  4. Enable these rules for the Checkpoint data source, and change data aggregation settings and/or exceptions for these rules as appropriate in your environment.

Link to Checkpoint Custom ASP parser rules

Citrix XenServer XenCenter RBAC Logging for McAfee/Nitro SIEM

Wish you could log activity occurring against your XenCenter environment?  I did as well.

I was unable to find any useful Citrix XenServer logging documentation that outlined what is logged, where, and in what format.  Perhaps it does exist, and I would appreciate any community feedback with links I could review.

This was a very important data source in my environment to monitor, so I took it upon myself to research viable methodologies and options to get this information into the McAfee SIEM, as sadly it did not exist as a predefined data source.  Ultimately I settled on the RBAC events as the most viable, easiest to ingest, and most immediately useful events to pursue.  There are numerous log file sources in XenServer, but unfortunately the signal-to-noise ratio tends to be very poor, and coupled with the lack of documentation on the types and format of log entries in these files, it seemed too large a hurdle to overcome in a short period of time.  Once again, any feedback from the community here might be of help and I would welcome it.

So find below my first attempt at incorporating XenCenter events into the McAfee SIEM product.  It is incomplete, but still useful in my humble opinion.

Preamble:

The following rules and processes were developed against a Citrix XenServer 6.1 environment, but I have reason to believe they should be quite effective against 5.6 and 6.2 pools as well.

My approach:

I am targeting the RBAC (role-based access control) security events, which tend to capture a large number of relevant actions performed against the XenServer environment by admins and automated processes, with little to no noise.

To log RBAC activity, it is necessary to add the following line to the syslog configuration file on the XenServer management server (/etc/syslog.conf) so that RBAC events are sent via syslog to the McAfee SIEM receiver.  I would do this on the pool master server only; otherwise you will get duplicate events if it is deployed to more than one server.  Risk note: when the pool master is down, you may lose events.

local6.* @<McAfee SIEM receiver IP or DNS address>

* In XenServer 6.1, the local6 facility appears to be dedicated to RBAC activity and can be directed to a remote syslog destination.

Custom ASP rules have been developed to parse out the relevant events and their respective data fields from the RBAC syslog entries.

Setting up a data source involves:

  1. Import the latest "XenServer" custom ASP parser rules into the SIEM policy panel.
    WARNING: You will likely want to edit the attached XML policy specification to include Signature ID numbers valid in your deployment, or at a minimum specify on import that any duplicates result in a new signature being created and DO NOT REPLACE any existing Signature IDs.  Otherwise you may delete some of your own created signatures!  For example, edit the 5000026 ID tags for each rule.  Also, I tagged these XenServer events with a custom tag labelled "XenServer"; you may or may not want this designation, which can be changed in that tag.
  2. Add a generic syslog data source for each XenServer pool to be monitored.
  3. Enable the custom XenServer ASP rules for just the XenServer data source!
  4. Modify /etc/syslog.conf on the XenServer pool master to send local6 events to the receiver, then restart the syslog service (e.g. service syslog restart on the CentOS-based dom0; the exact command may vary by release).

The following events are currently captured; there are more, and they should appear in subsequent releases of this XenServer ASP parser rule set.  Please feel free to modify and contribute!

  • VM.Start
  • VM.set_protection_policy  (unfortunately Citrix, in all their wisdom, deprecated this feature in XenServer 6.2!)
  • VM.set_name_label
  • VM.set_VCPUs_max
  • VM.Destroy
  • VDI.Destroy
  • Session.Create
  • Message.Destroy
  • Async.host.license_apply
  • Async.VMPP.create (deprecated in XenServer 6.2)
  • Async.VM.snapshot
  • Async.VM.Provision
  • Async.VM.clone
  • Async.VDI.Destroy

Link to XenCenter SIEM ASP rule file