Archive for June, 2011

How to capture Delete and Truncate T-SQL using Database Audit Specification

June 30, 2011

Capturing DELETE and TRUNCATE T-SQL becomes especially useful in a heterogeneous environment. It helped me understand when and how DBAs, jobs, or SSIS packages delete or truncate data from a table based on application or business requirements.

Here we will see how to configure a Database Audit Specification to capture DELETE and TRUNCATE T-SQL. Later we will see how to view the audit report and send it in an email.

Connect to the SQL Server instance and navigate to Security → Audits → New Audit.

Provide the audit name (file name), audit destination (File), and file path (e.g., X:\).

A newly created audit is disabled by default, so enable it.

Select the database on which DELETE and TRUNCATE statements are to be captured.

Database → Security → Database Audit Specification

Type a name for the database audit specification and select the audit that was configured to capture the T-SQL commands.

Under "Audit Action Type" select DELETE, set "Object Class" to DATABASE, and set the object name to the database name. Select the principal name as the schema by which the objects were created.

TRUNCATE commands are recorded as DELETE in the audit file. Enable the database audit specification once configuration is complete.
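The GUI steps above can also be scripted. A minimal T-SQL sketch, assuming example names ([Audit-DeleteTruncate], [MyDatabase]) and the dbo schema as the audited principal:

```sql
USE [master];
GO
-- Server-level audit that writes events to a file (path is an example)
CREATE SERVER AUDIT [Audit-DeleteTruncate]
TO FILE (FILEPATH = N'X:\');
GO
ALTER SERVER AUDIT [Audit-DeleteTruncate] WITH (STATE = ON);
GO
USE [MyDatabase];
GO
-- Database audit specification: capture DELETE (TRUNCATE is recorded as DELETE)
CREATE DATABASE AUDIT SPECIFICATION [DBAudit-DeleteTruncate]
FOR SERVER AUDIT [Audit-DeleteTruncate]
ADD (DELETE ON DATABASE::[MyDatabase] BY [dbo])
WITH (STATE = ON);
GO
```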

Now execute DELETE and TRUNCATE commands against the database on which the audit is configured. Once the commands complete, check that they were captured and written to the audit file:


SELECT event_time, succeeded, object_id, object_name, session_server_principal_name,
       server_principal_name, server_instance_name, database_name, Statement,
       file_name, audit_file_offset
FROM fn_get_audit_file ('D:\*-SM*', NULL, NULL)
ORDER BY event_time DESC


To simplify this and schedule it as a report, I have created a stored procedure whose output is User → Command Execution Date → Number of Commands Executed.

The stored procedure has two parameters

  1. @ReportforDays: the number of days for which the audit report will be pulled (e.g., a report for the last 2 days or 20 days).
  2. @AuditFile: file location and file name pattern (e.g., D:\*abc*).

Stored Procedure Code:



CREATE PROCEDURE dba_audit_varify
    (@ReportforDays VARCHAR(2), @AuditFile VARCHAR(MAX))
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @Exe_User        VARCHAR(255)
    DECLARE @ex_dt           DATETIME
    DECLARE @Statements      VARCHAR(MAX)
    DECLARE @ReportDays      VARCHAR(2)
    DECLARE @AuditFilename   VARCHAR(MAX)
    DECLARE @StartLoop       INT
    DECLARE @CountLoop       INT
    DECLARE @StStartLoop     INT
    DECLARE @StCountLoop     INT
    DECLARE @session_server_principal_name_text VARCHAR(MAX)

    DECLARE @Executor TABLE (ex_id INT IDENTITY(1,1), ex_user VARCHAR(255))

    IF OBJECT_ID('tempdb..#Executor_statement') IS NOT NULL
        DROP TABLE #Executor_statement

    CREATE TABLE #Executor_statement
        (ex_id INT IDENTITY(1,1), ex_user VARCHAR(255),
         Statements VARCHAR(MAX), execution_dt DATETIME)

    SET @ReportDays    = @ReportforDays
    SET @AuditFilename = @AuditFile

    -- Dynamic SQL: list the distinct users found in the audit file for the period
    SET @session_server_principal_name_text =
        'SELECT DISTINCT session_server_principal_name FROM fn_get_audit_file
         (''' + @AuditFilename + ''',NULL,NULL)
         WHERE session_server_principal_name != ''''
         AND event_time BETWEEN GETDATE() - ' + @ReportDays + ' AND GETDATE()'

    PRINT '[-] User Audit Report for Database ' + DB_NAME()

    INSERT INTO @Executor (ex_user)
    EXEC (@session_server_principal_name_text)

    SET @StartLoop = @@ROWCOUNT
    SET @CountLoop = 1

    WHILE @CountLoop <= @StartLoop
    BEGIN
        SELECT @Exe_User = ex_user FROM @Executor WHERE ex_id = @CountLoop

        --PRINT space(11)+'|'
        PRINT SPACE(5) + '[-]' + ' User :[' + @Exe_User + '] '

        -- TRUNCATE resets the IDENTITY so ex_id starts at 1 for every user
        TRUNCATE TABLE #Executor_statement

        INSERT INTO #Executor_statement (execution_dt, ex_user, Statements)
        SELECT event_time, session_server_principal_name, Statement
        FROM fn_get_audit_file (@AuditFilename, NULL, NULL)
        WHERE session_server_principal_name != ''
          AND event_time BETWEEN GETDATE() - CAST(@ReportDays AS INT) AND GETDATE()
          AND session_server_principal_name = @Exe_User
        ORDER BY event_time DESC

        SET @StStartLoop = @@ROWCOUNT
        SET @StCountLoop = 1

        WHILE @StCountLoop <= @StStartLoop
        BEGIN
            SELECT @ex_dt = execution_dt, @Statements = Statements
            FROM #Executor_statement WHERE ex_id = @StCountLoop

            PRINT SPACE(6) + '|' + SPACE(5) + '[|-]' + 'Execution_Date :[' + CONVERT(VARCHAR(25), @ex_dt, 120) + ']'
            PRINT SPACE(6) + '|' + SPACE(5) + '[|-]' + 'Query :' + @Statements

            SET @StCountLoop = @StCountLoop + 1
        END

        SET @CountLoop = @CountLoop + 1
    END

    DROP TABLE #Executor_statement
END
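With the two parameters described above, the procedure might be invoked as follows (the file pattern is the example from earlier):

```sql
-- Report DELETE/TRUNCATE activity for the last 2 days
EXEC dba_audit_varify @ReportforDays = '2', @AuditFile = 'D:\*abc*';
```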








Categories: Uncategorized

How to manually generate cluster Log on Win 2008

June 17, 2011

In my experience, cluster logging is not enabled by default, which makes it difficult to troubleshoot cluster issues; to troubleshoot, I had to manually generate the log from the command prompt.

Here we will learn how to generate the cluster log from command prompt.


  • Open the command prompt with administrator privileges (use "Run as Administrator").
  • Type cluster /cluster:ClusterName log /gen /copy "C:\temp". You should get output as follows.




Running Ad Hoc Queries on Remote SQL Server without Linked Server

June 10, 2011

In this post we will see how to run ad hoc T-SQL queries between SQL Server instances (local or remote) without a linked server.


To execute ad hoc T-SQL you need to enable the 'Ad Hoc Distributed Queries' server option via sp_configure:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;




To access remote data from an OLE DB data source, you can use OPENROWSET to run the query:

SELECT a.*
FROM OPENROWSET('SQLNCLI',
    'Server=RemoteServerName;Trusted_Connection=yes;',  -- RemoteServerName is a placeholder
    'SELECT name from master.sys.databases') AS a;
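An alternative with the same 'Ad Hoc Distributed Queries' requirement is OPENDATASOURCE, which takes a four-part name instead of a pass-through query (RemoteServerName is again a placeholder):

```sql
SELECT name
FROM OPENDATASOURCE('SQLNCLI',
    'Data Source=RemoteServerName;Integrated Security=SSPI').master.sys.databases;
```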






How Bulk-Logged Recovery Model Works

June 8, 2011


It has always been suggested/recommended to use the bulk-logged recovery model for bulk operations. But do we really know how the bulk-logged recovery model works and what the risks associated with it are?

The bulk-logged recovery model improves the performance of certain large-scale bulk operations, such as large BCP (IN) loads and database maintenance (index maintenance, DBCC checks).


The bulk-logged recovery model minimally logs bulk operations, while fully logging other transactions. It protects against media failure and, for bulk operations, provides the best performance and least log space usage.

If the log is damaged after a bulk-logged operation and no full or transaction log backup is available, then all the changes need to be redone (data will be lost).
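A common pattern, sketched below with hypothetical database, table, and path names, is to bracket the bulk operation with log backups so the window of point-in-time exposure stays small:

```sql
-- Hypothetical names throughout; a sketch of the usual pattern
BACKUP LOG [MyDB] TO DISK = N'X:\MyDB_BeforeBulk.trn';   -- last point-in-time recoverable moment

ALTER DATABASE [MyDB] SET RECOVERY BULK_LOGGED;

BULK INSERT [MyDB].dbo.BigTable
FROM 'X:\bigfile.dat'
WITH (TABLOCK);   -- a table lock helps the load qualify for minimal logging

ALTER DATABASE [MyDB] SET RECOVERY FULL;
BACKUP LOG [MyDB] TO DISK = N'X:\MyDB_AfterBulk.trn';    -- point-in-time recovery resumes after this backup
```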


A good comparison has been provided in BOL


Recovery model: Simple

Description: No log backups. Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space.

Work loss exposure: Changes since the most recent backup are unprotected. In the event of a disaster, those changes must be redone.

Recover to point in time? Can recover only to the end of a backup.

Recovery model: Full

Description: Requires log backups. No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).

Work loss exposure: Normally none. If the tail of the log is damaged, changes since the most recent log backup must be redone. For more information, see Tail-Log Backups.

Recover to point in time? Can recover to a specific point in time, assuming that your backups are complete up to that point in time. For more information, see Restoring a Database to a Point Within a Backup.

Recovery model: Bulk-logged

Description: Requires log backups. An adjunct of the full recovery model that permits high-performance bulk copy operations. Reduces log space usage by using minimal logging for most bulk operations. For more information, see Operations That Can Be Minimally Logged.

Work loss exposure: If the log is damaged or bulk-logged operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.

Recover to point in time? Can recover to the end of any backup. Point-in-time recovery is not supported.

Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions. If any affected database file is inaccessible, the transaction log cannot be backed up and all operations committed in that log are lost.

To track the data pages, a log backup operation relies on a bulk-changes bitmap page that contains a bit for every extent. For each extent updated by a bulk-logged operation since the last log backup, the bit is set to 1 in the bitmap. The data extents are copied into the log followed by the log data. The following illustration shows how a log backup is constructed.



Under the full or bulk-logged recovery models, until the first full backup, automatic checkpoints truncate the unused part of the transaction log, unless other factors are keeping log records active. After the first full backup, truncation requires backing up the transaction log. For information about factors that delay truncation, see Factors That Can Delay Log Truncation.



How to Backup Database in Multiple Files

June 3, 2011

As the subject goes, a lot of DBAs wonder why to back up a database in multiple files, what the use is, and in what scenario it applies. Well, consider a huge database whose backup file is huge (e.g., 500+ GB), which you have to copy to a different location for replication, database mirroring, or log shipping, or copy to tape.

Copying one large backup file to a network location takes a long time, whereas copying the multiple files of a single backup is much faster. (Robocopy also takes much more time copying a single file to a network location than copying multiple files, since Robocopy uses multiple threads for multiple files.)


Here we will learn how to back up a database to multiple files using the GUI or T-SQL.



Backup File created at the Location provided


T-SQL code for backing up a database to multiple files:


BACKUP DATABASE [e k]
TO DISK = N'I:\DBBackup\e k_03June2011_1.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_2.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_3.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_4.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_5.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_6.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_7.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_8.BAK',
   DISK = N'I:\DBBackup\e k_03June2011_9.BAK'
WITH NAME = N'e k-Full Database Backup',
     STATS = 10




While restoring the database from the backup, provide all the backup file names with their locations:


RESTORE DATABASE [e k]
FROM DISK = N'I:\DBBackups\e k_03June2011_1.bak',
     DISK = N'I:\DBBackups\e k_03June2011_2.bak',
     DISK = N'I:\DBBackups\e k_03June2011_3.bak',
     DISK = N'I:\DBBackups\e k_03June2011_4.bak',
     DISK = N'I:\DBBackups\e k_03June2011_5.bak',
     DISK = N'I:\DBBackups\e k_03June2011_6.bak',
     DISK = N'I:\DBBackups\e k_03June2011_7.bak',
     DISK = N'I:\DBBackups\e k_03June2011_8.bak',
     DISK = N'I:\DBBackups\e k_03June2011_9.bak'
WITH FILE = 1,
     MOVE N'e kData'  TO N'H:\Mount3\e k.mdf',
     MOVE N'e kdata1' TO N'H:\Mount4\e k_1.ndf',
     MOVE N'e kdata2' TO N'H:\Mount5\e k_2.ndf',
     MOVE N'e kdata3' TO N'H:\Mount6\e k_3.ndf',
     MOVE N'e kdata4' TO N'H:\Mount7\e k_4.ndf',
     MOVE N'e kdata5' TO N'H:\Mount8\e k_5.ndf',
     MOVE N'e kdata6' TO N'H:\Mount9\e k_6.ndf',
     MOVE N'e kdata7' TO N'H:\Mount10\e k_7.ndf',
     MOVE N'e kdata8' TO N'H:\Mount3\e k_8.ndf',
     MOVE N'e kdata9' TO N'H:\Mount4\e k_9.ndf',
     MOVE N'e kLog'   TO N'O:\e k_Log.ldf',
     STATS = 10




