Category Archives: SQL Server 2012

Capture and log error of TSQL code in SQL Server 2012

SQL Server provides a robust way to capture errors with TRY…CATCH, but I wanted centralized information about errors, to maintain history and for debugging purposes, so I created one stored procedure that logs error information into a table. I call that error-capturing stored procedure in every stored procedure of mine.

I have created the “[dbo].[usp_LogErrorHistory]” stored procedure, which logs all errors into the “[dbo].[ErrorHistory]” table. I call “[dbo].[usp_LogErrorHistory]” in every stored procedure I create, and hence I now have one central location (the “[dbo].[ErrorHistory]” table) where I can go and look for the total number of errors within a given time period and for given objects.

Let us create the “[dbo].[ErrorHistory]” table and the “[dbo].[usp_LogErrorHistory]” stored procedure with the following TSQL:

CREATE TABLE [dbo].[ErrorHistory](
[ErrorLogID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[ErrorTime] [datetime] NOT NULL DEFAULT GETDATE(),
[UserName] [sysname] NOT NULL,
[ErrorNumber] [int] NOT NULL,
[ErrorSeverity] [int] NULL,
[ErrorState] [int] NULL,
[ErrorProcedure] [nvarchar](126) NULL,
[ErrorLine] [int] NULL,
[ErrorMessage] [nvarchar](4000) NOT NULL,
[ContextInfo] [varbinary](max) NULL,
CONSTRAINT [PK_ErrorLog_ErrorLogID] PRIMARY KEY CLUSTERED
(
[ErrorLogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO

CREATE PROCEDURE [dbo].[usp_LogErrorHistory]
-- contains the ErrorLogID of the row inserted
@ErrorLogID [int] = 0 OUTPUT
AS
BEGIN
SET NOCOUNT ON;
-- Output parameter value of 0 indicates that error
-- information was not logged
SET @ErrorLogID = 0;

BEGIN TRY
-- Return if there is no error information to log

IF ERROR_NUMBER() IS NULL
RETURN;

-- Return if inside an uncommittable transaction.
-- Data insertion/modification is not allowed when
-- a transaction is in an uncommittable state.
IF XACT_STATE() = -1
BEGIN
PRINT 'Cannot log error since the current transaction is in an uncommittable state. '
+ 'Rollback the transaction before executing usp_LogErrorHistory in order to successfully log error information.';
RETURN;
END

INSERT [dbo].[ErrorHistory]
(
[UserName],
[ErrorNumber],
[ErrorSeverity],
[ErrorState],
[ErrorProcedure],
[ErrorLine],
[ErrorMessage],
[ContextInfo]
)
VALUES
(
CONVERT(sysname, CURRENT_USER),
ERROR_NUMBER(),
ERROR_SEVERITY(),
ERROR_STATE(),
ERROR_PROCEDURE(),
ERROR_LINE(),
ERROR_MESSAGE(),
CONTEXT_INFO()
);

-- Pass back the ErrorLogID of the row inserted
-- (SCOPE_IDENTITY is safer than @@IDENTITY because it ignores
-- identity values generated by triggers on other tables)
SET @ErrorLogID = SCOPE_IDENTITY();
END TRY
BEGIN CATCH
PRINT 'An error occurred in stored procedure usp_LogErrorHistory: ';
RETURN -1;
END CATCH
END;
GO

Once we are done with the above SP and table, we will create one more user stored procedure; we will execute it, and it will generate an error, which will be logged into our error history table.

CREATE PROC TestErrorLog1
AS
BEGIN
--Specifies whether SQL Server automatically rolls back the current transaction
--when a Transact-SQL statement raises a run-time error.
--This is the reason I keep XACT_ABORT in all my SPs when a transaction is used
SET XACT_ABORT ON;

--I keep the following statement in all my SPs from a performance point of view
SET NOCOUNT ON

BEGIN TRY
--I am not performing any DML statement here though I have kept transaction here
--to demonstrate the usage
BEGIN TRANSACTION
SELECT 1/0
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT>0
ROLLBACK

--to log the error
EXEC [dbo].[usp_LogErrorHistory] ;

--to show the error to the end user
THROW
END CATCH
END
GO

Now it is time to check the functionality of our error-logging SP by executing the following TSQL script:

--execute TestErrorLog1 SP to confirm whether the error is being logged or not
EXECUTE TestErrorLog1
GO

Okay, so I hope your first error after creating this environment has been logged. Let us confirm it by executing a SELECT query against the error history table.

--Looking at [dbo].[ErrorHistory] table to see captured error:
SELECT * FROM [dbo].[ErrorHistory]

Here is the output of my SELECT statement on the ErrorHistory table.

[Screen capture: ErrorHistory query output]

Please note that this code should also work in older versions of SQL Server, e.g. SQL Server 2008 or 2008 R2; you just have to use RAISERROR instead of the THROW statement.
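For example, on SQL Server 2008/2008 R2, the CATCH block of TestErrorLog1 could be rewritten as the following sketch. RAISERROR cannot re-raise a system error number directly, so the original message text is captured and raised again:

```sql
BEGIN CATCH
IF @@TRANCOUNT>0
ROLLBACK

--to log the error
EXEC [dbo].[usp_LogErrorHistory];

--RAISERROR cannot re-throw a system error number,
--so re-raise the captured message text instead of THROW
DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE(),
        @ErrSeverity INT = ERROR_SEVERITY(),
        @ErrState INT = ERROR_STATE();
RAISERROR(@ErrMsg, @ErrSeverity, @ErrState);
END CATCH
```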

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Create custom error message by sys.sp_addmessage in SQL Server 2012

Social media is a wonderful way to share knowledge. I have my own Facebook page where I share what I learn. Yesterday I saw one message in my Facebook inbox: a SQL Server community member asked me whether it is possible to create our own error messages in SQL Server. He told me that after reading two of my previous articles (Error handling with “THROW” command in SQL Server 2012 and List of Errors and severity level in SQL Server with catalog view sysmessages) related to error handling, he had looked at the catalog view “Master.DBO.SysMessages” and wanted to define his own custom error messages.

Microsoft has given us control to create custom messages as per our need, and the system stored procedure “sys.sp_addmessage” helps us do this.

Let us look at one example for this:

--I want to add error with error number 50001
--let us check whether any error is having same number or not
--in my case, I don't find any error with 50001 number
select * from master.dbo.sysmessages where error ='50001';

--adding error message with 50001 number and severity 16
--if a duplicate record comes in, we can raise this error
EXEC sys.sp_addmessage 50001, 16, '%s is already there in table %s.';
GO

--now we will get one row for default language (in my case it is 1033 which is US English)
--with 50001 number
select * from master.dbo.sysmessages where error ='50001';

Now that we have one custom message ready, let us check it by creating one sample table with a sample data row.
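The test script below refers to a TestCustomError table whose definition is not shown above; a minimal sketch (the column sizes are my assumption) could be:

```sql
--sample table assumed by the test script; sizes are illustrative
CREATE TABLE TestCustomError
(
Name VARCHAR(50),
City VARCHAR(50)
);
GO
```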


--now testing the error message whether it is working fine or not.
BEGIN TRY
DECLARE @Name VARCHAR(50), @City VARCHAR(50), @Count INT  --note: VARCHAR without a length defaults to 1 character
SET @Name='Rajan Shah'
SET @City='Mumbai'

SELECT @Count=COUNT(1) FROM TestCustomError WHERE Name=@Name

--In this case, I have considered that Name column should be unique
--there may or may not be Primary or Unique Key defined
--but we can test it in business logic
--there may be argument for this approach but I just wanted to show
--whether custom error is working or not.
IF @Count<=0
BEGIN
BEGIN TRANSACTION
INSERT INTO TestCustomError (Name,City)
SELECT @Name,@City
COMMIT TRANSACTION
END
ELSE
BEGIN
DECLARE @ErrMessage varchar(500) = FORMATMESSAGE(50001, @Name, 'TestCustomError');
THROW 50001, @ErrMessage, 1;
END
END TRY

BEGIN CATCH
THROW;
END CATCH
GO

The scope of this article was to show how to add a custom error message and test it. I have left one loophole by not putting a ROLLBACK anywhere in the code, which is itself an interesting topic and out of the scope of this article. I will cover that point very soon.

Please note that this code should also work in older versions of SQL Server, e.g. SQL Server 2008 or 2008 R2; you just have to use RAISERROR instead of the THROW statement.
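For example, since sp_addmessage registered message 50001 in sysmessages, RAISERROR can raise it directly by number, with the substitution arguments supplied after the state (a sketch using the same variables as above):

```sql
--RAISERROR looks up message 50001 in sysmessages and
--fills in the two %s placeholders from the arguments
RAISERROR(50001, 16, 1, @Name, 'TestCustomError');
```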

Independent multiple query window in multiple screen (monitor) in SQL Server 2012

I love working with dual monitors, and I really appreciate one GUI feature in SQL Server 2012 that supports query windows across multiple monitor screens. Many times I have a situation where I have one SP which returns some records and a new version of the same SP which again returns some records, and I have to compare both result sets. I used to do it by copying the output to Excel, but now in SQL Server 2012 I can compare them right within two query windows on dual monitors. SQL Server 2012 Management Studio is built on Visual Studio 2010, which supports the same feature.

It is really very easy to use this functionality. Just double-click on the query window tab and you will see the window come out of the boundary of SSMS, so that you can drag it anywhere and make it the size of your choice. See the following screen capture to see exactly where we need to double-click.

[Screen capture: double-click location on the SSMS query tab]

Once the window comes out of SSMS, drag it anywhere you want. I dragged it to another monitor, and it looks like the following screen capture.

[Screen capture: query windows on dual monitors]

Now I can easily compare the result sets from both queries.

Start and stop server side profiler trace automatically in SQL Server 2012

Profiler is a very powerful and old weapon for a DBA, and a DBA can't afford to keep a distance from it. Profiler shows which TSQL statement/batch or SP is being processed by SQL Server, how it is resolved internally, and what effort SQL Server has to make in order to execute that batch/SP. It shows the duration of the batch/TSQL/SP along with CPU time, IO, and much other vital information that is mandatory for digging into the issues that come up in a DBA's day-to-day life.

A DBA can't personally keep an eye on each database/instance 24×7, but a server-side trace (Profiler) can do it for you very easily. It adds a little overhead on the server, but trust me, it is really worth it.

You can run a trace from the Profiler GUI as well as as a server-side trace. I prefer the GUI least, as it adds more overhead, especially if you are not running it directly on the server where SQL Server is installed. We won't discuss the pros/cons of the Profiler GUI versus server-side traces, as that is beyond the scope of this article, but we will discuss how we can automate the server-side trace process.

I will create two stored procedures: one SP will start the trace and the second SP will stop the trace for my AdventureWorks2012 database.

I generally prefer to run the trace during peak business hours. For me, that is 10AM to 7PM, so I set up one SQL Server Agent job which calls the first SP at 10AM, so that my trace starts at 10AM sharp, and a second job which calls the second SP to stop the trace at 7PM sharp. It needs no human interaction.

Let us see the scripts for both SPs.

The 1st SP should have the following code:

Note: you can add/remove columns as well as events as per your need. My SP captures some important columns for the SP:Completed event only.

CREATE PROCEDURE [StartTrace]
AS
BEGIN
DECLARE @trc INT
DECLARE @TraceID INT
DECLARE @maxfilesize BIGINT

SET @maxfilesize = 100

DECLARE @file NVARCHAR(500)
SELECT @file = N'D:\TraceCollection\AdventureWorks2012'

EXEC @trc = SP_TRACE_CREATE
@traceid = @TraceID output,
@options = 2,  --TRACE_FILE_ROLLOVER
@tracefile = @file,
@maxfilesize = @maxfilesize,
@stoptime = NULL,
@filecount = 10000

-- Set the events
--Event number 43 is SP:Completed
--right after 43, we have numbers like 1, 13, 14 etc., which are column numbers of the event
--you can get the complete list of event and column numbers from here:
--http://msdn.microsoft.com/en-us/library/ms186265.aspx
DECLARE @on BIT, @ret INT
SET @on = 1
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 1, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 13, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 14, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 15, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 18, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 28, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 34, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 35, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 48, @on
EXEC @ret = SP_TRACE_SETEVENT @TraceID, 43, 59, @on

EXEC SP_TRACE_SETFILTER
@TraceID,
28,   -- 28 is the column number for "ObjectType"
0,    -- logical operator: AND
0,    -- comparison operator: equal
8272  -- 8272 = stored procedure; only SP events are stored in the trace file

-- Set the trace status to start
EXEC sp_trace_setstatus @TraceID, 1

END
GO

Now the second SP; this will be called by the second job, which will stop the trace:

CREATE PROCEDURE [StopTrace]
AS
BEGIN
DECLARE @trace_id INT;

SELECT @trace_id = id FROM sys.traces WHERE path LIKE 'D:\TraceCollection\AdventureWorks2012%';

IF @trace_id IS NOT NULL
BEGIN
-- Stops the specified trace.
EXEC sp_trace_setstatus @trace_id, 0;
-- Closes the specified trace and deletes its definition from the server.
EXEC sp_trace_setstatus @trace_id, 2;
END
END
GO

I will cover how to read the .TRC files generated by the StartTrace SP in a future article.
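As a quick preview, the rollover files written by StartTrace can be loaded into a result set with the built-in sys.fn_trace_gettable function (the path below assumes the location used in StartTrace; sp_trace_create appends the .trc extension):

```sql
--read all rollover files starting from the first one;
--DEFAULT follows the whole rollover sequence
SELECT *
FROM sys.fn_trace_gettable(N'D:\TraceCollection\AdventureWorks2012.trc', DEFAULT);
```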

Introduction to Filestream in SQL Server 2012

The Filestream feature is not new in SQL Server 2012; it was introduced in SQL Server 2008. Last week I visited one company where they store some documents on a physical disk on the server and store each document's location in the database along with the appropriate record. This is a very old approach and there is nothing wrong with it, but the problem starts when you decide to change the location of the folder for any reason: you have to update all the location pointers stored in the database. Apart from this problem, you have to take a manual backup of the document folder, as it won't come along with the SQL Server database backup.

You can take another approach too: you can store documents/images etc. in VARBINARY fields in SQL Server. This creates a performance issue, though it saves you from taking manual backups of the documents, because when you take a backup of the database, you have all your documents inside the backup in the form of VARBINARY data. Still, performance is a big issue in this case. Apart from that, there is one more limitation: we can't store a file/document bigger than 2GB.

I have written articles about how to use BLOB (VARBINARY) fields:

Image store in SQL-Server 2005 database and retrieve it in asp.net application with C# (Click Here to read more)

Insert and Update image field in SQL Server 2008/2005 (Click Here to read more)

Filestream solves your problems regarding performance and backup. Filestream stores documents/images/videos/music etc. in the NTFS file system, yet the data is accessible right from SQL Server as well as from the WIN32 API and .NET classes. You gain performance if you access your filestream data from a .NET class or from the WIN32 API; by doing this, you reduce overhead on SQL Server. When you take a full backup of the database, you have all your filestream files in the backup, and at the same time you eliminate the 2GB limitation of the VARBINARY field.

There are some disadvantages hovering around filestream too. For example, mirroring doesn't support filestream data, but you can ignore this because mirroring itself is deprecated as of SQL Server 2012. Apart from that, there is one more major disadvantage: you can't apply Transparent Data Encryption (TDE) to filestream data. These are the two major disadvantages, but I can live with them, as with filestream I have less overhead on my SQL Server, my backup issue gets solved, and I don't have the 2GB limitation. These are the main advantages which made me choose this feature.

You can enable filestream while installing the SQL Server instance, but if you did not enable it at installation time, let us see how we can enable it on an existing SQL Server instance.

1.) Open “SQL Server Configuration Manager” from “Start | All Programs | SQL Server 2012 | Configuration Tools”. Once you have “SQL Server Configuration Manager” open, move to the SQL Server instance, right-click on it and click on “Properties”.

[Screen capture: SQL Server Configuration Manager]

2.) Once you click on “Properties”, you will get the instance's properties. Move to the “FILESTREAM” tab and follow the settings marked in the red rectangle in the following screen capture.

[Screen capture: FILESTREAM property tab]

Once you have set the property, open SQL Server Management Studio (SSMS) and execute the following script to allow filestream access from TSQL as well as file I/O.

EXEC sp_configure 'filestream access level', 2
GO
RECONFIGURE
GO

Once the filestream permission is set up at the instance level, let us create one database with a filestream-enabled filegroup.


CREATE DATABASE ExtremeAdviceFileStream ON PRIMARY

(

NAME = ExtremeAdviceFileStream_data,

FILENAME = N'D:\TestDB\ExtremeAdviceFileStream_data.mdf'

),

FILEGROUP ExtremeAdviceFileStream_FS CONTAINS FILESTREAM

(

NAME =ExtremeAdviceFileStream_FILESTREAM,

FILENAME = N'D:\TestDB\ExtremeAdviceFileStream_FS'

)

LOG ON

(

NAME = ExtremeAdviceFileStream_LOG,

FILENAME = N'D:\TestDB\ExtremeAdviceFileStream_log.ldf'

);

GO

Well, this is how you can enable filestream. I will cover a few filestream exercises in coming articles very soon.
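As a small preview (the table and column names here are my own), a table in this database can use the FILESTREAM attribute on a VARBINARY(MAX) column; a FILESTREAM table must also have a UNIQUE ROWGUIDCOL column:

```sql
USE ExtremeAdviceFileStream
GO

CREATE TABLE dbo.DocumentStore
(
--FILESTREAM tables require a unique ROWGUIDCOL column
DocumentID UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
DocumentName NVARCHAR(256) NOT NULL,
--the content itself lives in the FILESTREAM filegroup on the NTFS file system
DocumentData VARBINARY(MAX) FILESTREAM NULL
);
GO
```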

Customize toolbar and menu in SSMS of SQL Server 2012

Microsoft always provides a way to customize the appearance of the tools/software it ships, for the convenience of the user, and SQL Server Management Studio is no exception. Actually, this customization option is not new in SQL Server 2012, but it is one of those underappreciated features, due to lack of awareness.

Let us see how we can customize the toolbar and menus in SSMS.

Once you open SSMS, click on the “Customize” option in the “Tools” menu and you will get a dialog box like this:

[Screen capture: Customize dialog]

Here you get two tabs: “Toolbars” and “Commands”.

From the first tab, “Toolbars”, you can turn on/off any toolbar you need, whether provided by SQL Server or by any 3rd-party software, for example the Redgate tool belt.

From the second tab, “Commands”, you can add/remove/move up/move down toolbar options, menus, and menu options. You can see that the “Project” menu comes in 5th position by default, but I made it 3rd, right after the Edit menu.

[Screen capture: reordered SSMS menu]

This is how you can set up your own appearance as per your convenience.

Move replicated database file to new location in SQL Server 2012

Setting up replication is somewhat easy when we compare it with maintaining replication. We have already discussed some topics related to replication, like setting up replication, keeping watch on replication latency, adding & removing an article from the GUI and from script, and removing replication, to name a few.

Moving a replicated database file from one location to another is a little tricky, and not performing the proper steps in sequence would result in replication failure.

There are many cases when we need to move a database file from one location to another. We may have purchased a new, improved IO subsystem, or we are running low on disk space on the current drive and need to move all or some data/log files to another drive.

I will be using the same AdventureWorks2012 database which I used in the transactional replication setup in my earlier article.

Well, before we do anything, we have to confirm from “Replication Monitor” that there are no pending (undistributed) transactions in replication. If you don't know how to open “Replication Monitor”, kindly click here to learn.

With Replication Monitor on the screen, click on the “Publisher Name” and then click on the “Subscriber Name” in the tree view on the left-hand side. Once the “Subscriber” is selected, you will find the “All Subscriptions” tab on the right-hand side; double-click on the subscriber your database participates in. You have to confirm that the undistributed command count is zero.

Once you confirm there are no undistributed commands, close the subscription popup window and click on the “Publisher” name in the tree view of “Replication Monitor”. Leave this window open and start SSMS to confirm where the data/log files are located at the moment.

I used the following TSQL query to check the current location of my database “AdventureWorks2012”.

SELECT * FROM sys.sysaltfiles
WHERE DB_NAME(dbid) = 'AdventureWorks2012'
GO

I have my database files on the “C” drive.

Once you have this list of data/log file locations, let us move back to “Replication Monitor” and stop the “Log Reader Agent”.

Now stop the “Queue Reader Agent”, if it is set up in your environment.

The Snapshot Agent should have completed and be stopped as well. Generally it is supposed to be stopped already, so I have not included those steps here.

Anyway, let us move on and take the database “AdventureWorks2012” offline with the following commands:

USE master
GO

-- Setting the database to restricted user and taking it offline
ALTER DATABASE AdventureWorks2012 SET restricted_user WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AdventureWorks2012 SET OFFLINE;

I have my data/log files in the “C:\AdventureWorks” folder and I want to move them to the “D:\AdventureWorks” folder, so I execute the following commands.

USE master
GO

-- Mapping the database files to the new path
ALTER DATABASE AdventureWorks2012 MODIFY FILE ( NAME = AdventureWorks2012_data , FILENAME = 'D:\AdventureWorks\AdventureWorks2012_data.mdf');
ALTER DATABASE AdventureWorks2012 MODIFY FILE ( NAME = AdventureWorks2012_log , FILENAME = 'D:\AdventureWorks\AdventureWorks2012_log.ldf');

Now we have changed the path in the system catalog, but physically my files are still in the “C:\AdventureWorks” folder, so I will manually copy them from the current location and paste them into the new location on the “D” drive.

Once I finish the manual copy of the database files, I will execute the following commands to bring the database back online.

USE master
GO

-- Bringing the database online and making it available multi-user
ALTER DATABASE AdventureWorks2012 SET ONLINE;
ALTER DATABASE AdventureWorks2012 SET multi_user;

Now, let us confirm whether our files are being read from the new location by executing the following command:

SELECT * FROM sys.sysaltfiles
WHERE DB_NAME(dbid) = 'AdventureWorks2012'
GO

Yes, finally I have moved the database files:

Now start all the agents which you stopped from Replication Monitor, and you are done.

Transactions/Sec and Batch Requests/Sec performance counter in SQL Server

A performance counter is one of the biggest weapons a system administrator and a database administrator can have. I have already written a few things about performance counters in the past articles mentioned below:

  • Performance Counter in SQL Server (Click Here)
  • Performance Counter in Microsoft Windows Server 2008 (Click Here)
  • Repl. Pending Xacts performance counter and high value in SQL Server (Click Here)

As far as SQL Server is concerned, “Transactions/sec” and “Batch Requests/sec” are two of my favorite performance counters, and I keep a close eye on them on each server I manage. The main reason to watch both of these counters is to get an idea of how much work my SQL Server can perform per second. As a DBA it is mandatory to know this so that you can plan server capacity accurately.

“Transactions/sec” tracks all DML statements like INSERT/UPDATE/DELETE, along with SELECT statements executed inside a transaction, whereas the “Batch Requests/sec” counter counts every batch sent to SQL Server, no matter whether it is inside a transaction or not. So, in my opinion, “Batch Requests/sec” gives you a better idea of the throughput being handled by the server.
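You can see the raw values of both counters at any time from the sys.dm_os_performance_counters DMV. Note that both counters are cumulative since the SQL Server service started, so a per-second rate has to be derived from the difference between two samples:

```sql
--cumulative values since service start; sample twice and divide
--the difference by the elapsed seconds to get a per-second rate
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (counter_name = 'Transactions/sec' AND instance_name = '_Total')
OR counter_name = 'Batch Requests/sec';
```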

Sometimes I also want to see the peak time for my database, and that requires capturing data for a complete 24 hours for a few weeks or even a month, so that we can aggregate the captured data and find the peak period of the day, the peak day of the week, and so on.

Let us create the environment to capture all this information by following the steps given below:

Note: Please test the scripts given here in your development/beta environment first.

Step 1:

We have to create one table in the database, which will keep all the captured data for the counters.

--Step1
CREATE SCHEMA MONITOR
GO

CREATE TABLE [Monitor].[PerformanceCounterHistory](
[Seq] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [nvarchar](256) NULL,
[CounterType] VARCHAR(100),
[CounterName] VARCHAR(100),
[CounterValue] [bigint] NULL,
[CreatedDate] [datetime] NULL
)
GO
ALTER TABLE [Monitor].[PerformanceCounterHistory] ADD  DEFAULT (getdate()) FOR [CreatedDate]
GO

Step 2:

Create one stored procedure which will insert data into the “[Monitor].[PerformanceCounterHistory]” table we created above.

--Step2
CREATE PROC [Monitor].[SPInsertPerfStats]
AS

INSERT INTO [Monitor].[PerformanceCounterHistory]
(
ServerName
,CounterType
,CounterName
,CounterValue
)
Select
@@ServerName
,instance_name
,counter_name
,cntr_value
FROM
sys.dm_os_performance_counters
WHERE
(counter_name = 'Transactions/sec' and instance_name = '_Total')
OR counter_name like '%Batch Requests/sec%'
GO

Step 3:

Keep calling the stored procedure “Monitor.SPInsertPerfStats” at a fixed interval. I generally prefer to call this SP every 30 seconds. If you are sure about your business's peak timing, you can call this SP during that period only; otherwise keep it running for the complete 24-hour cycle. You can use an SSIS package and schedule it in Windows Task Scheduler, or you can call this SP directly from a SQL Server Agent job.
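For reference, a SQL Server Agent job that runs the SP every 30 seconds can be scripted with the msdb job procedures; the job name and database name below are my assumptions:

```sql
USE msdb
GO

EXEC dbo.sp_add_job @job_name = N'CapturePerfCounters';
EXEC dbo.sp_add_jobstep @job_name = N'CapturePerfCounters',
@step_name = N'Insert counter snapshot',
@subsystem = N'TSQL',
@database_name = N'MyMonitoringDB',  --database holding the Monitor schema
@command = N'EXEC [Monitor].[SPInsertPerfStats];';
EXEC dbo.sp_add_jobschedule @job_name = N'CapturePerfCounters',
@name = N'Every 30 seconds',
@freq_type = 4,             --daily
@freq_interval = 1,
@freq_subday_type = 2,      --repeat interval unit: seconds
@freq_subday_interval = 30;
EXEC dbo.sp_add_jobserver @job_name = N'CapturePerfCounters';
GO
```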

Step 4:

Since we are calling this SP at a 30-second interval, we have to aggregate the captured samples and store the aggregated data in another table. Aggregate the previous day's data every morning so that we can generate different kinds of reports from this aggregation.

--Step4
CREATE TABLE [Monitor].[PerfStatAggr](
[Seq] [int] IDENTITY(1,1) NOT NULL,
[CounterType] [nvarchar](256) NULL,
[ServerName] [nvarchar](256) NULL,
[CounterName] VARCHAR(100),
[CounterValue] [bigint] NULL,
[CreatedDate] [datetime] NULL,
[AggrValue] [numeric](18, 3) NULL
)
GO

CREATE PROCEDURE [Monitor].[SPInsertPerfStatAggr]
/*EXEC [Monitor].[SPInsertPerfStatAggr]
@Day = 1,  --1 aggregates data of the previous day
@StartHour = 4, --start capturing data from 4 AM of the previous day
@EndHour = 23 --capture data of the previous day till 11 PM */
(
--generally I used to aggregate data of previous day always
--so kept logic of day. if you pass 1, it will aggregate data of previous day only
@Day int,
--I am interested to capture business hours data
--in our environment, we get maximum traffic in application From 8AM to 6PM, so I kept
--8 & 18. you can even aggregate data from 1 to 24, as per your requirement
@StartHour int, --8
@EndHour int --18
)
AS
BEGIN

SET ARITHABORT OFF
SET ANSI_WARNINGS OFF
SET NOCOUNT ON

DECLARE @StartTime DATETIME , @EndTime DATETIME

SET @StartTime =  DATEADD(hh,@StartHour,DATEDIFF(DD,0,GETDATE()- @Day))

SET @EndTime =  DATEADD(hh,@EndHour,DATEDIFF(DD,0,GETDATE()- @Day))

;WITH ctePerfStat
AS
(
SELECT ROW_NUMBER() OVER (PARTITION BY ServerName,CounterName ORDER BY CreatedDate) as Rn, *
FROM [Monitor].[PerformanceCounterHistory] PerfCount1 (nolock)
WHERE (CreatedDate between @StartTime and @EndTime)
)
INSERT INTO [Monitor].[PerfStatAggr]
(
ServerName,
CounterName,
CounterValue,
CreatedDate,
--following are the aggregated fields
--I am more interested in that only
AggrValue
)
SELECT
C1.ServerName,
C1.CounterName,
cast(c2.CounterValue as numeric(18,3)) as CounterValue,
C2.CreatedDate,
cast((c2.CounterValue - c1.CounterValue)as numeric(18,3))/DATEDIFF(SS,C1.CreatedDate,C2.CreatedDate) as AggrValue
FROM
ctePerfStat C2
inner join ctePerfStat C1
ON (C1.ServerName = C2.ServerName  AND C1.CounterName =C2.CounterName AND C1.rn-1=c2.Rn )
END

Step 5:

Keep collecting and aggregating data for a week or so to capture the transaction pattern of your database/server. You can then return the results of all these captures with the following stored procedure.

--Step5
CREATE PROCEDURE [Monitor].[SPGetPerfState]
/*Declare @FromDate1 datetime
Declare @ToDate1 datetime
set @ToDate1 = DATEADD(DD,0,DATEDIFF(DD,0,getdate()))
Set @FromDate1 = DATEADD(DD,0,DATEDIFF(DD,0,getdate()))
Exec [Monitor].[SPGetPerfState] @Unit = 'day', @FromDate = @FromDate1, @ToDate = @ToDate1*/
--it depends of the report you require,
--if you pass day, it will return the data between From/To Date
--by getting MAX day wise
@Unit VARCHAR(10), --@Unit = 'day', -- 'Minute,Hour,Day,Month,Year'
@FromDate DATETIME,
@ToDate DATETIME
AS
BEGIN
DECLARE
@SQL nvarchar(4000)

SET @SQL = '
SELECT
CounterName,
DATEADD(DD,0,DATEDIFF(DD,0,CreatedDate)) as CreatedDate,
DATEPART('+@Unit+',CreatedDate) as Unit,
CEILING(MAX(AggrValue)) as AggrValue
FROM [Monitor].[PerfStatAggr] as I with (Nolock)
where CreatedDate between '''+cast(@FromDate as varchar)+''' and '''+cast(@ToDate as varchar)+'''
group by CounterName,DATEADD(DD,0,DATEDIFF(DD,0,CreatedDate)),datepart('+@Unit+',CreatedDate)
order by Unit
'
--print @SQL
EXEC sp_executesql @SQL
END
GO

----uncomment the following call to see the aggregated result
----of course, you have to change the unit as per your need;
----you can use Minute, Hour, Day, Month, Year etc.
----and change the from/to dates too
--EXEC [Monitor].[SPGetPerfState]
--@Unit='HOUR',
--@FromDate='2012-12-05 06:00:00',
--@ToDate='2012-12-05 23:00:00'

So, this is how you can capture your transaction pattern for a few days. There are a few different ways to capture transactions, but whenever there is a question of transaction pressure on a server where I have only SQL Server installed, this is my favorite way. We will talk about a different way (from the Windows OS) of capturing transaction information some time later. This is all for now…

Find Object participated in replication

It happens many times that we want to know which objects participate in replication from the publisher database. We may not have populated all database objects in replication, so we might want to compare the objects.

Sometimes we may have populated all tables, views etc. in replication, but over time we add some more tables or other database objects to the primary (publisher) database, and we want to find out what those objects are.

Sometimes we want to remove some objects from the subscriber database because we have removed them either from replication or from the publisher database.

The reasons may be more than I have discussed above, but in any case, whenever we want to get the list of objects participating in replication, we can use the script given below:

 SELECT * FROM
 (
 SELECT
 Name as ObjectName
 ,CASE WHEN xtype='P' THEN 'Stored Procedure'
 WHEN xtype='V' THEN 'View'
 WHEN xtype='FN' THEN 'Function'
 WHEN xtype='U' THEN 'User Table'
 END AS ObjectType
 FROM
 sys.sysobjects
 WHERE
 replinfo=1
 ) AS T
 WHERE
 ObjectType IS NOT NULL
 GO

Here is a list of some more articles I have written about replication; have a look if any title attracts you.

  • Understanding Replication with implementation of Snapshot replication in SQL Server 2012 (Click Here)
  • Remove Replication from SQL Server 2012 (Click Here)
  • Error Fix – Cannot execute as the database principal(Click Here)
  • Setup Transaction Replication in SQL Server 2012 (Click Here)
  • Add new article in transactional replication in SQL Server 2012 (Click Here)
  • Delete article from transactional replication in SQL Server 2012 (Click Here)
  • Find highest latency in transactional replication in SQL Server 2012 (Click Here)
  • Be familiar with important terms of replication in SQL Server (Click Here)

Be familiar with important terms of replication in SQL Server

After reading my replication articles, two of my regular blog readers asked me questions about some of the terms I used. Here are their questions, in Q&A format, for all blog readers.

Q.1) What is the PULL replication which you have used in almost all your replication articles?

Ans.) PULL and PUSH are the two subscription types which SQL Server offers.

PULL: a PULL subscription pulls the data from the publisher as needed.

PUSH: a PUSH subscription pushes data from the publisher to the subscriber via the distributor, based on configuration, e.g. on demand, on a schedule, or continuously.

Q.2) What is the “Snapshot Agent” which is used in snapshot replication and transactional replication in your articles?

Ans.) While working with replication, you have to be familiar with the following five main agents.

Snapshot Agent: the Snapshot Agent plays a very important role in every type of replication and runs at the distributor. The Snapshot Agent keeps information about synchronization and prepares the schema and data for the initial transfer.

Queue Reader Agent: this agent comes into the picture if you have snapshot/transactional replication configured with the queued updating option; again, this agent runs at the distributor. It reads messages stored in a SQL Server queue or a Microsoft Message Queue and then applies those messages to the publisher.

Log Reader Agent: this agent is used with transactional replication. It moves transactions marked for replication from the transaction log on the publisher to the distribution database. Each database published using transactional replication has its own Log Reader Agent that runs on the distributor.

Merge Agent: the Merge Agent comes into the picture when there is a merge replication setup. It applies the initial snapshot to the subscriber and merges incremental data changes. Each merge subscription has its own Merge Agent that connects to both the publisher and the subscriber to update them. Unlike snapshot/transactional replication, the Merge Agent runs either at the distributor or at the subscriber.

Distribution Agent: in snapshot/transactional replication, the Distribution Agent runs either at the distributor or at the subscriber, based on your configuration. The initial snapshot is transferred from publisher to subscriber via the Distribution Agent, and it also moves all transactions held in the distribution database to the subscriber.
