Read IIS log with LOG Parser in SQL-Like language


Before we start discussing how to read IIS logs, I would like to request you to please read my previous article, which introduces Log Parser along with some sample queries. Basic knowledge of Log Parser is required for this article; if you are not familiar with Log Parser, kindly click here.

Once you understand Log Parser, you have to find the IIS log file location for your server. It may be:

C:\inetpub\logs\LogFiles\W3SVC1

or

C:\WINDOWS\system32\LogFiles\W3SVC1

An administrator can change the log file location from IIS, so it is better to confirm the location of the log file before executing the commands given below.
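For example, on IIS 7 or later you can inspect the configured log location with appcmd (a quick sketch; replace "Default Web Site" with your site's name and look for the <logFile> element in the output):

[sourcecode]
%windir%\system32\inetsrv\appcmd.exe list site "Default Web Site" /config
[/sourcecode]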

Once you are ready with Log Parser on your server, along with the path of the IIS log file, you can execute the following commands to get various pieces of information.

Command 1: This command will give you the total number of executions of .ASPX pages in your IIS, based on the given file path and date range. I wanted to read only one log file, "ex121025.log"; you can give "*" there to read all log files available under the "W3SVC1" folder.

[sourcecode]
logparser.exe -i:W3C -o:DATAGRID "SELECT COUNT(*) FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex121025.log where date>'2012-10-19' AND date<'2012-10-21' and cs-uri-stem LIKE '%.aspx'"
[/sourcecode]

Command 2: The following command will return the total number of processes/pages which take more than 1 second to load. If we know the long-running pages, we can troubleshoot and optimize them as well. Please note that the "time-taken" field gives its value in milliseconds.

[sourcecode]logparser.exe -i:W3C -o:DATAGRID "SELECT COUNT(*) FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex121025.log where time-taken>1000" [/sourcecode]

Command 3: The following command will return the total number of responses whose size is greater than 300 KB. We might want to make such pages a little smaller so that they load in the browser faster. (Note that sc-bytes is the size of the response sent to the browser; cs-bytes would measure the incoming request instead.)

[sourcecode] logparser.exe -i:W3C -o:DATAGRID "SELECT COUNT(*) FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex121025.log where sc-bytes>307200"
[/sourcecode]

Command 4: The following command will give you the top 10 heaviest pages, i.e., those which have taken the longest time to load in the browser.

[sourcecode]logparser.exe -i:W3C -o:DATAGRID "SELECT TOP 10 cs-uri-stem as URL, MAX(time-taken) As Max, MIN(time-taken) As Min, Avg(time-taken) As Average FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex121025.log GROUP BY URL ORDER By MAX DESC" [/sourcecode]

Command 5: The following command will list each client IP with its reverse DNS name and count how many requests came from that IP.

[sourcecode]logparser.exe -i:W3C -o:DATAGRID "SELECT c-ip As IP, REVERSEDNS(c-ip) As DNS, COUNT(*) As Requests FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex121025.log GROUP BY IP ORDER BY Requests DESC" [/sourcecode]

By the way, the "W3C" value is the input format you provide when you want to read IIS logs; for further details about the input and output parameters, please refer to my previous Log Parser article.

Hope you find it useful.

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

LOG Parser – Wonderful yet under-appreciated tool to read log of windows, IIS


LOG Parser is one of those wonderful tools which help you read the logs of Windows, IIS, etc. Log Parser is a command-line utility which helps you read text-based log files and formats such as XML and CSV, as well as key data sources of the Windows operating system such as the event log and the registry. As an administrator, you face situations, many times, when you have to read various log files of Windows and/or IIS, and generally we open the log file in Notepad, an XML viewer, or some other 3rd-party tool; but if you think of Log Parser in that situation, you will have familiar SQL-based syntax to read the log, and it returns data very fast.

You can download LOG Parser from here.

I am sure you must have seen the Event Viewer of Windows. It is a very good tool for getting information about various system errors, notifications, warnings, etc. We can access the same information with the help of Log Parser, with a simple SQL-based query. We have a few different ways to access that information; out of those, I generally check the information in the command prompt itself when a small amount of information is expected, and I go for DATAGRID output if I am expecting a big result set.

If you have downloaded and installed Log Parser, open it from Start->All Programs->Log Parser 2.2->Log Parser 2.2.

You will get the Log Parser command prompt, where you have to execute the following command.

[sourcecode]
logparser.exe -i:EVT "SELECT TOP 5 * FROM System WHERE EventID>10"
[/sourcecode]

Details about the command we have executed above:

"logparser.exe" is the executable used to read any log information.

"-i:EVT": "-i" (input engine) tells Log Parser which input format to use. "EVT" means the Windows event log.

We have "TOP 5" in the SELECT query; it means we will get only the first 5 events matching the condition we specified in the WHERE clause.

For more information, look at screen capture given below:

Now, if you want to see the result in a DATAGRID, add "-o:DATAGRID", which is the output engine argument.

[sourcecode] logparser.exe -i:EVT -o:DATAGRID "SELECT * FROM System WHERE EventID>10" [/sourcecode]

As soon as you execute the command in the command prompt, you will get a datagrid something like this:

This is just an introduction to a versatile and powerful yet under-appreciated tool. I will have follow-up articles showing different uses of Log Parser.

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Understanding replication with Implementation of Snapshot Replication in SQL Server 2012 (Part 1)


As per MSDN, replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.

There are four types of replication (excluding subtypes) available in SQL Server:

  • Merge replication
  • Snapshot replication
  • Transactional replication
  • Peer-to-peer replication

We will cover snapshot replication in this article; the remaining replication types will be covered in follow-up blog posts.

Before we start configuring replication, it is essential to understand some of the terminology we will use.

Publisher: The publisher is the server hosting the database that acts as the source of the data to be distributed to other databases.

Distributor: The distributor is the server hosting the database that acts as a mediator between publisher and subscriber. It receives the data from the publisher database and sends it to the subscriber database(s). You can make the distributor the same server as the publisher, but it is HIGHLY recommended that you keep the distributor server/database on a separate server so that the publisher server doesn't get heavily loaded.

Subscriber: The subscriber is the server hosting the database(s) to which the publisher/distributor sends the data.

You can set up replication between any servers which can reach each other via network, VPN, dial-up, or anything else. In a production environment, I would prefer three different servers (if budget permits): one hosting the publisher, the second hosting the distributor, and the third hosting the subscriber. For this article, however, I have installed two different instances of SQL Server 2012. You can even host two virtual machines on your computer to follow along.

My first instance is WIN-9H6QATRKY81\SQL2K12DEV and I will refer it as ServerA in this article.

My second instance is WIN-9H6QATRKY81\SQL2K12DEVTest and I will refer it as ServerB in this article.

Snapshot Replication:

Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. When synchronization occurs, the entire snapshot is generated and sent to Subscribers. This is one of the simplest replication types and is easy to set up and maintain. In simple words, snapshot replication creates a snapshot of the publisher database and sends it to the subscriber via the distributor.

You should select snapshot replication if your database has infrequent changes, is small in size, or has a very large volume of data change in a short duration. If none of these apply to you, you should choose another replication type, not snapshot.

Before you actually proceed towards replication, you have to check a few things.

  • List the tables, views, stored procedures, and any other SQL Server objects which you want to replicate
  • You have to have one shared folder which is accessible from all of the servers which are part of the replication in any capacity (Publisher, Distributor, and Subscriber). My shared folder is at \\win-9H6QATRKY81\ReplicationShare.

Now let us implement snapshot replication on the AdventureWorks2012 database. If you don't already have this database, kindly download it from here or use your own database.

1.) Connect to the SQL Server 2012 instance which you want as the publisher, right click "Replication", then click the "Configure Distribution" option. I connected to "ServerA".

2.) Select the distributor server. If you want the current SQL Server 2012 instance to act as the distributor server, leave the default selection as it is. That is what I did for this article. If you wish, you can change the selection via the radio button and click the "Add" button to select another SQL Server 2012 instance to work as the distributor.

3.) Give the path of the snapshot folder where the snapshot will be created. This path should be accessible from all servers which are part of the replication. In my case, it is \\win-9H6QATRKY81\ReplicationShare.

4.) You have to create or select the distributor database. I have created a new database named "Adv2012Distributor".

5.) Select the "Configure Distributor" check box and click Finish.

Though the screen capture above does not show the script check box selected, if you had selected it, it would have generated a script to add the distributor. The script would look something like this:

[sourcecode language="sql"]

/****** Scripting replication configuration. Script Date: 10/21/2012 10:19:49 AM ******/
/****** Please Note: For security reasons, all password parameters were scripted with either NULL or an empty string. ******/

/****** Installing the server as a Distributor. Script Date: 10/21/2012 10:19:49 AM ******/
use master
exec sp_adddistributor @distributor = N'WIN-9H6QATRKY81\SQL2K12DEV', @password = N''
GO
exec sp_adddistributiondb @database = N'Adv2012Distributor', @data_folder = N'E:\Data', @log_folder = N'E:\Data', @log_file_size = 2, @min_distretention = 0, @max_distretention = 72, @history_retention = 48, @security_mode = 1
GO

use [Adv2012Distributor]
if (not exists (select * from sysobjects where name = 'UIProperties' and type = 'U '))
create table UIProperties(id int)
if (exists (select * from ::fn_listextendedproperty('SnapshotFolder', 'user', 'dbo', 'table', 'UIProperties', null, null)))
EXEC sp_updateextendedproperty N'SnapshotFolder', N'\\win-9H6QATRKY81\ReplicationShare', 'user', dbo, 'table', 'UIProperties'
else
EXEC sp_addextendedproperty N'SnapshotFolder', N'\\win-9H6QATRKY81\ReplicationShare', 'user', dbo, 'table', 'UIProperties'
GO

exec sp_adddistpublisher @publisher = N'WIN-9H6QATRKY81\SQL2K12DEV', @distribution_db = N'Adv2012Distributor', @security_mode = 1, @working_directory = N'\\win-9H6QATRKY81\ReplicationShare', @trusted = N'false', @thirdparty_flag = 0, @publisher_type = N'MSSQLSERVER'
GO
[/sourcecode]

6.) Let us confirm whether our distributor database was created in ServerA. You can find it under the "System Databases" group. Refer to the following screen capture.

7.) Once you are done with the "Distributor" configuration, you should proceed to the "Publication". Connect to the server which you want to make the publisher; in my case, it is ServerA. Follow the given screen capture to know which options to click.

8.) You will have to select the publisher database; in our case, it is AdventureWorks2012.

9.) Now it is time to choose which replication type you want to create. At this point, we will create a "Snapshot Replication".

10.) Now you have to select the tables, views, stored procedures, indexed views, and/or user-defined functions you want to publish. You can select all if you want. For now, I have selected only two tables.

11.) If you don't wish to replicate some data, you can add a filter by clicking the "Add" button; I didn't choose any filter and clicked the "Next" button in the following screen of the wizard.

12.) With the first check box, we instruct the wizard to create a snapshot of the publisher database immediately. Scheduling is VERY IMPORTANT for replication: whatever data changes you make in your publisher database take effect in the subscriber database(s) only when the scheduled time is reached. The schedule will vary from case to case. It shouldn't be too frequent, otherwise the next snapshot run may start before the previous snapshot has been applied to the subscriber; on the other hand, it shouldn't be so infrequent that the subscriber lacks reasonably current data. You can click the "Change" button and run it every 10 minutes for now.

13.) This step is very important. You have to give proper login credentials so that replication doesn't run into any permission issues. I would suggest you use Windows authentication there, in the form of domainName\Login and password.

14.) Select both check boxes given in the following screen.

15.) Give the proper path so that you will have a .SQL file of the publication script.

16.) Now give the publication name here. One publisher database may have a few different publications.

Once you click the "Finish" button in the above screen, your publication will be created and you will have a publication script as follows:

[sourcecode language="sql"]
use [AdventureWorks2012]
exec sp_replicationdboption @dbname = N'AdventureWorks2012', @optname = N'publish', @value = N'true'
GO
-- Adding the snapshot publication
use [AdventureWorks2012]
exec sp_addpublication @publication = N'AdvertureWorks2012AddressObjects', @description = N'Snapshot publication of database ''AdventureWorks2012'' from Publisher ''WIN-9H6QATRKY81\SQL2K12DEV''.', @sync_method = N'native', @retention = 0, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'true', @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous', @allow_subscription_copy = N'false', @add_to_active_directory = N'false', @repl_freq = N'snapshot', @status = N'active', @independent_agent = N'true', @immediate_sync = N'true', @allow_sync_tran = N'false', @autogen_sync_procs = N'false', @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1
GO

exec sp_addpublication_snapshot @publication = N'AdvertureWorks2012AddressObjects', @frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 8, @frequency_subday_interval = 1, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, @job_login = null, @job_password = null, @publisher_security_mode = 1

use [AdventureWorks2012]
exec sp_addarticle @publication = N'AdvertureWorks2012AddressObjects', @article = N'Address', @source_owner = N'Person', @source_object = N'Address', @type = N'logbased', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x000000000803509D, @identityrangemanagementoption = N'manual', @destination_table = N'Address', @destination_owner = N'Person', @vertical_partition = N'false'
GO

use [AdventureWorks2012]
exec sp_addarticle @publication = N'AdvertureWorks2012AddressObjects', @article = N'AddressType', @source_owner = N'Person', @source_object = N'AddressType', @type = N'logbased', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x000000000803509D, @identityrangemanagementoption = N'manual', @destination_table = N'AddressType', @destination_owner = N'Person', @vertical_partition = N'false'
GO
[/sourcecode]

17.) Now we will create the subscription. Please connect to the SQL Server 2012 instance which you want to be the subscriber; in my case, it is ServerB. Follow the steps given in the following screen capture.

18.) For creating the subscriber, you have to connect to the publisher server.

19.) After connecting to the publisher server, select the publication we have created.

20.) I don't want to run the replication SQL Server Agent job at the publisher server, because the publisher server is supposed to handle load from various sources, so I will go for a PULL subscription so that the replication job runs from ServerB and fetches the data from ServerA.

21.) Select or create the subscription database. Since this is the first time we are setting up this subscription, there is no already-created database, so we will create a new DB named "AdventureWorks2012Subs".

22.) This step is VERY IMPORTANT: you must give proper login credentials here. Make sure the credentials you give have proper access to the distributor and subscriber servers; you can give two separate login credentials here. I would like to use Windows authentication here too. I highly recommend you don't go with "Impersonate process account" as shown in the following screen capture.

23.) Select "Run Continuously" for the "Agent Schedule" so that it keeps running at the interval we have selected.

24.) Now select "Immediately" in the "Initialize When" drop-down list so that your snapshot agent configuration takes effect immediately. If the credentials provided in the above screen are not proper, replication will not be able to move data between the servers.

25.) Select both check boxes, the same as we did twice before.

26.) Give the proper path to save the .SQL file of your subscription script for future use.

The subscription script would look something like this:

[sourcecode language="sql"]-----------------BEGIN: Script to be run at Publisher 'WIN-9H6QATRKY81\SQL2K12DEV'-----------------
use [AdventureWorks2012]
exec sp_addsubscription @publication = N'AdvertureWorks2012AddressObjects', @subscriber = N'WIN-9H6QATRKY81\SQL2K12DEVTEST', @destination_db = N'AdventureWorks2012Subs', @sync_type = N'Automatic', @subscription_type = N'pull', @update_mode = N'read only'
GO
-----------------END: Script to be run at Publisher 'WIN-9H6QATRKY81\SQL2K12DEV'-----------------

-----------------BEGIN: Script to be run at Subscriber 'WIN-9H6QATRKY81\SQL2K12DEVTEST'-----------------
use [AdventureWorks2012Subs]
exec sp_addpullsubscription @publisher = N'WIN-9H6QATRKY81\SQL2K12DEV', @publication = N'AdvertureWorks2012AddressObjects', @publisher_db = N'AdventureWorks2012', @independent_agent = N'True', @subscription_type = N'pull', @description = N'', @update_mode = N'read only', @immediate_sync = 1

exec sp_addpullsubscription_agent @publisher = N'WIN-9H6QATRKY81\SQL2K12DEV', @publisher_db = N'AdventureWorks2012', @publication = N'AdvertureWorks2012AddressObjects', @distributor = N'WIN-9H6QATRKY81\SQL2K12DEV', @distributor_security_mode = 1, @distributor_login = N'', @distributor_password = null, @enabled_for_syncmgr = N'False', @frequency_type = 64, @frequency_interval = 0, @frequency_relative_interval = 0, @frequency_recurrence_factor = 0, @frequency_subday = 0, @frequency_subday_interval = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 20121021, @active_end_date = 99991231, @alt_snapshot_folder = N'', @working_directory = N'', @use_ftp = N'False', @job_login = null, @job_password = null, @publication_type = 0
GO
-----------------END: Script to be run at Subscriber 'WIN-9H6QATRKY81\SQL2K12DEVTEST'-----------------
[/sourcecode]

Finally, we have successfully implemented snapshot replication. You can confirm whether replication is working by inserting/updating a few records in the "Person.Address" table, which was one of the two articles (tables) we added to this replication.

Please note that the inserts/updates you make will not be replicated immediately; they will be replicated when the Snapshot Agent next runs on the schedule we configured.
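For example, a quick check could look like this (a minimal sketch; the AddressID and the new value are only illustrative):

[sourcecode language="sql"]
--on the publisher (ServerA), change a replicated row
USE AdventureWorks2012
UPDATE Person.Address SET AddressLine1 = '1970 Napa Ct. (updated)' WHERE AddressID = 1

--after the Snapshot Agent's next scheduled run, check the subscriber (ServerB)
USE AdventureWorks2012Subs
SELECT AddressID, AddressLine1, ModifiedDate FROM Person.Address WHERE AddressID = 1
[/sourcecode]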

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Sharing IT tips at “Quick Learn”

Note: Microsoft Books online is a default reference of all articles.

Useful functions in SQL Server


I love SQL Server and the productivity we get from it. It has so many useful functions which increase the productivity of SQL developers. Today is Sunday and the Indian holiday "Navratri" has just ended, so I don't want to give blog readers a heavy dose; rather, I want to brush up memory with some of my favorite and useful SQL Server functions.

There are hundreds of functions available in SQL Server, and out of those, here are some of my favorites which are part of my routine life.

So here we go with the list of SQL Server functions (each has its own detailed article on this blog):

  • CHOOSE
  • LEAD
  • LAG
  • CUME_DIST
  • PERCENT_RANK
  • LAST_VALUE
  • FIRST_VALUE
  • EOMONTH
  • DATEADD
  • DATEFROMPARTS
  • DATETIMEFROMPARTS
  • TIMEFROMPARTS
  • PARSE
  • TRY_PARSE
  • TRY_CONVERT
  • ISNULL
  • COALESCE
  • CONCAT_NULL_YIELDS_NULL

Now, last but not least, the SPLIT function. We often need to split values based on different separators, and SQL Server doesn't provide any SPLIT function by default, so I have created my own user-defined SPLIT function. Click here to use the script.
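If you just want a quick idea of what such a function looks like, here is a minimal sketch (the name dbo.fnSplit and its shape are my illustration here, not the linked script):

[sourcecode language="sql"]
--minimal split: returns one row per item of @List, split on @Separator
CREATE FUNCTION dbo.fnSplit (@List VARCHAR(MAX), @Separator CHAR(1))
RETURNS @Items TABLE (Item VARCHAR(MAX))
AS
BEGIN
DECLARE @Pos INT = CHARINDEX(@Separator, @List)
WHILE @Pos > 0
BEGIN
INSERT INTO @Items VALUES (LEFT(@List, @Pos - 1))
SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
SET @Pos = CHARINDEX(@Separator, @List)
END
IF LEN(@List) > 0
INSERT INTO @Items VALUES (@List)
RETURN
END
GO

--usage
SELECT Item FROM dbo.fnSplit('red,green,blue', ',')
[/sourcecode]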

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

LEAD and LAG Analytic functions in SQL Server 2012


The LEAD and LAG functions were highly awaited and demanded in SQL Server, and they are finally introduced in SQL Server 2012. I really like some of the new functions introduced in SQL Server 2012, which I have covered in earlier blog posts. If you wish, you can access those articles from the links given below:

New Datetime Functions in SQL Server 2012 (Click Here)

New PARSE, TRY_PARSE & TRY_CONVERT conversion functions in SQL Server 2012 (Click Here)

New LAST_VALUE and FIRST_VALUE Analytic function in SQL Server 2012 (Click Here)

CUME_DIST and PERCENT_RANK new analytic functions in SQL Server 2012 (Click Here)

Well, now moving on!!!

Do you remember when you wanted to get the previous and next values of a particular column for a particular row? I do, and I know I have used self-joins, CTEs, cursors, loops, and many different types of logical solutions, but now our life becomes a little easier with two new analytic functions in SQL Server, known as LEAD and LAG.
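For contrast, this is roughly the kind of workaround we needed before SQL Server 2012, using ROW_NUMBER in a CTE plus self-joins (a sketch; it assumes the #AnalyticFunctionCheck temp table created in the script further below):

[sourcecode language="sql"]
--pre-2012 approach: number the rows, then self-join on adjacent row numbers
;WITH Numbered AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
FROM #AnalyticFunctionCheck
)
SELECT
cur.ID, cur.ClientID, cur.TestName, cur.TotalSample
,prev.ID AS PreviousID
,nxt.ID AS NextID
FROM Numbered cur
LEFT JOIN Numbered prev ON prev.RowNum = cur.RowNum - 1
LEFT JOIN Numbered nxt ON nxt.RowNum = cur.RowNum + 1
[/sourcecode]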

LEAD: This function accesses the data from the next row without any help from a self-join or CTE.

LAG: This function accesses the data from the previous row without any help from a self-join or CTE.

Though the definitions of these functions are short, their usefulness is really big, and they save you from developing customized logic many times. Let us understand this with a small example.

[sourcecode language="sql"]
IF OBJECT_ID('tempdb..#AnalyticFunctionCheck') IS NOT NULL
DROP TABLE #AnalyticFunctionCheck

CREATE TABLE #AnalyticFunctionCheck
 (
 ID INT IDENTITY(1,1)
 ,ClientID VARCHAR(10)
 ,TestName VARCHAR(20)
 ,TotalSample INT
 )

INSERT INTO #AnalyticFunctionCheck
SELECT 'CHEM02','VOCMS GROUP 1',4 UNION ALL
SELECT 'CHEM02','SVOC GROUP 1',6 UNION ALL
SELECT 'CHEM02','SVOC STARS',12 UNION ALL
SELECT 'CHEM02','PESTICIDE',4 UNION ALL
SELECT 'SHAW01','PESTICIDE',10 UNION ALL
SELECT 'SHAW01','SVOC STARS',3 UNION ALL
SELECT 'SHAW01','SVOC GROUP 1',9 UNION ALL
SELECT 'EPAW01','SVOC GROUP 1',14

SELECT
*
FROM
#AnalyticFunctionCheck

SELECT
*
,LAG(ID, 1) OVER (ORDER BY ID) AS PreviousID
,LEAD(ID, 1) OVER (ORDER BY ID) AS NextID
FROM
#AnalyticFunctionCheck

SELECT
*
,LAG(ID, 2) OVER (ORDER BY ID) AS SecondPreviousID
,LEAD(ID, 2) OVER (ORDER BY ID) AS SecondNextID
FROM
#AnalyticFunctionCheck
[/sourcecode]

After executing the above three queries, you will find result sets like this:

The first result set is the simple "SELECT *" query on the temp table we created.

The second result set uses offset "1" (refer to LAG(ID, 1)). 1 is the default offset; it provides the previous row for the LAG function and the next row for the LEAD function.

The third result set returns the ID of the 2nd previous/next row, as we provided offset "2" (refer to LAG(ID, 2)), for the LAG/LEAD function respectively.

If the function does not find a previous/next value, it returns NULL by default.
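If you want something other than NULL at the edges, both functions accept an optional third argument that supplies the default value. A small sketch against the same temp table:

[sourcecode language="sql"]
--0 is returned instead of NULL when there is no previous/next row
SELECT
*
,LAG(ID, 1, 0) OVER (ORDER BY ID) AS PreviousID
,LEAD(ID, 1, 0) OVER (ORDER BY ID) AS NextID
FROM
#AnalyticFunctionCheck
[/sourcecode]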

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

CUME_DIST and PERCENT_RANK new analytical functions in SQL Server 2012



Regular readers of this blog already know that we have seen the new conversion functions and the new date-time functions introduced in SQL Server 2012. Today we are going to look at some of the new analytical functions in SQL Server 2012. Following is the list of functions we are going to cover in this article.

  • CUME_DIST ()
  • PERCENT_RANK ()

Before we look at examples of these wonderful analytical functions in SQL Server 2012, let us look at brief definitions of them.

CUME_DIST: The CUME_DIST function returns the cumulative distribution of a value in a group of values. That is, CUME_DIST computes the relative position of a specified value in a group of values. For a row r, assuming ascending ordering, the CUME_DIST of r is the number of rows with values lower than or equal to the value of r, divided by the number of rows evaluated in the partition or query result set. This function gives the percentage of values less than or equal to the current value in the group.

PERCENT_RANK: This function is almost the same as the CUME_DIST function. The range of values returned by PERCENT_RANK is greater than or equal to 0 and less than or equal to 1. The first row in any set has a PERCENT_RANK of 0. This function calculates the percentage of values less than the current value in the group, excluding the highest value; PERCENT_RANK() for the highest value in a group will always be 1.

After these short definitions of both functions, let us examine them with the following T-SQL script:

[sourcecode language="sql"]
CREATE TABLE #AnalyticFunctionCheck
(
ID INT IDENTITY(1,1)
,ClientID VARCHAR(10)
,TestName VARCHAR(20)
,TotalSample INT
)
INSERT INTO #AnalyticFunctionCheck
SELECT 'CHEM02','VOCMS GROUP 1',4 UNION ALL
SELECT 'CHEM02','SVOC GROUP 1',6 UNION ALL
SELECT 'CHEM02','SVOC STARS',12 UNION ALL
SELECT 'CHEM02','PESTICIDE',4 UNION ALL
SELECT 'SHAW01','PESTICIDE',10 UNION ALL
SELECT 'SHAW01','SVOC STARS',3 UNION ALL
SELECT 'SHAW01','SVOC GROUP 1',9 UNION ALL
SELECT 'EPAW01','SVOC GROUP 1',14

SELECT
*
,CUME_DIST() OVER(PARTITION BY TestName ORDER BY ClientID) AS CumeDist
,PERCENT_RANK() OVER(PARTITION BY TestName ORDER BY ClientID) AS PercentRank
FROM
#AnalyticFunctionCheck
[/sourcecode]

As soon as you run the above code, you will get a result set something like the following screen capture:

Now observe the first group, the "PESTICIDE" test. We have two rows in the first group, so CumeDist starts with "0.5". This value comes from the following calculation:

Row Number 1 (ID=4) : 1st row in group/total number of rows (1/2)=0.5

Row Number 2 (ID=5) : 2nd row in group/total number of rows (2/2)=1

In short, CUME_DIST = (row number within the group) / (total number of rows in the group).

This is how the CumeDist column is calculated.
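If you want to sanity-check the formula yourself, the same values can be derived with plain window aggregates (a small sketch against the same temp table; ManualCumeDist is just an illustrative alias):

[sourcecode language="sql"]
--manual CUME_DIST: rows with a ClientID <= the current one (peers included),
--divided by the total rows in the TestName group
SELECT
*
,COUNT(*) OVER(PARTITION BY TestName ORDER BY ClientID) * 1.0
 / COUNT(*) OVER(PARTITION BY TestName) AS ManualCumeDist
FROM
#AnalyticFunctionCheck
[/sourcecode]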

PERCENT_RANK always starts with 0, so the first value in the group will always be 0. The remaining values are calculated by the formula below. See, we have three rows (row numbers 3, 4, and 5) in the second group ("SVOC GROUP 1"). The first value is 0, as it is the first value per our ORDER BY clause. The second and third values are calculated with the following formula:

Row Number 3 (ID=2) : (1st row in group-1.0)/(total number of rows-1.0) = (1-1.0)/(3-1) =0

Row Number 4 (ID=8) : (2ND row in group-1.0)/(total number of rows-1.0) = (2-1.0)/(3-1) =0.50

Row Number 5 (ID=7) : (3rd row in group-1.0)/(total number of rows-1.0) = (3-1.0)/(3-1) =1.0

In short, PERCENT_RANK = (row number within the group - 1) / (total number of rows in the group - 1).

Both of these are really wonderful and useful functions. I am happy that they are included in SQL Server 2012.

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Capture Schema Change in SQL Server to maintain history


Every production database should have a strict audit/history policy for schema changes so that each and every change can be tracked. SQL Server 2005 and later provide database-level triggers, which fire at the database level every time their criteria are met.

Here in this article, I am providing one small T-SQL script for a database-level DDL trigger which fires automatically whenever any DDL command is executed against the schema of any object (tables, views, stored procedures, triggers, indexes, and more) in the database where we have created the trigger.

We will create one table in the database which will capture all schema changes; after creating that table, we will create a DDL database trigger which detects any change made to the schema and captures it in that table. You can even set up an email alert to authorized person(s) so that whenever any change happens to a table, stored procedure, view, index, function, or any other object, those person(s) get an email alert along with the history saved in the table. I have not covered the email part in this article, but as long as you have all the values we are going to store in the table, you can implement a simple email script for your needs.
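As a rough idea of what that email piece could look like (a sketch only; it assumes Database Mail is already configured, and the profile name and recipient are placeholders):

[sourcecode language="sql"]
--hypothetical alert snippet; profile name and recipient are placeholders
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'DBMailProfile',
@recipients = 'dba@yourcompany.com',
@subject = 'Schema change detected',
@body = 'A DDL change was captured in CaptureSchemaChange; please review.'
[/sourcecode]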

By the way, I already wrote an article on DDL triggers a few years back, but you can consider this article an improved version; you can visit the previous article here.

Let us now create the table which will store the history of schema changes in the database.

Note: I have used the "AdventureWorks2012" database for this article; you can use the same or your own database, the code won't need many changes.

[sourcecode language="sql"]
USE [AdventureWorks2012]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [CaptureSchemaChange](
[EventType] [nvarchar](max) NULL,
[SchemaName] [nvarchar](max) NULL,
[ObjectName] [nvarchar](max) NULL,
[ObjectType] [nvarchar](max) NULL,
[EventDate] [datetime] NULL,
[SystemUser] [varchar](100) NULL,
[CurrentUser] [varchar](100) NULL,
[OriginalUser] [varchar](100) NULL,
[DatabaseName] [varchar](100) NULL,
[CommandText] [nvarchar](max) NULL,
[EventData] xml NULL,
[HostName] [varchar](50) NULL
)
GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [CaptureSchemaChange] ADD  DEFAULT (host_name()) FOR [HostName]
GO[/sourcecode]

Once you have created the table given in the above script, we will create the DDL database trigger in the SQL Server 2012 database.

[sourcecode language="sql"]
USE [AdventureWorks2012]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TRIGGER [trgCaptureSchemaChange] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS

SET NOCOUNT ON

DECLARE @EventType NVARCHAR(MAX)
DECLARE @SchemaName NVARCHAR(MAX)
DECLARE @ObjectName NVARCHAR(MAX)
DECLARE @ObjectType NVARCHAR(MAX)
DECLARE @DBName VARCHAR(100)
DECLARE @Message VARCHAR(1000)
DECLARE @TSQL NVARCHAR(MAX)

SELECT
@EventType = EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]','NVARCHAR(MAX)')
,@SchemaName = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','NVARCHAR(MAX)')
,@ObjectName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','NVARCHAR(MAX)')
,@ObjectType = EVENTDATA().value('(/EVENT_INSTANCE/ObjectType)[1]','NVARCHAR(MAX)')
,@DBName = EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]','NVARCHAR(MAX)')
,@TSQL = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)')

IF @SchemaName = ' '
BEGIN
SELECT
@SchemaName = default_schema_name
FROM
sys.sysusers SysUser
INNER JOIN
sys.database_principals Pri
ON
SysUser.uid = Pri.principal_id
WHERE
SysUser.name = CURRENT_USER
END

INSERT INTO [CaptureSchemaChange]
([EventType]
,[SchemaName]
,[ObjectName]
,[ObjectType]
,[EventDate]
,[SystemUser]
,[CurrentUser]
,[OriginalUser]
,[DatabaseName]
,[CommandText]
,[EventData]
)
SELECT
@EventType
, @SchemaName
, @ObjectName
, @ObjectType
, getdate()
, SUSER_SNAME()
, CURRENT_USER
, ORIGINAL_LOGIN()
, @DBName
, @TSQL
, EVENTDATA()
GO

SET ANSI_NULLS OFF
GO

SET QUOTED_IDENTIFIER OFF
GO

ENABLE TRIGGER [trgCaptureSchemaChange] ON DATABASE
GO[/sourcecode]

So, you have now successfully created a DDL database trigger in SQL Server 2012. Your schema is secured now: if anything is changed, you will get an email (if an email alert is set up inside the trigger); otherwise you can manually check the history in the "CaptureSchemaChange" table we have created.
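To verify the trigger quickly, you can fire a throwaway DDL statement and read the history table (the table name TriggerTest is only for illustration):

[sourcecode language="sql"]
--fire the trigger with a harmless DDL statement
CREATE TABLE dbo.TriggerTest (ID INT)
DROP TABLE dbo.TriggerTest

--both the CREATE and the DROP should show up here
SELECT EventType, ObjectName, CommandText, EventDate
FROM CaptureSchemaChange
ORDER BY EventDate DESC
[/sourcecode]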

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Sharing IT tips at “Quick Learn”

Note: Microsoft Books online is a default reference of all articles.

LAST_VALUE and FIRST_VALUE function in SQL Server 2012


After providing an introduction to the new conversion functions and DATE TIME functions in SQL Server, I come now to the analytic functions LAST_VALUE and FIRST_VALUE.

Remember the ROW_NUMBER, RANK, and DENSE_RANK functions in SQL Server 2005 and SQL Server 2008? The LAST_VALUE and FIRST_VALUE functions share a similar syntax. After seeing all these functions, I remember those hard days of SQL Server 2000 when we had to write scripts for this kind of need. Really!!! Technology makes us so productive (and dumb too :) )

LAST_VALUE: As the name suggests, this function returns the last value from the ordered set of values.

FIRST_VALUE: The FIRST_VALUE function also behaves like its name; it returns the first value from the ordered set of values.

Let us see how it actually works with the following simple example, which only works in SQL Server 2012. (Note the explicit window frame on LAST_VALUE below; without it, the function would operate on the default frame and simply return the current row's value.)

[sourcecode language="sql"]
DECLARE @AppraisalHistory TABLE(
ID INT IDENTITY(1,1),
Name VARCHAR(35),
AppDate DATE,
TotalSalary INT
)

INSERT INTO @AppraisalHistory
SELECT 'Ritesh',DATEADD(YEAR,-2,GETDATE()),10000 UNION ALL
SELECT 'Ritesh',DATEADD(YEAR,-1,GETDATE()),12000 UNION ALL
SELECT 'Ritesh',GETDATE(),15000 UNION ALL
SELECT 'Bihag',DATEADD(YEAR,-2,GETDATE()),9000 UNION ALL
SELECT 'Bihag',DATEADD(YEAR,-1,GETDATE()),10000 UNION ALL
SELECT 'Bihag',GETDATE(),16000

SELECT
*,
--the explicit frame below is needed; with the default frame, LAST_VALUE
--would return the current row's value instead of the partition's last value
LAST_VALUE(TotalSalary) OVER (PARTITION BY Name ORDER BY ID
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastSalary,
FIRST_VALUE(TotalSalary) OVER (PARTITION BY Name ORDER BY ID) AS FirstSalary
FROM @AppraisalHistory[/sourcecode]

ENJOY new SQL Server with all its power!!!!

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

New DATE TIME functions in SQL Server 2012


SQL Server 2012 has introduced many new functions to cater to the needs of SQL developers. I have provided an introduction to some new conversion functions before (click here to look at the new conversion functions). Today I am giving an introduction to a few of my favorite new datetime-related functions in SQL Server 2012.

There have been many datetime functions available in SQL Server for a long time, but people still want more to rescue their scripts from complexity. Microsoft listened to developers and gave us some more powerful functions (though I would like a few more; I hope those will be included in the next version of SQL Server, maybe SQL Server 2014!!!!??? :))

EOMonth: This is one of my favorite new functions in the DATETIME group in SQL Server 2012. EOMonth stands for End Of Month. As the name suggests, it returns the last date (only DATE, not TIME) of any month. Remember how we had to write custom logic to find the end-of-month date before SQL Server 2012???

Here is a very small query showing the usage of the EOMonth function in SQL Server 2012.

[sourcecode language="sql"]
SELECT
GETDATE() AS CurrentDate
,EOMONTH(GETDATE()) AS LastDayOfMonth
,DATEADD(DAY,1,EOMONTH(GETDATE())) AS FirstDayofNextMonth
,EOMONTH(GETDATE(),-1) AS LastDayOfPreviousMonth
,EOMONTH(GETDATE(),1) AS LastDayOfNextMonth
[/sourcecode]

DATEFROMPARTS: It happens many times that we have the day and/or year and/or month as separate parts and need a date from those parts based on our custom logic. What do you do in that scenario? Concatenate strings or develop custom logic to cater to this need, right? The DATEFROMPARTS function saves you from developing custom logic or concatenating strings. See the sample script given below for the use of the DATEFROMPARTS function in SQL Server 2012.

[sourcecode language="sql"]
DECLARE @DAY INT=1
,@MONTH INT=11
,@YEAR INT=2012

SELECT
DATEFROMPARTS(@YEAR,@MONTH,@DAY) AS ConcatedDate
[/sourcecode]

Remember that the DATEFROMPARTS function returns only the date part, not the time.

DATETIMEFROMPARTS: The DATEFROMPARTS function given above returns only a date; what if we need a DATETIME? Then you have to use the DATETIMEFROMPARTS function. Look at the script below:

[sourcecode language="sql"]
DECLARE @DAY INT=1
,@MONTH INT=11
,@YEAR INT=2012

--along with all of the above variables,
--you have to pass hour, minutes, seconds and milliseconds too
--no matter whether you pass them as variables or static values
SELECT
DATETIMEFROMPARTS(@YEAR,@MONTH,@DAY,5,59,0,150) AS ConcatedDateTime
[/sourcecode]

TIMEFROMPARTS: We have the DATEFROMPARTS function, which gives us only a DATE without TIME, but sometimes we may need to construct a TIME without a DATE. The TIMEFROMPARTS function comes to the rescue in that situation.

[sourcecode language="sql"]
DECLARE
@HOUR INT=11
,@MINUTE INT=59
,@SECONDS INT=59
,@FRACTIONS INT=150

--last argument "6" is the precision for the fractions
SELECT
TIMEFROMPARTS(@HOUR,@MINUTE,@SECONDS,@FRACTIONS,6) AS ConcatedTime
[/sourcecode]

You can find many more new DATETIME-related functions for SQL Server 2012 in BOL; I have mentioned only a few of my favorites.

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Capture long running query in SQL Server


Capturing long-running queries in SQL Server, finding the most resource-intensive query/SP, and fixing it for optimal performance is one of the prime responsibilities of a DBA, who has to keep up the performance of SQL Server and make ideal use of the resources available on the server. A database administrator (DBA) may not be in front of the computer 24*7, but it is his duty to capture all long-running processes / queries / stored procedures to find and fix later on.

There are many ways to do this task, out of which I am going to show one of my favorite ways to capture long-running queries.

Note: Before we move further with this article, I would like to point out that I have used the view "WhatIsGoingOn" as a base script for this article, so if you don't have that view or have not read that article, I highly recommend reading it; click here to do so.

Once you have read the article recommended in the above paragraph, let us continue with this one.

Create the table "LongRunningCapture" in your database along with two indexes (one clustered index and one nonclustered index). "LongRunningCapture" is going to be a huge table once you have collected long-running data for a few weeks or months, so these indexes will help you select data faster from the table. Data growth will depend on how many times you run the stored procedure we will create and how many long-running queries you get based on the criteria you select.

Anyway, have a look at the table along with all its fields and indexes, and create it on your server.

[sourcecode language="sql"]
CREATE TABLE LongRunningCapture
(
ID INT IDENTITY(1,1)
,ObjectName VARCHAR(100)
,statement_text  VARCHAR(MAX)
,DatabaseName  VARCHAR(50)
,CPU_Time  INT
,RunningMinutes  INT
,Percent_Complete  INT
,RunningFrom  VARCHAR(100)
,RunningBy  VARCHAR(250)
,SessionID  INT
,BlockingWith  INT
,reads  INT
,writes  INT
,[program_name]  VARCHAR(250)
,login_name  VARCHAR(50)
,status  VARCHAR(25)
,last_request_start_time  DATETIME
,logical_reads  BIGINT
)
GO

CREATE CLUSTERED INDEX IDX_LongRunningCapture_ID ON LongRunningCapture(ID)
GO

CREATE NONCLUSTERED INDEX IDX_LongRunningCapture_last_request_start_time ON LongRunningCapture(last_request_start_time)
GO
[/sourcecode]

Once you are done creating the "LongRunningCapture" table and indexes, make sure you have the "WhatIsGoingOn" view available. If you don't have it, create it with the following script.

[sourcecode language="sql"]
CREATE VIEW WhatIsGoingOn
AS
SELECT
OBJECT_NAME(objectid) as ObjectName
,SUBSTRING(stateText.text, (statement_start_offset/2)+1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(stateText.text)
ELSE statement_end_offset
END - statement_start_offset)/2) + 1) AS statement_text
,DB_Name(database_id) as DatabaseName
,req.cpu_time AS CPU_Time
,DATEDIFF(minute, last_request_start_time, getdate()) AS RunningMinutes
,req.Percent_Complete
,sess.HOST_NAME as RunningFrom
,LEFT(CLIENT_INTERFACE_NAME, 25) AS RunningBy
,sess.session_id AS SessionID
,req.blocking_session_id AS BlockingWith
,req.reads
,req.writes
,sess.[program_name]
,sess.login_name
,sess.status
,sess.last_request_start_time
,req.logical_reads

FROM
sys.dm_exec_requests req
INNER JOIN sys.dm_exec_sessions sess ON sess.session_id = req.session_id
AND sess.is_user_process = 1
CROSS APPLY
sys.dm_exec_sql_text(sql_handle) AS stateText
[/sourcecode]

Once you have the "LongRunningCapture" table and the "WhatIsGoingOn" view ready, please create the stored procedure given in the following T-SQL snippet.

[sourcecode language="sql"]CREATE PROC SPGet_LongRunning
AS
IF OBJECT_ID('tempdb..#CaptureLongProcessWithObjectName') IS NOT NULL
DROP TABLE #CaptureLongProcessWithObjectName

--Capturing All long running operations in Temp table
SELECT
ObjectName
,statement_text
,DatabaseName
,CPU_Time
,RunningMinutes
,Percent_Complete
,RunningFrom
,RunningBy
,SessionID
,BlockingWith
,reads
,writes
,[program_name]
,login_name
,status
,last_request_start_time
,logical_reads
INTO
#CaptureLongProcessWithObjectName
FROM
WhatIsGoingOn
WHERE
--want to capture each query/SP which is taking more than 1 min.
--you can increase limit as per your need
RunningMinutes>1

--Please note that full/differential/log backup may take more time
--if you wish, you can exclude it by filtering either by ObjectName
--or by Statement_Text as per your need

--separating record in two temp table
--based on with and without ObjectName
IF OBJECT_ID('tempdb..#CaptureLongProcessWithoutObjectName') IS NOT NULL
DROP TABLE #CaptureLongProcessWithoutObjectName

SELECT
*
INTO
#CaptureLongProcessWithoutObjectName
FROM
#CaptureLongProcessWithObjectName
WHERE
ObjectName IS NULL

DELETE FROM
#CaptureLongProcessWithObjectName
WHERE
ObjectName IS NULL

ALTER TABLE #CaptureLongProcessWithObjectName
ADD AutoID INT IDENTITY(1,1)

ALTER TABLE #CaptureLongProcessWithoutObjectName
ADD AutoID INT IDENTITY(1,1)

DECLARE @Count INT,@CountMax INT
SET @Count=1

SELECT @CountMax=COUNT(*) FROM #CaptureLongProcessWithoutObjectName

--if there is no ObjectName available by default, generate one;
--generally this happens with ad-hoc queries or some system processes
WHILE @Count <= @CountMax
BEGIN
DECLARE @Session INT
SELECT
@Session=SessionID
FROM
#CaptureLongProcessWithoutObjectName
WHERE AutoID=@Count

DECLARE
@InputBuffer TABLE
(
EventType NVARCHAR(MAX),
Parameters NVARCHAR(MAX),
EventInfo NVARCHAR(MAX)
)
DECLARE @Command NVARCHAR(MAX),@ObjectName VARCHAR(MAX)

SELECT @Command = 'DBCC INPUTBUFFER(' + CAST(@Session AS VARCHAR) + ') WITH NO_INFOMSGS'

--clear any row captured in the previous loop iteration
DELETE FROM @InputBuffer

INSERT INTO @InputBuffer
EXEC (@Command)

SELECT
@ObjectName = LEFT(REPLACE(REPLACE(REPLACE(EventInfo, CHAR(13), ''), CHAR(10), ''), '  ', ' '), 50) + '.....'
FROM @InputBuffer

UPDATE
#CaptureLongProcessWithoutObjectName
SET
ObjectName=@ObjectName
WHERE
AutoID=@Count

SELECT @Count=@Count+1
END

--inserting all long running query/sp into table
INSERT INTO LongRunningCapture (
ObjectName
,statement_text
,DatabaseName
,CPU_Time
,RunningMinutes
,Percent_Complete
,RunningFrom
,RunningBy
,SessionID
,BlockingWith
,reads
,writes
,[program_name]
,login_name
,status
,last_request_start_time
,logical_reads
)
SELECT
ObjectName
,statement_text
,DatabaseName
,CPU_Time
,RunningMinutes
,Percent_Complete
,RunningFrom
,RunningBy
,SessionID
,BlockingWith
,reads
,writes
,[program_name]
,login_name
,status
,last_request_start_time
,logical_reads
FROM
#CaptureLongProcessWithObjectName
UNION ALL
SELECT
ObjectName
,statement_text
,DatabaseName
,CPU_Time
,RunningMinutes
,Percent_Complete
,RunningFrom
,RunningBy
,SessionID
,BlockingWith
,reads
,writes
,[program_name]
,login_name
,status
,last_request_start_time
,logical_reads
FROM
#CaptureLongProcessWithoutObjectName
GO [/sourcecode]

Now you are almost ready to start capturing your long-running queries / stored procedures / processes. You are just one step away; let us complete that last step.

You have to schedule the above SP "SPGet_LongRunning" in a SQL Server Agent job (or a Windows scheduled task with the help of an SSIS package) to run every 2 minutes, so it captures any process / job / stored procedure / query which has been running for more than 1 minute.
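If you prefer to create that job through T-SQL rather than the SSMS wizard, a rough sketch could look like this (the job name, schedule name, and "YourDatabase" are placeholders for your environment):

[sourcecode language="sql"]
--sketch of a SQL Server Agent job that runs the SP every 2 minutes
USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Capture long running queries'
EXEC dbo.sp_add_jobstep @job_name = N'Capture long running queries',
@step_name = N'Run SPGet_LongRunning',
@subsystem = N'TSQL',
@database_name = N'YourDatabase', --the database holding the SP and table
@command = N'EXEC SPGet_LongRunning'
EXEC dbo.sp_add_jobschedule @job_name = N'Capture long running queries',
@name = N'Every 2 minutes',
@freq_type = 4, --daily
@freq_interval = 1,
@freq_subday_type = 4, --interval unit: minutes
@freq_subday_interval = 2
EXEC dbo.sp_add_jobserver @job_name = N'Capture long running queries'
GO
[/sourcecode]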

You can adjust the capture threshold in the stored procedure "SPGet_LongRunning" as well as the job schedule. I generally prefer to run it every 2 minutes, but in that case you may have too much data to handle in "LongRunningCapture" if you have a slow-performing server. You have to make a policy to clean up old data in the "LongRunningCapture" table.

You might, for example, delete data older than two weeks from the "LongRunningCapture" table.
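A minimal cleanup statement could look like this (the two-week cutoff is just the example retention mentioned above, and it benefits from the nonclustered index we created):

[sourcecode language="sql"]
--remove captured rows older than two weeks
DELETE FROM LongRunningCapture
WHERE last_request_start_time < DATEADD(WEEK, -2, GETDATE())
[/sourcecode]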

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Sharing IT tips at “Quick Learn”

Note: Microsoft Books online is a default reference of all articles.