SQLHub News: Interview of Ritesh Shah is published on PACKT website


There is a big difference between writing a book and writing a blog. I understood this when I started writing the book “SQL Server 2012 Performance Tuning Cookbook”.
As soon as PACKT proposed that I write a book on this subject, I started deciding the topics to cover with my co-author, Bihag Thaker. It was a big challenge to decide the content of this book because performance tuning is a vast topic, and covering all its aspects in one book is difficult, but we finally organized the book into the following three parts:
1.)  Performance Monitoring
2.) Performance Tuning
3.) Performance Management
Each topic covers practical recipes to give readers hands-on experience. PACKT has conducted an interview with me about my journey through the book-writing process. To read the interview, please click here.
Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.
Ask me any SQL Server related question at my “ASK Profile”.
Microsoft SQL Server Blog. Fight the fear of SQL with SQLHub.com. Founder is Ritesh Shah.

Implementing physical Database Structure in SQL Server

Our book “SQL Server 2012 Performance Tuning Cookbook” was published this week. It is based on the concept of performance tuning in SQL Server 2012 and includes a chapter called “Implementing Physical Database Structure”. The following article is based entirely on that chapter.
Database structure is one of the important factors that affect the performance of your database. Performance heavily depends on how you have physically placed your database objects and how you have configured your disk subsystem. Designing the physical layout of your database correctly is a key factor in improving the performance of your queries and thus of the database as a whole. However, the correct decision on the physical design of the database depends on the hardware resources available to you, including the number of processors, the amount of RAM, and storage resources such as how many disks or RAID controllers you have in your database environment. The best practice while designing the physical layout is to have multiple physical disks for your database. If you configure your database so that it spreads across multiple disks, it can benefit from parallel I/O operations.
The following are some of the decisions that influence your database performance:
  • Where do you place data files?
  • Where do you place log files?
  • Where do you place large objects?
  • Where do you place indexes?
  • Where do you place the tempdb database?
You can control the physical location of database objects within the database by using files and filegroups.
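As a minimal sketch of the idea (the drive paths and names here are hypothetical), a database can be created with its data file, log file, and a secondary filegroup on separate disks, and a table can then be placed on that filegroup:

```sql
-- Hypothetical example: data file, log file, and a secondary filegroup
-- each on a different physical drive to spread the I/O load.
CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_Data,   FILENAME = 'D:\Data\SalesDb.mdf'),
FILEGROUP FG_Orders
    (NAME = SalesDb_Orders, FILENAME = 'E:\Data\SalesDb_Orders.ndf')
LOG ON
    (NAME = SalesDb_Log,    FILENAME = 'F:\Log\SalesDb.ldf');
GO

-- Place a busy table on the secondary filegroup so its I/O hits a separate disk.
USE SalesDb
GO
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY PRIMARY KEY,
    OrderDate DATETIME,
    Amount    MONEY
) ON FG_Orders;
GO
```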
To read the complete article, kindly click here.
Reference: Ritesh Shah
http://www.sqlhub.com

350 Article on the blog and SQL Server 2012 book release

I have completed 350 articles on this blog, and apart from that, my book “SQL Server 2012 Performance Tuning Cookbook” was officially released yesterday.
I would like to express my deepest gratitude to all my blog readers and to the online SQL Server community, which always inspires me to work hard and give back to the community.
In today’s blog post I am not going to discuss any technical stuff; instead, I would like to express my sincere thanks to a few people who have always played an important role in my life. Though I have already thanked them in the “Acknowledgement” section of my book, I cannot resist expressing my gratitude in this blog post as well.
I would sincerely like to thank Packt Publishing for showing their confidence in me and providing the invaluable opportunity of being a part of this book. The individuals at Packt to whom I am deeply grateful are Kedar Bhat, Sai Gamare, Madhuri Das, Ashwin Shetty, Apoorva Bolar, and Dhwani Devater. They have been very co-operative and supportive at all stages of this book. I am extremely thankful to Michael Zilberstein (Blog), Maria Zakourdaev (Blog, Twitter), and Satya (Blog, Twitter), the technical reviewers, for their excellent work in getting the technical details of the book into perfect shape.
I wouldn’t have been able to author this book alone, so thanks should go to Mr. Bihag Thaker, as well, as he agreed to co-author this book with me and has worked even harder on it than I have myself.
Two individuals to whom I am indebted and whose disciple I have always been are Mr. Pinal Dave(Blog, Twitter) and Mr. Jacob Sebastian (Blog, Twitter). I have learnt a lot from them, and they are the reason I’m part of the IT community today.
1.) Pinal Dave, who blogs at SQLAuthority.com and is an author of several SQL Server books. Currently, he is working as a Technology Evangelist at Microsoft.
2.) Jacob Sebastian, who blogs at BeyondRelational.com  and is a SQL Server MVP, book author, well-known speaker in SQL Server technology, and much more.
Without my family’s support, a task such as writing a book would not have been achievable. I would like to heartily thank my parents, Mr. Ashwinkumar Shah and Mrs. Divya Shah. It is because of them that I exist, and I cherish their blessings, which are always with me. I would also like to thank my one-and-a-half-year-old son, Teerth, who would often start crying at midnight; having lost my sleep and being unable to get it back, I would research more on the subjects that helped me write this book. Last but not least, I would like to thank my wife, Alka Shah!
Finally, I appreciate the help and support of all my friends, who have directly and indirectly helped me a lot.
Reference: Ritesh Shah
http://www.sqlhub.com

Find SQL Server agent job ran on specific date with its status

It is essential for a SQL Server DBA to keep an eye on SQL Server jobs and take immediate action if a job fails. There are several ways of doing this. I will show one query that returns all jobs that ran on a specific date along with their status, that is, whether each one failed or succeeded.
To gather information about jobs, we have to query the following system tables in the MSDB database, because all SQL Server jobs are stored in the MSDB database:
1.) sysjobhistory
2.) sysjobs
3.) sysjobsteps
Here is the query which will return the desired result:

[sourcecode language="sql"]USE MSDB
GO

DECLARE @Today AS VARCHAR(8)
SET @Today = CONVERT(VARCHAR(8), GETDATE(), 112)

SELECT * FROM (
    SELECT
        SysJob.name,
        CASE WHEN JobHist.run_status = 1 THEN 'Success'
             WHEN JobHist.run_status = 0 THEN 'Fail'
        END AS JobRunStatus,
        JobHist.run_date,
        JobHist.run_time,
        JobHist.run_duration AS RunDuration,
        JobStep.step_id,
        JobStep.command,
        ROW_NUMBER() OVER(PARTITION BY SysJob.name, JobStep.step_id
                          ORDER BY JobHist.run_time DESC) AS NumberOfExecution
    FROM dbo.sysjobhistory AS JobHist
    INNER JOIN dbo.sysjobs AS SysJob
        ON JobHist.job_id = SysJob.job_id
    INNER JOIN dbo.sysjobsteps AS JobStep
        ON JobStep.job_id = SysJob.job_id
    WHERE JobHist.run_date = @Today
) AS T
WHERE NumberOfExecution = 1[/sourcecode]

As a DBA, I keep this script handy, as I never know when I will need it. We already have monitoring over each and every job, and any job failure is notified to me via email, but this script has still been a time saver for me on many occasions.
One tip I would like to give, especially while we are talking about jobs: keep a regular backup policy for the MSDB database too. All jobs are stored there, so when restoring SQL Server after a disaster, you would otherwise need to create all the jobs again; an MSDB database backup becomes a life saver in that case.
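As a minimal sketch (the backup path here is hypothetical), a simple full backup of msdb can be taken like this:

```sql
-- Back up the msdb database, which stores SQL Server Agent jobs,
-- so they can be restored after a disaster.
BACKUP DATABASE msdb
TO DISK = 'D:\Backup\msdb_full.bak'  -- hypothetical path
WITH INIT, STATS = 10;
GO
```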
I have explained the importance of backups in one of my past articles; if you are interested, click here.
Happy Scripting!!!!
Reference: Ritesh Shah
http://www.sqlhub.com

TableDiff utility to compare data between two databases

Introduction
As database professionals, we might come across a situation where we need to compare data, row by row or column by column, between two tables that reside in the same database, in the same instance, or in different instances on different servers.
What do you do in this situation?
1.) Do you write a script of your own?
2.) Do you use third-party software?
3.) Do you use the “TableDiff” utility that comes with SQL Server itself?
Out of all of the above, the third option seems best to me, as we neither need to reinvent the wheel by writing a script of our own nor pay extra money to compare data.
“TableDiff” is one of the oldest and most useful utilities provided by Microsoft. It works fine from SQL Server 2000 up to the latest SQL Server edition. However, I am providing the script and example from my SQL Server 2008 instance.
Getting Ready

Before you move forward, you need to identify the two tables whose data you want to compare. They might be a publisher/subscriber pair in replication, two different databases you are using for scale-out, or anything else.
If you don’t have such a situation in your environment at the moment, don’t worry; I will provide a script that sets up a scenario for testing the “TableDiff” utility.


How to do it…

1.) Open a New Query window in your SQL Server.
2.) Create two different databases by using the following script:
USE master
GO
CREATE DATABASE TableDiffDb1
GO
CREATE DATABASE TableDiffDb2
GO
3.) Create a sample table in the “TableDiffDb1” database with the following script:
USE TableDiffDb1
GO
--if the orders table is already there, delete it and then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--inserting 1000 sample rows into the table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 1000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
4.) Create the “Orders” table in the second database by copying 900 records (out of the total 1000) from the “Orders” table in the “TableDiffDb1” database, using the following script:
USE TableDiffDb2
GO
--if the orders table is already there, delete it and then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--inserting 900 sample rows into the table from the TableDiffDb1 database's Orders table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 900 OrderDate, Amount, Refno FROM TableDiffDb1.dbo.orders
5.) Now use the following command to see the difference between the two tables:
exec master..xp_cmdshell 'tablediff -sourceserver [RITESH-SHAH\MSSQL2008] -sourcedatabase TableDiffDb1 -sourcetable Orders -destinationserver [RITESH-SHAH\MSSQL2008] -destinationdatabase TableDiffDb2 -destinationtable Orders -et Difference -f D:\OrdersDifference.sql'

Replace your server instance name in the “sourceserver” and “destinationserver” parameters in the command given above, and you will get one .SQL file in the D drive. That file lists all the missing records, and running it will insert them into the “Orders” table of the “TableDiffDb2” database.

There’s more…

I would like to draw your attention to some facts that can help if you don’t find “TableDiff” working in your environment.
Remember that the “TableDiff.exe” file resides in the SQL Server installation directory by default, which is “C:\Program Files\Microsoft SQL Server\100\COM” in my case. So there is a chance that the “TableDiff” command is not accessible from the DOS prompt; in that case, you have to add the path for “TableDiff” to the PATH system variable.
You can reach the system variables via “My Computer Properties > Advanced System Settings > Advanced > Environment Variables > System Variables > PATH”.
If you find an existing entry under “PATH”, you can put a “;” (semicolon) after it and then add your path for “TableDiff”.
Generally people tend to use “TableDiff” from the DOS prompt itself or via a .bat (batch) file, but I have used the “xp_cmdshell” extended stored procedure to show the use of the command right from SQL Server. There is a chance that “xp_cmdshell” is disabled in your environment. If your security constraints allow, you can enable “xp_cmdshell”. For more details about the steps, click here.
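As a quick sketch, enabling “xp_cmdshell” via sp_configure looks like this (run it only if your security policy permits, as it requires sysadmin rights):

```sql
-- Enable the xp_cmdshell extended stored procedure (requires sysadmin).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```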
Reference: Ritesh Shah

NOLOCK hint in SQL Server, Good guy or bad guy?

Introduction

It is quite common in a production environment that multiple applications and/or users try to access the same resource at the same time, which may create locking and blocking issues. It is tricky to deal with this situation, as there is no rocket science that suits all situations. It all depends on the circumstances, because in some production environments it is not possible to resolve blocking issues easily overnight.
The main root cause of locking is a long-running transaction that keeps your object locked; meanwhile, any request that comes in to access the same object has to wait until the current transaction completes its operation. The best choice would be to minimize the transaction length so that the lock is released quickly and other requests don’t have to wait, but unfortunately it is not possible to solve this easily in ALL environments.

Getting ready

I am going to perform this example on my SQL Server 2012 RTM instance, but it should work as-is in SQL Server 2005 / 2008 too.
Connect to your SQL Server and open three query windows. We will refer to these query windows by the following names:
·         Win1
·         Win2
·         Win3

How to do it…

1.       After connecting to SQL Server Management Studio (SSMS), open Win1
2.       Create one sample database, named SQLHub and create one table with sample rows with following T-SQL script:
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, delete it and then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--inserting 100000 sample rows into table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 100000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
3.  Execute the following UPDATE statement in Win1 with BEGIN TRANSACTION. Note that we don’t have COMMIT or ROLLBACK after the UPDATE statement at the moment; ROLLBACK is commented out, so it won’t be executed.
BEGIN TRANSACTION
UPDATE Orders SET Amount=5.00 WHERE OrderID<10
–ROLLBACK
4.  In Win2, try to execute the following SELECT statement:
select * from Orders Where orderID<=15
5.  You will observe that the SELECT statement doesn’t return any results, so in Win3, execute the following T-SQL to find out what is going on behind the scenes:
sp_who2
go
6.  We will get many rows in the result set from the above command, but we have to look at the last two rows, which belong to the SQLHub database, as shown in the following screen capture:
7.  We can see that our UPDATE statement runs with SPID 54 and the SELECT statement runs with SPID 55. The SELECT query is blocked by SPID 54, as shown in row 2, column 5 of the screen capture. Now I have two options: either issue a COMMIT / ROLLBACK command, which is not a good idea as it interrupts the UPDATE, or cancel the SELECT query in SSMS, which is what I will do now.
8.  Since we have cancelled the SELECT query, we will now execute the same SELECT statement with the NOLOCK hint:
select * from Orders WITH (NOLOCK) Where orderID<=15
9.  Here is the result returned by the above query.
10.  Go to Win1 and execute the “ROLLBACK” statement.
11.  Go back to Win2 and execute the SELECT statement with or without “NOLOCK”. For example:
select * from Orders Where orderID<=15
12.  Here is the screenshot of the result returned by the above query:

How it works…

When we executed the UPDATE statement in step 3 without COMMIT or ROLLBACK, it updated the records but didn’t release the lock it had acquired on the table, so the SELECT query was not able to read the data and return it.
How do you know whether the UPDATE has really updated the values, since our SELECT query is not giving results? We executed the same SELECT query with the “WITH (NOLOCK)” hint in step 8, and the screen capture given in step 9 confirms that the values are updated. Isn’t that a good situation? Maybe, as before that we were not even able to get the result of the SELECT statement in step 4. But I would say we can’t decide whether it is good or bad without evaluating the business need.
The UPDATE statement has updated the value, but it is not yet committed, so there is a chance of a ROLLBACK. If you look at the result set given in step 9, you will see “5.00” in the “Amount” column, which may not be the proper value: after the ROLLBACK it goes back to “2.00” again, as per step 12.
Think about it: if this were the result of a stock exchange query, what would happen? Could we live with the dirty read we saw in the screenshot in step 9? No, certainly not. That is why I have said that using NOLOCK all depends on the business need and the situation. Use it wisely, as it is a double-edged sword.

See also

NOLOCK, also known as READUNCOMMITTED, is closely related to the concept of ISOLATION levels. Here are the links to some of my past articles on the subject.
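As a minimal sketch of that relationship, the session-level equivalent of the per-table NOLOCK hint is the READ UNCOMMITTED isolation level, which applies to every statement in the session rather than to a single table reference:

```sql
-- Session-wide equivalent of the per-table NOLOCK hint:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM Orders WHERE OrderID <= 15;  -- dirty reads are possible here

-- Restore the default isolation level afterwards.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```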
Happy Reading!!!
Reference: Ritesh Shah

Error Fix: Msg 605 Attempt to fetch logical page (3:1307866) in database 6 failed. It belongs to allocation unit 72057594113359872 not to 72057594113490944

Msg 605, Level 21, State 3, Line 1
Attempt to fetch logical page (3:1307866) in database 6 failed. It belongs to allocation unit 72057594113359872 not to 72057594113490944.
Or
Msg 21, Level 21, State 1, Line 1
Warning: Fatal error 605 occurred at OCT  31 2011  6:08AM. Note the error and time, and contact your system administrator.
This red-colored error is deceptive: if you log in with a user that is not a sysadmin-type user, you will see this kind of error for page corruption, whereas if you log in with SA, Administrator, or another sysadmin login, you will not see the red-colored error but rather the descriptive error given in the title.
There were festival holidays at our Indian office from the 26th to the 28th of October 2011 (Wednesday to Friday), so obviously I was out of town, and I came back after five days, on the 31st of October 2011. As soon as I got to the office and checked a few of my emails, I received complaints that a few pages in our software were throwing an error, so I picked up the query behind those pages, ran it with the same parameters the users were providing, and found the error given above.
As soon as I read the error, I knew that there was page corruption in the database. It may have been due to the heavy October snowfall in NJ: there were power failures for a few days, and the team tried to keep the server up and running with a power generator. Whenever the generator’s fuel ran out, everything went down, and after refueling, they started everything again. I learned all this as soon as I came back from the holiday. I think this was my Indian New Year gift.
I tried to gather some more information about the data file and the page number given in the error message with the DBCC PAGE command.
dbcc traceon(3604)
dbcc page('MyDatabaseName', 3, 1307866, 3)
dbcc traceoff(3604)

You can use option 0, 1, 2, or 3 as the last parameter (the bold 3) of the DBCC PAGE command.
So now I have two options.
1.) Restore that page from a full database backup
2.) Try DBCC commands and, if needed, allow data loss
I HIGHLY RECOMMEND going for the first option, but if you are out of luck and don’t have a recent full backup, you will have to go the second way.
I restored the page from my last full database backup with the T-SQL given below.
RESTORE DATABASE MyDatabaseName
PAGE = '3:1307866'
FROM DISK = 'D:\MyDatabaseName.bak'
WITH NORECOVERY
Now let us talk about the second option, for when you don’t have a full backup of your database: use the DBCC CHECKDB command.
--checking the database's integrity; this won't show many informational messages,
--it will only show error messages and warnings
DBCC CHECKDB('MyDatabaseName') WITH NO_INFOMSGS
Once you execute the above command, it will recommend a repair level. It may recommend REPAIR_REBUILD if you are really lucky, but if you see the repair level REPAIR_ALLOW_DATA_LOSS, you have to be ready to lose some of your data.
You may use either of the below given command based on the Repair Level you have been suggested.
1.)
ALTER DATABASE MyDatabaseName SET SINGLE_USER
GO
DBCC CHECKDB('MyDatabaseName', REPAIR_REBUILD)
GO
ALTER DATABASE MyDatabaseName SET MULTI_USER
GO
2.)
ALTER DATABASE MyDatabaseName SET SINGLE_USER
GO
DBCC CHECKDB('MyDatabaseName', REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE MyDatabaseName SET MULTI_USER
GO
The moral of the story is: always use the FULL recovery model for your database and schedule a full / transaction log / differential backup policy. This is a MUST DO for every DBA. You never know when you will need it!
Reference: Ritesh Shah
 
http://www.sqlhub.com

Playing with OPTION KEEPFIXED PLAN hint in SQL Server

My previous article was on the compilation and recompilation of stored procedures. After reading that article, two of my readers emailed me asking for more clarification on some of the terminology, and that is the basis of this article.
The SQL Server Query Optimizer is really a smart tool that finds or generates the best execution plan for a query or stored procedure and makes the DBA’s task easy. Basically it uses a Cost Based Optimizer (CBO) to do this; even so, you may sometimes want a specific execution plan for a query, so Microsoft has provided many different hints at different levels.
The Query Optimizer (QO) saves the execution plan when it compiles a stored procedure for the first time and uses the same execution plan when the same stored procedure is used again, so that you avoid the overhead of generating the execution plan again and again.
When index statistics change heavily, the QO recompiles the statement to get the best plan for your stored procedure, but if you are sure that the existing plan works fine, then you can use OPTION (KEEPFIXED PLAN) for that statement so that you save resources and boost performance.
I will demonstrate this with few queries run in SQL Server Management Studio (SSMS) and capture the results in Profiler.
I selected the following events in Profiler while creating a new trace for my server.
Stored Procedures
  • SP:Completed
  • SP:Recompile
  • SP:StmtCompleted
  • SP:StmtStarting

 

Now execute the following T-SQL in your SSMS.
--create one database which you can delete after running this example
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, delete it and then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--creating nonclustered index
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
--inserting only 1 record into our table
INSERT INTO orders VALUES(GETDATE(), 1001, 111)
GO
--creating an SP which will SELECT all records from the Orders table where Amount is 1001
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
GO
--execute SP
exec spGet_ord
GO
If you look in Profiler, you will see the following events captured.
Now change the index statistics by inserting many records into the table, so that we can test the effect in Profiler after executing the SP again.
--inserting 50000 fake rows into the table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
--execute SP again
exec spGet_ord
GO
Now, if you look at Profiler, you will see the “SP:Recompile” event, because your statistics were updated and the statement needed recompilation. If you add a few more thousand records to the same table and execute the same SP again, you will see a recompilation again.
Now, after dropping the table and the SP, we will make one small change in the stored procedure and use the “KEEPFIXED PLAN” hint so that we can avoid recompilation.
drop proc spGet_ord
GO
drop table orders
go
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
INSERT INTO orders VALUES(GETDATE(), 1001, 111)
GO
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
--adding the statement below; other than that, same code as the previous SP
OPTION (KEEPFIXED PLAN);
GO
exec spGet_ord
GO
After recreating the table along with fresh records, we are going to see what happens while executing the SP above.
Since our table now has only 1 record, it is time to change the statistics with a bulk insert again, as follows.
--inserting 50000 fake rows into the table (BTW, thanks to Jacob Sebastian; I copied the INSERT script from one of his demos to generate a big table)
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
exec spGet_ord
GO
--if you wish, you can uncomment the code below and delete the SQLHub database
--use master
--go
--drop database sqlhub
Here is the screen capture of Profiler, which doesn’t show any recompilation.

 

You can even add a few more thousand rows and execute the SP again; you won’t get a recompilation.
Reference: Ritesh Shah
 
http://www.sqlhub.com

Some facts about Stored Procedure Compilation & Recompilation in SQL Server

I generally ask one question in interviews: “If we have one complex query which JOINs five tables, what would work best from a .NET application, a direct query or a stored procedure?” 80% of candidates say, “The SP works faster.” My next question is always, “Why?” Of those 80%, 60% will say, “Because an SP is compiled code.” Then I fire my follow-up question: “I can see an option to compile a page or project in .NET, but I have never seen that kind of option in SSMS. How do you compile your SP?” Once I EXECUTE this question, 90% of those 60% of candidates prefer to keep silent or change the subject.
Anyway, the intention of this article is to share some facts about SP compilation and recompilation. Since it is a big topic that can’t be covered in one article, I may come up with more articles on it, but for now let me explain some basic facts only.
First of all, you don’t need to compile a stored procedure manually. When you execute it, SQL Server will compile the stored procedure for you and save the execution plan for future use so that it doesn’t need to compile it again and again. This is the general behavior, but it doesn’t always hold, as there are a few factors that cause recompilation many times, or even every time. If you want to recompile your stored procedure manually, you can use the “sp_recompile” stored procedure provided by SQL Server.
Now you may think: if recompilation is that bad, why has Microsoft given us a facility to recompile? Well, let me break the ice: recompilation of a stored procedure is not always bad. It may be beneficial or it may be harmful; it totally depends on the situation.
Compilation of a stored procedure stores the execution plan the first time you execute it, and every subsequent call uses the same execution plan, but recompilation of the SP can be helpful if you have new statistics or a new index on the table. BTW, in SQL Server 2008+ there is built-in functionality to recompile at the statement level rather than recompiling the whole stored procedure, which is less resource-intensive.
The following is a list of the basic causes that force a stored procedure to recompile:
  • A change in a SET option within the stored procedure
  • The execution plan is very old
  • A schema change in a table, index, view, or temp table used in the stored procedure
  • “Deferred object resolution”: an object was not available while compiling the stored procedure but was created later on, for example a temp table created inside the stored procedure
  • A call to the “sp_recompile” stored procedure
  • Use of the RECOMPILE clause in the stored procedure
  • Statistics are old
How do you avoid stored procedure recompilations?
  • Avoid using temp tables or other DDL statements as far as possible.
  • Use table variables in the stored procedure if needed.
  • Avoid changing SET options in the stored procedure, such as ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_WARNINGS, etc.
  • Avoid recompilation caused by statistics changes by using the “KEEPFIXED PLAN” hint.
  • Disable auto-update statistics for your database.
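As a minimal sketch of the manual options mentioned above, a recompile can be requested either for a whole procedure with “sp_recompile” or for a single statement with the RECOMPILE hint (the procedure and table names here are taken from the earlier demo):

```sql
-- Mark a procedure so its plan is regenerated on the next execution.
EXEC sp_recompile 'dbo.spGet_ord';

-- Or request statement-level recompilation (SQL Server 2008+):
SELECT * FROM orders WHERE Amount = 1001
OPTION (RECOMPILE);
```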
Well, this is a very basic overview, and each point in this article could consume a separate dedicated article; I may come up with a series on this very soon.
Reference: Ritesh Shah
http://www.sqlhub.com