Tag Archives: MS SQL Server

350 Articles on the blog and SQL Server 2012 book release

I have completed 350 articles on this blog, and apart from that, my book “SQL Server 2012 Performance Tuning Cookbook” was officially released yesterday.
I would like to express my deepest gratitude to all my blog readers and to the online SQL Server community, which always inspires me to work hard and give back to the community.
In today’s blog post I am not going to discuss any technical stuff; instead, I would like to express my sincere thanks to a few people who have always played an important role in my life. Though I have expressed my thankfulness to them in the “Acknowledgement” section of my book, I can’t resist the temptation to express my gratitude in a blog post as well.
I would sincerely like to thank Packt Publishing for showing their confidence in me and providing the invaluable opportunity of being a part of this book. The individuals at Packt to whom I am deeply grateful are Kedar Bhat, Sai Gamare, Madhuri Das, Ashwin Shetty, Apoorva Bolar, and Dhwani Devater. They have been very co-operative and supportive at all stages of this book. I am extremely thankful to the technical reviewers, Michael Zilberstein (Blog), Maria Zakourdaev (Blog, Twitter), and Satya (Blog, Twitter), for their excellent work in getting the technical details of the book into perfect shape.
I wouldn’t have been able to author this book alone, so thanks should go to Mr. Bihag Thaker as well, who agreed to co-author this book with me and has worked even harder on it than I have myself.
Two individuals to whom I am indebted, and whose disciple I have always been, are Mr. Pinal Dave (Blog, Twitter) and Mr. Jacob Sebastian (Blog, Twitter). I have learnt a lot from them, and they are the reason I’m part of the IT community today.
1.) Pinal Dave, who blogs at SQLAuthority.com and is an author of several SQL Server books. Currently, he is working as a Technology Evangelist at Microsoft.
2.) Jacob Sebastian, who blogs at BeyondRelational.com and is a SQL Server MVP, book author, well-known speaker on SQL Server technology, and much more.
Without my family’s support, a task such as writing a book would not have been achievable. I would like to heartily thank my parents, Mr. Ashwinkumar Shah and Mrs. Divya Shah. It is because of them that I exist, and I cherish their blessings, which are always with me. I would also like to thank my one-and-a-half-year-old son, Teerth, who would often start crying at midnight; I would lose my sleep and, not being able to get it back, would end up researching more on the subjects that helped me write this book. Last but not least, I would like to thank my wife, Alka Shah!
Finally, I would like to appreciate the help and support of all my friends, who have directly and indirectly helped me a lot.
Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.
Microsoft SQL Server Blog. Fight the fear of SQL with SQLHub.com. Founder is Ritesh Shah

Find SQL Server Agent jobs that ran on a specific date, with their status

It is essential for a SQL Server DBA to keep an eye on SQL Server Agent jobs and take immediate action if a job fails. There are several ways of doing this. I will show one query which returns all jobs that ran on a specific date, along with their status, i.e. whether each one failed or succeeded.
In order to gain information regarding jobs, we have to query the following system tables in the MSDB database, because all SQL Server jobs are stored in the MSDB database:
1.) sysjobhistory
2.) sysjobs
3.) sysjobsteps
Here is the query which will return the desired result:
USE MSDB
GO

DECLARE @Today AS VARCHAR(8)
SET @Today = CONVERT(VARCHAR(8), GETDATE(), 112) --today's date in YYYYMMDD format, matching how run_date is stored

SELECT * FROM (
    SELECT
        SysJob.name,
        CASE WHEN JobHist.run_status = 1 THEN 'Success'
             WHEN JobHist.run_status = 0 THEN 'Fail'
        END AS JobRunStatus,
        JobHist.run_date,
        JobHist.run_time,
        JobHist.run_duration AS RunDuration,
        JobStep.step_id,
        JobStep.command,
        --latest execution first for each job/step combination
        ROW_NUMBER() OVER(PARTITION BY SysJob.name, JobStep.step_id ORDER BY JobHist.run_time DESC) AS NumberOfExecution
    FROM
        dbo.sysjobhistory AS JobHist
    INNER JOIN
        dbo.sysjobs AS SysJob
    ON
        JobHist.job_id = SysJob.job_id
    INNER JOIN
        dbo.sysjobsteps AS JobStep
    ON
        JobStep.job_id = SysJob.job_id
    WHERE
        JobHist.run_date = @Today
) AS T
WHERE
    NumberOfExecution = 1 --keep only the most recent execution of each step
As a DBA, I keep this script handy, as I never know when I will need it. We already have monitoring over each and every job, and I am notified of any job failure via email, but this script has still been a time saver for me on many occasions.
One tip I would like to give, especially while talking about jobs: keep a regular backup policy for the MSDB database too, as all the jobs are stored there. When rebuilding SQL Server after a disaster, we would otherwise need to create all the jobs again, and an MSDB database backup becomes a life saver in that case.
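As a quick illustration, a minimal full backup of MSDB could look like the following sketch; the disk path here is just a placeholder, so adjust it to your own backup location:
--full backup of the MSDB database, which holds the SQL Server Agent jobs
BACKUP DATABASE msdb
TO DISK = 'D:\Backup\msdb.bak' --hypothetical path
WITH INIT --overwrites any existing backup sets in this file
GO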
I have explained the importance of backups in one of my past articles; if you are interested in reading it, click here.
Happy Scripting!!!!
Reference: Ritesh Shah
http://www.sqlhub.com

“Max degree of parallelism” or “MAXDOP”, good guy or bad guy?

Introduction

Yesterday I wrote an article on NOLOCK in this “good guy or bad guy” series. Today I am presenting another topic on the same theme: “max degree of parallelism”. This is also a topic that can raise a big debate. Nowadays people tend to have powerful servers with multiple processors for processing their database requests, and they tend to believe that more processors will always process a query faster, which is a myth. In reality, more processors perform well for some queries but really badly for others, so rather than setting a fixed figure for processing each query, I believe in leaving the decision to SQL Server, as SQL Server 2012 has a smart engine that can decide whether to go for parallelism or not.
As far as my personal practice is concerned, I use this option only as a very last resort, as most query performance problems can be resolved via the following:
  • Proper Index
  • Properly written query which can use index
  • Updated statistics

Getting ready

I am going to perform this example on my SQL Server 2012 RTM instance, but it should work as-is on SQL Server 2005 / 2008 too.
Deciding the number of processors for a query can be done in the following two popular ways:
  • Set a server-wide value for the “max degree of parallelism” option via the “sp_configure” stored procedure
  • Use the MAXDOP query hint
In this article I will show both ways in the coming sections.
We are going to use the “orders” table in this article; you can get the script to generate that table from my previous article, click here.

How to do it…

1. After connecting to SQL Server Management Studio (SSMS), open a new query window.
2. Set “max degree of parallelism” to 0 via the “sp_configure” system stored procedure with the following T-SQL:
--enable advanced options in sp_configure
EXEC sp_configure 'show advanced options',1
RECONFIGURE WITH OVERRIDE
GO
--setting 0 for max degree of parallelism
--0 is the default value, so even by executing
--the following command, we are not making any change
EXEC sp_configure 'max degree of parallelism',0
RECONFIGURE WITH OVERRIDE
GO
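By the way, if you want to verify the value currently in effect, one way (assuming your login can read server-level metadata) is to query sys.configurations, as in this small sketch:
--checking the configured and currently running value of the option
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism'
GO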
3. Before we test any query, let us clear the cache and buffers so nothing will be reused from a saved execution plan. I highly recommend using the following commands on a development or testing server only; clearing the cache and buffers on a production server may temporarily slow performance.
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
4. Now specify parallelism at the query level with the MAXDOP hint by executing the following T-SQL:
set statistics IO on
set statistics time on
select * from orders option (MAXDOP 1)
set statistics time off
set statistics IO off
set statistics IO on
set statistics time on
select * from orders option (MAXDOP 2)
set statistics time off
set statistics IO off
5. We had “STATISTICS IO” and “STATISTICS TIME” enabled while executing both of the above SELECT queries, so we can see the output of those SET commands in the “Messages” tab beside the “Results” panel. Here is the result I am getting on my testing server; you might get a different result.
(100000 row(s) affected)
Table 'orders'. Scan count 1, logical reads 409, physical reads 0, read-ahead reads 414, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 16 ms,  elapsed time = 586 ms.
(100000 row(s) affected)
Table 'orders'. Scan count 1, logical reads 409, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 62 ms,  elapsed time = 575 ms.

How it works…

Setting the value 0 (zero), either in MAXDOP or for “max degree of parallelism”, indicates that SQL Server is free to decide whether or not to use multiple processors for the query. Any number greater than 0 indicates that at most that many processors may be used for the query.
We used MAXDOP 1 for the first SELECT query and MAXDOP 2 for the second, and you can see from the “STATISTICS TIME” results that one processor did the better job, consuming 16 ms of CPU time as against 62 ms of CPU time in the second try with MAXDOP 2, which is indeed a big difference.
CPU time shows the time the processor(s) spent executing the query.

Conclusion

If there is no other alternative, then and only then go for choosing the number of processors for your query yourself; otherwise, let SQL Server take the decision on its own.
Reference: Ritesh Shah
http://www.sqlhub.com

NOLOCK hint in SQL Server, Good guy or bad guy?

Introduction

It is quite common in a production environment that multiple applications and/or users try to access the same resource at the same time, which may create locking and blocking issues. It is tricky to deal with this situation, as there is no rocket science that suits all situations. It all depends on the situation, because in some production environments it is not possible to resolve blocking issues easily overnight.
The main root cause of locking is a long-running transaction which keeps your object locked; meanwhile, any request that comes in to access the same object has to wait until the current transaction completes its operation. The best choice is to minimize transaction length so that locks are released quickly and other requests don’t need to wait, but unfortunately it is not possible to solve it that easily in ALL environments.

Getting ready

I am going to perform this example on my SQL Server 2012 RTM instance, but it should work as-is on SQL Server 2005 / 2008 too.
Connect to your SQL Server and open three query windows. We will refer to these query windows by the following names:
  • Win1
  • Win2
  • Win3

How to do it…

1. After connecting to SQL Server Management Studio (SSMS), open Win1.
2. Create one sample database, named SQLHub, and create one table with sample rows using the following T-SQL script:
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, you can delete it, then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--inserting 100000 sample rows into the table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 100000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
3. Execute the following UPDATE statement in Win1 with BEGIN TRANSACTION. Note that we don’t have COMMIT or ROLLBACK after the UPDATE statement at the moment; the ROLLBACK is commented out, so it won’t be executed.
BEGIN TRANSACTION
UPDATE Orders SET Amount=5.00 WHERE OrderID<10
--ROLLBACK
4. In Win2, try to execute the following SELECT statement:
select * from Orders Where orderID<=15
5. You will observe that the SELECT statement doesn’t return any results, so in Win3 execute the following T-SQL to find out what is going on behind the scenes:
sp_who2
go
6. We will get many rows in the result set from the above command, but we have to look at the last two rows, for the SQLHub database, as shown in the following screen capture:
7. We can see that our UPDATE statement runs under SPID 54 and our SELECT statement under SPID 55. The SELECT query is blocked by SPID 54, as shown in row no. 2, column no. 5 of the screen capture. Now I have two options: either issue a COMMIT / ROLLBACK command, which is not a good idea as it interrupts the UPDATE, or cancel the SELECT query in SSMS, which is what I will do now.
8. Since we have cancelled the SELECT query, we will now execute the same SELECT statement with the NOLOCK hint:
select * from Orders WITH (NOLOCK) Where orderID<=15
9. Here is the result returned by the above query.
10. Go to Win1 and execute the “ROLLBACK” statement.
11. Go back to Win2 and execute the SELECT statement with or without “NOLOCK”. For example:
select * from Orders  Where orderID<=15
12. Here is the screenshot of the result returned by the above query:

How it works…

When we executed the UPDATE statement in step no. 3 without COMMIT or ROLLBACK, it updated the records but didn’t release the lock it had acquired on the table, so the SELECT query was not able to read the data and return it.
How do we know whether the UPDATE really updated the values or not, since our SELECT query is not giving results? We executed the same SELECT query with the “WITH (NOLOCK)” hint in step no. 8, and we can confirm from the screen capture given in step 9 that the values are updated. Isn’t that a good situation? Maybe, as we were not even able to get the result of the SELECT statement in step no. 4. But I would say we can’t decide whether it is good or bad without evaluating the business need.
The UPDATE statement has updated the value, but it is not yet committed, so there is a chance of a ROLLBACK. If you look at the result set given in step no. 9, you will see “5.00” in the “Amount” column, which may not be a proper value: after the ROLLBACK, it comes back to “2.00” again, as per step no. 12.
Think: if this were the result for a stock exchange, what would happen? Can we live with the dirty read we have seen in the screenshot in step no. 9? No, certainly not. That is why I have conveyed that NOLOCK all depends on the business need and situation. Use it wisely, as it is a double-edged sword.
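As a side note, the session-level cousin of the NOLOCK hint is the READ UNCOMMITTED isolation level. The sketch below, reusing the same Orders query from the steps above, allows dirty reads for every statement in the session instead of hinting each table:
--allow dirty reads for the whole session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
select * from Orders Where orderID<=15
GO
--back to the default isolation level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO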

See also

NOLOCK, also known as READUNCOMMITTED, is closely related to the ISOLATION level concept. Here are the links to some of my past articles on the subject.
Happy Reading!!!
Reference: Ritesh Shah

Error Fix: Msg 605 Attempt to fetch logical page (3:1307866) in database 6 failed. It belongs to allocation unit 72057594113359872 not to 72057594113490944

Msg 605, Level 21, State 3, Line 1
Attempt to fetch logical page (3:1307866) in database 6 failed. It belongs to allocation unit 72057594113359872 not to 72057594113490944.
Or
Msg 21, Level 21, State 1, Line 1
Warning: Fatal error 605 occurred at OCT  31 2011  6:08AM. Note the error and time, and contact your system administrator.
This error coming in red is a disguise: if you log in with any user which is not a sysadmin-type user, you will see this kind of generic red error for page corruption, but if you log in with SA, Administrator, or another sysadmin login, you will not see the red error; instead you will see the descriptive error given in the title.
There were festival holidays in our Indian office from the 26th to the 28th of OCT 2011 (Wednesday to Friday), so obviously I was out of town, and I came back after 5 days on the 31st of OCT 2011. As soon as I came to the office and checked a few of my emails, I suddenly got a complaint that a few pages in our software were throwing an error, so I picked up the query behind those pages, along with the same parameters they were providing, and found the error given above.
As soon as I read this error, I knew that there was page corruption in the database. It may have been due to the heavy snowfall in NJ in October: they had power failures there for a few days and tried to keep the server up and running via a power generator. Whenever the power generator’s fuel was finished, everything went down, and after refuelling they started everything again. I came to know all this as soon as I came back from holiday. I think this was my Indian New Year gift.
I tried to gather some more information about the data file and page number given in the error message with the DBCC PAGE command:
dbcc traceon(3604)
dbcc page('MyDatabaseName',3,1307866,3)
dbcc traceoff(3604)
You can use 0, 1, 2, or 3 as the last parameter (the final 3 in the command above) of the DBCC PAGE command.
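Before deciding on a repair approach, it can also be worth checking the msdb.dbo.suspect_pages table, where SQL Server logs pages it has found damaged; note that it keeps only a limited history, so treat this sketch as a hint rather than a complete list:
--pages SQL Server has flagged as suspect; event_type values 1 to 3 indicate corruption
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
GO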
So now I have two options:
1.) Restore that page from a full database backup
2.) Try DBCC commands and, if needed, allow data loss
I HIGHLY RECOMMEND going for the 1st option, but if you are out of luck and don’t have a recent full backup, you will have to go the second way.
I restored the page from my last full database backup with the T-SQL given below.
RESTORE DATABASE MyDatabaseName
PAGE = '3:1307866'
FROM DISK = 'D:\MyDatabaseName.bak'
WITH NORECOVERY
Now, let us talk about the second option, if you don’t have a full backup of your database: use the DBCC CHECKDB command.
--checking the database's integrity; NO_INFOMSGS suppresses the many informational messages,
--so it will only show error messages and warnings
DBCC CHECKDB('MyDatabaseName') WITH NO_INFOMSGS
Once you execute the above command, it will recommend a repair level. It may recommend REPAIR_REBUILD if you are really lucky, but if you see the repair level REPAIR_ALLOW_DATA_LOSS, you have to be ready to lose some of your data.
You may use either of the commands given below, based on the repair level that was suggested.
1.)
ALTER DATABASE MyDatabaseName SET SINGLE_USER
GO
DBCC CHECKDB('MyDatabaseName', REPAIR_REBUILD)
GO
ALTER DATABASE MyDatabaseName SET MULTI_USER
GO
2.)
ALTER DATABASE MyDatabaseName SET SINGLE_USER
GO
DBCC CHECKDB('MyDatabaseName', REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE MyDatabaseName SET MULTI_USER
GO
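One practical note: if other connections are active in the database, the SET SINGLE_USER statement will wait for them. Assuming you are fine with rolling back those sessions, you can force the switch, as in this sketch:
--kick out other connections immediately, rolling back their open transactions
ALTER DATABASE MyDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO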
The moral of the story is: always keep the FULL recovery model for your databases and schedule a full / transaction log / differential backup policy. This is a MUST DO for any DBA. You never know when you will need it!!!!
Reference: Ritesh Shah
http://www.sqlhub.com

Playing with OPTION KEEPFIXED PLAN hint in SQL Server

My previous article was on compilation & recompilation of stored procedures. After reading that article, two of my readers sent me emails asking for more clarification on some of the terminology, and that is the basis of this article.
Actually, the SQL Server Query Optimizer is really one smart tool; it finds or generates the best execution plan for a query or stored procedure and makes the DBA’s task easy. Basically it uses a Cost Based Optimizer (CBO) to do this task, yet sometimes you may feel the need for a specific execution plan for a query, so Microsoft has provided many different hints at different levels.
The Query Optimizer (QO) saves the execution plan when it compiles a stored procedure for the first time and reuses the same execution plan when the same stored procedure is executed again, so that you get rid of the overhead of generating the execution plan again and again.
When index statistics change heavily, the QO recompiles the statement to get the best plan for your stored procedure, but if you are sure that the existing plan works fine, then you should go for OPTION (KEEPFIXED PLAN) for that statement, so that you can save resources and boost performance.
I will demonstrate this with a few queries run in SQL Server Management Studio (SSMS) and capture the results in Profiler.
I will select the following events in Profiler while creating a new trace for my server:
Stored Procedures
  • SP:Completed
  • SP:Recompile
  • SP:StmtCompleted
  • SP:StmtStarting

Now execute the following T-SQL in your SSMS.
--create one database which you can delete after running this example
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, you can delete it, then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--creating nonclustered index
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
--inserting only 1 record into our table
INSERT INTO orders VALUES(GETDATE(),1001,111)
GO
--creating an SP which SELECTs all records from the Orders table where Amount is 1001
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
GO
--execute SP
exec spGet_ord
GO
If you look in Profiler, you will see the following events captured:
Now, change the index statistics by inserting many records into the table, so that we can test the effect in Profiler after executing the SP again.
--inserting 50000 fake rows into the table
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
--execute SP again
exec spGet_ord
GO
Now, if you look at Profiler, you will see the “SP:Recompile” event, as your statistics were updated, so the statement needed recompilation. If you add a few more thousand records to the same table and execute the same SP again, you will see recompilation again.
Now, after dropping the table and SP, we will make one small change in the stored procedure and use the “KEEPFIXED PLAN” hint so that we can avoid recompilation.
drop proc spGet_ord
GO
drop table orders
go
--creating table
CREATE TABLE orders(OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(amount)
GO
INSERT INTO orders VALUES(GETDATE(),1001,111)
GO
CREATE PROC spGet_ord
AS
SELECT * FROM orders WHERE Amount=1001
--adding the statement below; other than that, it is the same code as the previous SP
OPTION (KEEPFIXED PLAN);
GO
exec spGet_ord
GO
After recreating the table along with fresh records, we are going to see what happens when we execute the SP defined above.
Since our table now has only 1 record, it is time to change the statistics with a bulk insert, as follows, again:
--inserting 50000 fake rows into the table (BTW, thanks to Jacob Sebastian; I copied the INSERT script from one of his demos to generate a big table)
INSERT INTO orders(OrderDate, Amount, Refno)
SELECT TOP 50000
      DATEADD(minute, ABS(a.object_id % 50000), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
exec spGet_ord
GO
--if you wish, you can uncomment the code below and delete the SQLHub database
--use master
--go
--drop database sqlhub
Here is the screen capture of Profiler, which doesn’t show recompilation:

You can even add a few more thousand rows and execute the SP again; you won’t get recompilation.
Reference: Ritesh Shah
http://www.sqlhub.com

Some facts about Stored Procedure Compilation & Recompilation in SQL Server

I generally ask one question in interviews: “If we have one complex query which JOINs five tables, what would work best from a .NET application: a call to the query, or a stored procedure?” 80% of candidates say “the SP works fast”. My next question is always “WHY?”, and out of those 80%, 60% say “Because an SP is compiled code”. After that I fire my follow-up question: “I can see the option to compile a page or project in .NET, but I never see that kind of option in SSMS; how do you compile your SP?” Once I EXECUTE this question, 90% of those 60% of candidates prefer to keep silent or divert the talk.
Anyway, the intention of this article is to let you know some facts about SP compilation & recompilation. Since it is a big topic and wouldn’t be covered in one article, I may come up with some more articles on this topic, but right now let me explain some basic facts only.
First of all, let me tell you that you don’t need to compile a stored procedure manually; when you execute it, SQL Server will compile your stored procedure for you and save the execution plan for future use, so that it doesn’t need to compile it again and again. This is the generic understanding; it doesn’t hold all the time, as there are a few factors which cause recompilation many times, or even every time. If you want to recompile your stored procedure manually, you can use the “sp_recompile” stored procedure given by SQL Server.
Now, you may think: if recompilation is that bad, then why has Microsoft given us a facility to recompile? Well, let me have an opportunity to break the ice: recompilation of a stored procedure is not always bad. It may be beneficial or it may be harmful; it totally depends on the situation.
Actually, compilation of a stored procedure stores the execution plan the first time you execute the stored procedure, and every follow-up call uses the same execution plan, but recompilation of an SP can be helpful if you have new statistics or a new index on the table. BTW, since SQL Server 2005 there is built-in functionality to recompile at the statement level rather than recompiling the whole stored procedure, which is less resource-intensive.
The following is a list of basic causes which force a stored procedure to recompile:
  • A change in a SET option within the stored procedure
  • The execution plan is very old
  • A schema change in a table, index, view, or temp table which is used in the stored procedure
  • “Deferred object resolution”, meaning an object was not available while compiling the stored procedure but was created later on; for example, a temp table created inside the stored procedure
  • A call to the “sp_recompile” stored procedure (see the example just after this list)
  • Use of the RECOMPILE clause in the stored procedure
  • Statistics are old
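For illustration, marking a stored procedure for recompilation manually is a single call; the procedure name below is only a hypothetical example:
--marks the procedure so that its plan is recompiled on its next execution
EXEC sp_recompile N'dbo.MyProcedure' --hypothetical procedure name
GO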
How do you avoid stored procedure recompilations?
  • Avoid using temp tables or other DDL statements as far as possible
  • Use table variables in the stored procedure if needed
  • Avoid changing SET options in the stored procedure, like ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_WARNINGS, etc.
  • Avoid recompilation due to statistics changes by using the “KEEPFIXED PLAN” hint
  • Disable auto-update statistics for your database
Well, this is a very basic understanding, and each point of this article could consume a separate dedicated article, so I may come up with a series on this very soon.
Reference: Ritesh Shah
http://www.sqlhub.com

Do I need backup of my database?

If you are working as a DBA, or you have the responsibility of keeping your SQL Server database up & running, take my word: you have NO alternative to full database backups.
Recently I came to know that one person was not taking backups just because he has a RAID and mirroring setup, so he was least worried about backups. Believe me, no matter what redundant hardware / software and high availability setup you have, you MUST take backups regularly, because even a good and up-to-date disaster recovery system can fail. You never know when and how you will need your latest backup file.
Let us discuss why a database backup is really that important.
Suppose you have high availability solutions like RAID or server clustering, but what if your hardware gets corrupted, maybe even the complete disk array? You have to rebuild your database from a backup only.
Sometimes catastrophic events or natural disasters like flooding, earthquakes, etc. can damage your hardware, or maybe the complete data center, and if you have no backup at another location, you have no place to go.
Sometimes security issues come into the picture and somebody intentionally or unintentionally damages your data, which could affect you adversely, and in that situation you might look to the latest database backup to restore that data.
Sometimes your OS or even your SQL Server gets corrupted, and you might need to restore data from a backup itself.
In short, I would HIGHLY recommend database backups, as “prevention is always better than cure”. So, keep a strong backup policy with full database backups, differential database backups, and transaction log backups, depending on your needs and business policies.
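To make that concrete, here is a minimal sketch of the three backup types; the database name and file paths are placeholders, and the schedule is up to your own policy:
--full backup, e.g. weekly
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase_full.bak'
GO
--differential backup, e.g. daily; contains changes since the last full backup
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase_diff.bak' WITH DIFFERENTIAL
GO
--transaction log backup, e.g. every few minutes (requires the FULL recovery model)
BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.trn'
GO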
BTW, I am not against high availability concepts like mirroring, clustering, replication, or log shipping. You should implement those too, as per your business needs, but you must also keep a strong backup policy.
Reference: Ritesh Shah
http://www.sqlhub.com

Index Fill Factor in SQL Server

Today I once again felt like writing something about indexes, so I opened up the list of index articles I have written, looked for topics still missing from the index series, and decided to write about fill factor.
Generally, when you create an index, your data gets stored in B-Tree format, where you find a “Root Page”, possibly “Intermediate Pages”, and “Leaf Pages”. The “Leaf Pages” hold your actual data, sorted in order of the index key, and each “Leaf Page” is 8 KB in size.
When you insert/update/delete data to/from a table, SQL Server tries to place it in the proper data page according to the record you are inserting. For example, suppose you have an index on an SSN number column and you are inserting a new row: if your SSN starts with “2”, SQL Server will find the page that holds the SSN numbers starting with “2” and insert your new row in that page. If that 8 KB page is full and doesn’t have room to accommodate the new row, the page splits, and the data in that page is shared between two pages, so you now have two half-full pages and your row is accommodated in one of them.
If your page had already had space for accommodating the new row, you wouldn’t have had to pay the extra I/O overhead and wait for the page split to finish before space for your row became available.
This is where the fill factor comes into the picture. The fill factor decides how full your pages are initially. Suppose you set a fill factor of 10: each data page is then filled to only 10% of its 8 KB size when the index is built, keeping 90% of the page empty, and new pages are created for the remaining records. Now, when you insert new records, you don’t need to worry about the I/O overhead of page splits, as you have 90% free space and your record will be accommodated in that space easily. So, with a lower fill factor, you can decrease the I/O overhead generated by page splits, which helps you write your data faster.
You might now be thinking: why shouldn’t I always use a low number, like 5% or 10%, for the fill factor? Well, it will decrease page splits, but it will increase the number of data pages in your index, so every time you scan your index you have to read more pages, which is again an overhead while reading the data and decreases read speed. Suppose you have 10 records in one table and their total size is 8 KB: you could fit all 10 records in one page, but with a fill factor of 50, those 10 records will be stored in two pages rather than one when the index is created.
Uff, what to do now? If we decrease the fill factor figure, it helps writes but creates problems for reads, and if we increase the fill factor figure, maybe to 100%, it helps reads but creates issues for writes.
You have to be reasonable and take the decision based on the load on your database; you have to decide first whether you have more writes or more reads.
Personally, I mostly go for a figure somewhere between 80 and 90 for the fill factor, so that some space remains for new records, decreasing page splits, while at the same time we don’t keep too much free space in data pages, so the number of pages doesn’t increase much. Even that is not a final figure I use for all my databases: if a database is read-only or has very rare inserts/updates, I would go even for a 100% fill factor, and if a database is write-intensive, I would go as low as a 50% fill factor.
BTW, you can define the fill factor while creating an index or rebuilding an index, or you can set the server-wide default value for the fill factor via the following T-SQL (by default it is 0 in SQL Server; note that changing this server-wide setting requires a service restart to take effect).
--turning on advanced configuration options
sp_configure 'show advanced options',1
GO
RECONFIGURE
GO
--setting up the default fill factor
sp_configure 'fill factor', 90
GO
RECONFIGURE
GO
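And at the index level, the same setting looks like the sketch below, reusing the sample Orders table and IDX_ORD index names from my earlier examples:
--create an index that leaves 10% free space in each leaf page
CREATE NONCLUSTERED INDEX IDX_ORD ON Orders(Amount) WITH (FILLFACTOR = 90)
GO
--or apply it while rebuilding an existing index
ALTER INDEX IDX_ORD ON Orders REBUILD WITH (FILLFACTOR = 90)
GO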
Reference: Ritesh Shah
http://www.sqlhub.com