Tag Archives: MS SQL Server

ISNULL, COALESCE or CONCAT_NULL_YIELDS_NULL in SQL Server

Before we jump into the core logic, let me explain what these functions and this property are used for.
BTW, this is the table to be used in the examples:
CREATE TABLE EMPTEST
(
FirstName varchar(10)
,MiddleName varchar(10)
,LastName varchar(10)
)
INSERT INTO EMPTEST
select 'Ritesh','A','Shah' UNION ALL
select 'Roger',NULL,'Federer' UNION ALL
select 'Steffi',NULL,'Graf'
GO
Select FirstName, LastName from EMPTEST
ISNULL: This function replaces a NULL value with the replacement value given in the function. Suppose a column such as LastName could be NULL in the employee table; we could do something like the TSQL statement below.
Select FirstName, ISNULL(Lastname,'') as LastName From EMPTEST
If LastName is NULL, it will be replaced with a blank string in the result set.
COALESCE: Basically, the COALESCE function returns the first non-NULL value from the given list, so we can use this function in place of ISNULL too, like this:
Select FirstName, COALESCE(Lastname,'') as LastName From EMPTEST
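To see the "first non-NULL value from the list" behaviour on its own, here is a tiny illustrative query (not tied to the EMPTEST table; the literals are just placeholders):
SELECT COALESCE(NULL, NULL, 'third value', 'fourth value') AS FirstNonNull
--returns 'third value', the first non-NULL argument in the list
GO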
NULL is nothing but the absence of a value; it doesn't even represent a blank, a space or a zero. When you try to concatenate two or more strings and any of those strings is NULL, the result will be NULL. See the following example:
 
Select FirstName + ' ' + MiddleName + ' ' + LastName as FullName from EMPTEST
You will get the first record, "Ritesh A Shah", as the full name and the remaining two records as NULL, because those two records have a NULL value in their MiddleName field, so the concatenated result is NULL.
You can solve that with ISNULL & COALESCE.
Select FirstName + ' ' + ISNULL(MiddleName,'') + ' ' + LastName as FullName from EMPTEST
Select FirstName + ' ' + COALESCE(MiddleName,'') + ' ' + LastName as FullName from EMPTEST
In this solution, you have to wrap every field that might contain a NULL value with either ISNULL or COALESCE. If you have many such fields in one query, this wrapping is a bit tedious for you as a developer. At this point, the CONCAT_NULL_YIELDS_NULL property comes to the rescue.
CONCAT_NULL_YIELDS_NULL: this property controls string concatenation, i.e. what should happen when any of the strings being concatenated is NULL.
You can set this property at the database level with the ALTER DATABASE command, or you can set it for your current session/connection.
Select FirstName + ' ' + MiddleName + ' ' + LastName as FullName from EMPTEST
As I have already mentioned, the above query returns two records with a NULL value as FullName. You can do something like the TSQL below to set CONCAT_NULL_YIELDS_NULL for your batch.
SET CONCAT_NULL_YIELDS_NULL OFF;
Select FirstName + ' ' + MiddleName + ' ' + LastName as FullName from EMPTEST
SET CONCAT_NULL_YIELDS_NULL ON;
By default, CONCAT_NULL_YIELDS_NULL is set to TRUE (ON) for every database, but you can turn it OFF for your batch or for your database (with the ALTER DATABASE command). It will save you from writing ISNULL and COALESCE many times in your query.
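For the database-level change mentioned above, the statement looks like this (YourDatabaseName is only a placeholder; replace it with your own database name):
ALTER DATABASE YourDatabaseName SET CONCAT_NULL_YIELDS_NULL OFF
GO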
If you want to check whether your database is set to TRUE or FALSE for this property, or whether your SESSION has this property TRUE or FALSE, you can use the following TSQL.
SELECT DATABASEPROPERTYEX('YourDatabaseName', 'IsNullConcat')
SELECT SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL')
Personally, I prefer to go for either ISNULL or COALESCE rather than CONCAT_NULL_YIELDS_NULL. There are a few reasons for that.
--> CONCAT_NULL_YIELDS_NULL is still supported in SQL Server 2008 R2 and earlier versions, but as per Microsoft this feature will be removed in a future release; it will then always be ON and you will not be able to set it OFF.
--> If you set this option inside a stored procedure, it forces your stored procedure to recompile every time you execute it.
Apart from these, as far as possible we should avoid tampering with the default settings of SQL Server unless you know exactly what you are doing.

Should Auto_Close database property be ON or OFF in SQL Server?

One of my friends recently leased a dedicated SQL Server hosting server with managed support for his own business. Though it is hosted at the hosting company's data centre with a very good hardware configuration and is used for only his one database, it was performing very slowly at certain points of time, even though there was no heavy workload.
He asked me whether I could help him out in this matter, so I had a quick look into it and after some time I found the Auto_Close property to be the culprit.
Let me first tell you what this property is and how & where it comes into the picture.
When we access or open a database in SQL Server for the first time, the database is assigned some resources to maintain its own state: some memory for its use, some buffer space and more. Auto_Close frees up these resources when the last user disconnects from the database.
It is a good thing to free unused resources, but what if your last user disconnects, every resource is freed up for reclaiming, and a few minutes later another user connects to this database? Obviously SQL Server will assign all the resources to the database again, and the user has to wait until all the resources are assigned, which means slow performance for that first user.
If you have only one, or maybe a few, databases in your instance, I highly recommend keeping the Auto_Close property OFF. I have seen hosting companies keep this setting on for their shared hosting servers, where you can find hundreds of databases in one instance.
BTW, as per Microsoft, this feature will be removed in a later version, so this is one more reason not to use this setting in your database.
Here is the TSQL to check whether Auto_Close is ON or OFF for your database.
SELECT DATABASEPROPERTYEX('YourDatabaseName', 'IsAutoClose');
Here is the syntax which helps you set the Auto_Close property ON or OFF for your database.
Alter Database YourDatabaseName Set Auto_Close ON
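Since my recommendation above is to keep it OFF for a dedicated instance, here is a small sketch that checks the current flag through sys.databases and turns Auto_Close off only when it is currently on (again, YourDatabaseName is just a placeholder):
IF EXISTS (SELECT 1 FROM sys.databases
           WHERE name = 'YourDatabaseName' AND is_auto_close_on = 1)
BEGIN
      ALTER DATABASE YourDatabaseName SET AUTO_CLOSE OFF
END
GO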

Know your processor by XP_Instance_RegRead SP in SQL Server 2005/2008

XP_Instance_RegRead could be quite a useful extended stored procedure which ships with SQL Server 2005/2008.
Sometimes it happens that you don't know which processor your server is using and you want to find out; if you don't have physical access to the server, you can use the following T-SQL to get that information.
EXEC master.dbo.xp_instance_regread
'HKEY_LOCAL_MACHINE',
'HARDWARE\DESCRIPTION\System\CentralProcessor\0',
'ProcessorNameString';
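If you want the value in a variable instead of a result grid (to log it somewhere, for example), the same call can take an OUTPUT parameter. Keep in mind that xp_instance_regread is undocumented, so treat this calling pattern as an assumption and verify it on your own build:
DECLARE @ProcessorName varchar(200)
EXEC master.dbo.xp_instance_regread
'HKEY_LOCAL_MACHINE',
'HARDWARE\DESCRIPTION\System\CentralProcessor\0',
'ProcessorNameString',
@ProcessorName OUTPUT
SELECT @ProcessorName AS ProcessorName
GO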
BTW, I have already written two earlier articles that use this kind of extended stored procedure.
Please note that if the login you are using to access SQL Server does not have permission in the OS to read the registry, you will get an error rather than a result.

SP_RefreshSqlModule in SQL Server 2005/2008

According to the MSDN, SP_RefreshSQLModule:
Updates the metadata for the specified non schema-bound stored procedure, user-defined function, or view. Persistent metadata for these objects, such as data types of parameters, can become outdated because of changes to their underlying objects.
Rather than going into a detailed theory session, let me give you one example to make it clear in your mind.
Suppose you have one table named "baseTable" and you create one SP, "usp_getBaseTableData", which returns all records of the "baseTable" table. "usp_getBaseTableData" now depends on "baseTable". SQL Server stores this relationship information in "sysdepends". In "sysdepends", the information is stored by Object_ID rather than Object_Name, so if you drop your table "baseTable" and check the dependency of "usp_getBaseTableData", you won't get anything but a message like this:
Object does not reference any object, and no objects reference it.
Obviously you will not get any reference as you have already dropped the table. Now create the table again with the same name and structure, and after that run SP_Depends to check the dependency; you will again get the same message, even though you have created the table and the same table name is referenced in the SP. So if you run your SP, it will work fine now as it will find the object, but SP_Depends won't be able to find the new object because the new "baseTable" has a different Object_ID than the old one.
Surprised!!!… No, you shouldn't be if you have tried this before….
Now, to make "usp_getBaseTableData" reference the new "baseTable", you would have to drop the SP and recreate it. But that is not a practical solution, and that is why you have "SP_RefreshSQLModule" to refresh non schema-bound SPs, functions or views. Let us see how it works.
--create one table
create table baseTable
(
ID INT IDENTITY(1,1)
)
GO
--create one SP to return data of baseTable
create proc usp_getbaseTableData
as
select * from baseTable
GO
--check whether SP_Depends returns any data or not
--I am sure, it will return :)
sp_depends usp_getbaseTableData
GO
--now drop the baseTable
drop table baseTable
GO
--now if you try sp_depends, it will show you the message below as you no longer have "baseTable"
--Object does not reference any object, and no objects reference it.
sp_depends usp_getbaseTableData
GO
--now create the base table again with the same structure
create table baseTable
(
ID INT IDENTITY(1,1)
)
GO
--use the same command, it will again show you the same message.
--though you now have the table, if you run your SP, it will work
--but sp_depends will not show you proper information
sp_depends usp_getbaseTableData
GO
--now you have two ways
--1.) drop and re-create the SP
--2.) use sp_refreshsqlmodule.
EXEC sp_refreshsqlmodule 'usp_getbaseTableData'
GO
--now you will again get proper information
--after taking either of the previous two suggestions
sp_depends usp_getbaseTableData
GO
BTW, personally I prefer "information_schema.routines" over "SP_Depends" to find object dependencies.
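As a rough sketch of that approach, you can simply search the routine definitions for the table name (note that ROUTINE_DEFINITION only holds the first 4000 characters of a routine's text, so very long modules may be missed):
SELECT ROUTINE_NAME, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%baseTable%'
GO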

What version to go for (upgrade) after SQL Server 2000???

I developed one project long back in .NET Framework 2.0 and SQL Server 2000 for a doctor in the USA. He is still using that software, but since then many rules have changed in the USA, so he came back to me with those changes and also told me that he wants to use the latest technology for his project and database.
He might have searched the internet for knowledge about the latest versions of .NET and SQL Server. Talking about SQL Server, he told me that the latest version of SQL Server is "Denali" and that he wants to use it in his project. While talking about this, I made him understand that we should go for SQL Server 2008 R2 rather than "Denali" by giving some very strong and valid reasons, and one of those strong reasons is that "Denali" is not in production; it is a CTP (Community Technology Preview).
But after this discussion, I felt like sharing some of my personal opinions on this blog.
After SQL Server 2000, you have three choices for an upgrade.
1.)    SQL Server 2005
2.)    SQL Server 2008
3.)    SQL Server 2008 R2
I am excluding "SQL Server Denali" from this list as it is still a CTP version and not a good choice for production as of now, though it has eye-catching and flashy features which may attract you.
Out of these three versions, I would personally go for "SQL Server 2008 R2". Let me tell you why I think so.
SQL Server 2005 is not a good deal at this time. Though it has an upgraded engine and many new features compared with SQL Server 2000, SQL Server 2005 is now in "Extended Support". That means you will neither get any "no-charge" support nor be able to claim any warranty. Soon after the release of "Denali", Microsoft may end even "Extended Support" for SQL Server 2005 (maybe in 2014/2015).
Comparing SQL Server 2008 with SQL Server 2008 R2, R2 gives you much more than SQL Server 2008, mainly multi-server management, greater security, self-service BI and more.
Apart from that, SQL Server 2008 was released in 2008 and SQL Server 2008 R2 was released in 2010, so you will get longer support with "SQL Server 2008 R2" than with "SQL Server 2008", which means a better reward for your money.
Generally, for any SQL Server version you get support for a minimum of 10 years. Since SQL Server 2008 R2 was released in 2010, Microsoft should give support for SQL Server 2008 R2 till around 2020. For a detailed understanding of the support life cycle, refer to the Microsoft website from here. You have to go to the "SUPPORT LIFECYCLE" tab in that link.
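If you want to confirm which version, service pack level and edition an existing instance is running before planning the upgrade, a quick check (available on all of the versions discussed here) is:
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition
GO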
This is what I think; this article is totally based on my personal view. You have to decide the proper version by looking at your own needs; this article is just a guideline to make you aware of some of the facts which may help you in your decision.

Msg 7301 Cannot obtain the required interface (“IID_IDBSchemaRowset”) from OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server “LinkToAceess”

While executing the example given in my previous article, one of the readers faced an issue, so he sent me an email complaining that he was facing the following error:
Msg 7399, Level 16, State 1, Procedure sp_tables_ex, Line 41
The OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server “LinkToAceess” reported an error. Access denied.
Msg 7301, Level 16, State 2, Procedure sp_tables_ex, Line 41
Cannot obtain the required interface (“IID_IDBSchemaRowset”) from OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server “LinkToAceess”.
This is really a very simple error regarding a setting of the provider you have used in your linked server. In our case, we have used "Microsoft.ACE.OLEDB.12.0", so we have to set the property for that provider.
In your SSMS, go to "Object Explorer" and follow the path below:
Server -> Server Objects -> Linked Servers -> Providers -> Microsoft.ACE.OLEDB.12.0
Right click on it and click on "Properties" from the popup menu, then set "Allow inprocess" to true.
Click on the "OK" button and enjoy your OPENQUERY.
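If you prefer T-SQL over the UI, the undocumented helper procedure below is commonly used to flip the same provider option; since it is undocumented, treat the procedure and property names as assumptions and verify them on your build before relying on them:
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1
GO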

OpenRowSet and OpenQuery in SQL Server 2005/2008

Both of these techniques (OpenRowSet and OpenQuery) are almost the same and may confuse many developers. I have been asked about them many times personally by a few of my team members in recent years, and I have seen the same question in forums like Experts-Exchange and Forums.Asp.Net.
Basically, both commands are used to access remote data from another data source, like another SQL Server, or maybe Oracle, Access, Excel or anything else. Both of these commands require connection information for the source database, table information etc.
OpenQuery gets all this information from a Linked Server, whereas with OpenRowSet you have to provide all this information at run time. There is no other difference.
Syntaxes of both the commands from Microsoft’s website:
OPENROWSET
( { 'provider_name' , { 'datasource' ; 'user_id' ; 'password'
   | 'provider_string' }
   , {   [ catalog. ] [ schema. ] object
       | 'query'
     }
   | BULK 'data_file' ,
       { FORMATFILE = 'format_file_path' [ <bulk_options> ]
       | SINGLE_BLOB | SINGLE_CLOB | SINGLE_NCLOB }
} )

<bulk_options> ::=
   [ , CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
   [ , ERRORFILE = 'file_name' ]
   [ , FIRSTROW = first_row ]
   [ , LASTROW = last_row ]
   [ , MAXERRORS = maximum_errors ]
   [ , ROWS_PER_BATCH = rows_per_batch ]
   [ , ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) [ UNIQUE ] ]
OPENQUERY:
OPENQUERY ( linked_server ,'query' )
Let us see how it works. We are going to create one Access file named "db1.accdb" with an Employee table inside it, stored in the D drive of the computer (matching the path used in the script below). You can use your own Access file at your desired location.
Here is the query for the same:
--add linked server
exec sp_addlinkedserver
@server='LinkToAceess',
@srvproduct='AccessDatabase',
--if you have an older version of Access then kindly use the old Jet provider
--Microsoft.Jet.OLEDB.4.0
@provider='Microsoft.ACE.OLEDB.12.0',
@datasrc='D:\db1.accdb'
GO
--add the MDB's credentials, if any
EXEC sp_addlinkedsrvlogin 'LinkToAceess', 'false'
exec sp_serveroption [LinkToAceess],'Data Access','true'
--check whether 'LinkToAceess' has been added
select * from sys.servers
--list all tables available in db1.accdb
exec sp_tables_ex 'LinkToAceess'
--now query with OPENQUERY
SELECT * FROM OPENQUERY(LinkToAceess, 'SELECT id,empname FROM employee')
--now query with OPENROWSET,
--which will not use our linked server
SELECT *
   FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
      'D:\db1.accdb';
      ;,Employee);
     
--if you get the following error
--Msg 15281, Level 16, State 1, Line 1
--SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books Online.
--you have to enable "Ad Hoc Distributed Queries"
sp_configure 'show advanced options',1
reconfigure
GO
sp_configure 'Ad Hoc Distributed Queries',1
reconfigure
GO
--now try the same query again
SELECT * FROM
OPENROWSET('Microsoft.ACE.OLEDB.12.0',
      'D:\db1.accdb';
      'admin';,Employee);
Hope you have enjoyed!!!!

Audit Trail with OUTPUT clause in SQL Server 2005/2008/Denali

An Audit Trail is one of the tasks clients ask for in most projects, so that they can keep watch on changes to crucial fields.
I have seen many small and a few medium-sized companies that still do not hire a SQL Server professional and go ahead with a .NET developer instead. .NET developers generally keep their knowledge updated with .NET technology but often lack knowledge in the SQL Server domain, so for an Audit Trail they still rely on the old pseudo tables (INSERTED, DELETED) accessible in a trigger. But in SQL Server 2005 and later, you don't even need to rely on triggers for an Audit Trail; you can use the new "OUTPUT" clause.
The "OUTPUT" clause is still a much underutilized feature even after 7 years. That is the reason I am writing about the Audit Trail and the OUTPUT clause once again.
Within the "OUTPUT" clause, you can access the pseudo tables and generate your "Audit Trail" log. I will show you with an example.
We will have one table of tests that come to an environmental company for analysis; if a client changes the test s/he wants performed, we have to keep a trail of that, and if any test is cancelled, we have to keep a trail of that too.
--we will keep an "Audit Trail" for the table given below
create table ClientOrder
(
      OrderID varchar(5)
      ,ClientID varchar(5)
      ,Test varchar(20)
      ,OrderDate datetime default getdate()
)
GO
--following is the table we will keep the "Audit Trail" in.
--this will keep track of all data changes
CREATE TABLE AuditOfOrder
(
      Id INT Identity(1,1)
      ,OrderID varchar(50)
      ,OldTest varchar(20)
      ,NewTest varchar(20)
      ,DMLPerformed varchar(15)
      ,ChangeDate datetime default getdate()
)
GO
--inserting data into the "ClientOrder" table
--all inserts will be stored in the audit trail table too via the "OUTPUT" clause
INSERT INTO ClientOrder(OrderID,ClientID,Test)
Output Inserted.OrderID,Inserted.Test,Inserted.Test,'Insert' into AuditOfOrder(OrderID,OldTest,NewTest,DMLPerformed)
VALUES('A1001','CHEM1','VOCMS Group1')
GO
--inserting data into the "ClientOrder" table
--all inserts will be stored in the audit trail table too via the "OUTPUT" clause
INSERT INTO ClientOrder(OrderID,ClientID,Test)
Output Inserted.OrderID,Inserted.Test,Inserted.Test,'Insert' into AuditOfOrder(OrderID,OldTest,NewTest,DMLPerformed)
VALUES('A1001','CHEM1','Pesticide Group1')
GO
--let us see what we have in both tables now.
select * from ClientOrder
select * from AuditOfOrder
go
--we will perform an UPDATE on the "ClientOrder" table
--which will be recorded in the "AuditOfOrder" table too
INSERT INTO AuditOfOrder(OrderID,OldTest,NewTest,DMLPerformed)
SELECT
t.OrderID,
t.OldTest,
t.NewTest,
t.DMLPerformed
FROM
(
UPDATE ClientOrder
      SET Test = 'SVOC Stars'
OUTPUT
      inserted.OrderID AS OrderID,
      deleted.Test AS OldTest,
      inserted.Test AS NewTest,
      'UPDATE' as DMLPerformed
WHERE ClientOrder.Test = 'VOCMS Group1'
) t
GO
--let us see what we have in both tables now.
select * from ClientOrder
select * from AuditOfOrder
go
--finally, the log of the Delete will be stored in the same way.
INSERT INTO AuditOfOrder(OrderID,OldTest,NewTest,DMLPerformed)
SELECT
t.OrderID,
t.OldTest,
t.NewTest,
t.DMLPerformed
FROM
(
DELETE FROM ClientOrder
OUTPUT
      deleted.OrderID AS OrderID,
      deleted.Test AS OldTest,
      deleted.Test AS NewTest,
      'DELETE' as DMLPerformed
WHERE ClientOrder.Test = 'SVOC Stars'
) t
GO
--let us see what we have in both tables now.
select * from ClientOrder
select * from AuditOfOrder
go
Isn’t this interesting? Start using it.
BTW, I have previously written some articles regarding Audit Trail techniques (old & new); if you want to refer to them, have a look at the links below:

Find third highest salary from Employee table

I was discussing a few TSQL techniques with our .NET developer team and, in between, one of the new .NET developers, a girl who had just joined the team, told me that she has been asked in a few interviews about this question, "Find the third highest salary from the Employee table", and that she had provided a solution with a big query and calculations.
After listening to this, it came to my mind that we don't need any big query with calculations, nor do we need RANK or ROW_NUMBER etc.; it can be achieved with a very short and simple query that does not need any version-specific functions like Row_Number, Rank or Dense_Rank.
Let us see how we can achieve this:
create table tblEmp
(
ID INT IDENTITY(1,1)
,FirstName varchar(10)
,LastName varchar(10)
,JoiningDate datetime default getdate()
,Salary numeric(10,2)
)
GO
INSERT INTO tblEmp (FirstName,LastName,Salary)
SELECT 'Rushik','Shah',21000 UNION ALL
SELECT 'Prapa','Acharya',21000 UNION ALL
SELECT 'Kalpan','Bhalsod',35000 UNION ALL
SELECT 'Ashish','Patel',18000 UNION ALL
SELECT 'Hetal','Shah',18000
GO
SELECT * FROM tblEmp
GO
--solution given by the new .NET developer
declare @maxsal float
set @maxsal = (select max(salary) from tblEmp
where salary not in (select max(salary) from tblEmp))
select distinct salary from tblEmp
where (salary != @maxsal) and (salary != (select max(salary) from tblEmp))
GO
--this could be easily achieved by the Dense_Rank function
Select Salary FROM(
SELECT distinct Salary,dense_rank() over (order by salary desc) as rn FROM tblEmp
) as t where rn=3
GO
--even easier than windowing functions
--like Dense_Rank
--especially, this is not a SQL Server version-specific query
SELECT top 1 Salary FROM
(
      select distinct top 3 salary from tblEmp order by Salary desc
) as t
order by Salary asc
GO
You can compare all three different queries from a performance point of view via the execution plan.
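As a quick sketch of one way to do that comparison without opening the graphical plan, you can wrap each query in the statistics switches (covered in more detail in a later post on this page) and read the figures from the "Messages" tab:
SET STATISTICS IO ON
SET STATISTICS TIME ON
--run each of the three queries here, one at a time, and compare
--the logical reads and CPU time reported in the "Messages" tab
SELECT top 1 Salary FROM
(
      select distinct top 3 salary from tblEmp order by Salary desc
) as t
order by Salary asc
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO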
Have fun!!!

Understand SET STATISTICS IO as a first step for performance tuning in SQL Server

"SET STATISTICS IO" provides us with very crucial information about the query we run, like scan count, logical reads, physical reads etc., but this command is really very much ignored. When it comes to query optimization, many newbies or less experienced people talk about the DTA, Profiler etc., but they JUST IGNORE the simple yet powerful command "SET STATISTICS IO", whereas they should first look at the information it provides so that they can move ahead and dig into more details from there.
Let us see how it is practically useful for us.
We are going to create one database named "SQLHub" and one table named "Orders" under the "SQLHub" database. The "Orders" table will have approximately 100,000 rows inserted dynamically.
--create one database which you can delete after running this example
create database SQLHub
GO
USE SQLHub
GO
--if the orders table is already there, you can drop it and then create a new one with the name "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO
--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO
--inserting 100000 fake rows into the table (BTW, thanks to Jacob Sebastian; the INSERT script to generate a big table is copied from one of his demos)
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 100000
      DATEADD(minute, ABS(a.object_id % 50000 ), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
Now we are ready to see how the query works. Let us run one simple "SELECT" T-SQL query with "SET STATISTICS IO".
SET STATISTICS IO ON
--you might have a different refno in your table, so please check it first
SELECT * from orders where refno=23
SET STATISTICS IO OFF
--here is the stats info that came via "SET STATISTICS IO ON" in the "Messages" tab.
--(1982 row(s) affected)
--Table 'orders'. Scan count 1, logical reads 409, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
After running this query, you get its results in the "Results" tab and some stats in the "Messages" tab; look at the screen capture.

You can see "logical reads 409". To find all the rows which have refno 23, SQL Server has to read 409 pages internally. Now let us create an index on the Refno column and see the stats again.
--creating a clustered index; it does not meet the criteria I have
--given in my index article, but it is ok for this example
CREATE CLUSTERED INDEX idx_refno ON orders(refno)
GO
After creating the clustered index on the "Orders" table for the "refno" column, let us run the same query again.
SET STATISTICS IO ON
SELECT * from orders where refno=23
SET STATISTICS IO OFF
--here is the stats info that came via "SET STATISTICS IO ON" in the "Messages" tab.
--(1982 row(s) affected)
--Table 'orders'. Scan count 1, logical reads 14, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
GO

See screen capture of this query:

 

You can now see "logical reads 14" rather than "409"; you see how much of a difference the clustered index has made? And you don't need to go to Profiler to see this difference; in this case there is not even a need for the execution plan. However, I am not against the use of Profiler and execution plans. They are both very intelligent and useful tools, but I just wanted to make it clear that we can get important information from a simple command like "SET STATISTICS IO" too.
Apart from "logical reads", it provides you more information in the "Messages" tab, which you can see above, like "scan count", "physical reads", "read-ahead reads", "lob logical reads", "lob physical reads" etc.; you can get a description of all of these from Microsoft's MSDN.
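A closely related switch worth using alongside it is SET STATISTICS TIME, which reports parse/compile and execution CPU time and elapsed time for each statement in the same "Messages" tab; a minimal sketch against the same table:
SET STATISTICS TIME ON
SELECT * from orders where refno=23
SET STATISTICS TIME OFF
GO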
If you want to refer to all the other articles related to indexes, click here.