SQL Server 2016 Service Pack 1

I started working with Microsoft SQL Server 7.0 in late 1999, and even today I get excited about every new version of Microsoft SQL Server.

SQL Server 2016 has been on the market for quite some time, but Microsoft has now finally released Service Pack 1 for SQL Server 2016 for download. This is one of the biggest releases since SQL Server 2012, so I am happy to officially welcome SQL Server 2016: I can now start using SQL Server 2016 on my production systems and upgrade my old SQL Server boxes to the latest RDBMS.

I am happy that Microsoft has brought many enterprise-level features into the Standard edition of SQL Server 2016. At the same time, I am a little disappointed, because I see no sign of a horizontal scale-out feature in this big release, something that is becoming essential in today's data world.

Let us not dwell on the negatives today, because I really want to adopt this new release in production as soon as possible, especially the powerful Standard edition (I can save my company a lot of money by using Standard edition).

Here is the official product page of Microsoft SQL Server 2016. I have already downloaded my free copy of SQL Server 2016 Developer Edition.


I could not resist mentioning some of my favorite features, mostly enterprise-level ones, which are now available in SQL Server 2016 Standard edition.

Dynamic Data Masking: Data security is becoming critical these days, especially if you are dealing with sensitive information such as SSNs or credit card numbers. With dynamic data masking, you can store the full SSN/credit card number in the database while users see only the digits you want to expose (perhaps the last few); a user who holds the UNMASK permission can still see the complete value.
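
Here is a minimal sketch of how masking could be declared (the table, columns, and user name are hypothetical examples):

[sourcecode language="sql"]
-- Store the full SSN but show only the last four digits to non-privileged users
CREATE TABLE dbo.Customer
(
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL,
    SSN CHAR(11) MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)') NOT NULL,
    CreditCardNumber VARCHAR(19) MASKED WITH (FUNCTION = 'default()') NULL
);

-- Users see masked values unless they hold the UNMASK permission
GRANT UNMASK TO PayrollAuditor;  -- PayrollAuditor is an assumed database user/role
[/sourcecode]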

Row Level Security: Row level security is very important especially (but not limited to) for SaaS products. I might have billing/payroll information for many different vendors in one billing/payroll table, but Vendor1 should not be able to see Vendor2's data.
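
A minimal sketch of how this could be enforced with a security policy (the dbo.Billing table, VendorID column, and SESSION_CONTEXT key are assumptions for illustration):

[sourcecode language="sql"]
-- Predicate function: a row is visible only when its VendorID matches the session context
CREATE FUNCTION dbo.fn_VendorFilter (@VendorID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS AccessAllowed
       WHERE @VendorID = CAST(SESSION_CONTEXT(N'VendorID') AS INT);
GO

-- Attach the predicate to the table as a filter
CREATE SECURITY POLICY VendorIsolationPolicy
ADD FILTER PREDICATE dbo.fn_VendorFilter(VendorID) ON dbo.Billing
WITH (STATE = ON);
GO
[/sourcecode]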

Database Snapshot: As per MSDN, A database snapshot is a read-only, static view of a SQL Server database (the source database). The database snapshot is transactionally consistent with the source database as of the moment of the snapshot’s creation. A database snapshot always resides on the same server instance as its source database. As the source database is updated, the database snapshot is updated. Therefore, the longer a database snapshot exists, the more likely it is to use up its available disk space.
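
Creating one is a single statement; here is a sketch against AdventureWorks2012 (the logical file name and snapshot path are assumptions, so check yours in sys.database_files first):

[sourcecode language="sql"]
CREATE DATABASE AdventureWorks2012_Snap
ON ( NAME = AdventureWorks2012_Data,
     FILENAME = 'D:\Snapshots\AdventureWorks2012_Snap.ss' )
AS SNAPSHOT OF AdventureWorks2012;
GO

-- Query the snapshot like any read-only database
SELECT COUNT(*) FROM AdventureWorks2012_Snap.HumanResources.Department;
[/sourcecode]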

I am still trying to understand where I can use this feature in the production systems I manage, as it has a list of pros and cons, and I need to decide whether the pros outweigh the cons.

Columnstore: I first learned about columnstore indexes around the SQL Server 2008 R2 days during a BI project. The columnstore index has been available in the database engine since SQL Server 2012, but I still think it is most useful for analytical queries; maybe I am simply too used to traditional row-store indexes, especially non-clustered covering indexes.
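
If you want to try it on a reporting-heavy table, a nonclustered columnstore index is easy to add; the table and columns below are hypothetical:

[sourcecode language="sql"]
-- Speeds up scans/aggregations over large fact-style tables
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_SalesDetail_ColumnStore
ON dbo.SalesDetail (ProductID, OrderQty, UnitPrice, OrderDate);
GO
[/sourcecode]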

Compression: SQL Server 2016 supports the GZip algorithm via the COMPRESS and DECOMPRESS functions, which you can use during DML operations. Finally, I can save some more valuable SSD disk space on my servers.
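
A quick sketch of the new functions (COMPRESS returns GZip-compressed VARBINARY(MAX) and DECOMPRESS reverses it):

[sourcecode language="sql"]
DECLARE @Original NVARCHAR(MAX) = REPLICATE(N'SQL Server 2016 ', 1000);
DECLARE @Compressed VARBINARY(MAX) = COMPRESS(@Original);

SELECT DATALENGTH(@Original)   AS OriginalBytes,
       DATALENGTH(@Compressed) AS CompressedBytes,
       CAST(DECOMPRESS(@Compressed) AS NVARCHAR(MAX)) AS RoundTripText;
[/sourcecode]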

In-Memory OLTP: In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine and optimized for transaction processing. In-Memory OLTP can significantly increase application performance by improving throughput and reducing latency for transaction processing.
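
A minimal sketch of a memory-optimized table (it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup with a container; the table itself is hypothetical):

[sourcecode language="sql"]
CREATE TABLE dbo.ShoppingCart
(
    CartID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
[/sourcecode]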

Always Encrypted: As per MSDN, Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (e.g. U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access). By ensuring on-premises database administrators, cloud database operators, or other high-privileged, but unauthorized users, cannot access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to encrypt data at rest and in use for storage in Azure, to enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
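
For reference, an encrypted column looks roughly like this in a table definition (it assumes a column master key and a column encryption key named CEK_Auto1 have already been provisioned, typically via the SSMS wizard):

[sourcecode language="sql"]
CREATE TABLE dbo.Patient
(
    PatientID INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
[/sourcecode]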

PolyBase: Last but not least on my favorites list, PolyBase is a query engine designed to bridge the gap between the relational data in SQL Server and the large volumes of unstructured data stored mostly in Hadoop, which can then be used for analytics. We can even use PolyBase with Azure Blob storage.
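
A rough sketch of what querying Hadoop data through PolyBase involves (the name node address, path, and table definition are all assumptions, and PolyBase must be installed and configured first):

[sourcecode language="sql"]
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

CREATE EXTERNAL TABLE dbo.WebClickStream
(
    ClickDate DATETIME2,
    Url VARCHAR(500),
    UserID INT
)
WITH (LOCATION = '/clickstream/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFormat);

-- Once created, the external table can be joined with local tables in ordinary T-SQL
SELECT TOP (100) * FROM dbo.WebClickStream;
[/sourcecode]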

  

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

 

Search objects like Stored Procedure, View, Table in SQL Server


It happens many times that we don’t know the exact name of a stored procedure, table, or view, and we either have to scroll through Object Explorer looking for the object or use the “Filter” facility of SSMS. I prefer one small function I wrote long ago, because I am a script person and try to avoid the mouse as much as possible.

Sometimes we may want to find out whether the function, table, or view we are looking for is used in any other stored procedure or view, to check dependencies; I use the same function for that. By the way, you can also use “sp_depends” or “INFORMATION_SCHEMA.ROUTINES” to find an object’s dependencies, which is a more accurate way of doing it.
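
For example, either of these finds objects that reference the Department table (the object names here are from AdventureWorks2012):

[sourcecode language="sql"]
EXEC sp_depends 'HumanResources.Department';

SELECT ROUTINE_SCHEMA, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%Department%';
[/sourcecode]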

Here is the table-valued user-defined function which I developed long ago and am still using. In fact, I am quite used to it by now.

[sourcecode language="sql"]
CREATE FUNCTION ObjectSearch
(
    @SearchString VARCHAR(100)
)
RETURNS TABLE
AS
RETURN
    SELECT DISTINCT
        scm.name AS SchemaName,
        obj.name AS ObjectName,
        obj.type_desc AS ObjectType
    FROM sys.objects obj
    INNER JOIN syscomments sysCom ON sysCom.id = obj.object_id
    INNER JOIN sys.schemas scm ON scm.schema_id = obj.schema_id
    WHERE obj.is_ms_shipped = 0
        AND sysCom.text LIKE '%' + @SearchString + '%'
GO
[/sourcecode]

Okay, now we have the “ObjectSearch” function in our database. I have created this function in my AdventureWorks2012 database, and now I want to find where the “Department” table is used, so I will execute the following simple SELECT statement against the “ObjectSearch” function.

[sourcecode language="sql"]
SELECT * FROM ObjectSearch('Department')
GO
[/sourcecode]

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Check mirroring status from mirroring monitor in SQL Server


Mirroring is one of the best and easiest-to-maintain high availability features, at least as long as your SQL Server version is 2008 R2 or earlier. I wrote an article about how to set up certificate-based mirroring in SQL Server some time back. Once you set up mirroring, you have to keep an eye on its status: whether it is “Synchronized” or still “Synchronizing”, whether it is on or off, and so on.

SQL Server provides one powerful tool to keep watch on mirroring status, called the “Database Mirroring Monitor”, which you can open from the context menu that appears when you right-click the principal/mirror database. Have a look at the following screen capture for further information.

Once you click “Launch Database Mirroring Monitor”, you have to add your server in the monitor, and it will look like the following screen capture:

This monitor provides important information such as send rate, restore rate, and commit overhead, along with whether your principal and mirror are completely synchronized or still synchronizing.
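
If you prefer a script over the GUI monitor (I usually do), the sys.database_mirroring catalog view exposes much of the same state information:

[sourcecode language="sql"]
SELECT DB_NAME(database_id) AS DatabaseName,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_partner_instance
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
[/sourcecode]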

If you like this article, do like the “Extreme-Advice” page on Facebook.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com

Note: Microsoft Books online is a default reference of all articles.

Find Blocking in SQL Server and use it to send an alert


As a DBA, I am always interested in knowing whether blocking is going on in my environment, how long the blocking has existed, and which queries/stored procedures are creating it.

Blocking generally occurs when one SQL Server connection (a user process or application process) places a lock on a table or on a number of rows, and a second connection attempts to read or modify the data locked by the first connection. Depending on the type of lock, this can cause the second connection to wait until the first connection releases its lock. A blocked connection waits indefinitely for the blocking connection to release its lock.

Sometimes blocking turns into a deadlock situation, where neither connection is able to finish its job and SQL Server identifies one of them as the deadlock victim. In short, excessive locking, blocking, and deadlocking are not good to have around, which is why I am always eager to know the details of any blocking happening in my system.

A few days back, I created a SQL Server view called “WhatIsGoingOn”, which uses the sys.dm_exec_requests and sys.dm_exec_sessions DMVs to report the currently running processes in a SQL Server instance along with each session ID and blocking session ID, if any. I am going to use the “WhatIsGoingOn” view in a stored procedure to find only blocked sessions and insert their details into the following SQL Server table.

[sourcecode language="sql"]
CREATE TABLE tblBlockingDetails
(
    ID INT IDENTITY(1,1),
    ObjectName SYSNAME NULL,
    BlockingObjectName SYSNAME NULL,
    StatementText VARCHAR(MAX),
    BlockingStatementText VARCHAR(MAX),
    SessionID INT,
    BlockingSessionID INT,
    Status VARCHAR(50),
    BlockingStatus VARCHAR(50),
    DateCreated DATETIME
)
[/sourcecode]

After creating the above table in my database, I will create the following stored procedure, which captures the details of any blocked session and inserts them into the “tblBlockingDetails” table.

[sourcecode language="sql"]
CREATE PROC uspInsertBlockingRecords
AS
INSERT INTO tblBlockingDetails
SELECT
    W1.ObjectName AS ObjectName,
    W2.ObjectName AS BlockingObjectName,
    W1.Statement_Text AS StatementText,
    W2.Statement_Text AS BlockingStatementText,
    W1.SessionID AS SessionID,
    W2.SessionID AS BlockingSessionID,
    W1.status AS Status,
    W2.status AS BlockingStatus,
    GETDATE() AS DateCreated
FROM WhatIsGoingOn W1
INNER JOIN WhatIsGoingOn W2
    ON W1.BlockingWith > 0
    AND W1.BlockingWith = W2.SessionID
[/sourcecode]

Now I can schedule the “uspInsertBlockingRecords” stored procedure to run every few minutes during business hours as a SQL Server Agent job, so that I can look at the details of any blocking whenever I wish in the “tblBlockingDetails” table. I can even set up an email schedule to be notified about blocking, if I want.
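
For example, reviewing whatever blocking was captured during the last day can be as simple as this:

[sourcecode language="sql"]
SELECT ObjectName, BlockingObjectName, SessionID, BlockingSessionID, DateCreated
FROM tblBlockingDetails
WHERE DateCreated >= DATEADD(DAY, -1, GETDATE())
ORDER BY DateCreated DESC;
[/sourcecode]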

If you want to force a deadlock situation to observe its behavior, you can use the script given in one of my past articles here.

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com


Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Write process in TRANSACTION LOG (.LDF) file in SQL Server

I wrote an article about the transaction log file (.LDF) yesterday; today I am presenting one more article on the same subject.

There is a myth in the minds of many developers that only DML statements like INSERT, UPDATE, and DELETE are written to the log file, but the truth is that SQL Server writes DDL operations like CREATE TABLE or CREATE INDEX to the log file too.

Let us verify this fact with a small demonstration.

  • Open SQL Server Management Studio
  • Keep two query window open
  • In the first query window, execute the following T-SQL code, which begins a transaction and creates a SQL Server table but does not COMMIT or ROLLBACK

[sourcecode language="sql"]
BEGIN TRAN
CREATE TABLE TestTable
(
    Col1 INT
)
[/sourcecode]

  • Since our CREATE TABLE script ran successfully in the first query window, we will execute a simple SELECT statement against this table in the second query window

[sourcecode language="sql"]SELECT * FROM TestTable[/sourcecode]

  • Your SELECT statement won’t finish executing until you either COMMIT or ROLLBACK the transaction. So let us now go to the first query window and execute the following statement

[sourcecode language="sql"]
ROLLBACK
[/sourcecode]

  • If you go to your second query window, you will observe that your SELECT statement has finished with an error something like “Invalid object name 'TestTable'”
  • Now go back to the first query window and execute the following command once again.

[sourcecode language="sql"]
BEGIN TRAN
CREATE TABLE TestTable
(
    Col1 INT
)
[/sourcecode]

  • Since our CREATE TABLE script ran successfully in the first query window, we will again execute a simple SELECT statement against this table in the second query window

[sourcecode language="sql"]SELECT * FROM TestTable[/sourcecode]

  • Your SELECT statement won’t finish executing until you either COMMIT or ROLLBACK the transaction. So let us now go to the first query window and execute the following statement

[sourcecode language="sql"]
COMMIT
[/sourcecode]

Now look at your second query window and you will see your SELECT query has successfully finished execution.

Doesn’t this prove that DDL commands are written to the log file too?
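
If you want to look at the log records themselves, the undocumented fn_dblog function can show them (treat this as a sketch and use it only on test systems):

[sourcecode language="sql"]
-- Returns the active portion of the current database's transaction log
SELECT [Current LSN], Operation, Context, [Transaction Name]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] IS NOT NULL;
[/sourcecode]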

Reference: Ritesh Shah

http://Extreme-Advice.com

http://www.sqlhub.com



Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Transaction Log File (.LDF) in SQL Server, along with the ACID properties and database recovery models

A SQL Server database is created with two types of files:

Data File (File with extension .MDF/.NDF)

Log File (File with extension .LDF)

There can be one or more data files, but in practice each database has a single physical log file (adding more gives no benefit, since the log is written sequentially). You have control over the size of the log file.

There are many mysteries and myths about the log file in developers’ minds, so I thought I would take the initiative to break the ice.

What is the need for a log file?

The log file plays an important role in SQL Server databases. Any DML or schema-changing T-SQL you run is written to the log file first and is later saved to the data file. With this architecture, SQL Server ensures your data integrity: if anything goes wrong and your T-SQL command does not finish, SQL Server can roll it back (or the user can, if an explicit transaction was used). Because of the transaction log file, SQL Server can guarantee ACID (Atomicity, Consistency, Isolation, and Durability).

Before we move further, it is worth looking at the definition of ACID.

Atomicity ensures that either all operations in a transaction apply or none do. It means either 100% COMMIT or 100% ROLLBACK.

Consistency ensures that a transaction either creates a new and valid state of data, or returns all data to its state before the transaction was started. It means that you have guarantee that a transaction never leaves your database in a half-finished state.

Isolation ensures that transactions remain separated from one another until they are finished.

Durability ensures that the data is saved by the system such that, even in the event of a failure and system restart, the data is available in its correct state.

Since we can have more than one data file, we can either keep them on one drive or distribute them over multiple disks according to our performance and disk space requirements, but we have only one transaction log file (.LDF), so it is necessary to size the .LDF file appropriately for the database’s needs.

Note: Though we have only one transaction log file, it internally contains multiple Virtual Log Files (VLFs). That is a different topic and out of the scope of this article, so I will come up with a separate article on it.

There is no formula that tells you the exact size of your database transaction log file. It depends entirely on the requirements of the business, which vary from case to case. All I can suggest is to keep the log file neither much bigger than your requirement nor too small.

A bigger transaction log file takes longer to recover and can affect the overall performance of the database, while a log file that is too small results in frequent auto-grow events.

To prevent the transaction log file from frequent auto-grow, keep its size reasonably big to accommodate your business transactions. No matter how big a transaction log file you have, it will fill up sooner or later, and at that time you have no option other than increasing the transaction log file size or truncating the log.

I am personally against both of the approaches mentioned in the above paragraph. We shouldn’t truncate the log file, at least not if we have another alternative, because truncating the log file breaks the log backup chain.

Instead, I would suggest taking transaction log backups frequently (a minimal example follows the list below), which helps you in two ways:

1.)    You can restore your database to a point in time whenever needed

2.)    It frees up the inactive part of the log file so that new transactions can reuse that space inside the transaction log file and you don’t need to grow it as much
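
Here is a minimal sketch of such a log backup (the database name and path are placeholders; it assumes the database is in the FULL or BULK_LOGGED recovery model and a full backup already exists):

[sourcecode language="sql"]
BACKUP LOG AdventureWorks2012
TO DISK = 'D:\Backup\AdventureWorks2012_log.trn'
WITH NAME = 'AdventureWorks2012 transaction log backup';
[/sourcecode]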

Last but not least, there are three recovery models (Simple, Bulk-Logged, and Full) for a database, and they affect the transaction log file significantly.

Simple: This recovery model does not keep the transaction log for long. Data is still written to the transaction log file, but as soon as a checkpoint runs and the data is transferred to the data file, the log is truncated automatically, so there is no possibility of taking transaction log backups. In a time of crisis, we can therefore only restore the last full backup of the database available.

Bulk-Logged: This recovery model uses the transaction log file and fully logs all DML operations except bulk operations such as BULK INSERT, BCP, CREATE INDEX, SELECT INTO, and text/image operations, which are minimally logged. We can take transaction log backups in this model. In a time of crisis, we can restore the full database backup plus whatever transaction log backups we have.

Full: This is my favorite recovery model for mission-critical production databases; it logs each and every transaction and lets us achieve point-in-time recovery.
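
For reference, switching and checking the recovery model is a one-liner each (the database name is just an example):

[sourcecode language="sql"]
ALTER DATABASE AdventureWorks2012 SET RECOVERY FULL;  -- or SIMPLE / BULK_LOGGED

SELECT name, recovery_model_desc FROM sys.databases;
[/sourcecode]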

Reference: Ritesh Shah

http://Extreme-Advice.com
http://www.sqlhub.com



Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Find locked table name with duration and count in SQL Server

Finding locked tables in SQL Server is one of the essential tasks of a DBA. A table locked for a long time adversely affects the performance of the application, especially during peak business hours.

I have one SQL Server view which helps us find all the tables that are currently locked, along with how long they have been locked.

[sourcecode language="sql"]
CREATE VIEW WhatIsLocked
AS
SELECT
    Locks.request_session_id AS SessionID,
    Obj.Name AS LockedObjectName,
    DATEDIFF(second, ActTra.Transaction_begin_time, GETDATE()) AS Duration,
    ActTra.Transaction_begin_time,
    COUNT(*) AS Locks
FROM sys.dm_tran_locks Locks
JOIN sys.partitions Parti ON Parti.hobt_id = Locks.resource_associated_entity_id
JOIN sys.objects Obj ON Obj.object_id = Parti.object_id
JOIN sys.dm_exec_sessions ExeSess ON ExeSess.session_id = Locks.request_session_id
JOIN sys.dm_tran_session_transactions TranSess ON ExeSess.session_id = TranSess.session_id
JOIN sys.dm_tran_active_transactions ActTra ON TranSess.transaction_id = ActTra.transaction_id
WHERE resource_database_id = DB_ID() AND Obj.Type = 'U'
GROUP BY ActTra.Transaction_begin_time, Locks.request_session_id, Obj.Name
[/sourcecode]

You can use this view to keep an automated watch: query it every few minutes during business hours, and if you find a lock held longer than a certain duration, send an email with the details. You can even keep watch by putting this query in a web/Windows application which refreshes every few seconds and shows the locking details.
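
For a quick manual check, something like this lists anything locked for more than five minutes (the threshold is just an example):

[sourcecode language="sql"]
SELECT SessionID, LockedObjectName, Duration, Locks
FROM WhatIsLocked
WHERE Duration > 300  -- seconds; pick a threshold that suits your environment
ORDER BY Duration DESC;
[/sourcecode]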

I have written one more article in the past which displays locked tables, tables with an 'SCH-M' lock, and which command is causing the lock. To read that article in detail, click here.

Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Use sys.dm_exec_requests and sys.dm_exec_sessions to find who is logged in to your SQL Server instance

Many times we need to find out who is logged in to a SQL Server instance and with which tool/program/software. SQL Server gives us this very useful information through the following two DMVs.

sys.dm_exec_requests
sys.dm_exec_sessions

I have created a view in my SQL Server instance to capture all of this necessary information.

Here is the T-SQL script for it:

[sourcecode language="sql"]
CREATE VIEW WhoisLoggedIn
AS
SELECT
    ISNULL(sess.host_name, '') AS host_name,
    ISNULL(DB_NAME(req.database_id), '') AS database_Name,
    ISNULL(sess.program_name, '') AS ProgramName,
    ISNULL(sess.login_name, '') AS login_name,
    ISNULL(CAST(req.start_time AS VARCHAR), '') AS start_time,
    DATEDIFF(MINUTE, ISNULL(req.start_time, GETDATE()), GETDATE()) AS RunningMinutes,
    req.Session_id
FROM sys.dm_exec_requests req
INNER JOIN sys.dm_exec_sessions sess ON req.session_id = sess.session_id
WHERE
    --will eliminate all system processes
    req.session_id >= 50
    AND sess.host_name IS NOT NULL
[/sourcecode]

Once you create this view in your SQL Server, you don’t need to remember a complex query; you just use the friendly name “WhoisLoggedIn” in a standard SELECT statement like this:

[sourcecode language="sql"]
SELECT * FROM WhoisLoggedIn
[/sourcecode]

Note: I wrote a blog post in the past that finds currently running processes in SQL Server with the help of the same two DMVs; click here to read it.

If you wish, you can even use this view in your .NET web page/Windows application so that you don’t need to connect to the database directly to see the status.

Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Find currently running sessions/processes in a SQL Server database/instance with the help of sys.dm_exec_requests and sys.dm_exec_sessions

As a DBA, it is mandatory to keep an eye on your databases and your server, especially during business hours. SQL Server provides very helpful Dynamic Management Views (DMVs) for this situation, as follows:

sys.dm_exec_requests
sys.dm_exec_sessions

I have created one SQL Server view with the following script, which I use often during business hours to check whether everything is going smoothly.

 

[sourcecode language="sql"]
CREATE VIEW WhatIsGoingOn
AS
SELECT
    OBJECT_NAME(objectid) AS ObjectName
    ,SUBSTRING(stateText.text, (statement_start_offset/2)+1,
        ((CASE statement_end_offset
            WHEN -1 THEN DATALENGTH(stateText.text)
            ELSE statement_end_offset
          END - statement_start_offset)/2) + 1) AS statement_text
    ,DB_NAME(database_id) AS DatabaseName
    ,req.cpu_time AS CPU_Time
    ,DATEDIFF(minute, last_request_start_time, GETDATE()) AS RunningMinutes
    ,req.Percent_Complete
    ,sess.HOST_NAME AS RunningFrom
    ,LEFT(CLIENT_INTERFACE_NAME, 25) AS RunningBy
    ,sess.session_id AS SessionID
    ,req.blocking_session_id AS BlockingWith
    ,req.reads
    ,req.writes
    ,sess.[program_name]
    ,sess.login_name
    ,sess.status
    ,sess.last_request_start_time
    ,req.logical_reads
FROM sys.dm_exec_requests req
INNER JOIN sys.dm_exec_sessions sess ON sess.session_id = req.session_id
    AND sess.is_user_process = 1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS stateText
[/sourcecode]

Now, whenever I want to check whether anything is blocking my queries, which query or stored procedure has been executing since when, how many reads/writes each session has performed, or (for tasks like backup/restore) what percentage is complete, I can see all of it by executing one simple SELECT statement:

 

[sourcecode language="sql"]
SELECT * FROM WhatIsGoingOn
[/sourcecode]
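
To focus only on blocked sessions, you can filter on the BlockingWith column, for example:

[sourcecode language="sql"]
SELECT SessionID, BlockingWith, ObjectName, statement_text, RunningMinutes
FROM WhatIsGoingOn
WHERE BlockingWith > 0
[/sourcecode]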

If you wish, you can even use this view in your .NET web page/Windows application so that you don’t need to connect to the database directly to see the status.

 

Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.

Implementing physical Database Structure in SQL Server

Our book “SQL Server 2012 Performance Tuning Cookbook” was published this week. The book is based on the concept of performance tuning in SQL Server 2012, and one of its chapters is “Implementing Physical Database Structure”. The following article is based entirely on that chapter.
Database structure is one of the important factors that affect the performance of your database. Your database performance heavily depends on how you have physically placed your database objects and how you have configured your disk subsystem. Designing the physical layout of your database correctly is a key factor in improving the performance of your database queries and thus of your database. However, the correct decision on the physical design of the database depends on the hardware resources you have available, including the number of processors, the amount of RAM, and storage resources such as how many disks or RAID controllers you have in your database environment. The best practice when designing the physical layout of the database is to have multiple physical disks for your database: if you configure your database so that it spreads across multiple disks, it can benefit from parallel I/O operations.
The following are some of the decisions that influence your database performance:
  • Where do you place data files?
  • Where do you place log files?
  • Where do you place large objects?
  • Where do you place indexes?
  • Where do you place the tempdb database?
You can control the physical location of database objects within the database by using files and filegroups.
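
As a minimal sketch (database name, file paths, and filegroup names are hypothetical), files and filegroups let you decide exactly where each object lives:

[sourcecode language="sql"]
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_Data,    FILENAME = 'D:\Data\SalesDB_Data.mdf' ),
FILEGROUP FG_Indexes
    ( NAME = SalesDB_Indexes, FILENAME = 'E:\Data\SalesDB_Indexes.ndf' )
LOG ON
    ( NAME = SalesDB_Log,     FILENAME = 'F:\Log\SalesDB_Log.ldf' );
GO

-- Keep the table on PRIMARY and place its nonclustered index on the separate filegroup
CREATE TABLE dbo.Orders (OrderID INT NOT NULL PRIMARY KEY, CustomerID INT NOT NULL) ON [PRIMARY];
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON FG_Indexes;
[/sourcecode]
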
To read complete article kindly click here.
Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com.
Ask me any SQL Server related question at my “Ask Profile”.