Thursday, July 18, 2013

List Table Names, no.of Columns & Rows

Query to retrieve all the table names with no.of columns and no.of rows

 

Note: This applies from SQL Server 2005 onwards.

Recently, I started working on a performance issue where a customer reported that data retrieval from a table was taking a long time. The table design was a real shock to us: most of the tables have more than 300 columns and are used by an OLTP system. We wanted to retrieve all the table names along with the total no.of columns and no.of rows for each table in a single report. This led me to prepare a small query with basic joins between the sysobjects, sysindexes and sys.columns system views.

Here is the query which has given me the desired output.

select a.name as ObjectName, b.Rows, count(c.object_id) AS [No.Of.Columns]
from sysobjects a
JOIN sysindexes b ON a.id = b.id
JOIN sys.columns c ON a.id = c.object_id
where a.type = 'U' and b.indid <= 1
group by c.object_id, b.rows, a.name
order by 3 DESC
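The compatibility views above still work, but from SQL Server 2005 onwards the same report can also be built from the newer catalog views. Here is a possible variant (a sketch, not the query I used above) with sys.tables, sys.partitions and sys.columns; heap and clustered-index rows live in partitions with index_id 0 or 1:

```sql
-- One row per user table: name, row count and column count
SELECT t.name AS ObjectName,
       (SELECT SUM(p.rows) FROM sys.partitions p
        WHERE p.object_id = t.object_id AND p.index_id <= 1) AS [Rows],
       (SELECT COUNT(*) FROM sys.columns c
        WHERE c.object_id = t.object_id) AS [No.Of.Columns]
FROM sys.tables t
ORDER BY 3 DESC;
```

The correlated subqueries avoid the row multiplication you would get by joining sys.partitions and sys.columns together in one FROM clause.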

Here is a sample report for the above query which I ran against the AdventureWorksDW2012 database.

Sample output of the query
 

Monday, July 15, 2013

What's new in the new Task Manager ?

Task Manager changes in Windows Server 2012/Windows 8

 
Windows Server 2012/Windows 8 surfaces a lot of important information in its Task Manager. I would like to demonstrate some of the details that are useful for DBAs.
 
Here is a snap of how the Task Manager looks in Windows Server 2012. This applies to Windows 8 as well.
 
Fig 1: Task Manager in Windows 8 Professional X64
 
 
From the image above, we can learn some interesting things about the CPU.

Looking at the top right corner, we can see the processor type. My laptop has an Intel Core i5 processor, which is shown there.

Looking at the bottom right corner of the image, we can see some pretty useful information.

Maximum speed: the rated clock speed of the CPU installed in the machine.

As you can see in figures 2 & 3 below, a socket is the holder on the motherboard into which the processor is fitted. Sockets in the Task Manager indicates how many CPU sockets exist in the machine.


                                            Fig 2: Socket         Fig 3: Processor installed in the socket.
Cores: the value in Fig 1 shows how many cores exist per processor in each socket of the machine. For example, as you can see below, a dual-core processor has 2 cores.

Fig 4: Dual core processor (Core 0,1)

Virtualization: indicates whether hardware virtualization support is enabled for the processor.

Logical Processors: the count of logical CPUs in the machine. For Intel processors, if Hyper-Threading (HT) is enabled, the logical processor count is double the core count. This can give some additional performance, but care should be taken, as HT may reduce SQL Server performance in some specific scenarios.

L1,2,3 Cache: the sizes of the L1, L2 and L3 caches, which the CPU uses to keep the data it needs close at hand during execution.
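Much of this CPU picture is also visible from inside SQL Server itself. As a sketch, using the sys.dm_os_sys_info DMV (the socket calculation is an approximation based on how these two columns relate):

```sql
-- CPU information as SQL Server sees it
SELECT cpu_count,                                  -- logical processors visible to SQL Server
       hyperthread_ratio,                          -- logical processors per physical package
       cpu_count / hyperthread_ratio AS sockets    -- approximate physical socket count
FROM sys.dm_os_sys_info;
```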

I recommend reading the book "SQL Server Hardware", authored by the famous, and my favourite, SQL Server MVP "Glenn Berry", for more information about hardware choices for SQL Server. My special thanks to "Glenn Berry" for correcting a mistake in the initial version of this article.
 
 

Saturday, July 13, 2013

RESTART the Restore

How to resume a restore process from the point where it failed ?

Note: This applies to SQL Server 2005 or later.

Backup and restore is part of every SQL Server DBA's day-to-day experience, and we get used to it as we perform lots of restores. I'm sure that at some point we have all hit the situation where the restore fails midway, or just before it completes, and this can happen for many reasons. One of the most common issues I have found is a communication drop between SQL Server and the backup file.

For example, suppose we restore from a file located on a network path and, for whatever reason, SQL Server loses communication with the backup file while the restore is running. After waiting for some time, SQL Server will terminate the restore session and we will see an error message about the lost communication. If the restore is for a small database, we can bear to restart it from the beginning. But imagine a large database restoration; that is a sure problem.

I had this very situation: every month we needed to restore a 600+ GB database between two SQL Server instances located at different geographical locations, and the restore always took more than 24hrs because the backup file was located remotely and we were using a remote share path in the restore command. If the restore failed, we had to restart it from scratch, which increased the outage for the application. A way to resume the restore from the point where it failed makes us much happier to touch the restore of that database again, and helps minimise the downtime for the applications.

Using the option "RESTART", resuming the restore process is possible. Using this option is as simple as any other option such as MOVE or REPLACE.


Let me show a demo of using it. I have the backup file on a USB drive which is connected to my system as the D: drive. I initially used the script below to restore the database to my instance.

USE [master]
RESTORE DATABASE [RESTART] FROM DISK = N'D:\SQL Server\ADWDB2012.bak' WITH 
MOVE N'AdventureWorksDW2012_Data' TO N'C:\1\AdventureWorksDW2012_Data.mdf', 
MOVE N'AdventureWorksDW2012_Log' TO N'C:\1\AdventureWorksDW2012_log.ldf', STATS = 5

After the restore started and reached 10%, I unplugged the USB drive to make the restore process fail, and I saw the error below.

5 percent processed.
10 percent processed.
Msg 3203, Level 16, State 1, Line 2
Read on "D:\SQL Server\ADWDB2012.bak" failed: 1110(The media in the drive may have changed.)
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.
 
So, now the restore process has failed. Let's try using the RESTART option and see how it goes. I plugged my USB drive with the backup file back in and executed the same query as above, adding the RESTART option as you can see below.


USE [master]
RESTORE DATABASE [RESTART] FROM DISK = N'D:\SQL Server\ADWDB2012.bak' WITH 
MOVE N'AdventureWorksDW2012_Data' TO N'C:\1\AdventureWorksDW2012_Data.mdf', 
MOVE N'AdventureWorksDW2012_Log' TO N'C:\1\AdventureWorksDW2012_log.ldf', 
STATS = 5, RESTART

Now, I could see the restore resume from where it left off, and the database became accessible.
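While a long restore is running (or resuming), its progress can be watched from another session. A sketch using the sys.dm_exec_requests DMV, which exposes percent_complete for backup and restore commands:

```sql
-- Progress of any running backup/restore on the instance
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('RESTORE DATABASE', 'BACKUP DATABASE');
```

This is handy for confirming that a resumed restore really did pick up beyond the point where it previously failed.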

I would like to alert you about the checkpoint file that a restore process creates, and the impact on the RESTART option if that file is deleted. When you run a restore operation, SQL Server creates a checkpoint file with a .CKP extension in the default backup location, as you can see here.




If, for whatever reason, this file is not available or accessible, the RESTART option will not help and we will see the message below.
 
The restart-checkpoint file
'C:\Program Files\Microsoft SQL Server\MSSQL11.V2014\MSSQL\Backup\RestoreCheckpointDB6.CKP' was not found.
The RESTORE command will continue from the beginning as if RESTART had not been specified.


Hope this helps when you have a restore process fail and need to restart it.

Thursday, July 11, 2013

In-Memory OLTP (Hekaton)

In-Memory technology in SQL Server 2014

(Code name Hekaton)

 
Like the other players in the database space, Microsoft started evolving its In-Memory database technology back in 2009. About 4 years after they started working on it, we now have the initial release shipping with SQL Server 2014. Microsoft announced SQL Server 2014 at TechEd 2013, and you can download the CTP1 version today. I have downloaded my copy and started learning it, as this technology can address many concerns that the previous releases could not. I would like to take this opportunity to share my experience and knowledge about it.
 

What is In-Memory technology ?

In simple words, this technology facilitates holding the data of a database completely within memory (RAM). With the whole data available in RAM, we can overcome the latency of the disk I/O subsystem.
 

How is Hekaton different from the normal SQL Server database engine ?

The moment I started learning In-Memory OLTP, I was scratching my head to understand how it really benefits us when the server has more RAM available than the database size. Well, even if SQL Server has more RAM than the database needs, it still operates on 8KB data pages, even in the buffer pool after the data has been completely read in. Imagine we need to find a row in a table that satisfies a search condition: SQL Server retrieves a couple of data pages from disk into the buffer pool and returns the row. In doing so, SQL Server uses locks and latches to maintain the logical and physical consistency of the data. So even with plenty of RAM, we have no way to mitigate lock and latch contention. Also, when a page is locked or latched so that one session can operate on some portion of it, another session that needs a different portion of the data in the same page gets blocked. These are the concerns Hekaton mainly addresses. With Hekaton, the database engine becomes lock and latch free.
 

Is this In-Memory a new separate release of SQL Server ?

Microsoft has introduced this technology as a feature of SQL Server 2014, similar to other features such as AlwaysOn AGs. This means we can choose which tables are to be memory-optimized.
 

What is memory-optimized ?

Now with SQL Server 2014, we have two types of tables available: disk-based and memory-optimized. The memory-optimized tables are used by Hekaton and are not maintained in 8KB allocation units. Disk-based tables are the conventional tables, stored in 8KB allocation units on disk. I will talk more about this in a separate blog.
 

Can I have both disk based and memory optimized tables for a single database?

Yes, we have the option to migrate selected tables of a database to Hekaton.
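As a sketch of what this looks like in CTP1 (the database name, filegroup name, table and paths here are hypothetical; a memory-optimized filegroup must exist before the table can be created):

```sql
-- A memory-optimized filegroup is a prerequisite (names/paths are examples)
ALTER DATABASE HKDemo ADD FILEGROUP HKDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE HKDemo ADD FILE (NAME = 'HKDemo_mod1', FILENAME = 'C:\1\HKDemo_mod1')
    TO FILEGROUP HKDemo_mod;
GO
-- A memory-optimized table with a hash index on its primary key
CREATE TABLE dbo.OrdersHK
(
    OrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerID INT NOT NULL,
    OrderDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Note the DURABILITY option: SCHEMA_AND_DATA keeps the data recoverable after a restart, while the table contents still live entirely in memory at runtime.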
 

Which editions of SQL Server provide this feature?

This would be made available in Enterprise, Developer and Evaluation editions of SQL Server 2014.

Any other features available in Hekaton ?

Yes, Stored Procedures can now be natively compiled, which means an SP can be compiled and executed as a C program, requiring fewer CPU cycles than the normal SP execution process in SQL Server. Under the hood, the SP gets converted into a C program that executes the logic we specify in the SP.
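Here is a minimal sketch of a natively compiled procedure over the hypothetical memory-optimized table from the earlier example (natively compiled procedures require NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS and a BEGIN ATOMIC block):

```sql
CREATE PROCEDURE dbo.usp_InsertOrderHK
    @OrderID INT, @CustomerID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to native code; only memory-optimized tables may be referenced
    INSERT INTO dbo.OrdersHK (OrderID, CustomerID, OrderDate)
    VALUES (@OrderID, @CustomerID, SYSDATETIME());
END;
```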
 

What is the benefit of Native SPs ?

CPU clock speeds are no longer growing as aggressively as before, so complex SP execution can become a CPU bottleneck. Using native SPs, the logic executes in fewer CPU cycles, which helps us overcome that bottleneck.

 

Will I see a guaranteed benefit by moving tables to Memory optimized or using Hekaton?

Well, it depends. If there is no latch or lock contention on the tables under the regular workload, there may be no benefit in migrating them to memory-optimized.
 

What is the impact of Paging with Hekaton ?

No Paging.
 

What happens if there is no free memory available for new inserts ?

Well, SQL Server simply rejects new transactions. It is like a transaction log file: when there is no free space in the transaction log, new modifications are not allowed.
 

Any maximum size limit exists for a table which is memory optimized ?

Yes, as of CTP1, SQL Server 2014 supports a maximum of 512GB.
 

Is there any change for the max length of the row allowed with Hekaton ?

Like the normal database engine, Hekaton supports up to 8060 bytes per row.
 

Does Hekaton use the BufferPool ?

No, Hekaton uses a non-buffer-pool area of memory.

Wednesday, July 10, 2013

Clustered Columnstore index

How to create a Clustered Columnstore index in SQL Server 2014 ?



The recent release of SQL Server 2014 (CTP1 version) provides "updatable columnstore indexes". The columnstore index is a type of index introduced with SQL Server 2012 that improves query performance for data warehouse workloads.

In brief, columnstore technology differs from conventional B-Tree indexes, which use a row-store mechanism. In a columnstore index, the data from each column is stored in a separate set of disk pages, which can make queries run faster. But in SQL Server 2012, the biggest problem with this index is that it is read-only, meaning updates to the underlying data are not allowed while a columnstore index exists on the table. Also, in SQL Server 2012 a columnstore index can only be non-clustered.

Before I installed SQL Server 2014 CTP1, I was under the impression that this was still just the non-clustered columnstore index, except now updatable when the underlying data changes. I wanted to check how this works and, to my surprise, there is an option to create a CLUSTERED columnstore index, as shown in Figure 1. I downloaded the AdventureWorksDW2012 sample database and attached it to the SQL Server 2014 CTP1 instance installed on my machine. I then chose the largest table in this database, "FactProductInventory", to create a clustered columnstore index on. But it showed me an error saying this table does not satisfy the prerequisites for a clustered columnstore index; the reason was that the table has foreign key constraints associated with it. So I imported the data into a new table named "FactProductInventory1" with no foreign key constraints, and created the clustered columnstore index on that.


Figure 1: Creating Clustered Columnstore index in SQL Server 2014.
 
As I have marked in Figure 1 above, we can see that multiple columnstore index types (clustered and non-clustered) are available in SQL Server 2014. I have observed a few constraints on creating them; please refer to the limitations section below.

Unlike non-clustered columnstore indexes or B-Tree indexes, for a clustered columnstore index there is no option to choose particular columns for the key. This indicates that a clustered columnstore index stores all of the table's columns. And remember that we can create only one clustered columnstore index on a table, similar to a B-Tree clustered index.
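The same index can be created with T-SQL instead of the SSMS dialog. A sketch against the copied table described above:

```sql
-- All columns are included automatically; no column list is specified
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactProductInventory1
ON dbo.FactProductInventory1;
```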



Figure 2: Index creation page.
 
Figure 3 below shows how the clustered columnstore index appears in the Indexes section of a table in SQL Server Management Studio.



Figure 3: Columnstore index of clustered type created.


Limitations I have observed for the Clustered Columnstore index:
  1. We can’t create one if the table has any foreign key constraints associated with it.
  2. Only one clustered columnstore index is possible, similar to the conventional row-store clustered index.
  3. A clustered columnstore index can’t be created when the table already has a clustered row-store index.
  4. There is no option to create the clustered columnstore index on selected columns only.
  5. No non-clustered columnstore index is allowed if we have a clustered columnstore index on the table.
 
-----------------------------------------------------------------------------------------------------------------