Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.

To shrink a data or log file

  1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
  2. Expand Databases and then right-click the database that you want to shrink.
  3. Point to Tasks, point to Shrink, and then click Files or Database.
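The same operation can be run from T-SQL instead of the UI. A minimal sketch, assuming a database named MyDB01 and a logical file name of MyDB01 (check sys.database_files for your actual logical file names):

```sql
USE [MyDB01];
GO
-- Shrink a single data file to a target size of 1024 MB
-- (the logical file name here is an assumption).
DBCC SHRINKFILE (N'MyDB01', 1024);
GO
-- Or shrink the whole database, leaving 10 percent free space.
DBCC SHRINKDATABASE (N'MyDB01', 10);
GO
```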


Note: Shrinking the database via script seems to be much more stable than the UI. You can check the status of a running shrink using the scripts below.

Checking the Status of a Database Shrink

USE [MyDB01]

SELECT percent_complete, start_time, status, command,
       estimated_completion_time, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE command = 'DbccFilesCompact'

A more detailed variant also shows the text of the running statement and an estimated finish time:

SELECT a.command
     , b.text
     , a.percent_complete
     , done_in_minutes = a.estimated_completion_time / 1000 / 60
     , min_in_progress = DATEDIFF(MI, a.start_time, DATEADD(ms, a.estimated_completion_time, GETDATE()))
     , a.start_time
     , estimated_completion_time = DATEADD(ms, a.estimated_completion_time, GETDATE())
FROM sys.dm_exec_requests a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) b
WHERE a.command LIKE '%dbcc%'

Check Free Space

SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;

Best practices

Consider the following information when you plan to shrink a file:

  • A shrink operation is most effective after an operation that creates a large amount of unused space, such as a truncate table or a drop table operation.
  • Most databases require some available free space for regular day-to-day operations. If you shrink a database repeatedly and its size grows again, then it’s likely that regular operations require the shrunk space. In these cases, repeatedly shrinking the database is a wasted operation.
  • A shrink operation doesn’t preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This fragmentation is another reason not to repeatedly shrink the database.
  • Shrink multiple files in the same database sequentially instead of concurrently. Contention on system tables can cause blocking and lead to delays.
  • Typically it’s the log file that appears not to shrink. This non-shrinking is usually the result of a log file that hasn’t been truncated. To truncate the log, you can set the database recovery model to SIMPLE, or back up the log and then run the DBCC SHRINKFILE operation again.
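The sequential-shrink advice above can be sketched as a single batch that shrinks each file in turn, rather than running shrinks from parallel sessions (the logical file names here are assumptions; list yours from sys.database_files):

```sql
USE [MyDB01];
GO
-- Shrink files one after another in one session:
-- each DBCC SHRINKFILE completes before the next starts,
-- avoiding contention on system tables.
DBCC SHRINKFILE (N'MyDB01_Data1', 2048);
DBCC SHRINKFILE (N'MyDB01_Data2', 2048);
GO
```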

Database Not Shrinking

If the database is set up to use the FULL recovery model and you have never taken a log backup (let alone a full one), then your log file grows and grows and can't clear out because you haven't taken a backup.

If you have the room, take a full backup, then a log backup, and try your shrink again. It should work fine.
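That sequence sketched in T-SQL, assuming a database named MyDB01, a log logical file name of MyDB01_log, and a backup path that exists on your server:

```sql
-- Full backup first, then a log backup so the log can clear.
BACKUP DATABASE [MyDB01] TO DISK = N'D:\Backup\MyDB01_full.bak';
BACKUP LOG [MyDB01] TO DISK = N'D:\Backup\MyDB01_log.trn';
GO
USE [MyDB01];
-- Now the shrink has inactive log space to release (target in MB).
DBCC SHRINKFILE (N'MyDB01_log', 1024);
GO
```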

Worst case (and I do mean worst, particularly if this is a production database) is to set the database to SIMPLE recovery, shrink, and then take a backup. This should work even if you wouldn't normally have enough space for the backups before a shrink.
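The worst-case path as a sketch (names and paths are assumptions; switching to SIMPLE breaks the log backup chain, which is why the full backup at the end matters):

```sql
-- Switching to SIMPLE lets the log truncate at the next checkpoint.
ALTER DATABASE [MyDB01] SET RECOVERY SIMPLE;
GO
USE [MyDB01];
DBCC SHRINKFILE (N'MyDB01_log', 1024);
GO
-- Switch back, then take a full backup to restart the log chain.
ALTER DATABASE [MyDB01] SET RECOVERY FULL;
GO
BACKUP DATABASE [MyDB01] TO DISK = N'D:\Backup\MyDB01_full.bak';
GO
```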

Why you should not shrink your data files

Don’t confuse shrinking the transaction log with shrinking data files. Shrinking the log may be necessary if your log has grown out of control, or as part of a process to remove excessive VLF fragmentation (see Kimberly’s excellent posts on this here and here). However, shrinking the log should be a rare operation and should not be part of any regular maintenance you perform.

In either case, I’m not talking about using the TRUNCATEONLY option – all that does is lop off any unused space at the end of the files – that’s perfectly fine. I’m talking about actually running the shrink algorithm.

Shrinking of data files should be performed even more rarely, if at all. Here’s why: data file shrink can cause *massive* index fragmentation (of the out-of-order pages kind, not the wasted-space kind) and it is very expensive (in terms of I/O, locking, transaction log generation).
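One way to see the out-of-order-page fragmentation a data file shrink leaves behind is to compare avg_fragmentation_in_percent before and after the shrink. A sketch using the standard DMV (the NULL parameters scan all tables and indexes in the current database):

```sql
USE [MyDB01];
GO
-- Average logical fragmentation per index, worst first.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```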


Last modified: October 19, 2021


