Description of support for SQL Server databases on compressed volumes


Microsoft SQL Server databases are not supported on NTFS or FAT compressed volumes except under special circumstances for SQL Server 2005 and later versions. A compressed volume does not guarantee sector-aligned writes, and these are necessary to guarantee transactional recovery under some circumstances.

For SQL Server 2005 and later versions, database file storage on compressed drives behaves as follows:

  • If your data file belongs to a read-only filegroup, the file is allowed.
  • If your data file belongs to a read-only database, the file is allowed.
  • If your transaction log file belongs to a read-only database, the file is allowed.
  • If you try to bring up a read/write database with files on a compressed drive, SQL Server generates the following error:
    Msg 5118, Level 16, State 2, Line 1
    The file “<file_name>” is compressed but does not reside in a read-only database or filegroup. The file must be decompressed.
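As an illustration of the read-only exceptions above, a whole database or a single filegroup can be marked read-only so that its compressed files are permitted. The database and filegroup names here are hypothetical:

```sql
-- Mark the whole database read-only (requires exclusive access).
ALTER DATABASE MyDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- Or mark just one filegroup read-only, leaving the rest read/write.
ALTER DATABASE MyDb MODIFY FILEGROUP ArchiveFG READ_ONLY;
```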

For more information about exclusions for read-only databases and read-only filegroups in SQL Server 2008, go to the following MSDN website:

Note This topic also applies to SQL Server 2012 and later versions.


Posted in Uncategorized

SQL Tip: Pre and post installation checklists


SQL build standards

The following are the standards I recommend, based on best practices and configurations specific to the SQL Server infrastructure.


Some settings cannot be changed after installation easily or without disruption to service, therefore it is absolutely necessary to confirm them before starting. Getting these wrong will require a rebuild, reinstall, or total discarding of the whole effort.

Backup Retention Policy

The standard backup retention policy is 2 weeks for both full and log backups unless otherwise specified.

Backup Frequency

Unless otherwise specified, the standard is to perform a full backup every night. For systems requiring point-in-time recovery, log backups are set up to run every hour. Both are subject to the 2-week retention policy.
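As a sketch of those defaults, the underlying Agent job steps look roughly like this (database name and backup paths are placeholders, not real ones):

```sql
-- Nightly full backup (scheduled via SQL Agent).
BACKUP DATABASE MyDb
TO DISK = N'\\backupserver\sql\MyDb_FULL.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Hourly log backup for point-in-time recovery (requires FULL recovery model).
BACKUP LOG MyDb
TO DISK = N'\\backupserver\sql\MyDb_LOG.trn'
WITH COMPRESSION, CHECKSUM;
```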

Access to backup path

The server should have access to the backup path folder where the backups are stored.

Backup compression default

Even with the deduplication and compression of storage management systems like Data Domain that is in place in ASTA, and with everything considered such as backup windows, IOPS, etc., keep backup compression turned ON at the server level, or explicitly on individual jobs.
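At the server level this is a single configuration option; a minimal sketch:

```sql
-- Enable backup compression by default for all backups on this instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```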

Backup job and test

As a precaution, when setting up for the first time, test the backup job to ensure no errors will occur on scheduled runs.

.NET 3.5/4 installation

Is usually a prerequisite for SQL Server Setup.

Correct disk allocations (Data, Logs & TempDB)

As per the standard, there should be 3 different drives, unless performance and organisation are not much of a concern, e.g. deployments where SQL Server's role is limited.

Service Accounts configured

Security is based on AD accounts; therefore AD accounts created by Infrastructure are required, with their passwords saved in central password storage.

Instant File Initialisation

The “Perform Volume Maintenance Tasks” privilege is always granted to the main Database Engine service account at the build stage to enable Instant File Initialisation.
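On SQL Server 2016 SP1 and later, whether IFI is actually in effect can be verified from the engine itself; a quick check:

```sql
-- Shows Y/N per service; instant_file_initialization_enabled was added in 2016 SP1.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```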

Memory cap settings

For dedicated servers, after leaving 2-4 GB for OS operations, around 70% or more of memory should be dedicated to SQL Server for reasonable performance.
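For example, on a hypothetical 32 GB dedicated server the cap might be set like this (the figure is illustrative; the value is in MB):

```sql
-- 'max server memory (MB)' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave ~4 GB for the OS on a 32 GB box: cap SQL Server at 28 GB.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```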

Model Database settings

Any setting made on this database is inherited by every new database created locally on the instance. Therefore things like the recovery model, data file growth parameters, and log file growth parameters should be set here.
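A minimal sketch, with illustrative sizes (modeldev and modellog are the model database's standard logical file names):

```sql
-- New databases will inherit these settings from model.
ALTER DATABASE model SET RECOVERY FULL;
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);
```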


Data file sizing, growth and additional file(s)

Fixed sizes, in the case of dedicated disks, eliminate potential performance issues from growth events.

Having additional data files on TempDB reduces contention issues. The usual rule of thumb is one file per core, but do look into this in detail.
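A sketch of adding one extra TempDB data file (the path, name and sizes are placeholders):

```sql
-- Extra TempDB data file; keep all TempDB files the same size and growth.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);
```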

Optimize for adhoc work loads

Usually keep this setting turned ON on all new deployments, so that plan cache memory is used efficiently. Depending on the situation, the DBA can decide otherwise.
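A minimal sketch of turning it on:

```sql
-- 'optimize for ad hoc workloads' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```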

SQL Agent auto start

Changing it from the default Manual to Automatic is necessary.

Antivirus config

Confirm with Infrastructure that, on the new/existing server, .mdf and .ldf files and any other relevant directories or files are excluded from scanning. Scanning these can cause performance issues.

MAXDOP (# of physical cores)

Leaving it at the default isn't too bad, but sometimes products require specific settings, therefore this is included in the list of things to check.

Cost Threshold of Parallelism

The standard is 50. This is the common industry recommendation, as opposed to the default of 5, since the processing power of servers has increased.

Enable Dedicated Administrator Connection

For troubleshooting, especially in the case of unresponsive systems.
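Both the cost threshold above and remote access to the DAC are plain sp_configure changes; a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Industry-standard cost threshold instead of the default of 5.
EXEC sp_configure 'cost threshold for parallelism', 50;
-- Allow the Dedicated Administrator Connection from remote machines
-- (the DAC is already available locally by default).
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```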

Authentication & Security configuration

Local Admin group and “Gbn Inf DBA” group (different for Duw and TCS) set as SA.

Any other groups/users given access

Any installation-only accounts to be removed?

Recovery model review

Full or Simple? Ask the RPO and RTO questions.


Model Database settings

Will affect all new databases created.

Authentication mode


Is the SA password recorded?

Local instance SA user password reset

Windows Update after installation

For performance and security.

SQL Service Packs (If required)

It’s best to start with the latest release, provided the third-party product (as per its official documentation) or the in-house development tools support it.

SQL Services restarted

Bouncing the services a couple of times is a recommended test.

Server reboots

Even if it’s not asked for after installation or Windows patching, reboot just in case.

Enable ‘show advanced options’ setting

For advanced settings needed by ISVs.

SCOM / Monitoring tool configurations

Or whatever monitoring tool is in use for SQL instances.

Schedule a health check a week after handover

To confirm everything is running as expected.


Posted in SQL Tips

SQL Tip: Change owner on all Agent Jobs – TSQL code

Quick note. After creating maintenance plans and agent jobs, be careful to always replace the owner with SA or another more permanent account. This reduces the chance of jobs failing due to issues with individual accounts.

DECLARE @name_holder VARCHAR(1000)

-- Cursor over all Agent jobs in msdb
DECLARE My_Cursor CURSOR FOR
SELECT [name] FROM msdb..sysjobs

OPEN My_Cursor
FETCH NEXT FROM My_Cursor INTO @name_holder
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Changes the owner of the job to sa
    EXEC msdb..sp_update_job
        @job_name = @name_holder,
        @owner_login_name = 'sa'
    FETCH NEXT FROM My_Cursor INTO @name_holder
END
CLOSE My_Cursor
DEALLOCATE My_Cursor

Posted in SQL Tips, tsql

SQL Tip: Afraid of PowerShell? Use it with PowerShell ISE



Use it with PowerShell ISE. The Windows PowerShell Integrated Scripting Environment (ISE) is one of two hosts for the Windows PowerShell engine and language. With it you can write, run, and test scripts in ways that are not available in the Windows PowerShell Console.

I used the combination of these tools for the very first time for copying databases across servers, and man, was that a blissful breeze. Highly recommended.



  1. free & open source: No cost to you or your company
  2. community driven: created by the SQL Server community
  3. automation is awesome: less room for human error
  4. lots of upvotes



Posted in powershell, SQL Server, SQL Tips

Upgrade from TFS 2010 to TFS 2017 (with SQL 2016). Fast!

Our old live TFS 2010 needed upgrading to the latest TFS 2017.2. There is no direct upgrade path from TFS 2010 to TFS 2017; the gap is too big, it seems. To achieve this, the route I took was two steps: first upgrade to TFS 2015, and then to TFS 2017.
In this particular instance, we opted for a new server with new OS and latest SQL 2016 to make the most of it.

tfs upgrade

Staging and Live can be the same machine, but separate environments make for a smooth transition and let you repeat the faster upgrade, whether because of issues during the upgrade or for multiple environment upgrades. This saves the time otherwise spent installing, reinstalling and uninstalling TFS versions multiple times.
Microsoft recommends using TFS 2015 as the staging version. But our Staging runs SQL 2016, hence the need for TFS 2015.5: versions from 2015.4 onwards support SQL 2016; otherwise an earlier release can be used, subject to its compatibility matrix.

Another area to speed up is the copying of databases. One could write a script for every relevant database across the 3 environments; this would include:
1. Take the latest backups from old Live
2. Copy the backups to a fast location accessible to Staging
3. Restore the databases on Staging
4. After the TFS 2015.5 upgrade, repeat the same for the new Live
Instead, I used dbatools, a free PowerShell module, inside PowerShell ISE. Its Copy-DbaDatabase command makes all the above-mentioned steps a breeze.

1. Restore fresh database backups from old Live to Staging
2. Kick off the TFS upgrade after installation on Staging. Use a local account for the report reader account, or a domain account as you wish
3. Restore fresh database backups from Staging to the new Live
4. Kick off the TFS upgrade after installation on the new Live. Use a local account for the report reader account, or a domain account as you wish
5. If you need to repeat, do not uninstall TFS; just remove Application Services from the TFS admin console and start again from step 1.
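The database copy can be sketched with dbatools as below. The server names and share path are hypothetical; recent dbatools versions use the -SharedPath parameter, while older releases called it -NetworkShare:

```powershell
# Install dbatools from the PowerShell Gallery (one-off).
Install-Module dbatools -Scope CurrentUser

# Backup on the source, restore on the destination, via a share both can reach.
Copy-DbaDatabase -Source OldLiveSQL -Destination StagingSQL `
    -Database Tfs_Configuration, Tfs_DefaultCollection `
    -BackupRestore -SharedPath '\\fastshare\sqlbackups'
```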

Voila! Done in under 2 hours, if not faster. Once familiar with the process, it can be done repeatedly and quickly using the suggestions given above.

Posted in Visual Studio

SQL Tip: Identifying Column store index potential candidate tables in your databases for major query performance improvements

Columnstore indexes appeared in SQL Server 2012, and by 2016 and beyond they have progressed much further, allowing their usage with fewer restrictions than before. This storage mechanism for indexes and data provides significant performance gains, especially, though not only, in data-warehouse workloads.

There are certain criteria defined to help database professionals see whether clustered columnstore indexes (CCI) are an option that should be tried on their database tables for performance gains.

Recently the SQL Server Tiger team at Microsoft wrote a nice little script incorporating these selection criteria, which simply lists the tables that should be considered in whichever database it is run against.

The script checks that each table meets the criteria on size, allowed data types, the type of DML being executed, and the workload on the table.

Obviously, in any real scenario things can be different or far more complex than this, but the script is a great starting point for someone like me who was starting to think of ways to identify potential candidates for this significantly improved tool.

I will post separately about the basics of columnstore indexes and the kind of performance improvements they bring.

Without further ado, here is the script, with ample comments thankfully put in by its creators.

-- The queries below need to be executed per database.
-- Also, please make sure that your workload has run for
-- couple of days or its full cycle including ETL etc
-- to capture the relevant operational stats
-- picking the tables that qualify CCI
-- Key logic is
-- (a) Table does not have CCI
-- (b) At least one partition has > 1 million rows and does not have unsupported types for CCI
-- (c) Range queries account for > 50% of all operations
-- (d) DML Update/Delete operations < 10% of all operations
select table_id, table_name
from (select quotename(object_schema_name(dmv_ops_stats.object_id)) + N'.' + quotename(object_name (dmv_ops_stats.object_id)) as table_name,
dmv_ops_stats.object_id as table_id,
SUM (leaf_delete_count + leaf_ghost_count + leaf_update_count) as total_DelUpd_count,
SUM (leaf_delete_count + leaf_update_count + leaf_insert_count + leaf_ghost_count) as total_DML_count,
SUM (range_scan_count + singleton_lookup_count) as total_query_count,
SUM (range_scan_count) as range_scan_count
from sys.dm_db_index_operational_stats (db_id(),
null, null, null) as dmv_ops_stats
where (index_id = 0 or index_id = 1)
-- (a)/(b): user tables not already columnstore-compressed,
-- with at least one partition above ~1 million rows
AND dmv_ops_stats.object_id in (select distinct object_id
from sys.partitions p
where data_compression <= 2 and rows >= 1048576
AND object_id in (select distinct object_id
from sys.partitions p, sysobjects o
where o.type = 'u' and p.object_id =
-- (b): exclude tables with types unsupported by CCI
-- (image, text, xml, and varchar(max)/varbinary(max))
AND dmv_ops_stats.object_id not in ( select distinct object_id
from sys.columns
where user_type_id IN (34, 35, 241)
OR ((user_type_id = 165 OR user_type_id = 167) and max_length = -1))
group by dmv_ops_stats.object_id
) summary_table
-- (c)/(d): range queries > 50% of all operations,
-- DML update/delete < 10% of all operations
where ((total_DelUpd_count * 100.0/(total_DML_count + 1)) < 10.0)
AND ((range_scan_count * 100.0/(total_query_count + 1)) > 50.0)

For further details see

Posted in Column Store Indexes, SQL Server, sql server performance, Uncategorized

SQL Monitoring Solution Development – SQL Performance Dashboard Reports – Part 2 Customisation for centralised remote monitoring of SQL instances

In Part 1 of this series on SQL Server Performance Monitoring Dashboard I provided an intro.

Now what we would like to do is to be able to run these reports remotely, without logging into a server or ever connecting to it via SQL Server Management Studio; more like the point-and-click fashion of SSRS reports.

This is an important project, and if done correctly it can save your shop a good amount of money on off-the-shelf monitoring tools, some of which cost hundreds of pounds per instance.

In future parts we will proceed to further monitoring, baselining and other aspects. I will shortly create a post tracking all these steps in a single place.

If at any point you find anything confusing, feel free to contact me.

Roughly, what we need to do is:

  1. Download the reports
  2. Change connection parameters to be dynamic via parameters
  3. Upload them to a central SSRS server.
  4. Create supporting objects and code in the servers you want to monitor once via deployment script.
  5. Browse the report and see data via the central SSRS server and input the SQL instance names by providing it as the parameter value.


  6. Create a central dashboard with a list of servers and pass instance names in links to the main dashboard, for a smooth point-and-click drill-down experience with the reports. I will cover this in Part 3.

Ok, let’s go over these steps one by one:

  • Download the reports: use the links in Part 1
  • Change connection parameters to be dynamic via parameters: refer to Fig A at the bottom
  • Upload them to a central SSRS server.
  • Create supporting objects and code in the servers you want to monitor once via deployment script.
  • Browse the report and see data via the central SSRS server and input the SQL instance names by providing it as the parameter value.



Fig A: Showing use of parameters to form a dynamic connection string




Tagged with: ,
Posted in Monitoring, SQL Server, SQL Server Administration, sql server performance

How to get GDPR compliant with Microsoft

When the EU’s GDPR (General Data Protection Regulation) is introduced next year, the bar for data privacy protection in the UK will be raised.

For thousands of businesses GDPR highlights daunting issues of compliance. Organisations that fail to comply with the regulation will face massive fines: up to £17 million or four percent of global revenue, whichever is higher.

The May 25, 2018, deadline for GDPR compliance is drawing closer but many businesses, including some Microsoft Partners and customers, are unsure where to begin. Businesses cannot afford to waste time in GDPR preparation. Likewise, Microsoft Partners cannot waste the opportunity to help their customers get compliant.

As part of Inspire 2017, Microsoft’s annual global partner conference, Microsoft has released a set of assets that will help businesses and Microsoft partners achieve GDPR compliance. Let’s take a look at the new resources (and some older ones which deserve a mention).

Read the rest at.




Posted in gdpr, security, SQL Server, SQL Server Administration, Uncategorized

SQL Tip: Memory limit too low. Instance failing to start

Quick note.

Someone mistakenly set the SQL startup memory parameters too low and the instance is failing to start.

The startup log shows page/memory allocation errors of various types.

Add the ‘-f’ startup parameter to start the instance with minimal configuration, change the memory cap, remove ‘-f’, and restart the service.
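A sketch of that sequence for a default instance, from an elevated command prompt (the 8192 MB figure is illustrative; with -f only a single admin connection is allowed):

```shell
:: Start the instance with minimal configuration.
net start MSSQLSERVER /f

:: Connect locally and raise the memory cap (value in MB).
sqlcmd -S . -E -Q "EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;"

:: Restart normally, without -f.
net stop MSSQLSERVER
net start MSSQLSERVER
```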

Posted in SQL Server, SQL Server Administration, SQL Tips, Uncategorized

SQL Server 2016 – Always Encrypted and the GDPR

Matthew McGiffen DBA

The European General Data Protection Regulation (GDPR) is coming, bringing new rules about the protection of Personally Identifiable Information (PII).

For those of us in Europe – or wishing to sell software products and solutions within Europe – this is a big deal. It’s going to mean an increased focus on data protection, and the potential for large penalties in the case of data breaches. Those penalties are also increased if we can’t show that we have taken sufficient measures to protect that data.

This affects us where we host applications and data for our clients, but also when we sell software to clients to use on-premise. They are going to want to be sure that the data in those applications is as secure as possible.

Security infrastructure and data governance are going to be of critical importance to everyone, but for software vendors it seems likely that Encryption is…


Posted in Uncategorized