dbatools is a free PowerShell module with over 200 SQL Server administration, best practice and migration commands included.
Please note that documentation and command names may be out of date while we work furiously towards 1.0.
The following are the standards I recommend after considering best practices and configurations specific to SQL Server infrastructure.
This setting cannot be changed after installation easily or without disruption to service, so it is absolutely necessary to ask about it before starting. Getting it wrong will require a rebuild, a reinstall or discarding the whole effort.
The standard backup retention policy is two weeks for both full and log backups, unless otherwise specified.
Unless otherwise specified, the standard is to perform a full backup every night. For systems requiring point-in-time recovery, log backups are performed every hour. Both are subject to the two-week retention policy.
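For illustration only, the two schedules boil down to agent jobs running statements along these lines (the database name and backup share are placeholders, not part of the standard):

-- nightly full backup
BACKUP DATABASE [UserDb]
TO DISK = N'\\BackupShare\SQL\UserDb_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;
-- hourly log backup (only applies to databases in FULL recovery)
BACKUP LOG [UserDb]
TO DISK = N'\\BackupShare\SQL\UserDb_Log.trn'
WITH COMPRESSION, CHECKSUM, INIT;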
The server should have access to the backup path folder where the backup files are written.
Even with the deduplication and compression of storage management systems like Data Domain (which is in place at ASTA), and with everything such as the backup window and IOPS considered, keep backup compression turned ON at the server level and explicitly on individual jobs.
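At the instance level this is a one-off sp_configure change; a minimal sketch:

-- default all backups to compressed at the instance level
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;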
As a precaution, when setting up for the first time, test the backup job to ensure no errors will occur on scheduled runs.
As per the standard, there should be three different drives, unless performance and organisation are not much of a concern, e.g. deployments where SQL Server's role is limited.
Security is based on AD accounts; therefore AD accounts created by Infrastructure are required, with passwords saved in the central password store.
The "Perform Volume Maintenance Tasks" privilege is always granted to the main Database Engine service account at the build stage, for Instant File Initialisation.
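On builds that expose it (SQL Server 2016 SP1 and later, as far as I know), sys.dm_server_services can confirm the privilege took effect; a quick check:

-- 'Y' means Instant File Initialisation is active for the service account
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services;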
For dedicated servers, after leaving 2-4 GB for OS operations, around 70% of memory should be dedicated to SQL Server for reasonable performance.
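As a sketch, on a hypothetical 32 GB dedicated server that works out to roughly 22 GB for SQL Server (the value below is illustrative, not a standard):

-- cap SQL Server at roughly 70% of RAM on a 32 GB server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 22528;
RECONFIGURE;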
Any setting on this database is inherited by every new database created locally on this instance; therefore things like the recovery model, data file growth parameters and log file growth parameters should be set here.
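A minimal sketch of setting these on model (the recovery model and growth figures are examples, not the standard itself):

-- every new database will inherit these settings from model
ALTER DATABASE model SET RECOVERY FULL;
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);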
Fixed sizes, in the case of dedicated disks, eliminate potential performance issues from growth events.
Having additional data files on tempdb reduces contention issues. The usual guidance is based on the number of cores, but do look into this in detail.
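A sketch of adding one extra data file (the path, size and the number of files to add are placeholders; match them to the core count and the tempdb drive):

-- add an additional tempdb data file, sized and grown like the existing one
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);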
Usually keep this setting turned ON for all new deployments so that execution plans are used efficiently, but depending on the situation the DBA can decide for themselves.
Changing from the default of Manual to Automatic is necessary.
Confirm with Infrastructure that, on the new or existing server, .mdf and .ldf files and any other relevant directories or files are excluded from antivirus scanning; scanning them can cause performance issues.
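A quick way to hand Infrastructure the exact paths to exclude is to list every database file on the instance:

-- list all data and log file locations for the antivirus exclusion list
SELECT DB_NAME(database_id) AS database_name, type_desc, physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;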
Leaving it at the default isn't too bad, but sometimes the products require certain settings, so this is included in the list of things to check.
The standard is 50. This is the industry recommendation, as opposed to the default, since the processing power of servers has increased.
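Assuming this item refers to the 'cost threshold for parallelism' setting (the default of 5 dates back to far weaker hardware), a sketch:

-- raise cost threshold for parallelism from the default of 5 to 50
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;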
For troubleshooting, especially in the case of unresponsive systems.
The local Admin group and the "Gbn Inf DBA" group (different for Duw and TCS) are set as SA (sysadmin).
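A sketch of granting such a group sysadmin (the domain name is a placeholder):

-- add the DBA AD group to the sysadmin role (domain is hypothetical)
CREATE LOGIN [CONTOSO\Gbn Inf DBA] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\Gbn Inf DBA];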
Any other groups/users given access?
Any installation only accounts to be removed?
Full or Simple? Ask RPO and RTO questions
Will affect all new databases created.
Local instance sa user password reset.
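For example (the password below is a placeholder; use a strong one kept in the central password store):

-- reset the local sa password after the build
ALTER LOGIN [sa] WITH PASSWORD = N'<strong password from the central store>';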
For performance and security.
It's best to start with the latest release, provided the third-party product (as per its official documentation) or the in-house development tools support it.
Bouncing services a couple of times is a recommended test.
Even if it's not asked for after installation or Windows patching, do it just in case.
For advanced settings needed by ISV
Or whatever monitoring tool in use for SQL instances.
Quick note: after creating maintenance plans and agent jobs, be careful to always replace the owner with sa or another more permanent account. This will reduce the chances of jobs failing due to issues with accounts.
-- Changes the owner of all SQL Agent jobs to sa
DECLARE @name_holder VARCHAR(1000)
DECLARE My_Cursor CURSOR FOR
SELECT [name] FROM msdb..sysjobs
OPEN My_Cursor
FETCH NEXT FROM My_Cursor INTO @name_holder
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    -- reassign the current job to sa
    EXEC msdb..sp_update_job
        @job_name = @name_holder,
        @owner_login_name = 'sa'
    FETCH NEXT FROM My_Cursor INTO @name_holder
END
CLOSE My_Cursor
DEALLOCATE My_Cursor
Use it with PowerShell ISE. The Windows PowerShell Integrated Scripting Environment (ISE) is one of two hosts for the Windows PowerShell engine and language. With it you can write, run, and test scripts in ways that are not available in the Windows PowerShell Console.
I used the combination of these tools for the very first time for copying databases across servers, and man, was that a blissful breeze. Highly recommended. See the command that I used: https://dbatools.io/functions/copy-dbadatabase/
The old Live TFS 2010 needs to be upgraded to the latest TFS 2017.2. There is no direct upgrade path from TFS 2010 to TFS 2017; the gap is too big, it seems. The route I took was in two steps: first upgrade to TFS 2015, then to TFS 2017.
In this particular instance, we opted for a new server with a new OS and the latest SQL Server 2016 to make the most of it.
Staging and Live can be the same machine, but keeping them separate makes for a smooth transition and lets you repeat the upgrade quickly if issues come up, or if multiple environments need upgrading. This saves the time spent installing, reinstalling and uninstalling TFS versions multiple times.
Microsoft recommends using TFS 2015 as the staging version, but our Staging server runs SQL Server 2016, hence the need for TFS 2015.5 (the .4+ versions support SQL 2016); otherwise an earlier version can be used, depending on the SQL Server versions it supports.
Another area to speed up is copying the databases. One could also write a script for every relevant database across the three environments (a rough sketch follows the list below); this would include:
1. Take the latest backups from the old Live.
2. Copy the backups to a fast location accessible to Staging.
3. Restore the databases on Staging.
4. Repeat the same from Staging to the new Live after the TFS 2015.5 upgrade.
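A minimal sketch of what such a script could do for one database (the server paths, share and logical file names are placeholders):

-- on the old Live instance: back up straight to a share that Staging can read
BACKUP DATABASE [Tfs_Configuration]
TO DISK = N'\\FastShare\TfsUpgrade\Tfs_Configuration.bak'
WITH COMPRESSION, CHECKSUM, INIT;
-- on the Staging instance: restore from that share
RESTORE DATABASE [Tfs_Configuration]
FROM DISK = N'\\FastShare\TfsUpgrade\Tfs_Configuration.bak'
WITH MOVE 'Tfs_Configuration' TO N'D:\Data\Tfs_Configuration.mdf',
     MOVE 'Tfs_Configuration_log' TO N'L:\Logs\Tfs_Configuration_log.ldf',
     REPLACE;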
I used the free dbatools PowerShell module from http://www.dbatools.io, running it in PowerShell ISE. Using the Copy-DbaDatabase command makes all the above-mentioned steps a breeze.
1. Restore fresh database backups from the old Live to Staging.
2. Kick off the TFS upgrade after installation on Staging. Use a local account for the report reader account, or a domain account as you wish.
3. Restore fresh database backups from Staging to the new Live.
4. Kick off the TFS upgrade after installation on the new Live. Use a local account for the report reader account, or a domain account as you wish.
5. If you need to repeat the process, do not uninstall TFS; just remove the application services from the TFS admin console and start again from step 1.
Voila! Done in under 2 hours, if not faster. Once you are familiar with the process, it can be repeated quickly using the suggestions given above.
Columnstore indexes appeared in SQL Server 2012, and by 2016 and beyond they have progressed much further, with fewer restrictions on their use than before. This storage mechanism for indexes and data provides significant performance gains, especially (though not only) for data-warehouse workloads.
There are certain criteria defined to help database professionals see whether clustered columnstore indexes (CCIs) are an option worth trying on their tables for performance gains.
Recently the SQL Server Tiger Team at Microsoft wrote a nice little script incorporating these selection criteria, which simply lists the tables that should be considered in whichever database it is run against.
The script checks that a table meets the criteria for size, allowed data types, the type of DML being executed, and the workload on the table.
Obviously, in any real scenario things can be different or far more complex than this, but the script is a great starting point for someone like me who was starting to think of ways to identify potential candidates for this significantly improved feature.
I will post separately about the basics of columnstore indexes and the kind of performance improvements they can bring.
Without further ado, here is the script, with ample comments thankfully put in by its creators so it can be followed easily.
-- The queries below need to be executed per database.
-- Also, please make sure that your workload has run for
-- couple of days or its full cycle including ETL etc
-- to capture the relevant operational stats
-- picking the tables that qualify CCI
-- Key logic is
-- (a) Table does not have CCI
-- (b) At least one partition has > 1 million rows and does not have unsupported types for CCI
-- (c) Range queries account for > 50% of all operations
-- (d) DML Update/Delete operations < 10% of all operations
select table_id, table_name
from (select quotename(object_schema_name(dmv_ops_stats.object_id)) + N'.' + quotename(object_name (dmv_ops_stats.object_id)) as table_name,
dmv_ops_stats.object_id as table_id,
SUM (leaf_delete_count + leaf_ghost_count + leaf_update_count) as total_DelUpd_count,
SUM (leaf_delete_count + leaf_update_count + leaf_insert_count + leaf_ghost_count) as total_DML_count,
SUM (range_scan_count + singleton_lookup_count) as total_query_count,
SUM (range_scan_count) as range_scan_count
from sys.dm_db_index_operational_stats (db_id(),
null, null, null) as dmv_ops_stats
where (index_id = 0 or index_id = 1)
AND dmv_ops_stats.object_id in (select distinct object_id
from sys.partitions p
where data_compression <= 2 and (index_id = 0 or index_id = 1)
AND rows >= 1048576
AND object_id in (select distinct object_id
from sys.partitions p, sysobjects o
where o.type = 'u' and p.object_id = o.id))
AND dmv_ops_stats.object_id not in (select distinct object_id
from sys.columns
where user_type_id IN (34, 35, 241)
OR ((user_type_id = 165 OR user_type_id = 167) and max_length = -1))
group by dmv_ops_stats.object_id
) as T
where (total_DelUpd_count * 100.0/(total_DML_count + 1) < 10.0)
AND (range_scan_count * 100.0/(total_query_count + 1) > 50.0)
In Part 1 of this series on the SQL Server Performance Monitoring Dashboard, I provided an intro.
Now what we would like is to be able to run these reports remotely, without logging into a server or ever connecting to it via SQL Server Management Studio; more in the point-and-click fashion of SSRS reports.
This is an important project, and if done correctly it can save your shop a good amount of money on off-the-shelf monitoring tools, some of which cost hundreds of pounds per instance.
In future parts, we will proceed to further monitoring, baselining and other aspects. I will shortly create a post tracking all these steps in a single place.
If at any point you find anything confusing, feel free to contact me.
For this, what we need to do, roughly, is…
6. Create a central dashboard with a list of servers, and pass instance names in links to the main dashboard, to get a smooth point-and-click, drill-down experience with the reports. I will cover this in Part 3.
OK, let's go over these steps one by one.
Fig A: Showing use of parameters to form a dynamic connection string
When the EU’s GDPR (General Data Protection Regulation) is introduced next year, the bar for data privacy protection in the UK will be raised.
For thousands of businesses GDPR highlights daunting issues of compliance. Organisations that fail to comply with the regulation will face massive fines: up to £17 million or four percent of global revenue, whichever is higher.
The May 25, 2018, deadline for GDPR compliance is drawing closer but many businesses, including some Microsoft Partners and customers, are unsure where to begin. Businesses cannot afford to waste time in GDPR preparation. Likewise, Microsoft Partners cannot waste the opportunity to help their customers get compliant.
As part of Inspire 2017, Microsoft’s annual global partner conference, Microsoft has released a set of assets that will help businesses and Microsoft partners achieve GDPR compliance. Let’s take a look at the new resources (and some older ones which deserve a mention).
Read the rest at.
SQL startup parameters were mistakenly set too low by someone, and the instance is failing to start.
Page/memory allocation errors of various types appear in the startup log…
Add the startup parameter '-f', start the instance, change the memory cap, and restart the service.
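Once the instance is up under -f (minimal configuration), the cap can be corrected before removing the parameter and restarting normally; a sketch with an illustrative value:

-- raise max server memory back to a sensible figure while running with -f
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;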
The European General Data Protection Regulation (GDPR) is coming, bringing new rules about the protection of Personally Identifiable Information (PII).
For those of us in Europe – or wishing to sell software products and solutions within Europe – this is a big deal. It’s going to mean an increased focus on data protection, and the potential for large penalties in the case of data breaches. Those penalties are also increased if we can’t show that we have taken sufficient measures to protect that data.
This affects us where we host applications and data for our clients, but also when we sell software to clients to use on-premise. They are going to want to be sure that the data in those applications is as secure as possible.
Security infrastructure and data governance are going to be of critical importance to everyone, but for software vendors it seems likely that Encryption is…