SQL Tip: Changing Database names along with file names (logical and physical)

Sometimes you need to rename a database. On the surface this seems easy enough, but for long-term maintainability it is crucial that the underlying logical file names and physical file names also reflect the change. Without these deeper changes it is easy to get confused and run into trouble later, even for the same person, and even if the change was documented.

The following script snippets help with this. With little to no modification, they can also be used to move the files of a user database.

-- Confirm the files related to the database you are working with.
-- You might have to adjust the later steps depending on the files the following query returns.

SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'ExistingDatabaseName')

Set Database as Single User
Make sure that the applications, users, and services trying to connect to this database are disconnected.

You might have to use Activity Monitor in SSMS to confirm that no connections remain on the database while it goes through the renaming process.
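A minimal sketch of forcing the database into single-user mode; `ROLLBACK IMMEDIATE` is one option for dropping existing connections (the database name is the same placeholder used throughout):

```sql
-- Force single-user mode, rolling back any open transactions immediately.
USE [master];
ALTER DATABASE [ExistingDatabaseName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
```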



Change Logical File Names

ALTER DATABASE [ExistingDatabaseName] MODIFY FILE (NAME = 'ExistingDatabaseName', NEWNAME = 'NewDatabaseName');
ALTER DATABASE [ExistingDatabaseName] MODIFY FILE (NAME = 'ExistingDatabaseName_log', NEWNAME = 'NewDatabaseName_log');
/* Detach the current database. */
USE [master]
EXEC master.dbo.sp_detach_db @dbname = N'ExistingDatabaseName'

/* Enable Command Shell */
USE master
sp_configure 'show advanced options', 1
RECONFIGURE
/* 0 = Disabled , 1 = Enabled */
sp_configure 'xp_cmdshell', 1
RECONFIGURE
Rename Physical Files
If the following errors out, do not worry; rename the files yourself using Windows Explorer or the command line.
USE [master]
EXEC xp_cmdshell 'RENAME "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\ExistingDatabaseName.mdf" "NewDatabaseName.mdf"'
EXEC xp_cmdshell 'RENAME "E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Data\ExistingDatabaseName_log.ldf" "NewDatabaseName_log.ldf"'
/* Attach Renamed Database Online */
USE [master]
CREATE DATABASE [NewDatabaseName] ON
( FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\NewDatabaseName.mdf' ),
( FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Data\NewDatabaseName_log.ldf' )
FOR ATTACH;
/* Disable Command Shell */
USE master
/* 0 = Disabled , 1 = Enabled */
sp_configure 'xp_cmdshell', 0
RECONFIGURE
sp_configure 'show advanced options', 0
RECONFIGURE
-- Recheck the file changes to verify.
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'NewDatabaseName')
/* Rename User */
USE [master]
ALTER LOGIN [ExistingDatabaseNameUser] WITH NAME = [NewDatabaseNameUser];
/* Add the db_owner role to the renamed user in case it is not already assigned. */
USE [NewDatabaseName]
ALTER ROLE [db_owner] ADD MEMBER [NewDatabaseNameUser]

Posted in security, sql, SQL Server, tsql

How to configure SQL Database mail (or any other application) to send emails using Office 365 (Exchange Online): A walk through and options




What I have come to understand is that with Office 365 email routing, the configuration settings for email relay are:

Server: smtp.office365.com
Port Number: 587
Authentication: Basic (an email address and password)

We can set this up on each individual server that needs to send out emails or we can configure a single outbound email relay server.

Option 1

Below is an example email configuration for SQL Server; other applications would be similar, along with the related network and firewall settings.
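As a sketch of this option, Database Mail can be pointed at Office 365 with T-SQL like the following; the account name, profile name, mailbox, and password below are placeholders, not values from the original post:

```sql
-- Create a Database Mail account that relays through Office 365.
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = N'O365Relay',            -- placeholder name
    @email_address   = N'alerts@contoso.com',   -- placeholder mailbox
    @mailserver_name = N'smtp.office365.com',
    @port            = 587,
    @enable_ssl      = 1,
    @username        = N'alerts@contoso.com',   -- placeholder credentials
    @password        = N'<password>';

-- Create a profile and attach the account to it.
EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = N'O365Profile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'O365Profile',
    @account_name    = N'O365Relay',
    @sequence_number = 1;
```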



Option 2

Alternatively, we can set up an internal relay server to which every server sends email, and from that server forward to Office 365 with the above credentials.

Additionally, the Exchange settings in Office365 can be set for inbound connection permissions to specify which servers can relay.

Both approaches have their pros and cons which need to be discussed.




Posted in email, SQL Server, Uncategorized

SQL Tip: Read + Write (or any) role on databases, all existing and new ones

Trying to avoid the double-hop issue when configuring linked servers, I opted for local users.

After setting them up, I asked the developer who requested this which databases he wanted readable across the servers / instances, and his answer was: all of them, now and in the future.

For the existing databases it was a case of creating a cursor and looping through it to execute the right T-SQL, but for databases created in the future I wondered: should I create a DDL trigger, if SQL Server allows it?

Then I remembered that the Model database's purpose is to serve as the template for new databases: all properties and settings that are not explicitly overwritten are inherited from this appropriately named system database.

Therefore I included the Model database in my script; doing so applies the user and its role assignment on Model explicitly, and therefore on any database created in the future.

Here's the script.

USE master;
DECLARE @statement NVARCHAR(MAX);
DECLARE @dbname SYSNAME;
-- User account (placeholder; substitute the login you created)
DECLARE @UserName SYSNAME = N'ReadWriteUser';

DECLARE db_cursor CURSOR FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master','msdb','tempdb','distribution') AND [is_read_only] = 0;

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @dbname

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @statement = 'use ['+ @dbname +'];'+ 'CREATE USER [' + @UserName + ']
    FOR LOGIN [' + @UserName + ']; EXEC sp_addrolemember N''db_datareader'',
    [' + @UserName + ']; EXEC sp_addrolemember N''db_datawriter'', [' + @UserName + ']'

    PRINT @statement
    EXEC sp_executesql @statement

    FETCH NEXT FROM db_cursor INTO @dbname
END

CLOSE db_cursor
DEALLOCATE db_cursor

Posted in SQL Server, tsql, Uncategorized

SQL Tip: How to find SQL Server installation date and time

Getting right down to business… here's the script.

SELECT create_date
FROM sys.server_principals
WHERE sid = 0x010100000000000512000000
And for evaluation versions:
-- Evaluation Version Expire Date
SELECT create_date AS InstallationDate,
DATEADD(DD, 180, create_date) AS 'Expiry Date'
FROM sys.server_principals
WHERE sid = 0x010100000000000512000000
Reference: Pinal Dave (http://blog.sqlauthority.com)


Posted in sql, SQL Server, tsql, Uncategorized

SQL Server 2016 SP1 – The game changing release… seriously

In short, with SQL Server 2016 SP1 Microsoft has enabled many features in Standard, Web, and even Express editions that were previously available only in Enterprise.

The following list shows features that were previously available only in Enterprise edition and are now enabled in Standard, Web, Express, and LocalDB editions with SQL Server 2016 SP1:


  • Change Data Capture
  • Database Snapshots
  • ColumnStore Indexes
  • Partitioning
  • Compression
  • In-Memory OLTP
  • Always Encrypted
  • PolyBase
  • Auditing
  • Multiple FILESTREAM Containers
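To confirm whether an instance is on SP1 and which edition it runs (and therefore whether these features are unlocked), a quick check might look like this:

```sql
-- Edition, product level (e.g. SP1) and full version string.
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```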



Need I say more … ?

Get more details from Microsoft here

Posted in Uncategorized

Datawarehouse Database Cleanup SQL query

Real world problems with good old SCOM 2012 R2


IMPORTANT: Always perform a FULL Backup of the database before doing anything to it !!!

This article applies to SCOM 2007 and SCOM 2012

Sometimes you may have event storms where you end up having old entries in the Data Warehouse database, i.e. data that is older than the grooming threshold. This may happen because the grooming processes can't keep up: they run on a regular interval but only delete a fixed number of rows per run.

The following SQL query may also be valuable in case you end up with the issue of SQL Timeouts from the Data Warehouse database when the StandardDataSetMaintenance stored procedure is executed by the RMS.

More on that issue here: http://blogs.technet.com/b/kevinholman/archive/2010/08/30/the-31552-event-or-why-is-my-data-warehouse-server-consuming-so-much-cpu.aspx

To check if this is the case for you, run this SQL Query on the Data Warehouse database:

DECLARE @MaxDataAgeDays INT, @DataSetName NVARCHAR(150)
SET @DataSetName = 'Event'
SELECT @MaxDataAgeDays = MAX(MaxDataAgeDays)
FROM StandardDatasetAggregation
WHERE DatasetId =…

View original post 221 more words

Posted in Uncategorized

SQL 2005 – 2016, Migrated CLR objects / Assemblies causing access error messages

Microsoft has acknowledged this error in SQL 2005, but I recently saw it on an instance of SQL Server 2016.


Consider the following scenario. You detach or back up a database that is in an instance of SQL Server 2005. The instance of SQL Server 2005 is running on Server A. Later, you attach or restore that database to an instance of SQL Server 2005 that is running on Server B. In this scenario, you may experience the following symptoms:

  • When you try to run an existing common language runtime (CLR) object that has the external_access or unsafe permission set from the database that is on Server B, you receive the following error message:
    Msg 10314, Level 16, State 11, Line 2
    An error occurred in the Microsoft .NET Framework while trying to load assembly id 65536. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error:
    System.IO.FileLoadException: Could not load file or assembly 'AssemblyName, Version=, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An error relating to security occurred. (Exception from HRESULT: 0x8013150A) System.IO.FileLoadException:
    at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
    at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
    at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
    at System.Reflection.Assembly.Load(String assemblyString)
  • When you try to create a new assembly that has the external_access or unsafe permission set in the same database, you receive the following error message:
    Server: Msg 10327, Level 14, State 1, Line 1
    CREATE ASSEMBLY for assembly 'AssemblyName' failed because assembly 'AssemblyName' is not authorized for PERMISSION_SET = EXTERNAL_ACCESS. The assembly is authorized when either of the following is true: the database owner (DBO) has EXTERNAL ACCESS ASSEMBLY permission and the database has the TRUSTWORTHY database property on; or the assembly is signed with a certificate or an asymmetric key that has a corresponding login with EXTERNAL ACCESS ASSEMBLY permission.

The issues occur even if you have already set the Trustworthy database property to ON.

But do make sure it is actually on.
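A quick way to check the property and, if needed, set it; the database name below is a placeholder:

```sql
-- Check the TRUSTWORTHY flag for every database.
SELECT name, is_trustworthy_on FROM sys.databases;

-- Turn it on for the affected database.
ALTER DATABASE [YourDatabase] SET TRUSTWORTHY ON;
```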




This problem occurs because the login that you use to create the database on Server A is not in the instance of SQL Server 2005 on Server B. This login could be either the Microsoft Windows login or the SQL Server login.


To work around this problem, use one of the following methods.

Note Before you use the following methods, make sure that you enable the Trustworthy database property.

  • Use the sp_changedbowner stored procedure to change the database owner to sa or to an available login on Server B. For example, you may use the following statement to change the database owner to sa:
    USE <DatabaseName>
    EXEC sp_changedbowner 'sa'

    Note In this statement, <DatabaseName> is a placeholder for the name of the database you are working on. The changed database owner must have the permissions needed for the task at hand; for example, the database owner should have the CREATE ASSEMBLY permission to create an assembly.

  • Add the login that was used to create the database on the instance of SQL Server 2005 on Server A to the instance on Server B. If the login is a domain account, you can create the same login on Server B, then grant it the required permissions on the instance on Server B. If the login is a SQL Server login, make sure that the SID of the new SQL Server login you create on Server B matches the SID of the original login. To do this, specify the SID argument of the CREATE LOGIN statement.
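A sketch of the SID-matching step; the login name, password, and SID value are placeholders. The SID of the original login can be read from sys.server_principals on Server A:

```sql
-- On Server A: read the SID of the original login.
SELECT name, sid FROM sys.server_principals WHERE name = N'AppLogin';

-- On Server B: recreate the login with the same SID.
CREATE LOGIN [AppLogin]
WITH PASSWORD = N'<password>',
     SID = 0x1234567890ABCDEF1234567890ABCDEF;  -- placeholder: use the SID from Server A
```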


If you access the CLR object from a different database, and that database has a mismatching DBO SID, the same problem can occur.


Posted in SQL Server, Uncategorized

SQL Server 2016 new features & common performance issue of Tempdb Contention

Tempdb, known by many names in SQL Server such as the "shared resource", the "scratch pad", or even the "waste bin", of course has a lot of important things to do, but by the very nature of any shared resource there are always issues with concurrent usage, for example the creation and deletion of temp tables.

Tempdb contention is one of the most common performance issues in the field. One way around the contention problem, without diving too deep into how and why it occurs, is to divide the tempdb data into multiple files.

The tribe of DBAs, under the guidance of the grand priests of the craft and MS engineers, has long applied trace flags and multiple files; it is one of those things that is not intuitive and that you have to research a little.

Good news!

If you have deployed, or will deploy, SQL Server 2016, you will notice that one of the tabs in the setup dialogues that looks different is the one where you configure the tempdb file distribution.

SQL Server 2016 has come a long way and offers a much better, if not entirely perfect, solution. Look at the following screenshot and consider the explanation below, and you will understand the whole story.



Data Files

  1. Number of files – this will default to the lower value of 8 or number of logical cores as detected by setup. The value can be increased as required for a specific workload. The file names for secondary data files will follow the tempdb_mssql_#.ndf naming convention where # represents the file ordinal for each additional file.
  2. Initial size  – is specified in MB and applies to each tempdb data file. This makes it easier to configure all files of same size. Total initial size is the cumulative tempdb data file size (Number of files * Initial Size)  that will be created.
  3. Autogrowth – is specified in MB (fixed growth is preferred as opposed to a non-linear percentage based growth) and applies to each file. The default value of 64MB was chosen to cover one PFS interval. Since TF 1117 is enabled by default for tempdb, all files will grow at the same time by the specified size. Total autogrowth reflects the cumulative size the database will grow each time an autogrow is triggered.

Note: If you specify a very large initial size or autogrowth value, ensure that Instant File Initialization is enabled for SQL Server service startup account.

  4. Data directories – specify multiple folders/drives to spread the data files across several volumes. Each file will be placed in a round-robin manner. For example: if you have specified 8 data files and 3 volumes – files 1,4,7 will go to vol 1; files 2,5,8 will go to vol 2; files 3,6 will go to vol 3.

Log File

  1. Initial size  – is specified in MB and log file with that size is created.
  2. Autogrowth – is specified in MB (fixed growth is preferred as opposed to a non-linear percentage based growth)

A default value of 64MB is provided so that the number of Virtual Log Files (VLFs) created initially is small and manageable, and appropriately sized so that unused log space can be reclaimed easily.

Enabling all these configuration settings as part of the standard setup experience allows one to deploy and run at peak scalability levels.

Lastly but important

  • Trace Flag 1118 reduces Shared Global Allocation Map (SGAM) contention.
  • Trace Flag 1117 strictly forces all data files within the filegroup to grow at the same time.

With SQL Server 2016, the functionality of trace flags 1117 and 1118 have been enabled by default.
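On versions before 2016, or to adjust an already installed instance, the same layout can be approximated manually; the file path and sizes below are placeholder values, not the setup defaults:

```sql
-- Resize the primary tempdb data file and add a second, equally sized file.
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = 1024MB, FILEGROWTH = 64MB);

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
          FILENAME = 'D:\SQLData\tempdb_mssql_2.ndf',  -- placeholder path
          SIZE = 1024MB, FILEGROWTH = 64MB);
```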

Posted in log, sql, SQL Server, Uncategorized

SQL Server Restoring backups from UNC Network paths and Recovery Time Objectives

This might sound very basic, but I still come across colleagues who are confused on this topic: SQL Server Management Studio does not let you browse to network drives when restoring backups.

That is why, if the quick workarounds are not known, people with large databases in the GBs or TBs end up doing this:

  1. Copy large backups onto local drives
  2. Start the restoration process
  3. Delete the large backup file

This is time consuming and amounts to doing the same work twice: the contents are copied first as the backup file and then again as the data and log files, and the space taken by the backup copy may not be released until the file is eventually deleted, which can be a long time later.

This can seriously impact your Recovery Time Objectives due to potential errors and restarts of the transfer. The sad thing is that the workaround has existed since the SQL 2000 days.

The solution I mostly use, wherever the network is reasonably reliable, is to restore such backups over the network: simply type the full UNC path of the backup file, including the extension, into the file-open dialogue, and once the file is selected proceed as normal.


You can do the same with the file parameter of the RESTORE command and achieve the same result, saving a lot of valuable time in the case of large backup files.
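A sketch of the RESTORE variant; the server, share, and database names are placeholders:

```sql
-- Restore directly from a UNC path; no local copy of the backup is needed.
RESTORE DATABASE [MyDatabase]
FROM DISK = N'\\BackupServer\Backups\MyDatabase.bak'
WITH RECOVERY, STATS = 10;
```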

For a more complicated solution see this blog post (Restore Database from a UNC path – Why SSMS doesn't show network drives/paths in the restore wizard), which shows you how to force Management Studio to browse UNC paths. Given its complexity, whether it is worth it is your decision.

Until next time take care.! 🙂



Posted in backup, sql, SQL Server

Usage data for SSRS reports to decide on reports to consider for re-engineering & retiring

You have an SSRS instance, on test or production, and you want to find out which reports are most / least used.

Luckily, SQL Server maintains SSRS usage information in its execution logs by default. You can only go back as far as 60 days, but this is quite useful in many cases, and the limit can be extended.

To access this information you would traditionally use the system view named ExecutionLog (kept for backward compatibility), ExecutionLog2 for SQL Server 2008, and ExecutionLog3 for SQL Server 2008 R2. Each subsequent version added more information and renamed a few columns.

For detailed documentation check MSDN. To extend the 60-day default limit, adjust the ExecutionLogDaysKept property (the number of days to keep report execution information in the execution log; valid values are 0 through 2,147,483,647, and if the value is 0, entries are not deleted from the execution log table).

An example of TSQL to extend this limit is like this,


EXEC SetConfigurationInfo @Name=N'ExecutionLogDaysKept',@Value=N'90'



Below are some example queries to play around with.

SELECT ex.UserName, ex.Format, ex.TimeStart, cat.Name, ex.Parameters, CONVERT(nvarchar(10), ex.TimeStart, 101) AS rundate
FROM ExecutionLog AS ex INNER JOIN
Catalog AS cat ON ex.ReportID = cat.ItemID
ORDER BY ex.TimeStart DESC

select * from ExecutionLog3
select * from Catalog

select cat.path + '/' + cat.name Report, max(TimeStart) LastRunTime, count(reportid) NoOfExec
from ExecutionLog3 ex, Catalog cat
where ex.ReportID = cat.ItemID
group by cat.path, cat.name
order by NoOfExec desc
-- or, to sort by most recent execution: order by max(TimeStart) desc

select name, CreationDate from Catalog cat
where cat.itemid not in (select reportid from ExecutionLog ex)
order by CreationDate desc

-- equivalent form using NOT EXISTS
select name, CreationDate from Catalog cat
where not exists (select 1 from ExecutionLog ex
where ex.ReportID = cat.ItemID)

Posted in sql, SQL Server, ssrs, tsql