The server principal owns one or more event notification(s) and cannot be dropped.


I was doing some testing with event notifications and noticed that my AD login was created on the SQL Server (normally, my access is through an AD group and not an individual user account defined on the server). When I attempted to delete the login, I received the following error:

"The server principal owns one or more event notification(s) and cannot be dropped."


In my opinion, having objects created and owned by an end user isn’t great and should be avoided where possible. For example, if the DBA who originally created the object leaves the organisation, the above error is returned when their account is removed. Instead, have such objects owned by a generic principal that isn’t likely to cause ownership issues on the server.

Usually, a quick ALTER AUTHORIZATION does the trick (e.g. for mirroring endpoints), but unfortunately it doesn’t work on event notifications, as described in the “Remarks” section of the following article:

“Ownership of the following entities cannot be transferred: linked servers, statistics, constraints, rules, defaults, triggers, Service Broker queues, credentials, partition functions, partition schemes, database master keys, service master key, and event notifications.”
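
For comparison, transferring ownership of something like a mirroring endpoint really is a one-liner (the endpoint name here is illustrative):

ALTER AUTHORIZATION ON ENDPOINT::[Mirroring] TO [sa];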

The following example will create an event notification for the EVENTLOG event type (the notification name is illustrative):

USE [mydb]
GO

CREATE EVENT NOTIFICATION [EventLogNotification] -- Illustrative name.
ON SERVER
FOR EVENTLOG
TO SERVICE 'ServiceBroker_Service', 'CURRENT DATABASE';

Notice the principal_id belongs to the user who executed the example above:
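
The same check can be done in T-SQL against the catalog view:

SELECT name, principal_id, SUSER_NAME(principal_id) AS owner_name
FROM sys.server_event_notifications;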



In order to change this to ‘sa’ for example, simply drop the event notification and recreate it with an additional step invoking the EXECUTE AS LOGIN syntax:

USE [mydb]
GO

DROP EVENT NOTIFICATION [EventLogNotification] ON SERVER; -- Illustrative name.
GO

EXECUTE AS LOGIN = 'sa';
CREATE EVENT NOTIFICATION [EventLogNotification]
ON SERVER
FOR EVENTLOG
TO SERVICE 'ServiceBroker_Service', 'CURRENT DATABASE';
REVERT;


What are your thoughts on end users owning objects such as event notifications or mirroring endpoints? Does your organisation have any rules around this?


Get Active Transaction Details

Ever wanted to get more detail on active transactions on your SQL Server? Well then, this nifty piece of T-SQL will provide the following information:

  • The size of the transaction log usage and the number of log records generated for a particular transaction. So, for example, if you ever need to roll back a transaction, you’ll have a fairly good idea of the effort required, e.g. whether the transaction log usage is a few hundred MB or a few GB. This is vital information when you’re under pressure to decide whether or not to kill a process.
  • The database the transactions are occurring in e.g. it gives you an idea of tempdb usage of a transaction – the T-SQL below could potentially give you an idea of who or what is filling up your tempdb if you’re quick enough.
  • The T-SQL that is executing and the plan handle so you can potentially get query stats later.
  • The elapsed time of the transaction and its current state and transaction start time (for SQL 2008+)
  • The last request start time, end time, reads, writes, cpu_time – i.e. is it working?
  • The program name, server host name it’s originating from and the process ID.

-- Get Active Transaction Information
-- @niftysql /
-- Tip: use in conjunction with

SELECT es.session_id
 ,tst.open_transaction_count -- Comment this for SQL 2008
 ,DB_NAME(tdt.database_id) AS [database_name]
 ,tdt.database_transaction_begin_time -- Comment this for SQL 2005.
 ,CASE tdt.database_transaction_type
      WHEN 1 THEN 'Read/write transaction'
      WHEN 2 THEN 'Read-only transaction'
      WHEN 3 THEN 'System transaction'
 END AS [database_transaction_type]
 ,CASE tdt.database_transaction_state
      WHEN 1 THEN 'The transaction has not been initialized.'
      WHEN 3 THEN 'The transaction has been initialized but has not generated any log records.'
      WHEN 4 THEN 'The transaction has generated log records.'
      WHEN 5 THEN 'The transaction has been prepared.'
      WHEN 10 THEN 'The transaction has been committed.'
      WHEN 11 THEN 'The transaction has been rolled back.'
      WHEN 12 THEN 'The transaction is being committed. In this state the log record is being generated, but it has not been materialized or persisted.'
 END AS [database_transaction_state]
 ,tdt.database_transaction_log_bytes_used -- Divide by 1024 for KB, again for MB, again for GB.
 ,tdt.database_transaction_log_record_count
 ,er.total_elapsed_time
 ,er.cpu_time
 ,er.reads
 ,er.writes
 ,er.plan_handle
 ,est.text AS [sql_text]
 ,es.last_request_start_time
 ,es.last_request_end_time
 ,es.program_name
 ,es.host_name
 ,es.host_process_id
FROM sys.dm_tran_session_transactions tst
JOIN sys.dm_tran_database_transactions tdt
ON tst.transaction_id = tdt.transaction_id
JOIN sys.dm_exec_sessions es
ON tst.session_id = es.session_id
JOIN sys.dm_exec_requests er
ON tst.session_id = er.session_id
CROSS APPLY sys.dm_exec_sql_text (er.sql_handle) est
WHERE tdt.database_transaction_begin_time IS NOT NULL
-- AND DB_NAME(tdt.database_id) = 'tempdb' -- E.g. filter only by tempdb.
-- AND tdt.database_transaction_log_bytes_used > 100000000 -- E.g. filter only transaction log usage > 100MB.
-- ORDER BY tdt.database_transaction_log_bytes_used DESC -- Order by transactions using the most log.

SSAS – Seeing double

Background: A customer called recently saying that they were seeing some strange things happening on a server with two SSAS 2012 tabular instances (Default and TEST). When they connected to either instance through SSMS they found dropping cubes in one instance also dropped the cube in the other. Furthermore, a cube deployed to the Default instance showed up in TEST. Checking the configuration of the instances under SSMS showed that both pointed to TEST.

Investigation: The following was double checked to ensure there weren’t any incorrect configurations:

  • Registry entries (HKLM\Software\Microsoft\Microsoft SQL Server\MSAS12.MSSQLSERVER\Setup and HKLM\Software\Microsoft\Microsoft SQL Server\MSAS12.TEST\Setup)
    • What you’re looking for here is to make sure the DataDir, SQLBinRoot, SQLPath etc., are different in each instance.
  • SSAS Services (start > run > services.msc)
    • Check to make sure the “Path to executable” for each instance is different and each service is pointing to the correct config path.
  • msmdsrv.ini file found in C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Config and C:\Program Files\Microsoft SQL Server\MSAS12.TEST\OLAP\Config
    • Check the DataDir, etc., are set correctly.
  • msmdredir.ini file found in C:\Program Files (x86)\Microsoft SQL Server\90\Shared\ASConfig
    • Make sure under <Instances> you have the TEST instance defined. In my case the entry looked something like this (structure indicative):
    • <Instances>
        <Instance>
          <Name>TEST</Name>
          <Port>2383</Port>
        </Instance>
      </Instances>
Wait a second! It turns out msmdredir.ini had the TEST port set to 2383 – this is the default instance port. With some further digging, I found the default instance was configured to listen on 2384. To test if this was causing our issue, I reset both SSAS instance port configurations through SSMS to 0 – dynamic – and restarted both instances. Bingo – this was it.

Solution: Implement either of the below; both will require you to restart the instances:

  1. Let SQL Server manage it by setting both port configurations to 0 (through SSMS or by updating msmdsrv.ini directly). To do it through SSMS, you’ll need to force the connection to each instance by using the port syntax in the connect dialog box (e.g. servername:2383 and servername:2384). Then right-click the instance > Properties and, under General, update the “Port” configuration.
  2. Set the port manually as below for each instance:
  • MSSQLSERVER (Default), Port: 2383
  • TEST, Port: 2384
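
Whichever option you choose, the value ultimately lives in each instance’s msmdsrv.ini. The relevant element looks something like this (snippet indicative, other settings omitted):

<ConfigurationSettings>
  <Port>2384</Port> <!-- 0 = dynamic; a fixed value pins the port. -->
</ConfigurationSettings>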

Double check msmdredir.ini and you should find the <Instances> section updated with the new port after the restart. Don’t forget to double check your applications and update any connection strings.

I was also able to reproduce this on SQL Server 2016 instances, so be careful when defining static SSAS instance ports, especially when configuring a named instance to use the default port and later adding a default instance.


Scenario: The SQL Server process is showing high CPU or I/O utilization in Task Manager or Resource Monitor, and you need to know which session is causing it and what it’s executing.

Solution: Run this nifty piece of T-SQL and choose the order by.

-- Get top CPU / I/O spid
-- @niftysql /

DECLARE @t1 TABLE (
 spid int
 ,cpu bigint
 ,physical_io bigint
);

INSERT INTO @t1 (spid, cpu, physical_io)
SELECT spid, cpu, physical_io
FROM sys.sysprocesses;

WAITFOR DELAY '00:00:10'; -- Found this to be a good start.

SELECT p.spid
 ,p.cpu - t.cpu AS cpu_diff
 ,p.physical_io - t.physical_io AS physical_io_diff
 ,st.text
FROM sys.sysprocesses p
CROSS APPLY sys.dm_exec_sql_text (p.sql_handle) st
JOIN @t1 t ON p.spid = t.spid
AND p.cpu - t.cpu > 10
--AND p.physical_io - t.physical_io > 10 -- Uncomment this if you're interested in I/O.
ORDER BY cpu_diff DESC;
--ORDER BY physical_io_diff DESC; -- Great for seeing which spid is hammering the disk.

Large msdb data file due to excessive Service Broker conversations

Note to readers: This is my first blog post – apologies if it doesn’t read well. The website still needs a lot of work – but I guess I have to start somewhere, and feedback is appreciated. I’m going to try to find my style and see how it goes. The blog’s main purpose is to help remind me of the quirks I’ve come across and the ways I’ve solved them – trying to keep it short and sweet. I also hope it helps you, if you ever stumble across it.

Note to self:

Background & Investigation:

Something I came across today at a client site on one of their BizTalk development servers: a sizable msdb data file, with space running out quickly on the drive, caused by overzealous SSIS logging configured by one of the developers. The “Disk Usage by Top Tables” report gives you the necessary details to see where to look first [but not everything]. Also, checking the error logs, I found msdb had actually run out of space a few times already:

Error: 1105, Severity: 17, State: 2.
Could not allocate space for object 'dbo.sysssislog'.'PK__sysssisl__3213E83F33EC335A' in database 'msdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
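
Before truncating anything, it’s worth confirming how much of msdb the SSIS log table actually accounts for:

EXEC msdb.dbo.sp_spaceused 'dbo.sysssislog';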


So, we backed up msdb and then my colleague truncated sysssislog, but it wasn’t enough. Nor was the usual shrinking of the data file, as that can only go so far and isn’t the best thing to do anyway. Also, the “Disk Usage” report showed 99% utilization while the “Disk Usage by Top Tables” report showed all tables at small sizes – what gives?

Some further digging around (aka Googling) turned up a post that pointed us in the right direction: there were a lot of conversations in [msdb].[dbo].[WMIEventProviderNotificationQueue].

453,538 to be exact, under multiple conversation handles – all related to the deadlock event notifications that were never cleaned up during testing.
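
The scale of the backlog can also be seen from the conversation endpoints side, run against msdb:

SELECT state_desc, COUNT(*) AS conversation_count
FROM msdb.sys.conversation_endpoints
GROUP BY state_desc;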

At this stage, since freeing up space was the priority, one way to clear it is to run the following, which gets you a list of conversation handles and the number of queued messages under each:

SELECT conversation_handle, COUNT(conversation_handle) AS message_count
FROM [msdb].[dbo].[WMIEventProviderNotificationQueue] WITH (NOLOCK)
GROUP BY conversation_handle;

All you need to do then is make sure you have sufficient space on your log drive and execute the following T-SQL (for example, if the conversation handle was : ‘F71BE6C4-C864-E611-80D3-005056A547EE’):

DECLARE @handle uniqueidentifier;
SET @handle = 'F71BE6C4-C864-E611-80D3-005056A547EE';
END CONVERSATION @handle WITH CLEANUP;
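
With over 450,000 conversations, ending them one handle at a time isn’t practical. A cursor along these lines batches the cleanup (a sketch only; test on a non-production box first, as WITH CLEANUP discards the conversations outright):

USE msdb; -- The conversations live in msdb.
GO
DECLARE @conv_handle uniqueidentifier;
DECLARE conv CURSOR LOCAL FAST_FORWARD FOR
    SELECT conversation_handle
    FROM sys.conversation_endpoints;
OPEN conv;
FETCH NEXT FROM conv INTO @conv_handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @conv_handle WITH CLEANUP;
    FETCH NEXT FROM conv INTO @conv_handle;
END
CLOSE conv;
DEALLOCATE conv;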


OK, now it’s done, do the shrink(s), rebuild indexes and call it a night :).

Edit: 24/08/2016 – formatted code!