Stored Procedure to determine when a job is done

This is a stored procedure that I have found useful in a number of circumstances. It came into being because there are times that I need to run a job via T-SQL and wait for it to complete before continuing the script. The problem is that sp_start_job does just what it says: it starts the job and then immediately continues. There is no option to wait for it to finish.

I needed some way to pause script execution pending the completion (hopefully successfully) of the job.

One important note: this procedure uses the undocumented xp_sqlagent_enum_jobs system procedure. While this has been around for ages, it is unsupported. I get why using it may bother some, but it is the only way that I know of to reliably determine the current run status of a job.

The procedure accepts five parameters. The first is a string @job_name containing the name of the job to be monitored. This is the only mandatory parameter. The second parameter is an integer @job_timeout_minutes, which controls how long, in minutes, the procedure will wait for the job to complete. If that threshold is exceeded, the proc will raise an error. It defaults to 60 minutes. If @job_timeout_minutes is 0, the procedure will run until the job exits.

The remaining three parameters are all outputs from the procedure. @job_failed is a boolean (bit) value that is true (1) if the job completed but in a failed state. The last two, @job_start_time and @job_complete_time, indicate the beginning and end times for the job execution.

The procedure checks the job status every 5 seconds, so there may be a brief lag between the actual completion of the job and when the procedure returns. For me, this short delay has never been an issue.

Here is some sample code for calling the procedure:

exec msdb.dbo.sp_start_job @job_name = 'Name of job to run';

declare @job_failed bit,
    @job_start_time datetime,
    @job_complete_time datetime;

exec dbo.usp_WaitForJobToFinish @job_name = 'Name of job to run',
    @job_timeout_minutes = 60,
    @job_failed = @job_failed output,
    @job_start_time = @job_start_time output,
    @job_complete_time = @job_complete_time output;

if @job_failed = 1
begin
       raiserror ('Job failed', 16, 1);
end

And finally, here is the stored procedure definition.

create procedure dbo.usp_WaitForJobToFinish
(
    @job_name sysname,
    @job_timeout_minutes int = 60,
    @job_failed bit = 0 output,
    @job_start_time datetime = null output,
    @job_complete_time datetime = null output
)
AS
set nocount on;

-- Brief wait to let job system stabilize.
waitfor delay '0:00:05';
 
if not exists (select * from msdb.dbo.sysjobs j where j.name = @job_name)
begin
	return;
end

declare @timeoutTime datetime = dateadd(minute, @job_timeout_minutes, getdate());

DECLARE @xp_results TABLE
(
                     job_id                UNIQUEIDENTIFIER NOT NULL,
                     last_run_date         INT              NOT NULL,
                     last_run_time         INT              NOT NULL,
                     next_run_date         INT              NOT NULL,
                     next_run_time         INT              NOT NULL,
                     next_run_schedule_id  INT              NOT NULL,
                     requested_to_run      INT              NOT NULL,
                     request_source        INT              NOT NULL,
                     request_source_id     sysname          COLLATE DATABASE_DEFAULT NULL,
                     running               INT              NOT NULL,
                     current_step          INT              NOT NULL,
                     current_retry_attempt INT              NOT NULL,
                     job_state             INT              NOT NULL
);

/* Values for "job_state":
              0 = Not idle or suspended
              1 = Executing
              2 = Waiting For Thread
              3 = Between Retries
              4 = Idle
              5 = Suspended
              [6 = WaitingForStepToFinish]
              7 = PerformingCompletionActions
*/

DECLARE @can_see_all_running_jobs INT = 1;
DECLARE @job_owner sysname = SUSER_SNAME();
DECLARE @job_state INT = 0;

WHILE @job_state != 4
BEGIN
       DELETE @xp_results;

       INSERT INTO @xp_results
       EXECUTE master.dbo.xp_sqlagent_enum_jobs @can_see_all_running_jobs, @job_owner;

       SELECT @job_state = x.job_state
       FROM @xp_results x
       JOIN msdb.dbo.sysjobs j ON j.job_id = x.job_id
       WHERE j.name = @job_name;

       IF GETDATE() > @timeoutTime
       AND @job_timeout_minutes > 0
       BEGIN
              RAISERROR('Timeout expired', 16, 1);
              RETURN;
       END

       IF @job_state != 4
       BEGIN
              WAITFOR DELAY '0:00:05';
       END
END

SELECT TOP (1) @job_failed = CASE WHEN jh.run_status = 0 /* Failed */ THEN 1 ELSE 0 END,
	@job_start_time = msdb.dbo.agent_datetime(jh.run_date, jh.run_time),
	-- run_duration is stored as HHMMSS, so convert it to seconds before adding
	@job_complete_time = dateadd(second,
		(jh.run_duration / 10000) * 3600 + ((jh.run_duration / 100) % 100) * 60 + (jh.run_duration % 100),
		msdb.dbo.agent_datetime(jh.run_date, jh.run_time))
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory jh ON jh.job_id = j.job_id
WHERE j.name = @job_name
and jh.step_id = 0
ORDER BY jh.run_date DESC, jh.run_time DESC;
GO

Update 3/23/2020: I addressed a couple of issues in the original procedure. First, if this procedure is executed immediately after the call to sp_start_job, the first call to xp_sqlagent_enum_jobs sometimes still shows the job with an idle status, causing the procedure to exit prematurely. I have therefore added a 5-second delay at the beginning of execution to allow the job system to stabilize.

The second fix was to the final select statement. I added jh.step_id = 0 to the WHERE clause so that the overall job outcome is retrieved, not just that of the last step to run. Normally these will be one and the same, but I added the additional predicate for correctness.

Update 3/24/2020: Based on Mario’s suggestion in the comments, I have made another change so that if @job_timeout_minutes = 0, the procedure will wait indefinitely until the job completes.

Testing Scalar UDF Performance on SQL Server 2019

One of the more compelling features (so far) in SQL Server 2019 (starting with the recently released CTP 2.1) is inlining of scalar user-defined functions. This feature has the potential to significantly improve throughput for queries that use scalar UDFs, which have long been a performance bottleneck for a variety of reasons.

In one of my sessions, Set Me Up: How to Think in Sets, I discuss a variety of performance-inhibiting query constructs, including scalar UDFs. I thought it would be interesting to take the simple scalar function that I use in the demo and see what kind of difference that scalar inlining might make.

First, I restored the CorpDB database that I use in the session to my SQL Server 2019 CTP 2.1 instance and initially set the compatibility level to 140. I also ran script 001 from the demo to create the needed database tables (no need to create the CLR objects for this test). I then ran script 030 to execute the scalar UDF test. In a nutshell, this script

  • creates a UDF
  • runs a query that calls the UDF about 13,000 times, capturing the time required to do so
  • repeats this test five times
  • discards the fastest and slowest tests
  • reports the average time for the remaining three tests

The UDF definition is

create function dbo.GetLine1ProductId (@OrderId int)
returns int
as
begin
	declare @ProductId int;

	select top 1 @ProductId = od.ProductId
	from dbo.OrderDetail od
	where od.OrderId = @OrderId
	order by od.OrderDetailId;

	return @ProductId;
end

The UDF is called in the traditional fashion.

select oh.OrderId,
	oh.OrderDate,
	oh.CustomerId,
	dbo.GetLine1ProductId(oh.OrderId) Line1ProductId
from dbo.OrderHeader oh
where oh.OrderDate >= '2016-01-01';

At compatibility level 140 (meaning that SQL Server is not able to inline the function), the test takes 373.530 seconds to operate over the 13,147 rows.
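
For reference, the compatibility-level switches themselves are one-line ALTER DATABASE statements along these lines (CorpDB being the demo database):

alter database CorpDB set compatibility_level = 140;  -- scalar UDF inlining not available
alter database CorpDB set compatibility_level = 150;  -- scalar UDF inlining possible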

Next, I changed the compatibility level to 150 so that inlining is possible. First, I checked to make sure that SQL Server agreed that it could be done.

select o.name, sm.is_inlineable
from CorpDB.sys.sql_modules sm
join CorpDB.sys.objects o on sm.object_id = o.object_id
where o.name = 'GetLine1ProductId';

Since is_inlineable returns a value of 1, it seems that SQL Server agrees that the function qualifies to be inlined.

At compatibility level 150, this query now takes 3.572 seconds to complete.

This represents about a 100-fold performance boost with absolutely no code changes!

The query plan changes are interesting as well. The non-inlined query’s estimated plan shows both the overall query plan as well as the query plan for the UDF. Nothing surprising here. The overall query is dead simple, hiding the details of the scalar function.

The query for the UDF also contains no particular surprises.

With the compatibility level moved to 150 and with inlining enabled, SQL Server is able to roll the UDF operations into a single query plan.

The UDF is represented by the lower branch in this case. This is a dead-simple query in the UDF, but even so, it gets expanded out in a non-trivial manner. I’m still trying to sort out what is happening in this branch of the query plan, but I think (though I’m far from sure) that it’s essentially pre-computing the “TOP 1” values for all of the relevant records. (This is similar to what I do in script 022, but instead of using a temp table, the optimizer utilizes an eager spool.)
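
For a rough idea of what that pre-computation looks like when written by hand, here is a sketch of an equivalent set-based rewrite (this is not exactly what script 022 does; it just runs against the same tables):

select oh.OrderId,
	oh.OrderDate,
	oh.CustomerId,
	od1.ProductId Line1ProductId
from dbo.OrderHeader oh
outer apply (select top (1) od.ProductId
	from dbo.OrderDetail od
	where od.OrderId = oh.OrderId
	order by od.OrderDetailId) od1
where oh.OrderDate >= '2016-01-01';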

Regardless, it’s awesome to note that this query goes parallel. One of the many performance implications related to scalar UDFs is that they have traditionally inhibited parallelism, but no longer!

In any case, this is a promising new world for scalar user-defined functions, and I am certainly eager to see where things go from here.

SELECTing from a VALUES clause

This post falls into the category of something that I am always forgetting how to do, so this post is a reminder to self.

It is possible to create a virtual table in SQL Server using just the VALUES clause. The secret sauce is proper placement of parentheses (which is where I always go wrong).

select *
from
(
	values
	(1, 'Virgil', 'Davila', '18 E Bonnys Rd', 'Natchez', 'MS'),
	(2, 'Jill', 'White', '675 W Beadonhall Rd', 'Providence', 'RI'),
	(3, 'David', 'Walden', '12663 NE Sevenside St', 'Bloomington', 'IN')
) tbl (CustomerID, FirstName, LastName, Address, City, State);

Now just wrap that up in a CTE or subquery and it can be used pretty much like any SQL table, such as the source for an INSERT or UPDATE statement.
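
For example, here is a sketch that wraps the same VALUES list in a CTE and uses it as the source for an UPDATE (it assumes a dbo.Customer table with matching columns):

with src (CustomerID, FirstName, LastName, Address, City, State) as
(
	select *
	from
	(
		values
		(1, 'Virgil', 'Davila', '18 E Bonnys Rd', 'Natchez', 'MS'),
		(2, 'Jill', 'White', '675 W Beadonhall Rd', 'Providence', 'RI')
	) tbl (CustomerID, FirstName, LastName, Address, City, State)
)
update c
set c.Address = src.Address,
	c.City = src.City,
	c.State = src.State
from dbo.Customer c
join src on src.CustomerID = c.CustomerID;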

One Year Gone

It has been exactly one year since I published a post on this blog. I never intended to let things go that long, but time has a way of slipping past, and things we planned on doing never happen.

In that one year’s time I have attended, if my count is correct, fourteen technical conferences, and I have spoken at thirteen of those events. Toward the end of that 12-month period, I was called on a number of times to speak more than once at an event, so I wound up presenting a total of 20 times.

I have been surprised at how much time can go into pulling together a presentation. Some topics I have chosen have been familiar, old friends to me — aspects of SQL Server with which I have many years of experience. Even with that kind of subject matter, I have found myself going into deep research mode, wanting to familiarize myself with the material at a far deeper level than I will be able to approach in the session itself.

I also chose a couple of topics where I knew relatively little of the subject matter going in. It is said that if you want to learn something well, one approach is to submit to speak on it. That is most certainly true. Here, even more so than with the familiar, comfortable material, I found myself often asking “what if someone asks about (fill in the blank).” It is easy to go deep down the rabbit hole of “what ifs” — the result being a lot of preparation and research.

The time required to put together a slide deck and demo scripts has been surprising to me as well, even after doing it several times. Even for a session that I have presented before, I always refine the session each time, often in non-trivial ways.

This past year has been exhausting. I haven’t kept track of how many hours I spent on presentations, but they have been considerable. I did keep track of the distance I drove to attend these 14 events, as I traveled by car to them all: 17,408 miles in all. It has been a lot of emotional investment. It’s time to slow down.

I say none of this by way of complaint. I have quite enjoyed all of it. At the same time, I am glad to be taking a break for a few months and to clear my head for a bit. And I am hoping to return to blogging. I have nothing specific in the wings at this point, though I do have a number of general ideas about what to write. In some ways, this post is my attempt to kick myself in the rear to get back at the blogging thing.

Yes, I intend to keep presenting, but not at the same pace. That said, even as I write this, I’m spending time putting together a new session that I’ll start submitting soon. One of my goals now is to improve my presentation style, as I realize I could use quite a bit of work in that area.

Three years ago, I wasn’t even contemplating the idea of presenting technical content. The very notion of standing in front of a roomful of strangers runs nearly 100% counter to my personality. It’s amazing to me how quickly I adapted. The first time I did it, I was absolutely terrified; the second time it scarcely bothered me.

I highly recommend speaking to everyone. I truly believe that every single person out there, no matter how much or how little experience they have, has something to bring to the table. Everyone can teach someone else something new.

Halloween Protection

I thought it would be appropriate today, on the 40th anniversary of the discovery of the problem, to address the issue of Halloween protection. Describing the problem is nothing new, of course, but nonetheless, I thought it would be interesting to illustrate it in my own words.

Reportedly, the Halloween problem was identified on October 31, 1976 by three researchers, and the name stuck. Imagine that we have a table of employees with salary information. We’ll create such a table and make the employee name the primary key and the clustering key, and then put a nonclustered index on salary. Here is such a table with some random data.

create table Employee
(
	FirstName nvarchar(40) not null,
	Salary money not null,
	constraint pk_Employee primary key clustered (FirstName)
);
 
create nonclustered index ix_Employee__Salary on Employee (Salary);
 
insert Employee (FirstName, Salary)
values ('Hector', 22136), ('Hannah', 21394), ('Frank', 28990), ('Eleanor', 29832), ('Maria', 28843),
       ('Maxine', 23603), ('Sylvia', 26332), ('Jeremy', 20230), ('Michael', 27769), ('Dean', 24496);

The task at hand is to give all employees with a salary of less than 25,000 a 10% raise. This can be accomplished with a simple query.

update Employee
set Salary = Salary * 1.10
where Salary < 25000.00;

Now imagine that the SQL Server optimizer decides to use the nonclustered index to identify the rows to update. The index initially contains this data (bearing in mind that the index implicitly contains the clustering key).

[Figure: initial contents of the nonclustered index on Salary]

Without Halloween protection, we can imagine that SQL walks through the rows in the index and makes the updates. So the Jeremy row is updated first, and the new salary is 22,253.

[Figure: the index after the Jeremy row is updated]

However, the index is keyed by Salary and should be sorted on that column, so SQL Server will need to move the row in the index to the appropriate location.

[Figure: the index after the Jeremy row is moved to its new position]

Next, SQL updates the Hannah row and moves it within the index.

[Figure: the index after the Hannah row is updated and moved]

And then for Hector:

[Figure: the index after the Hector row is updated and moved]

However, at this point, the next row is Jeremy again. So the database updates the Jeremy row for a second time.

[Figure: the index after the Jeremy row is updated a second time and moved]

Hannah again.

[Figure: the index after the Hannah row is updated a second time and moved]

Then Maxine.

[Figure: the index after the Maxine row is updated and moved]

Finally Hector for a second time, followed by Jeremy for a third time, and then Dean.

[Figure: the index after the Hector, Jeremy, and Dean updates and moves]

At this point the next row is for Hannah again. However, her salary is greater than or equal to 25,000 and therefore doesn’t qualify for update based on the query predicate. Since none of the remaining rows meet the WHERE clause condition, processing stops and query execution is complete.

That, in a nutshell, is the Halloween problem: some rows get processed multiple times because the rows are physically moved within the index being used to drive the query. For comparison, the (incorrect) results of the processing described above are on the left, and the correct results are on the right.

[Figure: incorrect results (left) and correct results (right)]

In this particular case, the process results in every employee having a salary of at least 25,000. However, imagine if the query didn’t have the WHERE clause but still chose to use the Salary index to process. The result would be an infinite loop where each employee’s salary would be repeatedly increased by 10% until an overflow condition occurred.

So how does SQL Server protect against the Halloween problem? There are a variety of mechanisms in place for this very purpose.

  • SQL Server may choose an alternate index. In fact, this is precisely what SQL will do in our example by reading from the clustered index to drive the updates. Because the clustered index isn’t sorted by Salary in any way, updates during the processing phase don’t affect the row location and the problem does not arise. This separation of the read and write portion avoids the problem.

    [Figure: query plan reading from the clustered index]

  • But what if scanning the clustered index is a relatively expensive operation? When I load 1,000,000 rows into the Employee table and only 5 employees have salaries below 25,000, SQL Server will naturally favor the nonclustered index.

    [Figure: query plan using the nonclustered index, with a Sort operator]

    Note particularly the highlighted Sort operator. Sort is a blocking operation; that is, the entire input set must be read (and then sorted) before any output is generated. Once again, this separates the read and write phases of query processing.

  • Suppose that the Salary index is the only means of accessing the data. For instance, if I drop the nonclustered Salary index and the primary key on the table, and then create a clustered index on Salary, SQL Server is forced to use the Salary index (a DDL sketch for this setup follows the list below). In this case, SQL Server will insert an eager spool into the plan, once again separating the reading and writing.

    [Figure: query plan with an eager spool]

  • Memory-optimized (Hekaton) tables avoid the Halloween problem by their very nature. Rows in a memory-optimized table are read-only, and “updates” to a row cause the current version of the row to be invalidated and a new version of the row to be written. Once again, read and write separation happens, in this case organically.
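
Here is a rough sketch of the DDL for that third scenario, reusing the object names from the script above:

drop index ix_Employee__Salary on Employee;
alter table Employee drop constraint pk_Employee;
create clustered index ix_Employee__Salary on Employee (Salary);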

I don’t intend this to be a comprehensive list of protections in SQL Server, but it covers the most common scenarios.

To me, the key takeaway here is that Halloween protection is an integral and crucial aspect of query processing that the optimizer simply must take into account. This most certainly affects the plan generation process and likely adds overhead to the plan to ensure correct results.

Alerting on Missed Scheduled SQL Jobs, Part 3

The first two parts of this series addressed the general approach that I use in an SSIS script task to discover and alert on missed SQL Agent jobs. With apologies for the delay in producing this final post in the series, here I bring these approaches together and present the complete package.

To create the package, start with an empty SSIS package and add a data flow task. In the data flow, add the following components.

1. An OLE DB Source. Point it to a connection manager with the instance to be monitored and a SQL command containing this script:

set nocount on
declare @xp_results table
(
			job_id                UNIQUEIDENTIFIER NOT NULL,
			last_run_date         INT              NOT NULL,
			last_run_time         INT              NOT NULL,
			next_run_date         INT              NOT NULL,
			next_run_time         INT              NOT NULL,
			next_run_schedule_id  INT              NOT NULL,
			requested_to_run      INT              NOT NULL,
			request_source        INT              NOT NULL,
			request_source_id     sysname          COLLATE database_default NULL,
			running               INT              NOT NULL,
			current_step          INT              NOT NULL,
			current_retry_attempt INT              NOT NULL,
			job_state             INT              NOT NULL
);
 
/* Values for "job_state":
		0 = Not idle or suspended
		1 = Executing
		2 = Waiting For Thread
		3 = Between Retries
		4 = Idle
		5 = Suspended
		[6 = WaitingForStepToFinish]
		7 = PerformingCompletionActions
*/
 
DECLARE @can_see_all_running_jobs INT = 1;
DECLARE @job_owner sysname = suser_sname();
 
INSERT INTO @xp_results
EXECUTE master.dbo.xp_sqlagent_enum_jobs @can_see_all_running_jobs, @job_owner;
 
select	@@SERVERNAME InstanceName,
		j.name as job_name,
		j.date_created,
		x.last_run_date,
		x.last_run_time,
		x.next_run_date,
		x.next_run_time,
		x.current_step,
		x.job_state,
		x.job_id,
		s.name schedule_name,
		s.freq_type,
		s.freq_interval,
		s.freq_subday_type,
		s.freq_subday_interval,
		s.freq_relative_interval,
		s.freq_recurrence_factor,
		s.active_start_date,
		s.active_end_date,
		s.active_start_time,
		s.active_end_time,
		@@servername ServerName
from	@xp_results x
inner join msdb.dbo.sysjobs j on x.job_id = j.job_id
inner join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
inner join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
where j.enabled = 1
and s.enabled = 1
and s.freq_type in (4, 8, 16, 32)
and x.job_state not in (1, 2, 3);

2. Add a sort transformation, with “job_id” as the sort key input column (in ascending order).

3. Add a script component. When prompted, add it as a transformation. On the “Input Columns” tab, select all columns. On the “Inputs and Outputs” tab, select “Output 0” and change SynchronousInputID to “None.”

Expand “Output 0” and select “Output columns.” Add the following as outputs:

  • InstanceName, Unicode string, length 256
  • JobId, unique identifier
  • JobName, Unicode string, length 256
  • ExpectedRunDate, database timestamp with precision
  • LastRunDate, database timestamp with precision

Return to the “Script” tab and “Edit Script.” At the bottom, find the Input0_ProcessInputRow method and delete it, replacing it with the contents of this ZIP file.

At the top of the script, expand the “Namespaces” section and add these statements:

using System.Collections.Generic;
using System.Linq;

Add the following declarations inside the ScriptMain class, such as immediately before the PreExecute method:

private bool _isFirstRow = true;
private RowData _previousRow = null;
private DateTime _previousJobRunDate;
private List<DateTime> _forecast = null;

4. Add a conditional split transformation.

In the first row (order = 1), set Output Name = ExpectedRunDateIsNull, Condition = ISNULL(ExpectedRunDate).

In the second row (order = 2), set Output Name = ExpectedRunDateInFuture, Condition = ExpectedRunDate >= DATEADD(“minute”,-30,GETDATE()).

Set the default output name = ExpectedRunDateInPast

5. Add an OLE DB Destination, linking to the ExpectedRunDateInPast output from the conditional split. In a database somewhere, create a table to collect the results.

create table MissedScheduleMonitorResults
(
	ResultId int not null identity(1,1),
	MonitorRunTime datetime2 not null default (sysdatetime()),
	InstanceName nvarchar(256) not null,
	JobId uniqueidentifier not null,
	JobName nvarchar(256) not null,
	LastRunDate datetime null,
	ExpectedRunDate datetime null,
	AlertSent bit not null default (0),
	constraint pk_MissedScheduleMonitorResults primary key clustered (ResultId)
);

Create a connection manager to this database, and in the OLE DB destination point to this connection manager and select the MissedScheduleMonitorResults table. Use the default mappings (where column names match).

The data flow task should look something like this. Yeah, no naming standards here.

[Figure: the completed data flow task]

That’s it! Now, you can create a job with a script that reads from the table where AlertSent = false, sends an appropriate message, and sets AlertSent to true.
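
Here is a rough sketch of what that alerting step might look like (it assumes Database Mail is configured; the profile name and recipient are placeholders to be replaced):

declare @body nvarchar(max) = N'';

select @body = @body + InstanceName + N': ' + JobName
	+ N' was expected to run at ' + convert(nvarchar(30), ExpectedRunDate, 120) + nchar(13) + nchar(10)
from dbo.MissedScheduleMonitorResults
where AlertSent = 0;

if @body <> N''
begin
	exec msdb.dbo.sp_send_dbmail
		@profile_name = N'DBA Mail Profile',   -- placeholder profile name
		@recipients = N'dba-team@example.com', -- placeholder recipient
		@subject = N'Missed SQL Agent job schedules',
		@body = @body;

	update dbo.MissedScheduleMonitorResults
	set AlertSent = 1
	where AlertSent = 0;
end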

Why TF3604?

I continue to be surprised at how few people in the SQL Server community “get” my username.

About a year ago, in mid-August 2015, I was trying to figure out a Twitter username to use as a professional account. On a lark, I searched to see if “tf3604” was available, figuring that surely someone had already appropriated it. To my surprise it was available, so I grabbed it.

A few weeks passed before it even occurred to me to wonder if “tf3604.com” were available. Surely, surely that was in use somewhere, maybe even for non-SQL Server purposes. But nope, it was still out there. Even then, I hesitated for a day or two before buying the domain. That was September 17, 2015.

So why TF3604?

There are many undocumented commands and features in SQL Server that require trace flag 3604 to be enabled in order to produce useful output. For instance, DBCC PAGE will run fine but appear to do nothing without TF3604 turned on; the trace flag tells SQL Server to route the output to the client, and in SSMS that output is displayed on the Messages tab.
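
For example, something along these lines (the database and page numbers are arbitrary; file 1, page 9 is the boot page) dumps a page’s contents to the Messages tab:

dbcc traceon(3604);
dbcc page('master', 1, 9, 3);  -- database, file id, page id, print option
dbcc traceoff(3604);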

Virtually all of these undocumented commands and features that pair with TF3604 deal with SQL Server internals.

I am fascinated with SQL Server internals. To me, the name TF3604 relates to shining a light on these internals.

So here we are, a year after picking this username on a whim. And I still love the name.

Alerting on Missed Scheduled SQL Jobs, Part 2 (Custom Aggregates in SSIS)

In the previous post in this series, I introduced the goal of finding missed scheduled jobs in SQL Server. The next step in the solution required what essentially amounted to a custom aggregate in SSIS. This just wasn’t something I’d had to deal with before, so I’ll address that part of the problem in this post.

To keep things simple, I will use an example that is decoupled from the problem at hand. In part 3 of this series, I will use these techniques to present the final missed schedules solution.

My contrived example here is to do a string concatenation aggregate on a customer table where I want all of my customers who live in the same city to be aggregated into the same record. My source query is:

select c.City, c.FirstName + ' ' + c.LastName Name
from dbo.Customer c
where c.State = 'KS'
order by c.City, c.LastName, c.FirstName;

The first few records from this query are:

[Figure: the first few rows returned by the source query]

Since we have only one customer in Abilene, I would expect for my output to contain a record with just one name for that city.

Abilene | Joe Wolf

Since there are multiple customers in Andover, the output should have this record:

Andover | Clara Anderson, Shirley Holt, Wanda Kemp, Brandon Rowe, Sandra Wilson

Here is how I do the aggregate. I start by creating a data flow task with three components.

[Figure: the data flow task with source, script component, and destination]

In the Source component, I connect to my database and give it the SQL listed above.

Next, I add a Script Component. SSIS prompts me to select the type of script component that this object will be:

[Figure: the script component type selection dialog]

This will be a transformation script. After I connect my source to the script component and open the script component, I get this dialog.

[Figure: the script transformation editor dialog]

On the Input Columns tab, I make sure that both City and Name are selected, and then I move to the Inputs and Outputs tab. I click on “Output 0.” Here is the secret sauce: I change the SynchronousInputId property to None. The default is that SSIS will have one output row per input row, but making this change allows me to control when an output row is generated. Finally, I expand “Output 0” and click on “Output Columns.” I press the Add Column button, rename it to City, and set the DataType property to “string [DT_STR].” The default length of 50 is OK for this situation. I then add another column, call it “CustomerNames” and make it a string as well and set the length to 8000.

I then go back to the Script tab and edit the script. I replace the Input0_ProcessInputRow method with the following.

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
	string currentCity = Row.City_IsNull ? null : Row.City;
	if (currentCity == _previousCity)
	{
		if (_customers != null)
		{
			_customers += ", ";
		}
		_customers += Row.Name_IsNull ? string.Empty : Row.Name;
	}
	else
	{
		if (_customers != null)
		{
			Output0Buffer.AddRow();
			Output0Buffer.City = _previousCity;
			Output0Buffer.CustomerNames = _customers;
		}
 
		_customers = Row.Name_IsNull ? string.Empty : Row.Name;
	}
 
	_previousCity = currentCity;
}
 
public override void Input0_ProcessInput(Input0Buffer Buffer)
{
	base.Input0_ProcessInput(Buffer);
	if (Buffer.EndOfRowset())
	{
		if (_customers != null)
		{
			Output0Buffer.AddRow();
			Output0Buffer.City = _previousCity;
			Output0Buffer.CustomerNames = _customers;
		}
	}
}
 
private string _previousCity = null;
private string _customers = null;

This code takes advantage of the fact that the input is sorted by City. It reads through the records one at a time, watching for changes to the City value; as it goes, it keeps track of the customers associated with the current city, and when the city changes, it writes a record to the output. The code in Input0_ProcessInput exists just to output the final row.

Finally, I create a destination in my data flow task. The destination can be anything we want; I just output to a table.

When I run the package and select from the output table, I get exactly what I expected.

[Figure: the aggregated results, one row per city]

Note that this is very similar to how the SQL Server Stream Aggregate operator processes data, and it is absolutely dependent on the input being sorted by the aggregation column. If the data is not sorted, we would have to take a very different approach.
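
As an aside, when the data lives in SQL Server anyway, roughly the same result can be produced directly in T-SQL on SQL Server 2017 and later with STRING_AGG; here is a sketch against the same dbo.Customer table:

select c.City,
	string_agg(c.FirstName + ' ' + c.LastName, ', ')
		within group (order by c.LastName, c.FirstName) CustomerNames
from dbo.Customer c
where c.State = 'KS'
group by c.City
order by c.City;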

In the final part of this series, I will use this same technique to complete the missed jobs monitor.

Alerting on Missed Scheduled SQL Jobs, Part 1

We had an interesting situation at work recently where SQL Server Agent went into what I would call a “zombie state”: the service was actually still alive and running but not really doing much. Scheduled jobs weren’t being executed. Well, mostly. It did appear that some jobs were running while others just sat there for several days without running. If a backup job doesn’t run, it’s a big deal.

I still don’t understand quite what happened, and likely never will since so much of SQL Agent is a black box and the various logs didn’t reveal a whole lot. Based on internet searches, this happens to folks from time to time, and rebooting the server is the general solution, which worked for us. But there doesn’t seem to be much out there about how to alert if the problem happens.

Such a monitor initially seemed like it would be a really simple thing. The more I dug in, however, the more involved it turned out to be, primarily for two reasons. The first is that the scheduling metadata is a bit complicated, and unraveling it takes far more than a simple query. Second, as I created the solution in SSIS, I came to realize that what I needed to write basically amounted to a custom aggregate, something I’d not needed to do before and which was a learning process for me.

This is going to be a three-part post. I’ll address the first difficulty in this post, and the aggregation difficulty in the next one. Finally, I’ll pull it all together into the final monitoring package.

Scheduling Metadata

Job metadata is stored in msdb. There is a dbo.sysjobs table that stores one row per job. The dbo.sysschedules table stores the scheduling information for jobs, and dbo.sysjobschedules ties the other two together. All of these tables are well documented in BOL. There is also a sysjobhistory table to store the history, but we aren’t going to need that. We are, however, going to have to use the undocumented xp_sqlagent_enum_jobs procedure to get some live job information.

For now, I’ll focus on sysschedules since that contains the meat of the scheduling metadata. The key bit of information here is the freq_type column, which indicates how frequently the schedule is activated. I am only concerned about recurring schedules, so the values I’m interested in are 4 (daily), 8 (weekly), 16 (monthly using days of the month) and 32 (monthly using weeks of the month). Some of the other columns in the table get interpreted in different ways depending on the value of freq_type. Most notably, the freq_interval column gets used in varying ways for different frequency types.
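
A quick way to see which schedules on an instance fall into these recurring categories is a query along these lines:

select j.name job_name,
	s.name schedule_name,
	s.freq_type
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
where j.enabled = 1
	and s.enabled = 1
	and s.freq_type in (4, 8, 16, 32);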

Why Do We Even Care?

Why do we even need to parse through the scheduling metadata? It’s because SQL Server doesn’t have any built-in functionality (at least so far as I’ve been able to discover) to determine the next time a schedule should fire after an arbitrary date. There are ways, including with sp_help_job, to identify the next time a job should run, but this basically determines the next time the job should run from the current time. This doesn’t particularly help us.

For example, suppose a job last ran on January 1 and should fire every week. If today is January 20, SQL will tell us that the next run will be on January 22. Without knowing how to read through the schedule metadata, we don’t know that anything was missed. Knowing that the job runs every week, we can compute that the next time should be on January 8. Since this date is in the past, we know that some runs got skipped.

So the general idea for the monitor is this. We will identify all jobs that are enabled and that have at least one recurring schedule that is enabled. We will determine the last time that the job executed, and determine when it should run next. If the next run date is in the past, something happened, and we need to send an alert.

We will not consider any jobs that are currently executing. It is entirely possible to miss a scheduled run because the job is already active, and this monitor doesn’t care about that. However, it is a good idea to separately monitor for jobs that get stuck and run beyond what is normal.

Forecasting Schedules

I initially started writing the forecasting logic in T-SQL, but it very quickly got ugly and I switched to C#. It got ugly anyway. But as it stands now, the solution is in C# code that I later integrated into a script task in SSIS.

For now, we will ignore the fact that a job can have multiple schedules, which we address in Part 2 of this article. The code that follows treats one schedule at a time. Later, we will combine any multiple schedules for the job together.

The code consists of a single static class called SqlAgentSchedule with a single public method called Forecast. The signature of Forecast basically takes the values of the relevant columns from sysschedules.

public static List<DateTime> Forecast(
	DateTime startingFromDate,
	int type,
	int interval,
	int subdayType,
	int subdayInterval,
	int relativeInterval,
	int recurrenceFactor,
	int? startDateInt,
	int? endDateInt,
	int? startTimeInt,
	int? endTimeInt)

The method returns a list of the next few times that the job should run. Specifically, it will return, at a minimum, any remaining times that the job should run today as well as any times that the job should run on the next scheduled day. This is definitely overkill, but it guarantees that the returned list contains at least one item, except in the case where the job’s end date has passed.

After doing some simple computation to determine the start and end dates and times for the job, we call one of four different functions to compute the schedule based on the freq_type value. Finally, we filter out any run times that are prior to the specified starting-from date.

List<DateTime> forecast = null;
switch (type)
{
	case 4:
		forecast = ForecastDailySchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 8:
		forecast = ForecastWeeklySchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 16:
		forecast = ForecastMonthlyByDaysSchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 32:
		forecast = ForecastMonthlyByWeeksSchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval,
			relativeInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	default:
		return null;
}
 
forecast = forecast.Where(d => d > startingFromDate).ToList();

The devil is in those schedule type-specific forecast functions. Disclaimer: I don’t by any means guarantee that the code is right, especially for some edge-case scenarios. I am also quite unsure about the exact logic that SQL uses for weekly and monthly schedules where the “relative interval” is greater than 1.

This code is also pretty raw; it could definitely stand a good refactoring. I just haven’t had the time to invest in that effort. That said, the code should be functional as-is.

General Approach to Forecasting Schedules

The idea here is that we first need to find the first day on or after the “starting from date” value sent into the function that meets the schedule criteria. Each of the schedule types has a parameter indicating how often the schedule recurs. For daily schedules, this is the freq_interval column; for weekly and monthly schedules it is the freq_recurrence_factor column. I’ll call it the Recurrence Interval in the following discussion.

We first compute the number of days (or weeks, or months) that have elapsed since the Start Date for the schedule, and then take this number modulo the Recurrence Interval. If the modulo is non-zero, we add enough days, weeks or months to get to the next multiple of Recurrence Interval. For weekly and monthly schedules, this may result in a date that doesn’t meet some of the other qualifications, so we need to keep advancing the date until all of the criteria are satisfied.

Next, we compute the next date that the schedule should run, either in the current Recurrence Interval or in the next one.

Schedules also have information about when the job should run within a day. This is indicated by the freq_subday_type column. A value of 1 indicates that the schedule does not recur within the day and should just run once. Other values indicate the time unit that the freq_subday_interval column is expressed in: a freq_subday_type of 2 indicates that the units are seconds; 4 represents minutes; and 8 represents hours.

We also have the active_start_time and active_end_time columns that indicate when in the day the schedule starts and stops. These are integer values that encode the times using the formula hour * 10000 + minute * 100 + second, so we have to reverse this formula to get the times into a usable format. We then begin at the start time each day and step toward the end time in increments of freq_subday_interval seconds/minutes/hours, adding an entry to the forecast for each one.
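
As a quick illustration of reversing that formula (shown here in T-SQL; the C# version does the same arithmetic):

declare @agent_time int = 233000;  -- sample encoded value meaning 23:30:00

select timefromparts(@agent_time / 10000,      -- hours
		(@agent_time / 100) % 100,             -- minutes
		@agent_time % 100, 0, 0) decoded_time; -- seconds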

A special case is when end time is less than the start time. This seems to mean that SQL will run the job into the following day until the end time, so we also have to account for that.

The code to handle the sub-day time logic is as follows.

TimeSpan subDayFrequency = GetSubdayFrequency(subdayType, subdayInterval);
TimeSpan startTime = startDate.TimeOfDay;
TimeSpan endTime = endDate.TimeOfDay;
if (endTime < startTime)
{
	endTime = endTime.Add(_oneDay);
}

With this information we can loop through and find all times that meet the schedule criteria within the time span:

for (TimeSpan time = startTime; time <= endTime; time = time + subDayFrequency)
{
	forecast.Add(nextRunDate + time);
}

GetSubdayFrequency is defined as follows. Note that in the case of a one-time schedule (sub-day type 1) we use a bit of a hack and define the recurrence as the number of seconds in a day. This guarantees that we will only get one hit in the current day.

private static TimeSpan GetSubdayFrequency(int subdayType, int subdayInterval)
{
	int seconds = 0;
	switch (subdayType)
	{
		case 1:
			seconds = 24 * 60 * 60;
			break;
 
		case 2:
			seconds = subdayInterval;
			break;
 
		case 4:
			seconds = subdayInterval * 60;
			break;
 
		case 8:
			seconds = subdayInterval * 60 * 60;
			break;
 
		default:
			throw new ApplicationException("invalid subdayType.");
	}
 
	return new TimeSpan(0, 0, seconds);
}

Daily Schedules

The daily schedule is by far the simplest to implement. Since the job simply runs every x days, really all we need to do is to determine the next day on a multiple of x from the schedule start date.

int daysElapsed = (int)((startingFromDate - startDate.Date).TotalDays);
int daysInCurrentInterval = daysElapsed % interval;
int daysUntilNextRun = daysInCurrentInterval == 0 ? 0 : interval - daysInCurrentInterval;
DateTime nextRunDate = startingFromDate.AddDays(daysUntilNextRun);

Weekly Schedules

With a weekly schedule, things start off pretty much the same as for daily. If a schedule fires every x weeks, it’s really the same thing as saying the schedule fires every 7x days, so the logic is pretty much the same as before. Remember, we now need to use the freq_recurrence_factor column instead of the freq_interval column.

int daysElapsed = (int)((startingFromDate - startDate.Date).TotalDays);
int daysInCurrentInterval = daysElapsed % (recurrenceFactor * 7);
int daysUntilNextRun = daysInCurrentInterval == 0 ? 0 : (recurrenceFactor * 7) - daysInCurrentInterval;
DateTime nextRunDate = startingFromDate.AddDays(daysUntilNextRun);

But recall that we can specify which days of the week the job runs, so we have to find the next day within the week that qualifies. The freq_interval column is a bitmap identifying the days of the week that are selected. I have a C# flags enum called SqlDayOfWeek that maps directly to the bitmap.

SqlDayOfWeek dayOfWeek = (SqlDayOfWeek)interval;
nextRunDate = AdjustToNextMatchingDayOfWeekInValidWeek(nextRunDate, dayOfWeek, startDate, recurrenceFactor);

The function that is called basically just checks the date passed in to see whether it matches both the scheduled days and the recurrence interval. If not, we advance to the next day and recheck. It’s a brute-force approach.

private static DateTime AdjustToNextMatchingDayOfWeekInValidWeek(DateTime currentDate, SqlDayOfWeek sqlDayOfWeek, DateTime startDate, int recurrenceFactor)
{
	bool isDone = false;
	while (isDone == false)
	{
		currentDate = AdjustToNextMatchingDayOfWeek(currentDate, sqlDayOfWeek);
		int weeksElapsed = (int)((currentDate - startDate.Date).TotalDays) / 7;
		if ((weeksElapsed % recurrenceFactor) == 0)
		{
			isDone = true;
		}
		else
		{
			currentDate = currentDate.AddDays(1);
		}
	}
 
	return currentDate;
}
 
private static DateTime AdjustToNextMatchingDayOfWeek(DateTime currentDate, SqlDayOfWeek sqlDayOfWeek)
{
	while (IsDayOfWeekMatch(currentDate.DayOfWeek, sqlDayOfWeek) == false)
	{
		currentDate = currentDate.AddDays(1);
	}
 
	return currentDate;
}
 
private static bool IsDayOfWeekMatch(DayOfWeek dow, SqlDayOfWeek sql)
{
	return (dow == DayOfWeek.Sunday && sql.HasFlag(SqlDayOfWeek.Sunday)) ||
		(dow == DayOfWeek.Monday && sql.HasFlag(SqlDayOfWeek.Monday)) ||
		(dow == DayOfWeek.Tuesday && sql.HasFlag(SqlDayOfWeek.Tuesday)) ||
		(dow == DayOfWeek.Wednesday && sql.HasFlag(SqlDayOfWeek.Wednesday)) ||
		(dow == DayOfWeek.Thursday && sql.HasFlag(SqlDayOfWeek.Thursday)) ||
		(dow == DayOfWeek.Friday && sql.HasFlag(SqlDayOfWeek.Friday)) ||
		(dow == DayOfWeek.Saturday && sql.HasFlag(SqlDayOfWeek.Saturday));
}

Monthly Schedules (Day of Month)

We follow the same basic principle here, with the slight complication of having to deal with rollover into a new year. There is an interesting twist here. Suppose that I schedule a job to fire on the 31st of every month, starting in January, and recurring every three months. The next time the job should run is then “April 31,” a non-existent date. SQL Server will simply skip this invalid date, and next run on July 31, so the code has to handle this odd case.

Monthly Schedules (Week of Month)

This type of schedule takes a little bit more effort to interpret. I define another enumeration called SqlWeek that wraps the relativeInterval parameter, and then a couple of helper methods to determine the days of the week the job should run and to compute the next run date.

SqlDayOfWeek dayOfWeek = DayOfWeekFromRelativeMonthDay(interval);
SqlWeek week = (SqlWeek)relativeInterval;
DateTime nextRunDate = GetRelativeDayInMonth(nextRunYear, nextRunMonth, week, dayOfWeek);

The helper methods are as follows.

private static SqlDayOfWeek DayOfWeekFromRelativeMonthDay(int interval)
{
	switch (interval)
	{
		case 1:
			return SqlDayOfWeek.Sunday;
 
		case 2:
			return SqlDayOfWeek.Monday;
 
		case 3:
			return SqlDayOfWeek.Tuesday;
 
		case 4:
			return SqlDayOfWeek.Wednesday;
 
		case 5:
			return SqlDayOfWeek.Thursday;
 
		case 6:
			return SqlDayOfWeek.Friday;
 
		case 7:
			return SqlDayOfWeek.Saturday;
 
		case 8:
			return SqlDayOfWeek.Day;
 
		case 9:
			return SqlDayOfWeek.Weekday;
 
		case 10:
			return SqlDayOfWeek.WeekendDay;

		default:
			throw new ApplicationException("Invalid interval.");
	}
}
 
private static DateTime GetRelativeDayInMonth(int year, int month, SqlWeek week, SqlDayOfWeek dayOfWeek)
{
	int lowerDayInclusive;
	int upperDayInclusive;
	int lastDayInMonth = DateTime.DaysInMonth(year, month);
	switch (week)
	{
		case SqlWeek.First:
			lowerDayInclusive = 1;
			upperDayInclusive = 7;
			break;
 
		case SqlWeek.Second:
			lowerDayInclusive = 8;
			upperDayInclusive = 14;
			break;
 
		case SqlWeek.Third:
			lowerDayInclusive = 15;
			upperDayInclusive = 21;
			break;
 
		case SqlWeek.Fourth:
			lowerDayInclusive = 22;
			upperDayInclusive = 28;
			break;
 
		case SqlWeek.Last:
			lowerDayInclusive = lastDayInMonth - 6;
			upperDayInclusive = lastDayInMonth;
			break;
 
		default:
			throw new ApplicationException("Invalid week.");
	}
 
	for (int day = lowerDayInclusive; day <= upperDayInclusive; day++)
	{
		DateTime date = new DateTime(year, month, day);
		if (IsDayOfWeekMatch(date.DayOfWeek, dayOfWeek))
		{
			return date;
		}
	}

	// Should never be reached; every 7-day range contains each day of the week.
	throw new ApplicationException("No matching day found.");
}

Summary

Well, that was an awful lot of work just to get the next run time following an arbitrary date, and it was definitely far more work than I expected.

In Part 2 of this post, I’ll show how I built a custom aggregate in SSIS. In Part 3, I’ll wrap the C# code into an SSIS package and pull everything together.

United States Geographical Data: ZIP Codes

In a previous post, I gave a script to load state and county geographical data into SQL Server. Here, I extend that data to include ZIP code boundaries.

One thing that I have recently learned is that ZIP code data is really a collection of points, in particular, specific places where mail is delivered. These points can be abstracted into a series of lines along a delivery route, for instance, houses on a residential street. However, ZIP code geographic data is, strictly speaking, not a geographic area. It is possible to draw boundaries around the ZIP code delivery points, but these boundaries are only approximations, and different sources will draw the boundaries in different ways.

More specifically, what I present here are known as ZIP code tabulation areas (ZCTAs), which the Census Bureau uses to approximate ZIP codes for census purposes. Here is the source of the data.

One thing you may note is that the ZCTAs don’t cover all the possible area of the United States. For instance, large lakes and wilderness areas tend to not be included in any ZCTA.

As with the state and county data, the ZIP code boundaries can be loaded into a table with this definition:

CREATE TABLE [dbo].[PoliticalRegion](
	[PoliticalRegionId] [int] NOT NULL IDENTITY(1,1),
	[RegionName] [nvarchar](255) NOT NULL,
	[Type] [nvarchar] (30) NOT NULL,
	[ParentRegionName] [nvarchar] (255) NULL,
	[Boundaries] [geography] NOT NULL
);

Here is the ZIP code data. The file is quite large (41 MB). One way that I have been able to get this data to load is via the command line:

sqlcmd -S serverName -d databaseName -E -i ZipCodeInserts.sql
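
Once loaded, the boundaries can be queried like any other geography data. For example, here is a sketch that finds the ZCTA containing a point; the Type value is an assumption about how the ZIP rows are labeled, so adjust it to match the load script:

declare @point geography = geography::Point(39.0997, -94.5786, 4326);  -- latitude, longitude (Kansas City)

select pr.RegionName
from dbo.PoliticalRegion pr
where pr.[Type] = N'ZipCode'  -- assumed label; match whatever the load script uses
	and pr.Boundaries.STContains(@point) = 1;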