Category Archives: Administration

Stored Procedure to determine when a job is done

This is a stored procedure that I have found useful in a number of circumstances. It came into being because there are times that I need to run a job via T-SQL and wait for it to complete before continuing the script. The problem is that sp_start_job does just what it says: it starts the job and then immediately continues. There is no option to wait for it to finish.

I needed some way to pause script execution pending the completion (hopefully successfully) of the job.

One important note: this procedure uses the undocumented xp_sqlagent_enum_jobs system procedure. While this has been around for ages, it is unsupported. I get why using it may bother some, but it is the only way that I know of to reliably determine the current run status of a job.

The procedure accepts five parameters. The first is a string @job_name containing the name of the job to be monitored. This is the only mandatory parameter. The second parameter is an integer @job_timeout_minutes, which controls how long, in minutes, the procedure will wait for the job to complete. If that threshold is exceeded, the proc will raise an error. It defaults to 60 minutes. If @job_timeout_minutes is 0, the procedure will run until the job exits.

The remaining three parameters are all outputs from the procedure. @job_failed is a boolean (bit) value that is true (1) if the job completed but in a failed state. The last two, @job_start_time and @job_complete_time, indicate the beginning and end times for the job execution.

The procedure checks the job status every 5 seconds, so there may be a brief lag between the actual completion of the job and when the procedure returns. For me, this short delay has never been an issue.

Here is some sample code for calling the procedure:

exec msdb.dbo.sp_start_job @job_name = 'Name of job to run';

declare @job_failed bit,
    @job_start_time datetime,
    @job_complete_time datetime;

exec dbo.usp_WaitForJobToFinish @job_name = 'Name of job to run',
    @job_timeout_minutes = 60,
    @job_failed = @job_failed output,
    @job_start_time = @job_start_time output,
    @job_complete_time = @job_complete_time output;

if @job_failed = 1
begin
       raiserror ('Job failed', 16, 1);
end

And finally, here is the stored procedure definition.

create procedure dbo.usp_WaitForJobToFinish
(
	@job_name sysname,
	@job_timeout_minutes int = 60,
	@job_failed bit = 0 output,
	@job_start_time datetime = null output,
	@job_complete_time datetime = null output
)
AS
set nocount on;

-- Brief wait to let job system stabilize.
waitfor delay '0:00:05';
 
if not exists (select * from msdb.dbo.sysjobs j where j.name = @job_name)
begin
	return;
end

declare @timeoutTime datetime = dateadd(minute, @job_timeout_minutes, getdate());

DECLARE @xp_results TABLE
(
                     job_id                UNIQUEIDENTIFIER NOT NULL,
                     last_run_date         INT              NOT NULL,
                     last_run_time         INT              NOT NULL,
                     next_run_date         INT              NOT NULL,
                     next_run_time         INT              NOT NULL,
                     next_run_schedule_id  INT              NOT NULL,
                     requested_to_run      INT              NOT NULL,
                     request_source        INT              NOT NULL,
                     request_source_id     sysname          COLLATE DATABASE_DEFAULT NULL,
                     running               INT              NOT NULL,
                     current_step          INT              NOT NULL,
                     current_retry_attempt INT              NOT NULL,
                     job_state             INT              NOT NULL
);

/* Values for "job_state":
              0 = Not idle or suspended
              1 = Executing
              2 = Waiting For Thread
              3 = Between Retries
              4 = Idle
              5 = Suspended
              [6 = WaitingForStepToFinish]
              7 = PerformingCompletionActions
*/

DECLARE @can_see_all_running_jobs INT = 1;
DECLARE @job_owner sysname = SUSER_SNAME();
DECLARE @job_state INT = 0;

WHILE @job_state != 4
BEGIN
       DELETE @xp_results;

       INSERT INTO @xp_results
       EXECUTE master.dbo.xp_sqlagent_enum_jobs @can_see_all_running_jobs, @job_owner;

       SELECT @job_state = x.job_state
       FROM @xp_results x
       JOIN msdb.dbo.sysjobs j ON j.job_id = x.job_id
       WHERE j.name = @job_name;

       IF GETDATE() > @timeoutTime
       AND @job_timeout_minutes > 0
       BEGIN
              RAISERROR('Timeout expired', 16, 1);
              RETURN;
       END

       IF @job_state != 4
       BEGIN
              WAITFOR DELAY '0:00:05';
       END
END

SELECT TOP (1) @job_failed = CASE WHEN jh.run_status = 0 /* Failed */ THEN 1 ELSE 0 END,
	@job_start_time = msdb.dbo.agent_datetime(jh.run_date, jh.run_time),
	@job_complete_time = dateadd(second, jh.run_duration, msdb.dbo.agent_datetime(jh.run_date, jh.run_time))
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory jh ON jh.job_id = j.job_id
WHERE j.name = @job_name
and jh.step_id = 0
ORDER BY jh.run_date DESC, jh.run_time DESC;
GO

Update 3/23/2020: I addressed a couple of issues in the original procedure. First, if this procedure is executed immediately after the call to sp_start_job, the job is sometimes still shown with an idle status the first time xp_sqlagent_enum_jobs is called, causing the procedure to exit prematurely. I have therefore added a 5-second delay at the beginning of execution to allow the job system to stabilize.

The second fix was to the final select statement. I added jh.step_id = 0 to the WHERE clause so that the overall job status is retrieved, rather than just the status of the last step to run. Normally these will be one and the same, but I added the extra predicate for correctness.
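
For reference, the distinction is easy to see with a quick query against the history table (step_id 0 is the overall job outcome row; the other rows are individual steps):

select top (10) j.name as job_name,
	jh.step_id,      -- 0 = overall job outcome; 1, 2, ... = individual steps
	jh.step_name,
	jh.run_status,   -- 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
	msdb.dbo.agent_datetime(jh.run_date, jh.run_time) as run_datetime
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobhistory jh on jh.job_id = j.job_id
order by jh.instance_id desc;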

Update 3/24/2020: Based on Mario’s suggestion in the comments, I have made another change so that if @job_timeout_minutes = 0, the procedure will wait indefinitely until the calling job completes.

Alerting on Missed Scheduled SQL Jobs, Part 3

The first two parts of this series addressed the general approach that I use in an SSIS script task to discover and alert on missed SQL Agent jobs. With apologies for the delay in producing this final post in the series, here I bring these approaches together and present the complete package.

To create the package, start with an empty SSIS package and add a data flow task. In the task, add the following transformations.

1. An OLE DB Source. Point it to a connection manager with the instance to be monitored and a SQL command containing this script:

set nocount on
declare @xp_results table
(
			job_id                UNIQUEIDENTIFIER NOT NULL,
			last_run_date         INT              NOT NULL,
			last_run_time         INT              NOT NULL,
			next_run_date         INT              NOT NULL,
			next_run_time         INT              NOT NULL,
			next_run_schedule_id  INT              NOT NULL,
			requested_to_run      INT              NOT NULL,
			request_source        INT              NOT NULL,
			request_source_id     sysname          COLLATE database_default NULL,
			running               INT              NOT NULL,
			current_step          INT              NOT NULL,
			current_retry_attempt INT              NOT NULL,
			job_state             INT              NOT NULL
);
 
/* Values for "job_state":
		0 = Not idle or suspended
		1 = Executing
		2 = Waiting For Thread
		3 = Between Retries
		4 = Idle
		5 = Suspended
		[6 = WaitingForStepToFinish]
		7 = PerformingCompletionActions
*/
 
DECLARE @can_see_all_running_jobs INT = 1;
DECLARE @job_owner sysname = suser_sname();
 
INSERT INTO @xp_results
EXECUTE master.dbo.xp_sqlagent_enum_jobs @can_see_all_running_jobs, @job_owner;
 
select	@@SERVERNAME InstanceName,
		j.name as job_name,
		j.date_created,
		x.last_run_date,
		x.last_run_time,
		x.next_run_date,
		x.next_run_time,
		x.current_step,
		x.job_state,
		x.job_id,
		s.name schedule_name,
		s.freq_type,
		s.freq_interval,
		s.freq_subday_type,
		s.freq_subday_interval,
		s.freq_relative_interval,
		s.freq_recurrence_factor,
		s.active_start_date,
		s.active_end_date,
		s.active_start_time,
		s.active_end_time,
		@@servername ServerName
from	@xp_results x
inner join msdb.dbo.sysjobs j on x.job_id = j.job_id
inner join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
inner join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
where j.enabled = 1
and s.enabled = 1
and s.freq_type in (4, 8, 16, 32)
and x.job_state not in (1, 2, 3);

2. Add a sort transformation, with “job_id” as the sort key input column (in ascending order).

3. Add a script component. When prompted, add it as a transformation. On the “Input Columns” tab, select all columns. On the “Inputs and Outputs” tab, select “Output 0” and change SynchronousInputID to “None.”

Expand “Output 0” and select “Output columns.” Add the following as outputs:

  • InstanceName, Unicode string, length 256
  • JobId, unique identifier
  • JobName, Unicode string, length 256
  • ExpectedRunDate, database timestamp with precision
  • LastRunDate, database timestamp with precision

Return to the “Script” tab and “Edit Script.” At the bottom, find the Input0_ProcessInputRow method and delete it, replacing it with the contents of this ZIP file.

At the top of the script, expand the “Namespaces” section and add these statements:

using System.Collections.Generic;
using System.Linq;

Add the following declarations inside the ScriptMain class, such as immediately before the PreExecute method:

private bool _isFirstRow = true;
private RowData _previousRow = null;
private DateTime _previousJobRunDate;
private List<DateTime> _forecast = null;

4. Add a conditional split transformation.

In the first row (order = 1), set Output Name = ExpectedRunDateIsNull, Condition = ISNULL(ExpectedRunDate).

In the second row (order = 2), set Output Name = ExpectedRunDateInFuture, Condition = ExpectedRunDate >= DATEADD("minute", -30, GETDATE()).

Set the default output name = ExpectedRunDateInPast

5. Add an OLE DB Destination, linking to the ExpectedRunDateInPast output from the condition split. In a database somewhere, create a table to collect the results.

create table MissedScheduleMonitorResults
(
	ResultId int not null identity(1,1),
	MonitorRunTime datetime2 not null default (sysdatetime()),
	InstanceName nvarchar(256) not null,
	JobId uniqueidentifier not null,
	JobName nvarchar(256) not null,
	LastRunDate datetime null,
	ExpectedRunDate datetime null,
	AlertSent bit not null default (0),
	constraint pk_MissedScheduleMonitorResults primary key clustered (ResultId)
);

Create a connection manager to this database, and in the OLE DB destination point to this connection manager and select the MissedScheduleMonitorResults table. Use the default mappings (where column names match).

The data flow task should look something like this. Yeah, no naming standards here.

missedschedulemonitordft

That’s it! Now, you can create a job with a script that reads from the table where AlertSent = false, sends an appropriate message, and sets AlertSent to true.
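
As a rough sketch of that alerting step (assuming Database Mail is configured; the profile name and recipient below are placeholders to adjust for your environment):

declare @body nvarchar(max) = N'';

select @body = @body + InstanceName + N' - ' + JobName + N', expected at '
	+ isnull(convert(nvarchar(30), ExpectedRunDate, 120), N'(unknown)') + nchar(13) + nchar(10)
from dbo.MissedScheduleMonitorResults
where AlertSent = 0;

if @body <> N''
begin
	exec msdb.dbo.sp_send_dbmail
		@profile_name = N'DBA Mail',          -- placeholder Database Mail profile
		@recipients = N'dba@example.com',     -- placeholder recipient
		@subject = N'Missed SQL Agent schedules detected',
		@body = @body;

	update dbo.MissedScheduleMonitorResults
	set AlertSent = 1
	where AlertSent = 0;
end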

Alerting on Missed Scheduled SQL Jobs, Part 2 (Custom Aggregates in SSIS)

In the previous post in this series, I introduced the goal of finding missed scheduled jobs in SQL Server. The next step in the solution required what essentially amounted to a custom aggregate in SSIS. This just wasn’t something I’d had to deal with before, so I’ll address that part of the problem in this post.

To keep things simple, I will use an example that is decoupled from the problem at hand. In part 3 of this series, I will use these techniques to present the final missed schedules solution.

My contrived example here is to do a string concatenation aggregate on a customer table where I want all of my customers who live in the same city to be aggregated into the same record. My source query is:

select c.City, c.FirstName + ' ' + c.LastName Name
from dbo.Customer c
where c.State = 'KS'
order by c.City, c.LastName, c.FirstName;

The first few records from this query are:

CustomersWithCity

Since we have only one customer in Abilene, I would expect my output to contain a record with just one name for that city.

Abilene | Joe Wolf

Since there are multiple customers in Andover, the output should have this record:

Andover | Clara Anderson, Shirley Holt, Wanda Kemp, Brandon Rowe, Sandra Wilson

Here is how I do the aggregate. I start by creating a data flow task with three components.

CustomAggregateDataFlowTask

In the Source component, I connect to my database and give it the SQL listed above.

Next, I add a Script Component. SSIS prompts me to select the type of script component that this object will be:

ScriptComponentOptions

This will be a transformation script. After I connect my source to the script component and open the script component, I get this dialog.

ScriptTransformationEditor

On the Input Columns tab, I make sure that both City and Name are selected, and then I move to the Inputs and Outputs tab. I click on “Output 0.” Here is the secret sauce: I change the SynchronousInputId property to None. The default is that SSIS will have one output row per input row, but making this change allows me to control when an output row is generated. Finally, I expand “Output 0” and click on “Output Columns.” I press the Add Column button, rename it to City, and set the DataType property to “string [DT_STR].” The default length of 50 is OK for this situation. I then add another column, call it “CustomerNames” and make it a string as well and set the length to 8000.

I then go back to the Script tab and edit the script. I replace the Input0_ProcessInputRow method with the following.

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
	string currentCity = Row.City_IsNull ? null : Row.City;
	if (currentCity == _previousCity)
	{
		if (_customers != null)
		{
			_customers += ", ";
		}
		_customers += Row.Name_IsNull ? string.Empty : Row.Name;
	}
	else
	{
		if (_customers != null)
		{
			Output0Buffer.AddRow();
			Output0Buffer.City = _previousCity;
			Output0Buffer.CustomerNames = _customers;
		}
 
		_customers = Row.Name_IsNull ? string.Empty : Row.Name;
	}
 
	_previousCity = currentCity;
}
 
public override void Input0_ProcessInput(Input0Buffer Buffer)
{
	base.Input0_ProcessInput(Buffer);
	if (Buffer.EndOfRowset())
	{
		if (_customers != null)
		{
			Output0Buffer.AddRow();
			Output0Buffer.City = _previousCity;
			Output0Buffer.CustomerNames = _customers;
		}
	}
}
 
private string _previousCity = null;
private string _customers = null;

This code takes advantage of the fact that the input is sorted by City. It reads through the records one at a time, watching for changes to the city value; as it goes, it keeps track of the customers associated with the current city, and when the city changes, it writes a record to the output. The code in Input0_ProcessInput exists just to output the final row.

Finally, I create a destination in my data flow task. The destination can be anything we want; I just output to a table.

When I run the package and select from the output table, I get exactly what I expected.

CustomAggregateResults

Note that this is very similar to how the SQL Server Stream Aggregate operator processes data, and it is absolutely dependent on having the input sorted by the aggregation column. If the data is not sorted, we would have to take a very different approach.

In the final part of this post, I will use this same technique to complete the missed jobs monitor.

Alerting on Missed Scheduled SQL Jobs, Part 1

We had an interesting situation at work recently where SQL Server Agent went into what I would call a “zombie state”: the service was still alive and running but not really doing much. Scheduled jobs weren’t being executed. Well, mostly. Some jobs did appear to run, while others just sat there for several days without running. If a backup job doesn’t run, it’s a big deal.

I still don’t understand quite what happened, and likely never will since so much of SQL Agent is a black box and the various logs didn’t reveal a whole lot. Based on internet searches, this happens to folks from time to time, and rebooting the server is the general solution, which worked for us. But there doesn’t seem to be much out there about how to alert if the problem happens.

Such a monitor initially seemed like it would be a really simple thing. The more I dug in, however, the more it turned out to be considerably more involved than it first appeared, primarily for two reasons. The first is that the scheduling metadata is a bit complicated, and unraveling it is far more than just a simple query. Second, as I created the solution in SSIS, I came to realize that what I would need to write basically amounted to a custom aggregate, something I’d not needed to do before and which was a learning process for me.

This is going to be a three-part post. I’ll address the first difficulty in this post, and the aggregation difficulty in the next one. Finally, I’ll pull it all together into the final monitoring package.

Scheduling Metadata

Job metadata is stored in msdb. There is a dbo.sysjobs table that stores one row per job. The dbo.sysschedules table stores the scheduling information for jobs, and dbo.sysjobschedules ties the other two together. All of these tables are well documented in BOL. There is also a sysjobhistory table to store the history, but we aren’t going to need that. We are, however, going to have to use the undocumented xp_sqlagent_enum_jobs procedure to get some live job information.

For now, I’ll focus on sysschedules since that contains the meat of the scheduling metadata. The key bit of information here is the freq_type column, which indicates how frequently the schedule is activated. I am only concerned about recurring schedules, so the values I’m interested in are 4 (daily), 8 (weekly), 16 (monthly using days of the month) and 32 (monthly using weeks of the month). Some of the other columns in the table get interpreted in different ways depending on the value of freq_type. Most notably, the freq_interval column gets used in varying ways for different frequency types.
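
For reference, a quick way to eyeball this metadata for the jobs on an instance is a simple join across the three tables (nothing monitor-specific yet; just a lookup):

select j.name as job_name,
	s.name as schedule_name,
	s.freq_type,                -- 4 = daily, 8 = weekly, 16 = monthly (day of month), 32 = monthly (week of month)
	s.freq_interval,
	s.freq_subday_type,
	s.freq_subday_interval,
	s.freq_recurrence_factor,
	s.active_start_time,
	s.active_end_time
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
where j.enabled = 1
and s.enabled = 1
and s.freq_type in (4, 8, 16, 32);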

Why Do We Even Care?

Why do we even need to parse through the scheduling metadata? It’s because SQL Server doesn’t have any built-in functionality (at least so far as I’ve been able to discover) to determine the next time a schedule should fire after an arbitrary date. There are ways, including with sp_help_job, to identify the next time a job should run, but these only determine the next run time relative to the current time. This doesn’t particularly help us.

For example, suppose a job last ran on January 1 and should fire every week. If today is January 20, SQL will tell us that the next run will be on January 22. Without knowing how to read through the schedule metadata, we don’t know that anything was missed. Knowing that the job runs every week, we can compute that the next time should be on January 8. Since this date is in the past, we know that some runs got skipped.
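
In T-SQL terms, the check we are after boils down to something like this (a minimal sketch with hard-coded dates, purely to illustrate the arithmetic):

declare @lastRun date = '20200101';    -- last actual run
declare @intervalDays int = 7;         -- weekly schedule
declare @today date = '20200120';

declare @nextExpected date = dateadd(day, @intervalDays, @lastRun);   -- 2020-01-08

select case when @nextExpected < @today
	then 'Missed: expected ' + convert(varchar(10), @nextExpected, 120)
	else 'On schedule' end as ScheduleStatus;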

So the general idea for the monitor is this. We will identify all jobs that are enabled and that have at least one recurring schedule that is enabled. We will determine the last time that the job executed, and determine when it should run next. If the next run date is in the past, something happened, and we need to send an alert.

We will not consider any jobs that are currently executing. It is entirely possible to miss a scheduled run because the job is already active, and this monitor doesn’t care about that. However, it is a good idea to separately monitor for jobs that get stuck and run beyond what is normal.

Forecasting Schedules

I initially started writing the forecasting logic in T-SQL, but it very quickly got ugly and I switched to C#. It got ugly anyway. But as it stands now, the solution is in C# code that I later integrated into a script task in SSIS.

For now, we will ignore the fact that a job can have multiple schedules, which we address in Part 2 of this article. The code that follows treats one schedule at a time. Later, we will combine any multiple schedules for the job together.

The code consists of a single static class called SqlAgentSchedule with a single public method called Forecast. The signature of Forecast basically takes the values of the relevant columns from sysschedules.

public static List<DateTime> Forecast(
	DateTime startingFromDate,
	int type,
	int interval,
	int subdayType,
	int subdayInterval,
	int relativeInterval,
	int recurrenceFactor,
	int? startDateInt,
	int? endDateInt,
	int? startTimeInt,
	int? endTimeInt)

The method returns a list of the next few times that the job should run. Specifically, it will return, at a minimum, any remaining times that the job should run today as well as any times that the job should run on the next scheduled day. This is definitely overkill, but it guarantees that the returned list contains at least one item, except in the case where the job’s end date has passed.

After doing some simple computation to determine the start and end dates and times for the job, we call one of four different functions to compute the schedule based on the freq_type value. Finally, we filter out any run times that are not after the specified starting-from date.

List<DateTime> forecast = null;
switch (type)
{
	case 4:
		forecast = ForecastDailySchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 8:
		forecast = ForecastWeeklySchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 16:
		forecast = ForecastMonthlyByDaysSchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	case 32:
		forecast = ForecastMonthlyByWeeksSchedule(
			startingFromDate.Date, 
			interval, 
			subdayType, 
			subdayInterval,
			relativeInterval, 
			recurrenceFactor, 
			startDate.Value, 
			endDate.Value);
		break;
 
	default:
		return null;
}
 
forecast = forecast.Where(d => d > startingFromDate).ToList();

The devil is in those schedule type-specific forecast functions. Disclaimer: I don’t by any means guarantee that the code is right, especially for some edge-case scenarios. I am also quite unsure about the exact logic that SQL uses for weekly and monthly schedules where the “relative interval” is greater than 1.

This code is also pretty raw; it could definitely stand a good refactoring. I just haven’t had the time to invest in that effort. That said, the code should be functional as-is.

General Approach to Forecasting Schedules

The idea here is that we first need to find the first day on or after the “starting from date” value sent into the function that meets the schedule criteria. Each of the schedule types has a parameter indicating how often the schedule recurs. For daily schedules, this is the freq_interval column; for weekly and monthly schedules it is the freq_recurrence_factor column. I’ll call it the Recurrence Interval in the following discussion.

We first compute the number of days (or weeks, or months) that have elapsed since the Start Date for the schedule, and then take this number modulo the Recurrence Interval. If the modulo is non-zero, we add enough days, weeks or months to get to the next multiple of Recurrence Interval. For weekly and monthly schedules, this may result in a date that doesn’t meet some of the other qualifications, so we need to keep advancing the date until all of the criteria are satisfied.

Next, we compute the next date that the schedule should run, either in the current Recurrence Interval or in the next one.
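
As a concrete illustration of the modulo step (a daily schedule recurring every 3 days that started on January 1, evaluated on January 20; a sketch of the same arithmetic the C# code performs):

declare @scheduleStart date = '20200101';
declare @startingFrom date = '20200120';
declare @interval int = 3;    -- recurs every 3 days

declare @daysElapsed int = datediff(day, @scheduleStart, @startingFrom);    -- 19
declare @daysUntilNextRun int = case when @daysElapsed % @interval = 0
	then 0
	else @interval - (@daysElapsed % @interval) end;                        -- 3 - 1 = 2

select dateadd(day, @daysUntilNextRun, @startingFrom) as NextRunDate;       -- 2020-01-22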

Schedules also have information about when the job should run within a day. This is indicated by the freq_subday_type column. A value of 1 indicates that the schedule does not recur within the day and should just run once. Other values indicate the time unit that the freq_subday_interval column is expressed in: a freq_subday_type of 2 indicates that the units are seconds, 4 represents minutes, and 8 represents hours.

We also have the active_start_time and active_end_time columns that indicate when in the day the schedule starts and stops. These are integer values that encode the times using the formula hour * 10000 + minute * 100 + second, so we have to reverse this formula to get the times into a usable format. We then begin at the start time each day, step forward to the end time in increments of freq_subday_interval seconds, minutes, or hours, and add an entry to the forecast for each one.
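
For example, decoding one of those integer times is simple arithmetic (a minimal T-SQL sketch; the value 233000 below is just an illustration, representing 23:30:00):

declare @active_start_time int = 233000;   -- encoded as hour * 10000 + minute * 100 + second

select @active_start_time / 10000       as StartHour,    -- 23
	(@active_start_time / 100) % 100 as StartMinute,     -- 30
	@active_start_time % 100         as StartSecond;     -- 0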

A special case is when end time is less than the start time. This seems to mean that SQL will run the job into the following day until the end time, so we also have to account for that.

The code to handle the sub-day time logic is as follows.

TimeSpan subDayFrequency = GetSubdayFrequency(subdayType, subdayInterval);
TimeSpan startTime = startDate.TimeOfDay;
TimeSpan endTime = endDate.TimeOfDay;
if (endTime < startTime)
{
	endTime = endTime.Add(_oneDay);
}

With this information we can loop through and find all times that meet the schedule criteria within the time span:

for (TimeSpan time = startTime; time <= endTime; time = time + subDayFrequency)
{
	forecast.Add(nextRunDate + time);
}

GetSubdayFrequency is defined as follows. Note that in the case of a one-time schedule (sub-day type 1) we use a bit of a hack and define the recurrence as the number of seconds in a day. This guarantees that we will only get one hit in the current day.

private static TimeSpan GetSubdayFrequency(int subdayType, int subdayInterval)
{
	int seconds = 0;
	switch (subdayType)
	{
		case 1:
			seconds = 24 * 60 * 60;
			break;
 
		case 2:
			seconds = subdayInterval;
			break;
 
		case 4:
			seconds = subdayInterval * 60;
			break;
 
		case 8:
			seconds = subdayInterval * 60 * 60;
			break;
 
		default:
			throw new ApplicationException("invalid subdayType.");
	}
 
	return new TimeSpan(0, 0, seconds);
}
Daily Schedules

The daily schedule is by far the simplest to implement. Since the job simply runs every x days, really all we need to do is to determine the next day on a multiple of x from the schedule start date.

int daysElapsed = (int)((startingFromDate - startDate.Date).TotalDays);
int daysInCurrentInterval = daysElapsed % interval;
int daysUntilNextRun = daysInCurrentInterval == 0 ? 0 : interval - daysInCurrentInterval;
DateTime nextRunDate = startingFromDate.AddDays(daysUntilNextRun);
Weekly Schedules

With a weekly schedule, things start off pretty much the same as for daily. If a schedule fires every x weeks, it’s really the same thing as saying the schedule fires every 7x days, so the logic is pretty much the same as before. Remember, we now need to use the freq_recurrence_factor column instead of the freq_interval column.

int daysElapsed = (int)((startingFromDate - startDate.Date).TotalDays);
int daysInCurrentInterval = daysElapsed % (recurrenceFactor * 7);
int daysUntilNextRun = daysInCurrentInterval == 0 ? 0 : (recurrenceFactor * 7) - daysInCurrentInterval;
DateTime nextRunDate = startingFromDate.AddDays(daysUntilNextRun);

But recall that we can specify which days of the week the job runs, so we have to find the next day within the week that qualifies. The freq_interval column is used as a bitmap to identify the days of the week that are selected. I have a C# flag-based enum called SqlDayOfWeek that directly maps to the bitmap.

SqlDayOfWeek dayOfWeek = (SqlDayOfWeek)interval;
nextRunDate = AdjustToNextMatchingDayOfWeekInValidWeek(nextRunDate, dayOfWeek, startDate, recurrenceFactor);

The function that is called basically just checks the date passed in to see if it matches both the scheduled days and the recurrence interval. If not, we advance to the next day and keep rechecking. It’s a brute force approach.

private static DateTime AdjustToNextMatchingDayOfWeekInValidWeek(DateTime currentDate, SqlDayOfWeek sqlDayOfWeek, DateTime startDate, int recurrenceFactor)
{
	bool isDone = false;
	while (isDone == false)
	{
		currentDate = AdjustToNextMatchingDayOfWeek(currentDate, sqlDayOfWeek);
		int weeksElapsed = (int)((currentDate - startDate.Date).TotalDays) / 7;
		if ((weeksElapsed % recurrenceFactor) == 0)
		{
			isDone = true;
		}
		else
		{
			currentDate = currentDate.AddDays(1);
		}
	}
 
	return currentDate;
}
 
private static DateTime AdjustToNextMatchingDayOfWeek(DateTime currentDate, SqlDayOfWeek sqlDayOfWeek)
{
	while (IsDayOfWeekMatch(currentDate.DayOfWeek, sqlDayOfWeek) == false)
	{
		currentDate = currentDate.AddDays(1);
	}
 
	return currentDate;
}
 
private static bool IsDayOfWeekMatch(DayOfWeek dow, SqlDayOfWeek sql)
{
	return (dow == DayOfWeek.Sunday && sql.HasFlag(SqlDayOfWeek.Sunday)) ||
		(dow == DayOfWeek.Monday && sql.HasFlag(SqlDayOfWeek.Monday)) ||
		(dow == DayOfWeek.Tuesday && sql.HasFlag(SqlDayOfWeek.Tuesday)) ||
		(dow == DayOfWeek.Wednesday && sql.HasFlag(SqlDayOfWeek.Wednesday)) ||
		(dow == DayOfWeek.Thursday && sql.HasFlag(SqlDayOfWeek.Thursday)) ||
		(dow == DayOfWeek.Friday && sql.HasFlag(SqlDayOfWeek.Friday)) ||
		(dow == DayOfWeek.Saturday && sql.HasFlag(SqlDayOfWeek.Saturday));
}
Monthly Schedules (Day of Month)

We follow the same basic principle here, with the slight complication of having to deal with rollover into a new year. There is an interesting twist here. Suppose that I schedule a job to fire on the 31st of every month, starting in January, and recurring every three months. The next time the job should run is then “April 31,” a non-existent date. SQL Server will simply skip this invalid date, and next run on July 31, so the code has to handle this odd case.

Monthly Schedules (Week of Month)

This type of schedule takes a little bit more effort to interpret. I define another enumeration called SqlWeek that wraps the relativeInterval parameter, and then a couple of helper methods to determine the days of the week the job should run and to compute the next run date.

SqlDayOfWeek dayOfWeek = DayOfWeekFromRelativeMonthDay(interval);
SqlWeek week = (SqlWeek)relativeInterval;
DateTime nextRunDate = GetRelativeDayInMonth(nextRunYear, nextRunMonth, week, dayOfWeek);

The helper methods are as follows.

private static SqlDayOfWeek DayOfWeekFromRelativeMonthDay(int interval)
{
	switch (interval)
	{
		case 1:
			return SqlDayOfWeek.Sunday;
 
		case 2:
			return SqlDayOfWeek.Monday;
 
		case 3:
			return SqlDayOfWeek.Tuesday;
 
		case 4:
			return SqlDayOfWeek.Wednesday;
 
		case 5:
			return SqlDayOfWeek.Thursday;
 
		case 6:
			return SqlDayOfWeek.Friday;
 
		case 7:
			return SqlDayOfWeek.Saturday;
 
		case 8:
			return SqlDayOfWeek.Day;
 
		case 9:
			return SqlDayOfWeek.Weekday;
 
		case 10:
			return SqlDayOfWeek.WeekendDay;
		default:
			// Required so that all code paths return a value; valid sysschedules data should never reach this.
			throw new ApplicationException("Invalid interval value.");
	}
}
 
private static DateTime GetRelativeDayInMonth(int year, int month, SqlWeek week, SqlDayOfWeek dayOfWeek)
{
	int lowerDayInclusive;
	int upperDayInclusive;
	int lastDayInMonth = DateTime.DaysInMonth(year, month);
	switch (week)
	{
		case SqlWeek.First:
			lowerDayInclusive = 1;
			upperDayInclusive = 7;
			break;
 
		case SqlWeek.Second:
			lowerDayInclusive = 8;
			upperDayInclusive = 14;
			break;
 
		case SqlWeek.Third:
			lowerDayInclusive = 15;
			upperDayInclusive = 21;
			break;
 
		case SqlWeek.Fourth:
			lowerDayInclusive = 22;
			upperDayInclusive = 28;
			break;
 
		case SqlWeek.Last:
			lowerDayInclusive = lastDayInMonth - 6;
			upperDayInclusive = lastDayInMonth;
			break;
 
		default:
			throw new ApplicationException("Invalid week.");
	}
 
	for (int day = lowerDayInclusive; day <= upperDayInclusive; day++)
	{
		DateTime date = new DateTime(year, month, day);
		if (IsDayOfWeekMatch(date.DayOfWeek, dayOfWeek))
		{
			return date;
		}
	}

	// A seven-day window always contains every day of the week, so this should not be reached;
	// it exists so that all code paths return a value.
	throw new ApplicationException("No matching day found in the specified week.");
}
Summary

Well, that was an awful lot of work just to get the next run time following an arbitrary date, and it was definitely far more work than I expected.

In Part 2 of this post, I’ll show how I built a custom aggregate in SSIS. In Part 3, I’ll wrap the C# code into an SSIS package and pull everything together.

A Catch-All Backup Job (with Automatic Restore Script Generation)

Job one as a DBA is to ensure that the systems under your control are backed up. Actually, job one is to make sure that systems are restorable, but that starts with robust backup jobs.

One job that I always set up on my personal SQL Server instances is a catch-all backup job. A job, that is, that backs up all databases except for ones that are specifically excluded. Why? I don’t always think about creating the backup jobs just because I’ve added a new database. A shortcoming, to be sure, but even on those personal systems I want to be protected.

I am a firm believer that backups on enterprise systems should be done via dedicated backup jobs due to the variety of RPO/RTO requirements associated with those systems, but I suppose that a case could be made for a catch-all backup job there as well.

The Catch-All Backup Job

First, we need a table to store a list of databases that will be excluded from the backups. I put this in a database called Admin.

create table dbo.DatabasesExcludedFromBackup
(
	 DatabaseName sysname not null
);
 
insert dbo.DatabasesExcludedFromBackup (DatabaseName) values ('tempdb');

Be sure to add other databases that don’t need to be covered by this job, including databases that are backed up through other jobs, or databases like AdventureWorks that you can easily recreate from the original source.
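
For example (the database names here are just placeholders):

insert dbo.DatabasesExcludedFromBackup (DatabaseName)
	values ('AdventureWorks'),    -- easily recreated from the original source
	('SomeOtherDatabase');        -- hypothetical database covered by its own dedicated backup job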

Automated Restore Script Generation

I also like to throw in some code to generate restore scripts automatically. We need a couple of tables to store this (also in the Admin database).

create table dbo.BackupType
(
	BackupType varchar(10) not null,
	Description nvarchar(100) null,
	constraint pk_BackupType primary key clustered (BackupType)
);
 
create table dbo.BackupChainType
(
	BackupChainType varchar(25) not null,
	Description nvarchar(100) null,
	constraint pk_BackupChainType primary key clustered (BackupChainType)
);
 
insert dbo.BackupType (BackupType, Description)
	values ('Diff', 'Differential Backup'), ('Full', 'Full backup'), ('Log', 'Transaction log backup');
 
insert dbo.BackupChainType (BackupChainType, Description)
	values ('Full+Diff', 'Full backup plus differential backups'),
	('Full+Diff+Log', 'Full backup plus differential backups plus transaction log backups'),
	('Full+Log', 'Full backup plus transaction log backups'), 
	('FullOnly', 'Full backup only');
 
create table dbo.RestoreScript
(
	RestoreScriptId int identity(1,1) not null,
	DatabaseName sysname not null,
	BackupChainType varchar(25) not null,
	BackupType varchar(10) not null,
	TimeGenerated datetime2(7) not null constraint df_RestoreScript__TimeGenerated  default (sysdatetime()),
	ScriptText nvarchar(max) not null,
	constraint pk_RestoreScript primary key clustered (RestoreScriptId),
	constraint fk_RestoreScript__BackupChainType foreign key (BackupChainType) references dbo.BackupChainType (BackupChainType),
	constraint fk_RestoreScript__BackupType foreign key (BackupType) references dbo.BackupType (BackupType)
);

The Full Backup Job

Next, create a SQL Agent job. I call my job “Backup all databases – FULL” and add a job step with this script. Be sure to set the location of the backup folder at the beginning of the script, and also change all references to the “Admin” database if necessary.

declare	@dbName nvarchar(256),
		@recoveryModel nvarchar(60),
		@backupTime nvarchar(100),
		@fileName nvarchar(255),
		@sql nvarchar(4000),
		@sqlNoRecovery nvarchar(4000),
		@destinationFolder nvarchar(256);
 
-- Set this to the appropriate backup folder.  ** IMPORTANT ** It must end with a backslash!
select @destinationFolder = 'S:\SQL\Backups\Full\';
 
declare dbCsr cursor for select name, recovery_model_desc from sys.databases where name not in (select DatabaseName from Admin.dbo.DatabasesExcludedFromBackup)
	and source_database_id is null;
open dbCsr;
fetch next from dbCsr into @dbName, @recoveryModel;
while @@FETCH_STATUS = 0
begin
	select	@backupTime = replace(replace(replace(CONVERT(nvarchar(100), getdate(), 120), '-', ''), ':', ''), ' ', '_');
	select	@fileName = @destinationFolder + @dbName	+ '.' + @backupTime + '.bak'
	select	@sql = 'backup database [' + @dbName + '] to disk = ''' + @fileName + ''' with init, compression;';
	exec(@sql);
 
	delete Admin.dbo.RestoreScript where DatabaseName = @dbName;
 
	select @sql = 'restore database [' + @dbName + '] from disk = ''' + @fileName + ''' with replace';
	select @sqlNoRecovery = @sql + ', norecovery;';
	select @sql += ';';
 
	if @recoveryModel = 'SIMPLE'
	begin
		insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
		values (@dbName, 'FullOnly', 'Full', @sql);
 
		insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
		values (@dbName, 'Full+Diff', 'Full', @sqlNoRecovery);
	end
	else
	begin
		insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
		values (@dbName, 'Full+Log', 'Full', @sqlNoRecovery);
 
		insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
		values (@dbName, 'Full+Diff+Log', 'Full', @sqlNoRecovery);
	end
 
	fetch next from dbCsr into @dbName, @recoveryModel;
end
close dbCsr;
deallocate dbCsr;

Set an appropriate schedule on the job. For me, once per day works fine.

The Log Backup Job

Now create another job called “Backup all databases – LOG” and add this step:

declare	@dbName nvarchar(256),
		@backupTime nvarchar(100),
		@fileName nvarchar(255),
		@sql nvarchar(4000),
		@destinationFolder nvarchar(256);
 
-- Set this to the appropriate backup folder.  ** IMPORTANT ** It must end with a backslash!
select @destinationFolder = 'S:\SQL\Backups\Log\';
 
declare dbCsr cursor for
select db.name from sys.databases db
cross apply (select top 1 * from msdb.dbo.backupset bs where bs.database_name = db.name and bs.type = 'D' order by bs.backup_finish_date desc) bs
where db.recovery_model_desc != 'SIMPLE'
and db.name not in (select DatabaseName from Admin.dbo.DatabasesExcludedFromBackup)
and db.source_database_id is null;
 
open dbCsr;
fetch next from dbCsr into @dbName;
while @@FETCH_STATUS = 0
begin
	select	@backupTime = replace(replace(replace(CONVERT(nvarchar(100), getdate(), 120), '-', ''), ':', ''), ' ', '_');
	select	@fileName = @destinationFolder + @dbName + '.' + @backupTime + '.trn';
	select	@sql = 'backup log [' + @dbName + '] to disk = ''' + @fileName + ''' with init, compression;';
	exec(@sql);
 
	select @sql = 'restore log [' + @dbName + '] from disk = ''' + @fileName + ''' with norecovery;';
 
	insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
	values (@dbName, 'Full+Log', 'Log', @sql);
 
	insert Admin.dbo.RestoreScript (DatabaseName, BackupChainType, BackupType, ScriptText)
	values (@dbName, 'Full+Diff+Log', 'Log', @sql);
 
	fetch next from dbCsr into @dbName;
end
close dbCsr;
deallocate dbCsr;

Be sure to schedule the job according to your needs. I run log backups once per hour.

Additional Steps

You will want to add an extra step into your backup jobs to purge the old backup files after a few days; otherwise, they will keep accumulating forever.
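
One option for that purge step is the xp_delete_file procedure that maintenance plan cleanup tasks use; it is undocumented as well, so treat this as a sketch (it assumes the folder layout above and a three-day retention):

declare @cutoff datetime = dateadd(day, -3, getdate());

-- Arguments: 0 = backup files, folder path, extension (no dot), delete files older than @cutoff, 0 = ignore subfolders
exec master.sys.xp_delete_file 0, N'S:\SQL\Backups\Full\', N'bak', @cutoff, 0;
exec master.sys.xp_delete_file 0, N'S:\SQL\Backups\Log\', N'trn', @cutoff, 0;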

I also add a step to copy the files off-system because I am definitely a believer in getting backups off to another box as quickly as possible, just in case the machine blows up.

The Restore Script

The backup jobs generate restore scripts as they process the backups. You can find a list of the possible restore paths by executing:

select distinct DatabaseName, BackupChainType
from Admin.dbo.RestoreScript;

Then you can pass in a combination of DatabaseName and BackupChainType into a script such as the following:

select ScriptText
from Admin.dbo.RestoreScript
where DatabaseName = 'Admin'
and BackupChainType = 'Full+Log'
order by TimeGenerated asc;

This will output the commands to restore the database to the last log backup. You will need to recover the database manually, and you can easily select which logs to restore or add a STOPAT clause to recover to a point in time.
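
For instance, the tail end of a point-in-time restore might look like this (the file name and time are placeholders):

restore log [Admin] from disk = 'S:\SQL\Backups\Log\Admin.20200323_120000.trn'
	with norecovery, stopat = '2020-03-23 11:45:00';

-- Once the desired logs are restored, bring the database online:
restore database [Admin] with recovery;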

I even have a small SSIS package which runs a Foreach Loop and extracts each restore script out to a text file. The few times I have needed to use it, these pre-generated scripts have made the restores trivially simple to execute.

Potential Enhancements

Even though the script tables allow for differential backups in the mix, there’s nothing here that really uses differentials. For my purposes, that’s overkill, but you can certainly add a “Backup all databases – DIFF” job and include logic to generate appropriate restore scripts.
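
A sketch of what that differential backup command might look like (the database name, path, and file name are placeholders; the cursor-driven job step would build the same statement dynamically, as the full and log jobs do):

backup database [Admin] to disk = 'S:\SQL\Backups\Diff\Admin.20200101_120000.bak'
	with differential, init, compression;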

Creating Realistic Test Data

There are a number of tools out there to create test data, but a few months ago I wanted to try my hand at rolling my own. My initial attempt was a combination of T-SQL and C#, and it works well, but in this blog post I want to demonstrate a pure T-SQL solution. We’ll create random customer data, including first and last names, street address, city and state. This is intended to create addresses that are United States-centric, but the same principles should be applicable anywhere.

Source Data

In order for the data to appear realistic, we need to have some basis for creating that data. The U.S. Census Bureau publishes lists of first and last names, and we’ll start with that as a basis. To keep things simple, imagine that there are only 5 last names in the world:

Name | Frequency
Smith | 1.006
Johnson | 0.810
Williams | 0.699
Brown | 0.621
Jones | 0.621

We need to “normalize” the frequencies so that they sum to 1. We will also keep a running total of the normalized frequency for all previous rows (there are rounding anomalies in the results).

Name | Frequency | Normalized Frequency | Running Total
Smith | 1.006 | 0.2678 | 0.0000
Johnson | 0.810 | 0.2156 | 0.2678
Williams | 0.699 | 0.1861 | 0.4834
Brown | 0.621 | 0.1653 | 0.6694
Jones | 0.621 | 0.1653 | 0.8347

Now we can generate a random number from 0 to 1 (more on that in a bit) and use that to select a record in the table. For instance, if the random number is 0.7421, we find the last record whose running total is less than or equal to 0.7421 (the first such record when sorted in descending order), which is “Brown.”
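
With the running total stored in a column, that lookup is just a TOP (1) query sorted in descending order. A minimal sketch against a hypothetical table holding the five-name example (the real scripts below store the running total in a column named FractionStart):

declare @r float = 0.7421;

select top (1) Name
from dbo.LastNameExample          -- hypothetical table containing the five names and running totals above
where FractionStart <= @r
order by FractionStart desc;      -- returns 'Brown' (running total 0.6694)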

The raw data sources I used were:

Male first names (U.S. Census Bureau)
Female first names (U.S. Census Bureau)
Last names (U.S. Census Bureau)
Cities and states (U.S. Census Bureau)
Street names (I am having a hard time finding where I originally obtained this data. If I figure it out later, I will update this post.)

I have somewhat filtered and modified the data sets, mostly to eliminate frequency information that is rounded to 0 in the raw data. I have also included StreetPrefix and StreetSuffix data sets that I came up with myself.

Let’s create a few tables to store the data:

CREATE TABLE [dbo].[Name]
(
	[ID] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
	[Name] nvarchar (40) NULL,
	[NameType] char (6) NULL,
	[Fraction] float NULL
);
 
CREATE TABLE [dbo].[City]
(
	[ID] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
	[City] nvarchar (150) NULL,
	[State] char (2) NULL,
	[Population] int NULL
);
 
CREATE TABLE [dbo].[StreetPrefix]
(
	[ID] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
	[Prefix] nvarchar (20) NULL,
	[Weight] float NULL
);
 
CREATE TABLE [dbo].[StreetSuffix]
(
	[ID] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
	[Suffix] nvarchar (20) NULL,
	[Weight] float NULL
);
 
CREATE TABLE [dbo].[Street]
(
	[ID] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
	[StreetName] nvarchar (150) NULL
);

Note that each of the data sets has some sort of weighting factor except for the Street data. I don’t have any sort of frequency information, so I will just make the assumption that each street is equally likely.

Once the tables are created, download the data file and run it. The data file is a SQL script with a bunch of inserts into the tables. It’s about 200,000 lines long, but it should run fine in SSMS; it just may take a minute or two. Also note that I have combined all of the first and last names into a single table.

Finally, let’s normalize the data sets. It’s easiest to use windowing functions, so this requires at least SQL Server 2012, but with some rewriting it can be done in earlier versions as well. Note that for the names data set we separate out last names from first names (male and female combined). Since the street data does not contain any weighting information, we just use the COUNT function rather than sum. Finally, we’ll create some useful nonclustered indexes.

alter table dbo.Name add NameGroup varchar(10);
 
update dbo.Name set NameGroup = 'Last' where NameType = 'Last';
update dbo.Name set NameGroup = 'First' where NameType = 'Female';
update dbo.Name set NameGroup = 'First' where NameType = 'Male';
 
alter table dbo.Name add FractionStart float;
alter table dbo.City add FractionStart float;
alter table dbo.StreetPrefix add FractionStart float;
alter table dbo.StreetSuffix add FractionStart float;
alter table dbo.Street add FractionStart float;
 
with NameData as
(
	select FractionStart,
		1.0 * isnull(sum(Fraction) over (partition by NameGroup order by (select null) rows between unbounded preceding and 1 preceding), 0) / sum(Fraction) over (partition by NameGroup) NewFractionStart
	from dbo.Name
)
update NameData
set FractionStart = NewFractionStart;
 
with CityData as
(
	select FractionStart,
		1.0 * isnull(sum(Population) over (order by (select null) rows between unbounded preceding and 1 preceding), 0) / sum(Population) over () NewFractionStart
	from dbo.City
)
update CityData
set FractionStart = NewFractionStart;
 
with PrefixData as
(
	select FractionStart,
		1.0 * isnull(sum(Weight) over (order by (select null) rows between unbounded preceding and 1 preceding), 0) / sum(Weight) over () NewFractionStart
	from dbo.StreetPrefix
)
update PrefixData
set FractionStart = NewFractionStart;
 
with SuffixData as
(
	select FractionStart,
		1.0 * isnull(sum(Weight) over (order by (select null) rows between unbounded preceding and 1 preceding), 0) / sum(Weight) over () NewFractionStart
	from dbo.StreetSuffix
)
update SuffixData
set FractionStart = NewFractionStart;
 
with StreetData as
(
	select FractionStart,
		1.0 * isnull(count(*) over (order by (select null) rows between unbounded preceding and 1 preceding), 0) / count(*) over () NewFractionStart
	from dbo.Street
)
update StreetData
set FractionStart = NewFractionStart;
 
create nonclustered index IX_Name_NormalizedFraction on dbo.Name (NameGroup, FractionStart);
create nonclustered index IX_City_NormalizedFraction on dbo.City (FractionStart);
create nonclustered index IX_StreetPrefix_NormalizedFraction on dbo.StreetPrefix (FractionStart);
create nonclustered index IX_StreetSuffix_NormalizedFraction on dbo.StreetSuffix (FractionStart);
create nonclustered index IX_Street_NormalizedFraction on dbo.Street (FractionStart);

Now that we have data, let’s take a look at the idea of randomness.

Randomness

Unfortunately, the built-in SQL Server rand() function has a lot of limitations, not the least of which is that the function returns the same value when called in set-based operations. For instance, try running

select ID, Suffix, Weight, FractionStart, rand() RandValue from StreetSuffix;

You’ll get a different value for the RandValue column every time you run this statement, but you’ll also get the same value in each row. This makes rand() OK for row-by-row operations, but fairly useless in the set-based world.

Fortunately, the newid() function comes to the rescue. For example, consider

select ID, Suffix, Weight, FractionStart, newid() RandGuid from StreetSuffix;

Every time you run this, you will get a different GUID in each row. To turn this into a random number, we can compute the binary_checksum() of the GUID. Since binary_checksum() returns a value in the range of int, we can normalize the random value by adding a number to it to ensure the result is non-negative and then dividing by the range of int values.

select ID, Suffix, Weight, FractionStart,
	(binary_checksum(newid()) + 2147483648.) / 4294967296. RandValue
from StreetSuffix;

It’s ugly, but we now get a random value that is greater than or equal to zero and less than one.

Now we can generate a random city:

select top 1 City, State
from dbo.City
where FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
order by FractionStart desc;

If you execute this statement a number of times, you find that it tends to favor larger cities. This is exactly what we expect. All other things being equal, realistic data will show far more customers from Los Angeles, CA than from a small midwestern town.

Random Values over Ranges of Data

So far, we’ve covered how to select a random item out of a finite list of data, but what about generating random values over ranges? In our situation, we need to create a house number as part of the address. My initial arbitrary rules were:

  • The house number should contain between 1 and 5 digits. The specific number of digits is randomly determined.
  • If the house number has 5 digits, the first digit should be 1. (This is to avoid fairly unrealistic addresses such as 87369 Main St.)
  • There should be no leading zeros in the house number.

We can generate a single random digit from 0 to 9 with this statement:

select cast((binary_checksum(newid()) + 2147483648.) / 4294967296. * 10. as int);

Now, we could string together some code that follows the above rules exactly, but what they really come down to is that we need a number from 1 to 19,999:

select 1 + cast((binary_checksum(newid()) + 2147483648.) / 4294967296. * 19999. as int);

Certainly a case could be made that house numbers will tend toward smaller values (and, even better, could be correlated with the size of the associated city), but we’ll keep things simple here and stick with this requirement.

Putting It Together

Now we have the data that we need, as well as a means to produce data randomly. Using the same strategy that we used before to get a random city, we can get all the different bits of pieces of the customer record:

with FirstName as
(
	select top 1 Name FirstName
	from dbo.Name
	where NameGroup = 'First'
	and FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by NameGroup, FractionStart desc
), LastName as
(
	select top 1 Name LastName
	from dbo.Name
	where NameGroup = 'Last'
	and FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by FractionStart desc
), HouseNumber as
(
	select 1 + cast((binary_checksum(newid()) + 2147483648.) / 4294967296. * 19999. as int) HouseNumber
), StreetPrefix as
(
	select top 1 Prefix
	from dbo.StreetPrefix
	where FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by FractionStart desc
), StreetName as
(
	select top 1 StreetName
	from dbo.Street
	where FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by FractionStart desc
), StreetSuffix as
(
	select top 1 Suffix
	from dbo.StreetSuffix
	where FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by FractionStart desc
), CityState as
(
	select top 1 City, State
	from dbo.City
	where FractionStart <= (binary_checksum(newid()) + 2147483648.) / 4294967296.
	order by FractionStart desc
), RandomCustomer as
(
	select	FirstName.FirstName, 
			LastName.LastName, 
			cast(HouseNumber.HouseNumber as varchar(5)) + ' ' + StreetPrefix.Prefix + ' ' + StreetName.StreetName + ' ' + StreetSuffix.Suffix Address,
			CityState.City,
			CityState.State
	from	FirstName, LastName, HouseNumber, StreetPrefix, StreetName, StreetSuffix, CityState
)
select *
from RandomCustomer;

It would be nice if we could generate multiple customers at once, but here we run into some trouble. If we try joining the above to a tally table, we might hope that we get a whole bunch of customers. But, alas, the optimizer outsmarts us and all we get is repeated rows (well, mostly repeated; the house number does in fact change from row to row, but otherwise it’s all the same output). Nor can we wrap the above statement into a table-valued function, because the newid() function is not allowed inside a function.

There is a solution, however. It’s a bit clunky, but it does work. Let’s create a TVF where the random values are passed in as parameters:

if exists (select * from sys.objects where name = 'fnGenerateRandomCustomer' and type = 'IF')
	drop function fnGenerateRandomCustomer;
go
create function fnGenerateRandomCustomer(@r1 float, @r2 float, @r3 float, @r4 float, @r5 float, @r6 float, @r7 float)
returns table
as
return
with FirstName as
(
	select top 1 Name FirstName
	from dbo.Name
	where NameGroup = 'First'
	and FractionStart <= @r1
	order by NameGroup, FractionStart desc
), LastName as
(
	select top 1 Name LastName
	from dbo.Name
	where NameGroup = 'Last'
	and FractionStart <= @r2
	order by FractionStart desc
), HouseNumber as
(
	select 1 + cast(@r3 * 19999. as int) HouseNumber
), StreetPrefix as
(
	select top 1 Prefix
	from dbo.StreetPrefix
	where FractionStart <= @r4
	order by FractionStart desc
), StreetName as
(
	select top 1 StreetName
	from dbo.Street
	where FractionStart <= @r5
	order by FractionStart desc
), StreetSuffix as
(
	select top 1 Suffix
	from dbo.StreetSuffix
	where FractionStart <= @r6
	order by FractionStart desc
), CityState as
(
	select top 1 City, State
	from dbo.City
	where FractionStart <= @r7
	order by FractionStart desc
), RandomCustomer as
(
	select	FirstName.FirstName, 
			LastName.LastName, 
			cast(HouseNumber.HouseNumber as varchar(5)) + ' ' + StreetPrefix.Prefix + ' ' + StreetName.StreetName + ' ' + StreetSuffix.Suffix Address,
			CityState.City,
			CityState.State
	from	FirstName, LastName, HouseNumber, StreetPrefix, StreetName, StreetSuffix, CityState
)
select *
from RandomCustomer;

Then we invoke the function by generating the random values and passing them to the function. We’ll generate a virtual Nums table to control the number of customers produced, and then generate seven random numbers to be passed to the TVF we just created.

declare @customersToGenerate int = 10;
 
with l0 as (select 1 v union all select 1), l1 as (select a.v from l0 a, l0), l2 as (select a.v from l1 a, l1),
l3 as (select a.v from l2 a, l2), l4 as (select a.v from l3 a, l3), l5 as (select a.v from l4 a, l4),
Nums as (select row_number() over (order by (select null)) n from l5),
RandomValues as
(
	select	(binary_checksum(newid()) + 2147483648.) / 4294967296. r1,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r2,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r3,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r4,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r5,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r6,
			(binary_checksum(newid()) + 2147483648.) / 4294967296. r7
	from	Nums n
	where	n.n <= @customersToGenerate
)
select cust.FirstName, cust.LastName, cust.Address, cust.City, cust.State
from RandomValues
cross apply dbo.fnGenerateRandomCustomer(r1, r2, r3, r4, r5, r6, r7) cust;

Here is the output from running this script. As you would expect, the results are going to be different every time.

RandomCustomerData

If you look at the actual query plan, you can see that the system is doing an index seek and key lookup on each of the data tables for each customer generated, which is not surprising given the random nature of the query. This means that as the number of customers generated increases, there is a lot of overhead doing those key lookups. However, it is still reasonably efficient. On my machine I was able to insert 1,000,000 randomly generated customers into a table in about 30 seconds.

Summary

I have put together this post merely to present an example of how randomized test data can be generated. The focus has been on generating customer names and addresses, but the same principles can be applied to creating a wide variety of data types. It’s not perfect, but it can suffice nicely for a lot of situations.