SQL Server Agent PowerShell Step Tips


Over the years of using PowerShell with SQL Server I have learned a few things about how to use the PowerShell step type in a SQL Agent job.  When running a PowerShell job step there are a few things you should know.  There is a post by SQLHammer that illustrates one of them here: if you are using a PowerShell job step, you will want to put a Set-Location C: at the top of the script, especially if you are going to do anything in the filesystem, because the SQL Server provider does not play well with filesystem interaction.

Error Handling in the Step

In the job step you will see the dialog below:


What you don’t see is how to make the job step succeed or fail. With most commands in modules, you may find that errors cause the step to fail because of the way the module reports the failure (some kind of throw or a Stop condition outside your control). But for errors that would normally show red on the screen while the script continues, remember that the default ErrorAction is Continue: even though you get an error, PowerShell will just keep going and the step will report success.

Here is the tip. Say you run something like dir c:\dbaduck. A missing directory is not a critical error, so if you ran this in a job step, by default the step would show success even though the command failed because the directory does not exist.  For the statements that should cause the job to fail and stop, you need to use -ErrorAction Stop on the command.

dir c:\dbaduck -ErrorAction Stop

The above command stops the job if dbaduck is not a directory in the file system. Remember that failing the job matters when you want to govern the outcome, or the actions that run after the step.
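A quick way to see both behaviors in a plain console (the path is just an example):

```powershell
# Default ErrorAction is Continue: the error prints in red, but execution carries on
dir C:\DoesNotExist
"still running"

# With -ErrorAction Stop the same error becomes terminating, which fails the job step
dir C:\DoesNotExist -ErrorAction Stop
"never reached"
```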

Why become a PowerShell DBA

If you find this article valuable and decide that you want to get your learn on with PowerShell and SQL Server, there is a SQLSkills class that I am teaching this year. Please join me to get a great start on growing your toolset by adding PowerShell to your toolbelt. The Early Bird discount is still available until March 10, 2017.

I have heard many times, from many people, that it is not worth learning PowerShell as a DBA.  I have been using PowerShell since v1.0 and have found it amazingly rewarding. I will illustrate why in this entry, and hopefully some of it will resonate with you and you will find it worth learning at least a little PowerShell for your DBA job.

First, PowerShell was built to manage Windows Servers, and will soon manage Linux servers in some measure as .NET Core comes into play.  Back in the day, if you were a DBA and wanted to manage SQL Servers, you used a lot of SMO (Shared Management Objects), and that meant programming in C#, or pseudo-C# in PowerShell: creating objects, managing properties, and so on. That was not for everyone, I know. I have a developer background and an IT background, so managing servers and Active Directory and building programs to do things was part of my world a long time ago.  But there were not a lot of tools in the marketplace then for doing things against SQL Server with PowerShell, so it was not widely adopted by DBAs.

In the SQL Server 2008 days, Microsoft wrote PowerShell snap-ins that gave us access to some cmdlets (commands) for managing things, and introduced a PowerShell provider for SQL Server that turned your SQL Server into a series of paths (SQLSERVER:\sql\servername\instancename). That was pretty powerful, at least to a guy like me looking to automate many repetitive tasks. With those cmdlets and the provider, there was now a way to use PowerShell to get at SQL Server without writing .NET code. It certainly did not cover the breadth of the product the way Exchange had with its cmdlets, but it was a start of some love. This is where the dawn of becoming a PowerShell DBA began.

Now Why Investigate

For a DBA, there are many times you want information about your server, and there is plenty of TSQL to get it. But you need to know the DMVs or the system tables to get at that information, and for a fairly new DBA that can be a real challenge. Here is where the paradigm begins. SMO encapsulates the objects in SQL Server into classes (an object-oriented term) with methods and properties. These classes represent the objects by having the appropriate methods and properties to act on them.  Let me illustrate.

We all know that there is a Database object in SQL Server and there are many TSQL commands that we can use to find out information such as…

SELECT * FROM sys.databases

SELECT * FROM sys.database_files

and many more….

It is common practice to create a set of TSQL scripts and take them with you as you go. One of the values of longevity in the industry is what you have created and built up, and there is no argument from me on that mentality. It parallels the PowerShell mentality: you can reuse the scripts later on and not have to remember or rebuild them on the fly. But is there a better way? Maybe, but at least there is “another” way.

Enter PowerShell

PowerShell, in and of itself, is a mechanism to do many things. I advocate that DBAs learn it because automation, scripting, and information gathering can change your job in a way that is much different from running TSQL scripts and copy-pasting results into documents, tables, or even Excel.  I will take you through a couple of scenarios, with examples, to give you a flavor of an additional tool. (I am not advocating replacing SSMS or any other tool with PowerShell, only making you more efficient.)

Say I want to have a list of databases with their name and recovery model.


SELECT name, recovery_model FROM sys.databases;


Get-SqlDatabase -ServerInstance servername | Select Name, RecoveryModel | Format-Table * -AutoSize

We won’t really go into how you get to the point of being able to run these commands, just compare them. They are not much different from each other: one runs in SSMS or sqlcmd, the other in the PowerShell console, the ISE, or another tool that executes PowerShell.  The results are pretty similar as well.



Now that doesn’t look too scary. You get the same results.  Here is how you would get it with the provider.
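Roughly, the provider version looks like this (servername\instancename is a placeholder, and -Force is what pulls in the system databases):

```powershell
# With the SQL Server provider loaded, databases are just a path
Set-Location SQLSERVER:\SQL\servername\instancename\Databases
Get-ChildItem -Force | Select Name, RecoveryModel | Format-Table -AutoSize
```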


Notice that I used a path-like structure to get to the databases in the provider.  I used the -Force parameter to ensure that I got the system databases. Now that seems the same, other than the tool. So you ask: why do it?  Why learn PowerShell to get the same information?  Here is the answer.

Objects are the answer

In TSQL you get results in a grid or as text; you copy and paste, or you just look at it and read it. It seems like you just get text back, whether in PowerShell or TSQL.  But in PowerShell everything is an object, and objects all (potentially) have methods and properties.  Let me illustrate an interesting part of PowerShell and objects.

Say I want to change the recovery model to Full on my DBA database.  In TSQL I could do it in a couple of ways, and it would look something like this.


Let’s take a look at how you do it with PowerShell and why it would be valuable to get this tool in your toolbelt.
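With the provider loaded, the PowerShell version is only a few lines (localhost\DEFAULT and the DBA database are placeholders):

```powershell
# Grab the database through the provider; it comes back as an SMO Database object
$db = Get-Item SQLSERVER:\SQL\localhost\DEFAULT\Databases\DBA
$db.RecoveryModel = "Full"   # read/write property
$db.Alter()                  # pushes the change to the server
```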


What you see is the retrieval of the database as an SMO object (the provider always returns an SMO object). There is a property called RecoveryModel that tells me which recovery model it is in. The property is read/write, so I can set it with the words “FULL”, “SIMPLE” or “BULKLOGGED”, and then I use the Alter() method as above.  Methods have the parentheses () after the method name, and I could have inserted ($true) in the parentheses to do a WITH ROLLBACK IMMEDIATE, as I would if I were changing a property that required full access.  Now that may not look very advantageous, but imagine the next scenario.


I create a function to change my recovery model.  I can use parameters for my SQL Server instance, the database, and the RecoveryModel.  Now I have a tool that does not require SSMS to be installed and lets me call a function to change my recovery model.  Once this is written, saved (and even put in a module for easy loading), and loaded into my environment, I can use it easily for any database on any server.
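A minimal sketch of such a function; the name and parameters are mine, and it assumes SMO is already loaded:

```powershell
function Set-DbRecoveryModel {
    param(
        [string]$ServerInstance,
        [string]$Database,
        [ValidateSet('Full','Simple','BulkLogged')]
        [string]$RecoveryModel
    )
    # Connect via SMO and flip the property, then Alter() to apply it
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
    $db = $server.Databases[$Database]
    $db.RecoveryModel = [Microsoft.SqlServer.Management.Smo.RecoveryModel]$RecoveryModel
    $db.Alter()
}

# Usage
Set-DbRecoveryModel -ServerInstance localhost -Database DBA -RecoveryModel Full
```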

Like I said before, this is contrived, and it works because I have SMO loaded, which is not hard to do; but this entry is all about why, not how to get things loaded.  I have a SQL PowerShell Stairway at SqlServerCentral.com that you can read to help you get that part worked out.  Not hard, but not the reason for this post.

Join me in the quest to become a PowerShell DBA. Stay tuned for more on this site and join me in a class soon.

T-SQL Tuesday #77 - SMO is my favorite feature


This is a post for TSQL Tuesday #77 about my favorite SQL Server feature.  I write a lot of automation using PowerShell, and I use SMO (Shared Management Objects) all the time. It is my favorite feature because it allows me to get information from SQL Server using an object model instead of TSQL.  I am pretty good at TSQL and have a lot of SQL scripts that I use, but when I am gathering information, or just need a quick piece of it, SMO can give it to me with a canned script that I call with parameters, and there is no real client that I have to have installed on the server or my workstation.  It is really lightweight, and I get the information back in object form.

SQL Server has a lot to offer, but one perhaps little-known option is that Microsoft releases Feature Packs for each version of SQL Server and each Service Pack, so that you can have the objects without installing Management Studio and the other tools.  To get started you will need the latest version, which can connect backwards to lower versions. You get these objects for 2014 here.

Then you will need the following to get started in the order below: (either x86 or x64 for your platform for each item)

  • SqlClrTypes
  • Shared Management Objects (SMO)
  • PowerShell Tools

The SqlClrTypes must come first because it contains the CLR types, like HierarchyID, the geography types, etc. The Shared Management Objects contain the libraries you use to access SQL objects via .NET. Then there are the PowerShell Tools, which contain the SQL Server provider that lets you import a module in PowerShell and access SQL Server in a path-like structure.

Once you have these installed, you can then use PowerShell, or VB or C# to connect to SQL Server and use them. I will give you an example below to show you how they work and you can take it from there.

# The SMO assembly version matches your SQL Server version (12.0.0.0 for SQL Server 2014)
Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -Args localhost  # replace localhost with your server name

$server.Databases | Format-Table * -AutoSize

This is just a quick example of how to get information from SQL Server using SMO.  The command above is generic and not always the most useful, but it illustrates the point: the objects have the properties you are used to seeing in Management Studio, like the database Name and Recovery Model. These properties are not always easy to get to in the tools, but in PowerShell and SMO they are.

If you want more in depth information on how to get started with SQL PowerShell, take a look at my Stairway on SQL PowerShell on www.sqlservercentral.com and many other Stairways.

Happy TSQL Tuesday #77.

Using SQL Authentication with SQLPS Module

In PowerShell, you can import the SQLPS module when the tools are installed on the machine. When you import the module you are dropped into the SQLSERVER:\ provider, and it automatically uses SSPI (Windows Authentication). Many users would like to be able to use SQL Authentication to access the provider; I ran across this question at a SQL Saturday in OKC (#SQLSATOKC).  When you right-click on an item in SSMS (Management Studio), you can almost always click “Start PowerShell”, which loads SQLPS.EXE and drops you into the context of the item you right-clicked on.

Right click on Database and click Start PowerShell

This puts you in the context that you right-clicked on.

Simple enough: you are now in the provider, in a path-like structure, and can get to the items you want.  But you are still using Windows Authentication.  Let’s look at the drives; you can see SQLSERVER:\ listed, indicating that it is a new drive.  Get-PSDrive shows the drives PowerShell knows about and the providers behind them.  FileSystem is the typical provider when thinking of drives, but there are other providers, as you can see in the graphic below.

Get-PsDrive Windows Auth

One thing to keep in mind when you use the provider this way: the objects and directories you see in it may seem special, but they are the same objects you would get in Management Studio by right-clicking the server or database and choosing Properties. The objects are just represented in path-like structures instead of an object explorer; and when you look at the Object Explorer view, you can see a directory-like tree there too. So not really that much different, except there is less mouse-clicking in the provider window of PowerShell.

Get-PsDrive SQL Authentication

Now let’s get to the part where we use the SQLPS provider with SQL Authentication instead of Windows Authentication. To do this little trick, import the module as in the graphic below. The next step is to create a new drive with New-PSDrive and its parameters: specify the Name, PSProvider, Root, and the Credential that will be used to authenticate to SQL. One thing to note: with the SQLServer provider, the Root you give New-PSDrive MUST go at least to the instance level; you can go deeper than the instance, but no shallower. Then use Get-Credential in ( ) for the Credential parameter, which prompts you to type in a credential that will be used to connect to SQL Server. If the credentials you type in are valid, the drive is created with them.
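The steps above put together look roughly like this (the drive name TEST and servername\instancename are placeholders):

```powershell
Import-Module SQLPS -DisableNameChecking

# Root must reach at least the instance level for the SqlServer provider
New-PSDrive -Name TEST -PSProvider SqlServer `
    -Root SQLSERVER:\SQL\servername\instancename `
    -Credential (Get-Credential)
```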

SQL Authentication SQL Provider 1

When you have created the drive with New-PSDrive, you can CD (an alias for Set-Location) to it using the entire drive sequence (CD TEST:\).  This puts you at the root of the drive we created with New-PSDrive. You will also notice, in my case, that Get-PSDrive shows the drive with a +sa on the end, which indicates that it is using SQL Authentication.

SQL Authentication SQL Provider 2

I hope that was helpful. We will see you out there, and later I will post a way to save credentials in a file that no one can use except the user that created it.

TSQL Tuesday – Monitoring


There are many forms of monitoring. When I thought of monitoring, I used to think of tools like Red Gate, SQL Sentry, Idera, or Confio Ignite (SolarWinds DPA).  But recently I have begun to dive more into monitoring from the perspective of proactively checking things out.

I will share a PowerShell script that I use to monitor, or capture state for, tables across an enterprise, to let me watch trends of growth and activity in a database, mostly related to tables.  An overview of the script follows.

What’s in a Table?

When you think about what a table can tell you when you look at it, there are many things that jump right out.

  • Rowcount
  • Size
  • How many Indexes
  • Is it Clustered
  • Defined length of the table
  • and many more

These are just a few.  In my DBA database I have the following tables, created by the code here:

  • stats_Database
  • stats_File
  • stats_Table


To get started you need:

  • Write-DataTable, for loading the results into a table in SQL Server.
  • SQL Server SMO installed for use in PowerShell (install SqlClrTypes, Shared Management Objects, and PowerShell Tools, in that order).
  • A database for the tables in the TSQL scripts below.
  • A table for the iterations used in the monitoring (script in the list below).

These are used for storing and monitoring database space, file stats, and table stats.  Take a look and see what you find. Remember, these are not polished; they are a work in progress, but for the sake of TSQL Tuesday 66 I decided to unleash them anyway, and I will clean them up and edit the post later.

Uses for this monitoring

After you gather the information below and get it into your tables, you can use the iteration dates to compare: what did the table look like at the beginning of the month?  What has changed between then and now?  And many other questions that can be posed.  Like I said, it is a little raw, but it is something that I have used to capture information.  There are many more items you could store, but this is a good start, and for the sake of answering the questions in this TSQL Tuesday post: I use PowerShell to get information to store for monitoring purposes.
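Once a few iterations are stored, those trend questions become simple joins. A sketch, assuming the SQLPS tools (Invoke-Sqlcmd) are available and localhost/DBA are placeholders for your stats instance and database:

```powershell
# Compare row counts and data space across iterations for each table
Invoke-Sqlcmd -ServerInstance localhost -Database DBA -Query @"
SELECT t.table_name, i.GatherDate, t.row_count, t.data_space
FROM dbo.stats_Table t
JOIN dbo.Iterations i ON i.IterationId = t.IterationId
ORDER BY t.table_name, i.GatherDate;
"@ | Format-Table -AutoSize
```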

There are 5 scripts below; at the bottom of the post I will link to a Zip file that combines all of them for you.



/****** Object:  Table [dbo].[Iterations]    Script Date: 5/12/2015 2:51:57 PM ******/


CREATE TABLE [dbo].[Iterations](
	[IterationId] [BIGINT] IDENTITY(1,1) NOT NULL,
	[GatherDate] [DATETIME2](3) NOT NULL,  -- populated by the gathering script below
 CONSTRAINT [PK_Iterations] PRIMARY KEY CLUSTERED  -- constraint name reconstructed
(
	[IterationId] ASC
)
)




/****** Object:  Table [dbo].[stats_Database]    Script Date: 5/12/2015 2:27:28 PM ******/



CREATE TABLE [dbo].[stats_Database](
	[database_name] [VARCHAR](256) NOT NULL,
	[server_name] [VARCHAR](256) NOT NULL,
	[database_size] [DECIMAL](12, 2) NULL,
	[data_space_usage] [DECIMAL](12, 2) NULL,
	[index_space_usage] [DECIMAL](12, 2) NULL,
	[last_backup_date] [DATETIME2](3) NULL,
	[last_diff_backup_date] [DATETIME2](3) NULL,
	[last_log_backup_date] [DATETIME2](3) NULL,
	[auto_shrink] [BIT] NULL,
	[auto_close] [BIT] NULL,
	[auto_create_stats_enabled] [BIT] NULL,
	[auto_update_stats_enabled] [BIT] NULL,
	[collation] [VARCHAR](128) NULL,
	[compatibility_level] [VARCHAR](64) NULL,
	[page_verify] [VARCHAR](32) NULL,
	[default_filegroup] [VARCHAR](64) NULL,
	[database_id] [INT] NOT NULL,
	[recovery_model] [VARCHAR](16) NULL,
	[space_available] [DECIMAL](12, 2) NULL,
	[is_online] [BIT] NULL,
	[IterationId] [BIGINT] NOT NULL,
	[create_date] [DATETIME2](3) NULL,
	[is_system_object] [BIT] NOT NULL DEFAULT ((0)),
 CONSTRAINT [PK_statsDatabase_IterationDatabaseId] PRIMARY KEY CLUSTERED 
(
	[IterationId] ASC,
	[server_name] ASC,
	[database_name] ASC
)
)




/****** Object:  Table [dbo].[stats_File]    Script Date: 5/12/2015 2:31:25 PM ******/



CREATE TABLE [dbo].[stats_File](
	[server_name] [VARCHAR](256) NOT NULL,
	[database_name] [VARCHAR](256) NOT NULL,
	[bytes_read_from_disk] [BIGINT] NULL,
	[bytes_written_to_disk] [BIGINT] NULL,
	[physical_name] [VARCHAR](256) NULL,
	[growth_rate] [INT] NULL,
	[growth_type] [VARCHAR](64) NULL,
	[max_size] [BIGINT] NULL,
	[logical_name] [VARCHAR](256) NULL,
	[number_of_disk_reads] [BIGINT] NULL,
	[number_of_disk_writes] [BIGINT] NULL,
	[size_in_mb] [BIGINT] NULL,
	[used_space_in_mb] [BIGINT] NULL,
	[file_id] [INT] NOT NULL,
	[file_type] [VARCHAR](16) NULL,
	[IterationID] [BIGINT] NOT NULL,
	[available_space_in_mb] [BIGINT] NULL,
 CONSTRAINT [PK_statsFile] PRIMARY KEY CLUSTERED  -- constraint name reconstructed
(
	[IterationID] ASC,
	[server_name] ASC,
	[database_name] ASC,
	[file_id] ASC
)
)





/****** Object:  Table [dbo].[stats_Table]    Script Date: 5/12/2015 2:34:53 PM ******/



CREATE TABLE [dbo].[stats_Table](
	[server_name] [VARCHAR](256) NOT NULL,
	[database_name] [VARCHAR](256) NOT NULL,
	[table_name] [VARCHAR](256) NOT NULL,
	[schema_name] [VARCHAR](256) NOT NULL,
	[data_space] [BIGINT] NOT NULL,
	[index_space] [BIGINT] NOT NULL,
	[row_count] [INT] NOT NULL,
	[column_count] [INT] NOT NULL,
	[is_clustered] [BIT] NOT NULL,
	[is_pk_clustered] [BIT] NOT NULL,
	[primary_key_name] [VARCHAR](256) NULL,
	[primary_key_columns] [VARCHAR](256) NULL,
	[primary_key_length] [INT] NULL,
	[defined_length] [INT] NOT NULL,
	[index_count] [INT] NOT NULL,
	[IterationId] [INT] NOT NULL
)




<#
.SYNOPSIS
   Gathers information about each table for use in monitoring growth and activity.

.PARAMETER <paramName>
   <An example of using the script>
#>
param(
	[string]$StatsInstance = 'localhost',
	[string]$StatsDatabase = 'DBA',
	[string]$AssemblyVersion = "",
	[switch]$IncludeSystem   # referenced below when skipping system tables
)

Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=$AssemblyVersion, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
Add-Type -AssemblyName "Microsoft.SqlServer.SmoExtended, Version=$AssemblyVersion, Culture=neutral, PublicKeyToken=89845dcd8080cc91"

# Write-DataTable: https://gallery.technet.microsoft.com/scriptcenter/2fdeaf8d-b164-411c-9483-99413d6053ae
# Save it somewhere to be able to load it.
. d:\bin\Write-DataTable.ps1



$dt = New-Object -TypeName System.Data.DataTable -ArgumentList "AllTablesDef"

$cola = New-Object System.Data.DataColumn server_name, ([string])
$col0 = New-Object system.Data.DataColumn database_name, ([string])
$col0.MaxLength = 256
$col1 = New-Object system.Data.DataColumn table_name,([string])
$col1.MaxLength = 256
$colb = New-Object System.Data.DataColumn schema_name, ([string])
$colb.MaxLength = 256
$col2 = New-Object system.Data.DataColumn data_space,([int64])
$col3 = New-Object system.Data.DataColumn index_space,([int64])
$col4 = New-Object system.Data.DataColumn row_count,([int])
$col5 = New-Object system.Data.DataColumn column_count,([int])
$col6 = New-Object system.Data.DataColumn is_clustered,([boolean])
$col7 = New-Object system.Data.DataColumn is_pk_clustered,([boolean])
$col8 = New-Object system.Data.DataColumn primary_key_name,([string])
$col8.MaxLength = 256
$col9 = New-Object system.Data.DataColumn primary_key_columns,([string])
$col9.MaxLength = 256
$col10 = New-Object system.Data.DataColumn primary_key_length,([int])
$col11 = New-Object system.Data.DataColumn defined_length, ([int])
$col12 = New-Object system.Data.DataColumn index_count,([int])
$col13 = New-Object System.Data.DataColumn IterationId, ([Int64])
# Register the columns with the DataTable
$dt.Columns.AddRange(@($cola, $col0, $col1, $colb, $col2, $col3, $col4, $col5, $col6, $col7, $col8, $col9, $col10, $col11, $col12, $col13))


$unicodeNarrow = "nvarchar,nchar"
$unicodeWide = "ntext"
$gatherDate = [System.DateTime]::Now
$pkColumns = New-Object System.Text.StringBuilder

$destServer = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -Args $StatsInstance
$destdb = $destServer.Databases[$StatsDatabase]

$idValue = $destdb.ExecuteWithResults("INSERT INTO dbo.Iterations (GatherDate) OUTPUT inserted.IterationId VALUES (GetDate())");
$id = [Int64]($idValue.Tables[0].Rows[0][0]);

$spaceTable = New-Object -TypeName System.Data.DataTable -ArgumentList "DBSpaceStats"

$col1 = New-Object system.Data.DataColumn server_name, ([string])
$col1.MaxLength = 256
$col2 = New-Object system.Data.DataColumn database_name,([string])
$col2.MaxLength = 256
$col3 = New-Object System.Data.DataColumn bytes_read_from_disk, ([Int64])
$col4 = New-Object System.Data.DataColumn bytes_written_to_disk, ([Int64])
$col5 = New-Object system.Data.DataColumn physical_name,([string])
$col5.MaxLength = 256
$col6 = New-Object system.Data.DataColumn growth_rate,([Int])
$col7 = New-Object system.Data.DataColumn growth_type,([string])
$col7.MaxLength = 64
$col8 = New-Object system.Data.DataColumn max_size,([Int64])
$col9 = New-Object system.Data.DataColumn logical_name,([string])
$col9.MaxLength = 256
$col10 = New-Object System.Data.DataColumn number_of_disk_reads, ([Int64])
$col11 = New-Object System.Data.DataColumn number_of_disk_writes, ([Int64])
$col12 = New-Object system.Data.DataColumn size_in_mb,([decimal])
$col13 = New-Object system.Data.DataColumn used_space_in_mb,([decimal])
$col14 = New-Object system.Data.DataColumn file_id,([int])
$col15 = New-Object system.Data.DataColumn file_type,([string])
$col15.MaxLength = 16
$col16 = New-Object System.Data.DataColumn IterationId,([Int64])
$col17 = New-Object system.Data.DataColumn available_space_in_mb, ([decimal])
# Register the columns with the DataTable
$spaceTable.Columns.AddRange(@($col1, $col2, $col3, $col4, $col5, $col6, $col7, $col8, $col9, $col10, $col11, $col12, $col13, $col14, $col15, $col16, $col17))


$dbSpace = New-Object -TypeName System.Data.DataTable -ArgumentList "DBSpace"

$col1 = New-Object system.Data.DataColumn database_name, ([string])
$col1.MaxLength = 256
$col2 = New-Object system.Data.DataColumn server_name,([string])
$col2.MaxLength = 256
$col3 = New-Object System.Data.DataColumn database_size, ([decimal])
$col4 = New-Object System.Data.DataColumn data_space_usage, ([decimal])
$col5 = New-Object System.Data.DataColumn index_space_usage, ([decimal])
$col6 = New-Object system.Data.DataColumn last_backup_date,([datetime])
$col7 = New-Object system.Data.DataColumn last_diff_backup_date,([datetime])
$col8 = New-Object system.Data.DataColumn last_log_backup_date,([datetime])
$col9 = New-Object system.Data.DataColumn auto_shrink,([boolean])
$col10 = New-Object system.Data.DataColumn auto_close,([boolean])
$col11 = New-Object system.Data.DataColumn auto_create_stats_enabled,([boolean])
$col12 = New-Object system.Data.DataColumn auto_update_stats_enabled,([boolean])
$col13 = New-Object system.Data.DataColumn collation,([string])
$col13.MaxLength = 128
$col14 = New-Object system.Data.DataColumn compatibility_level,([string])
$col14.MaxLength = 64
$col15 = New-Object system.Data.DataColumn page_verify,([string])
$col15.MaxLength = 32
$col16 = New-Object system.Data.DataColumn default_filegroup,([string])
$col16.MaxLength = 64
$col17 = New-Object system.Data.DataColumn database_id,([Int])
$col18 = New-Object system.Data.DataColumn recovery_model,([string])
$col18.MaxLength = 16
$col19 = New-Object system.Data.DataColumn space_available,([decimal])
$col20 = New-Object system.Data.DataColumn is_online,([boolean])
$col21 = New-Object System.Data.DataColumn IterationId,([Int64])
$col22 = New-Object system.Data.DataColumn create_date, ([datetime])
# Register the columns with the DataTable
$dbSpace.Columns.AddRange(@($col1, $col2, $col3, $col4, $col5, $col6, $col7, $col8, $col9, $col10, $col11, $col12, $col13, $col14, $col15, $col16, $col17, $col18, $col19, $col20, $col21, $col22))


$db = Get-SqlDatabase -sqlserver $destdb.Parent.Name -dbname "master"
$server = $db.Parent

$tableFields = $server.GetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Table])
$columnFields = $server.GetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Column])
$databaseFields = $server.GetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Database])




foreach($svr in (gc d:\bin\servers.txt)) {

	$opsserver = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -Args $svr
	@{ "ServerName" = "$($opsserver.Name)"; "StartEnd" = "Begin"; "Date" = "$(Get-Date)" } | ft * -AutoSize
	$opsserver.SetDefaultInitFields([Microsoft.SqlServer.Management.SMO.Table], $tableFields)
	$opsserver.SetDefaultInitFields([Microsoft.SqlServer.Management.SMO.Column], $columnFields)
	$opsserver.SetDefaultInitFields([Microsoft.SqlServer.Management.SMO.Database], $databaseFields)
	foreach($db in $opsserver.Databases) {

	    if($db.Status -match "Normal") {
			$dbrow = $dbSpace.NewRow()
			$dbrow.is_online = $true
			$dbrow.server_name = $opsserver.Name
			$dbrow.database_name = $db.Name
			$dbrow.database_size = $db.Size
			$dbrow.data_space_usage = $db.DataSpaceUsage
			$dbrow.index_space_usage = $db.IndexSpaceUsage
			$dbrow.last_backup_date = $db.LastBackupDate
			$dbrow.last_diff_backup_date = $db.LastDifferentialBackupDate
			$dbrow.last_log_backup_date = $db.LastLogBackupDate
			$dbrow.auto_shrink = $db.AutoShrink
			$dbrow.auto_close = $db.AutoClose
			$dbrow.auto_create_stats_enabled = $db.AutoCreateStatisticsEnabled
			$dbrow.auto_update_stats_enabled = $db.AutoUpdateStatisticsEnabled
			$dbrow.collation = $db.Collation
			$dbrow.compatibility_level = $db.CompatibilityLevel
			$dbrow.page_verify = $db.PageVerify
			$dbrow.default_filegroup =  $db.DefaultFileGroup
			$dbrow.database_id = $db.ID
			$dbrow.recovery_model = $db.RecoveryModel
			$dbrow.space_available = $db.SpaceAvailable
			$dbrow.iterationId = $id
			$dbrow.create_date = $db.CreateDate
			$dbSpace.Rows.Add($dbrow)
	        $tables = $db.Tables

	        foreach($table in $tables) {
		        if(!$IncludeSystem -and $table.IsSystemObject) { continue }  # skip system tables unless requested
		        $row = $dt.NewRow();
	            $row.server_name = $db.Parent.Name
		        $row.database_name = $db.Name
		        $row.table_name = $table.Name
		        $row.data_space = $table.DataSpaceUsed
		        $row.index_space = $table.IndexSpaceUsed
		        $row.row_count = $table.RowCount
		        $row.column_count = $table.Columns.Count
		        $row.is_clustered = $table.HasClusteredIndex
	            $row.schema_name = $table.Schema
	            $row.is_pk_clustered = $false
		        $definitionLength = 0;
		        $primaryKeyLength = 0;
		        [void]$pkColumns.Remove(0, $pkColumns.Length);

		        $columns = $table.Columns;

		        foreach($col in $columns) {
			        if($unicodeNarrow.Contains($col.DataType.ToString())) {
				        $definitionLength += $col.DataType.MaximumLength * 2;
				        if($col.InPrimaryKey) {
					        $primaryKeyLength += $col.DataType.MaximumLength * 2;
					        [void]$pkColumns.AppendFormat("{0},", $col.Name);
				        }
			        }
			        elseif($unicodeWide.Contains($col.DataType.ToString())) {
				        $definitionLength += $col.DataType.MaximumLength;
				        if($col.InPrimaryKey) {
					        $primaryKeyLength += $col.DataType.MaximumLength;
					        [void]$pkColumns.AppendFormat("{0},", $col.Name);
				        }
			        }
			        elseif($col.DataType.MaximumLength -eq -1) {
				        $definitionLength += 16;
				        if($col.InPrimaryKey) {
					        $primaryKeyLength += 16;
					        [void]$pkColumns.AppendFormat("{0},", $col.Name);
				        }
			        }
			        else {
				        $definitionLength += $col.DataType.MaximumLength;
				        if($col.InPrimaryKey) {
					        $primaryKeyLength += $col.DataType.MaximumLength;
					        [void]$pkColumns.AppendFormat("{0},", $col.Name);
				        }
			        }
		        }
		        $row.defined_length = $definitionLength
		        $row.index_count = $table.Indexes.Count
		        $row.IterationId = $id
		        $row.primary_key_length = $primaryKeyLength
		        $row.primary_key_columns = $pkColumns.ToString()
		        try {
			        $dt.Rows.Add($row)
		        }
	            catch {
	                Write-Output "$($opsserver.Name),$($db.Name),$($table.Name)"
					Write-Output $_.Exception.Message
	            }
	        }
			try {
				Write-DataTable $StatsInstance $StatsDatabase "stats_Table" $dt
				$dt.Clear()   # clear so the next database's rows are not written twice
			}
			catch {
				Write-Error "Could not write to stats_Table"
				Write-Error $_.Exception.Message
			}
		    foreach($fg in $db.FileGroups) {
			    foreach($dbfile in $fg.Files) {
				    $row = $spaceTable.NewRow()
				    $row.server_name = $opsserver.Name
				    $row.database_name = $db.Name
				    $row.file_type = 'Data'
				    $row.file_id = $dbfile.ID
				    $row.logical_name = $dbfile.Name
				    $row.physical_name = $dbfile.FileName
				    $row.size_in_mb = $($dbfile.Size / 1KB)
				    $row.growth_rate = $dbfile.Growth
				    $row.growth_type = $dbfile.GrowthType
				    $row.max_size = $dbfile.MaxSize
				    $row.used_space_in_mb = $($dbfile.UsedSpace / 1KB)
				    $row.available_space_in_mb = $($dbfile.AvailableSpace / 1KB)
				    $row.iterationid = $id
	                $row.number_of_disk_reads = $dbfile.NumberOfDiskReads
	                $row.number_of_disk_writes = $dbfile.NumberOfDiskWrites
					$row.bytes_read_from_disk = $dbfile.BytesReadFromDisk
					$row.bytes_written_to_disk = $dbfile.BytesWrittenToDisk
				    $spaceTable.Rows.Add($row)
			    }
		    }
		    foreach($logfile in $db.LogFiles) {
			    $row = $spaceTable.NewRow()
			    $row.server_name = $opsserver.Name
			    $row.database_name = $db.Name
			    $row.file_type = 'Log'
			    $row.file_id = $logfile.ID
			    $row.logical_name = $logfile.Name
			    $row.physical_name = $logfile.FileName
			    $row.size_in_mb = $($logfile.Size / 1KB)
			    $row.growth_rate = $logfile.Growth
			    $row.growth_type = $logfile.GrowthType
			    $row.max_size = $logfile.MaxSize
			    $row.used_space_in_mb = $($logfile.UsedSpace / 1KB)
			    $row.available_space_in_mb = $(($logfile.Size - $logfile.UsedSpace) / 1KB)
			    $row.iterationid = $id
			    $row.number_of_disk_reads = $logfile.NumberOfDiskReads
	            $row.number_of_disk_writes = $logfile.NumberOfDiskWrites
				$row.bytes_read_from_disk = $logfile.BytesReadFromDisk
				$row.bytes_written_to_disk = $logfile.BytesWrittenToDisk
			    $spaceTable.Rows.Add($row)
		    }
	    }
	}
	try {
		Write-DataTable $StatsInstance $StatsDatabase "stats_File" $spaceTable
		Write-DataTable $StatsInstance $StatsDatabase "stats_Database" $dbSpace
		$spaceTable.Clear()   # clear so the next server's rows are not written twice
		$dbSpace.Clear()
	}
	catch {
		Write-Error "Could not write to stats_File or stats_Database"
		Write-Error $_.Exception.Message
	}
	@{ "ServerName" = "$($opsserver.Name)"; "StartEnd" = "End"; "Date" = "$(Get-Date)" } | ft * -AutoSize
}



Download file for this blog post.