The Number One Rule for Writing Great PowerShell Scripts
Summary: In this article, Microsoft Scripting Guy Ed Wilson discusses the rules for writing great Windows PowerShell scripts.
Hello EB, Microsoft Scripting Guy Ed Wilson here. I am still in holiday mode, and will not shift back into work mode for another week. Right now, the Scripting Wife and I are returning from Greenville, South Carolina, where we have spent the last few days. The big event was the chance to see Richard Wagner’s Die Walküre at the Peace Center. Greenville is a really cool town with a nice downtown area, lots of little shops, and different things to do. I took the following picture while standing outside the hotel one night.
EB, script design need not be difficult, but at the same time it can be complicated if you allow it to become so. In addition, if you do not do any design, there is a good chance you will end up with a script that will be either difficult to use, or difficult to modify in the future.
When you come right down to it, script development shares many features with the software development life cycle. The first task is to analyze requirements. When analyzing requirements for a script, there are several items that must be considered. Each of the requirements will affect how the script is written, and will determine both the usability and the capability of the end product.
Does the script need to expose a graphical user interface? If the script will offer the user a selection of computers upon which to run, you may wish to display a drop-down list of available computers, and then allow the user to pick one or more computer names from the list. This specific scenario would require a WinForm, a listbox, and a button to launch the remainder of the script. In addition, you may also wish to present a textbox to display the returned information.
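As a rough illustration of that scenario, here is a minimal sketch of such a form in Windows PowerShell. The computer names, control layout, and button logic are placeholders for illustration, not a finished script:

```powershell
# Sketch only: a listbox of computer names, a button, and a textbox
# for results, as described above. The names in the list are examples.
Add-Type -AssemblyName System.Windows.Forms

$form = New-Object System.Windows.Forms.Form
$form.Text = 'Select Computers'

$listBox = New-Object System.Windows.Forms.ListBox
$listBox.SelectionMode = 'MultiExtended'   # allow picking more than one
$listBox.Dock = 'Top'
@('Server1','Server2','Workstation1') |
    ForEach-Object { [void]$listBox.Items.Add($_) }

$textBox = New-Object System.Windows.Forms.TextBox
$textBox.Multiline = $true
$textBox.Dock = 'Fill'

$button = New-Object System.Windows.Forms.Button
$button.Text = 'Run'
$button.Dock = 'Bottom'
$button.Add_Click({
    # Replace this with the remainder of the script
    $textBox.Text = "Selected: $($listBox.SelectedItems -join ', ')"
})

$form.Controls.AddRange(@($textBox, $listBox, $button))
[void]$form.ShowDialog()
```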
EB, you might be thinking to yourself, “Yeah, yeah! That would be cool.”
Nevertheless, I always balance the difficulty of the implementation against the increase in productivity. I like to think of these things in terms of Return on Investment (ROI). If it will cost me an extra hour of development time, I want to know that the implementation will save me at least an hour of time in the future.
Instead of a WinForm, a listbox, and a button, a command-line parameter that allows me to type the name of the target computer might be just as useful, and it would certainly be easier to code. Alternatively, maybe I would like to be able to feed the script an input file with a list of computer names in it. On the other hand, maybe I want the script to run as soon as it launches. If that were the case, exposing a graphical interface would be inappropriate.
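A sketch of that parameter-based alternative might look something like the following; the parameter names (-ComputerName and -Path) are my own choices for illustration:

```powershell
# Sketch: accept one or more computer names directly, or read them
# from a text file with one name per line
param(
    [string[]]$ComputerName = $env:COMPUTERNAME,
    [string]$Path
)

if ($Path) { $ComputerName = Get-Content -Path $Path }

foreach ($computer in $ComputerName) {
    # Replace with the real work of the script
    "Processing $computer"
}
```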
As you can see, the use of a graphical user interface depends on how the script will be utilized. In fact, visualizing how a script will be utilized influences several design decisions. It is common to refer to scripting as automation. However, many times, it seems that the way people use scripting is more like a lightweight development platform, and they are engaging in rapid application development (RAD) instead of automation. There is nothing wrong with this approach as long as you know what you are doing, and are choosing the best tool for the job.
So, to get back to script design, one of the main questions to ask is, “How do I anticipate the script being utilized?” This question drives the decision between a graphical interface and command-line arguments, but it also influences the type of output the script will produce. Typical outputs include displaying to the Windows PowerShell console, or writing to a text file, a Microsoft Word document, a Microsoft Excel spreadsheet, or another Office product. But output can also take the form of writing to a database, whether a SQL Server database or Active Directory Domain Services (AD DS), which is also a database. Output can also be directed to the event log, to a diagnostics log, or even to the registry. The output itself can be plain text, comma-separated value (CSV) or other delimited text, HTML, XML, or other formatted, decorated text.
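To make that range of output targets concrete, here is a hypothetical illustration that sends the same data to several of the destinations mentioned above; the C:\fso paths are examples only:

```powershell
# The same data, directed to several different output targets
$data = Get-Process | Select-Object -Property Name, Id -First 5

$data                                                 # console display
$data | Out-File -FilePath C:\fso\proc.txt            # plain text file
$data | Export-Csv -Path C:\fso\proc.csv -NoTypeInformation   # CSV
$data | ConvertTo-Html | Out-File C:\fso\proc.html    # HTML report
$data | Export-Clixml -Path C:\fso\proc.xml           # XML
```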
The way in which the script will be utilized also determines the location in which the script will be stored. For example, if the script runs from inside the Windows PowerShell console and accepts command-line parameters that modify the way it executes, it makes sense to store the script in an easily accessible location. I like to create a folder off the root of the drive in which to store my command-line scripts. Whether I call the folder fso, bin, scratch, or scripts does not really matter. My requirements for such a folder are that the name contains no spaces, that it is short, and that it is used consistently across the network. The next step is to apply the appropriate amount of security to the folder. If you anticipate normal users launching the scripts, they need at least read permission. If you expect users to edit the scripts before running them, they need write permission in addition to read permission. On my network, I give the administrators group read/write permission, myself full control, and normal users read permission. The scripts stored in the local folder are a replica of a master set of scripts stored on a network share; the logon script verifies that the local scripts are up to date.
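Such permissions can themselves be scripted. The following is a hedged sketch using the Get-Acl and Set-Acl cmdlets; the folder path and the group names (Users, Administrators) are examples and may differ on your network:

```powershell
# Example only: grant Users read and Administrators modify on C:\fso
$folder = 'C:\fso'
$acl = Get-Acl -Path $folder

# Inheritance flags make the rules apply to subfolders and files
$userRead = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
    -ArgumentList 'Users','ReadAndExecute','ContainerInherit,ObjectInherit','None','Allow'
$adminWrite = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
    -ArgumentList 'Administrators','Modify','ContainerInherit,ObjectInherit','None','Allow'

$acl.AddAccessRule($userRead)
$acl.AddAccessRule($adminWrite)
Set-Acl -Path $folder -AclObject $acl
```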
I would like to return to the idea of output as a design consideration. It is perfectly permissible for a script to gather data and to return that data in the form of an object or a series of objects. In fact, this is an excellent way of writing scripts, and it has a great deal of flexibility built into it. The output from the script feeds into one of the output cmdlets. For example, a script might be written that returns an object containing the name of the computer and the current version of the BIOS. This output can then be displayed on the console by piping the output of the script to the Format-Table cmdlet. The command to perform such an action would not be very difficult. In addition, if you later decided you wanted to write the output to a file, you could pipe the output to the Set-Content Windows PowerShell cmdlet. If you wanted the ability to manipulate the display in an interactive fashion, you could pipe the output to the Out-GridView Windows PowerShell cmdlet. Such flexibility, however, comes with a cost: a more complex command line. Therefore, the design decision is whether to implement a limited number of output options directly in the script, or to write a script that simply returns an object and requires further processing to achieve the desired display.
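For example, a script like the hypothetical Get-BiosVersion.ps1 below simply emits an object; the script name and property names are mine, chosen for illustration:

```powershell
# Get-BiosVersion.ps1 (hypothetical): return the computer name and
# BIOS version as an object, leaving all formatting to the caller
Get-WmiObject -Class Win32_BIOS |
    Select-Object -Property @{Name='ComputerName';Expression={$env:COMPUTERNAME}},
                            @{Name='BiosVersion';Expression={$_.SMBIOSBIOSVersion}}
```

The caller then chooses the display: `.\Get-BiosVersion.ps1 | Format-Table -AutoSize` for the console, `.\Get-BiosVersion.ps1 | Set-Content C:\fso\bios.txt` for a file, or `.\Get-BiosVersion.ps1 | Out-GridView` for the interactive grid.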
Another design decision that must be made early in the script-writing process is what to do by default. For example, the Get-Process cmdlet, by default, returns a listing of all the processes on the local computer, along with a subset of information in a nicely formatted table. On the other hand, the Get-EventLog Windows PowerShell cmdlet prompts for the name of an event log; if you skip over providing the name, an error is generated. Suppose I decide to write a script called Get-DiskSpace.ps1. I can design the script so that it returns the amount of free disk space on all drives attached to the system as soon as the script runs. On the other hand, there are other ways to configure the script. I can return only the free disk space on the C drive. I can have the script prompt for a drive letter, or I can have the script display a message that states that a drive letter is required. I could also have the script return a list of all the drives attached to the system and then prompt for a drive letter.
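As a sketch of the first design, assuming WMI's Win32_LogicalDisk class, Get-DiskSpace.ps1 might default to all local fixed drives when no drive letter is supplied:

```powershell
# Get-DiskSpace.ps1 (sketch): -Drive is optional; with no argument,
# all local fixed disks (DriveType=3) are returned
param(
    [string]$Drive    # for example 'C:'
)

$filter = if ($Drive) { "DeviceID='$Drive'" } else { 'DriveType=3' }
Get-WmiObject -Class Win32_LogicalDisk -Filter $filter |
    Select-Object -Property DeviceID,
        @{Name='FreeGB';Expression={[math]::Round($_.FreeSpace / 1GB, 2)}}
```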
EB, that is all there is to script design. Script Design week will continue tomorrow when I will talk about using command line input.
I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at email@example.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.
Ed Wilson, Microsoft Scripting Guy