PSCX 3.2.1 Pushed to the PowerShellGallery

When you get your shiny new Windows 10 system up and running and want to get PSCX 3.2.1 installed, just drop to the console and execute:

Install-Module Pscx -Scope CurrentUser

If you are running from an elevated prompt, you can skip providing the “-Scope CurrentUser” and Pscx will be installed for all users.  However, if you aren’t running in an elevated session, the “CurrentUser” scope is the only scope PowerShell can install into with standard user privileges.

There have been a few bug fixes to Pscx to make it work better on PowerShell 5.0.  The Get-Parameter command has been updated to not generate errors when trying to determine dynamic parameters for the available providers.  There’s also a new parameter, SkipProviderParameters, that will speed up discovery of parameters at the expense of not listing any provider specific parameters.  Expand-Archive has been updated to output DirectoryInfo and FileInfo objects of created directories and files if the PassThru parameter is specified.
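For instance, both of those updates can be exercised like this (the archive name is hypothetical; the parameter names are the ones described above):

```powershell
# Faster parameter discovery by skipping provider-specific dynamic parameters
Get-Parameter Get-ChildItem -SkipProviderParameters

# Emit DirectoryInfo/FileInfo objects for the created directories and files
Expand-Archive .\archive.zip -PassThru
```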

Keep in mind that there are some commands that overlap with PowerShell 5.0.  Those are:

  • Format-Hex
  • Get-Clipboard
  • Set-Clipboard
  • Expand-Archive

To ensure you get the Pscx version of these commands, use Pscx\&lt;command_name&gt;.  To use the PowerShell versions, use the appropriate module prefix: Microsoft.PowerShell.Utility, Microsoft.PowerShell.Management or Microsoft.PowerShell.Archive.
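For example, with both modules loaded you can disambiguate like so:

```powershell
# Pscx implementation
Pscx\Get-Clipboard

# Built-in PowerShell 5.0 implementations
Microsoft.PowerShell.Management\Get-Clipboard
Microsoft.PowerShell.Archive\Expand-Archive -Path .\archive.zip -DestinationPath .\out
```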

Posted in PSCX | Leave a comment

PSScriptAnalyzer–FxCop for PowerShell script

As a C and now C# developer, I’ve always liked getting help with my code in the form of either Lint for C or code analysis a.k.a. FxCop for C#.  These tools are great at pointing out real and potential issues that are usually worth addressing.  As the old manufacturing saying goes, the earlier you catch a problem, the cheaper it is to fix.  This is also true for software, including shell scripts.  Bugs can range from benign to frustrating to downright catastrophically damaging.  Many years ago a co-worker of mine wrote a Korn shell script with a line like this:

rm -rf $folder

At one point, he made a change to the script that resulted in $folder having only “/” assigned to it.  That wasn’t the intention.  Needless to say, he and our system administrator weren’t too happy the next day.
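The PowerShell analog of that failure mode is cheap to guard against; a minimal sketch (the variable name is hypothetical):

```powershell
$folder = ''   # imagine a refactoring bug leaves this empty or set to '/'

# Fail fast rather than letting a destructive command run on a bogus path
if ([string]::IsNullOrWhiteSpace($folder) -or $folder -eq '/') {
    throw "Refusing to delete: `$folder is empty or the root directory"
}
Remove-Item $folder -Recurse -Force
```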

I’m very excited about PSScriptAnalyzer and its potential to help folks improve their scripts by having the collective wisdom of the PowerShell team and greater community embodied in rules that can detect potential problems before they become real problems, problems that can potentially take hours to track down.

The following is the about topic for PSScriptAnalyzer that I wrote.  Hopefully it gives you a good understanding of how to use PSScriptAnalyzer.  It also demonstrates that the PSScriptAnalyzer team is taking community contributions via pull requests on GitHub.

As you try the script analyzer on your scripts, keep in mind that it is the early days for this tool.  You’ll likely find issues that indicate a bug in the tool rather than your script.  Please submit these as issues on the PSScriptAnalyzer GitHub site.  And if you’re “that” type, fix the issue yourself and submit a PR.  As we all make the tool better, every user benefits.

The team holds a community meeting once every three weeks or so – at least for these early stages of the development of PSScriptAnalyzer.  These meetings are open to the community.  You can catch up on past meetings via the meeting notes on the project’s wiki.

BTW if you write PowerShell modules for the public, please consider writing an about topic for your module.  Module level about topics should be the primary/intro help topic folks can rely upon to find out information about a module. While folks can probably piece together the necessary information from the help topics of individual module commands, why make them do that when an about topic is so easy to write?
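In case you haven’t written one before, a module about topic is just a text file named after the module, dropped into a culture subdirectory of the module folder, e.g.:

```
MyModule\
    MyModule.psd1
    MyModule.psm1
    en-US\
        about_MyModule.help.txt
```

With that file in place, Get-Help about_MyModule displays the topic.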

        PSScriptAnalyzer is a static code checker for PowerShell script.

        PSScriptAnalyzer checks the quality of Windows PowerShell script by evaluating
        that script against a set of rules.  The script can be in the form of a
        stand-alone script (.ps1 files), a module (.psm1, .psd1 and .ps1 files) or
        a DSC Resource (.psm1, .psd1 and .ps1 files).
        The rules are based on PowerShell best practices identified by the 
        PowerShell Team and the community. These rules can help you create more 
        readable, maintainable and reliable scripts. PSScriptAnalyzer generates 
        DiagnosticResults (errors and warnings) to inform you about potential script 
        issues, including the reason why there might be an issue, and provide you  
        with guidance on how to fix the issue.

        PSScriptAnalyzer is shipped with a collection of built-in rules that check 
        various aspects of PowerShell code such as presence of uninitialized 
        variables, usage of the PSCredential Type, usage of Invoke-Expression, etc.
        The following additional functionality is also supported:
        * Including and/or excluding specific rules globally
        * Suppression of rules within script
        * Creation of custom rules
        * Creation of loggers

        The PSScriptAnalyzer module provides two commands:
        Get-ScriptAnalyzerRule [-CustomizedRulePath <string[]>] [-Name <string[]>] 
                               [-Severity <string[]>] 

        Invoke-ScriptAnalyzer  [-Path] <string> [-CustomizedRulePath <string[]>] 
                               [-ExcludeRule <string[]>] [-IncludeRule <string[]>] 
                               [-Severity <string[]>] [-Recurse] [-SuppressedOnly] 

        To run the script analyzer against a single script file execute:
        PS C:\> Invoke-ScriptAnalyzer -Path myscript.ps1
        This will analyze your script against every built-in rule.  As you may find
        if your script is sufficiently large, that could result in a lot of warnings
        and/or errors. See the next section on recommendations for running against
        an existing script, module or DSC resource.
        To run the script analyzer against a whole directory, specify the folder
        containing the script, module and DSC files you want analyzed.  Specify
        the Recurse parameter if you also want sub-directories searched for files 
        to analyze.
        PS C:\> Invoke-ScriptAnalyzer -Path . -Recurse
        To see all the built-in rules execute:
        PS C:\> Get-ScriptAnalyzerRule

        If you have the luxury of starting a new script, module or DSC resource, it
        is in your best interest to run the script analyzer with all the rules 
        enabled.  Be sure to evaluate your script often to address rule violations as 
        soon as they occur.  
        Over time, you may find rules that you don't find value in or have a need to 
        explicitly violate.  Suppress those rules as necessary but try to avoid 
        "knee jerk" suppression of rules.  Analyze the diagnostic output and the part
        of your script that violates the rule to be sure you understand the reason for 
        the warning and that it is indeed OK to suppress the rule.  For information on 
        how to suppress rules see the RULE SUPPRESSION section below.

        If you have existing scripts, they are not likely following all of these best 
        practices, many of which have only recently found their way into books, web 
        sites, blog posts and now PSScriptAnalyzer.
        For these existing scripts, if you just run the script analyzer without
        limiting the set of rules executed, you may get deluged with diagnostics
        output in the form of information, warning and error messages.  You should 
        try running the script analyzer with all the rules enabled (the default) and
        see if the output is "manageable".  If it isn't, then you will want to "ease 
        into" things by starting with the most serious violations first - errors.
        You may be tempted to use the Invoke-ScriptAnalyzer command's Severity 
        parameter with the argument Error to do this - don't.  This will run every 
        built-in rule and then filter the results during output.  The more rules the 
        script analyzer runs, the longer it will take to analyze a file.  You can 
        easily get Invoke-ScriptAnalyzer to run just the rules that are of severity 
        Error like so:
        PS C:\> $errorRules = Get-ScriptAnalyzerRule -Severity Error
        PS C:\> Invoke-ScriptAnalyzer -Path . -IncludeRule $errorRules
        The output should be much shorter (hopefully) and more importantly, these rules
        typically indicate serious issues in your script that should be addressed.
        Once you have addressed the errors in the script, you are ready to tackle
        warnings.  These likely generated the most output when you ran the first 
        time with all the rules enabled.  Not all of the warnings generated by the 
        script analyzer are of equal importance.  For the existing script scenario, 
        try running with the error and warning rules included but with a few of the 
        noisiest rules excluded:
        PS C:\> $rules = Get-ScriptAnalyzerRule -Severity Error,Warning
        PS C:\> Invoke-ScriptAnalyzer -Path . -IncludeRule $rules -ExcludeRule `
                    PSAvoidUsingCmdletAliases, PSAvoidUsingPositionalParameters

        The PSAvoidUsingCmdletAliases and PSAvoidUsingPositionalParameters warnings 
        are likely to generate prodigious amounts of output.  While these rules have 
        their reason for being, many existing scripts violate them over and 
        over again.  It would be a shame to let a flood of warnings from these two 
        rules keep you from addressing more potentially serious warnings.
        There may be other rules that generate a lot of output that you don't care 
        about - at least not yet.  As you examine the remaining diagnostics output, 
        it is often helpful to group output by rule.  You may decide that the one or 
        two rules generating 80% of the output are rules you don't care about.  You 
        can get this view of your output easily:
        PS C:\> $rules = Get-ScriptAnalyzerRule -Severity Error,Warning
        PS C:\> $res = Invoke-ScriptAnalyzer -Path . -IncludeRule $rules -ExcludeRule `
                          PSAvoidUsingPositionalParameters, PSAvoidUsingCmdletAliases
        PS C:\> $res | Group RuleName | Sort Count -Desc | Format-Table Count, Name
        This renders output like the following:
        Count Name
        ----- ----
           23 PSAvoidUsingInvokeExpression
            8 PSUseDeclaredVarsMoreThanAssigments
            8 PSProvideDefaultParameterValue
            6 PSAvoidUninitializedVariable
            3 PSPossibleIncorrectComparisonWithNull
            1 PSAvoidUsingComputerNameHardcoded
        You may decide to exclude the PSAvoidUsingInvokeExpression rule for the moment
        and focus on the rest, especially the PSUseDeclaredVarsMoreThanAssigments, 
        PSAvoidUninitializedVariable and PSPossibleIncorrectComparisonWithNull rules.
        As you fix rules, go back and enable more rules as you have time to address 
        the associated issues.  In some cases, you may want to suppress a rule at
        the function, script or class scope instead of globally excluding the rule.  
        See the RULE SUPPRESSION section below.
        While getting a completely clean run through every rule is a noble goal, it 
        may not always be feasible. You have to weigh the gain of passing a rule 
        and eliminating a "potential" issue against changing the script and possibly 
        introducing a new problem.  In the end, for existing scripts, it is usually 
        best to evaluate the rule violations and address those you deem the most 
        valuable to fix.


        Rule suppression allows you to turn off rule verification on a function, 
        script or class definition.  This allows you to exclude only specified 
        scripts or functions from verification of a rule instead of globally 
        excluding the rule.  

        There are several ways to suppress rules.  You can suppress a rule globally 
        by using the ExcludeRule parameter when invoking the script analyzer e.g.:
        PS C:\> Invoke-ScriptAnalyzer -Path . -ExcludeRule `
                    PSProvideDefaultParameterValue, PSAvoidUsingWMICmdlet
        Note that the ExcludeRule parameter takes an array of strings i.e. rule names.
        Sometimes you will want to suppress a rule for part of your script but not for
        the entire script.  PSScriptAnalyzer allows you to suppress rules at the 
        script, function and class scope.  You can use the .NET Framework 
        System.Diagnostics.CodeAnalysis.SuppressMessageAttribute in your script 
        like so:
        function Commit-Change() {
            [Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingCmdletAliases", 
                                                               "", Scope="Function", 
                                                               Target="*")]
            param()
        }

        PSScriptAnalyzer has been designed to allow you to create your own rules via
        a custom .NET assembly or PowerShell module.  PSScriptAnalyzer also allows 
        you to plug in a custom logger (implemented as a .NET assembly).

        PSScriptAnalyzer is open source on GitHub.
        As you run the script analyzer and find what you believe to be bugs,
        please submit them as issues on the project's GitHub site.
        Better yet, fix the bug and submit a pull request.
Posted in PowerShell 5.0, PSScriptAnalyzer | Leave a comment

DNVM Execution Results in Get-Help Error in PowerShell

If you are running dnvm with no parameters on PowerShell with PSCX, you’re likely to see this error:

4> dnvm
You must specify a command!
   ___  _  ___   ____  ___
  / _ \/ |/ / | / /  |/  /
 / // /    /| |/ / /|_/ /
/____/_/|_/ |___/_/  /_/
.NET Version Manager v1.0.0-beta4-10356
By Microsoft Open Technologies, Inc.

usage: dnvm  []

Get-Help : Get-Help could not find dnvm-alias in a help file in this session. To
download updated help topics type: "Update-Help". To get help online, search for the
help topic in the TechNet library at
At C:\Program Files\Microsoft DNX\Dnvm\dnvm.ps1:755 char:17
+                 $h = Get-Help $_.Name
+                 ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (:) [Get-Help], HelpNotFoundException
    + FullyQualifiedErrorId : HelpNotFound,Microsoft.PowerShell.Commands.GetHelpCommand

    alias     C:\Program Files\Microsoft DNX\Dnvm\dnvm.ps1 : You cannot call a method on a
At line:1 char:1
+ dnvm
+ ~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,dnvm.ps1

There is apparently a bug in the PSCX proxy command for Get-Help, which adds support for looking up help for .NET types and members on MSDN.  For now, you can disable the Get-Help proxy by editing your Pscx.UserPreferences.ps1 file.

If you are not using a Pscx.UserPreferences.ps1 file yet, you will need to copy one to your profile dir.  Your profile dir can be found like so:

PS C:\> (Split-Path $profile -Parent)

The Pscx.UserPreferences.ps1 file can be located in your Pscx install dir which can be found with this command:

PS C:\> (Get-Module Pscx).ModuleBase

With this info, copy the Pscx.UserPreferences.ps1 file into your profile dir.
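Those two steps can be combined into a single copy command:

```powershell
# Copy the shipped preferences file from the Pscx install dir to your profile dir
Copy-Item -Path (Join-Path (Get-Module Pscx).ModuleBase 'Pscx.UserPreferences.ps1') `
          -Destination (Split-Path $profile -Parent)
```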

Now edit the Pscx.UserPreferences.ps1 file in your profile dir and change this line:

GetHelp           = $true

to this:

GetHelp           = $false

Save the file and the next time you load PSCX, use this command line or replace the existing import of PSCX in your profile script with this one:

PS C:\> Import-Module Pscx -Arg "$(Split-Path $profile -Par)\Pscx.UserPreferences.ps1"

Sorry for this inconvenience.  I will be looking into a fix for PSCX vnext which might just be to disable the Get-Help proxy by default.

Posted in .NETCore, PSCX | Leave a comment

Windows PowerShell V5 Goodies in the April WMF5 Preview

It looks like we will get not just the package management features PackageManagement and PowerShellGet, but the whole of PowerShell V5 downlevel to Windows 7.  This is great news!

The big features in V5 are package management, improved DSC and support for classes but there are quite a number of small goodies that folks will find useful.

First up are the package management features.  If you used a previous preview or read about earlier previews of PowerShell V5, you’ve probably heard about OneGet.  News alert: the OneGet module had to be renamed, due to trademark conflicts, to PackageManagement.  What is it with all this renaming?  First Metro, then OneDrive, now OneGet and possibly even Skype.  I guess I know why my company uses boring model numbers for products.

Keep in mind that OneGet, er, PackageManagement is *not* a package manager itself but a manager of package managers.  One of those is PowerShellGet, which comes “in the box”.

The PackageManagement module allows you to register various package sources.  By default you get the PSGallery which uses the PSModule provider used by PowerShellGet.  With this one feature you will be able to install popular PowerShell modules as simply as this:

PS C:\> Install-Module Pscx -Scope CurrentUser

You can install a module for all users (-Scope AllUsers) but you need to be running in an admin console to do so.  This feature alone is huge because it opens up easy access to very useful modules like Pscx, PSReadline, Pester, ISESteroids, xWebAdministration, etc. to average users who might not know where to find these modules.  After all, we have multiple repos for these: CodePlex, GitHub, PoshCode, etc.  Additionally, you can easily update modules to their latest versions using Update-Module.

When you start searching for or installing packages the first time, the PackageManagement module will need to bootstrap certain providers/sources like Chocolatey, which also requires it to download the NuGet provider (Chocolatey uses NuGet packages).

The Install-Package command allows you to install software from various repos.  By default PowerShell comes with a Chocolatey provider which allows you to install from Chocolatey, but be careful: the same package may be found in multiple places.  For instance, PSCX is on both the PSGallery and Chocolatey.

PS C:\> Find-Package Pscx | Format-Table -AutoSize

Name Version Source
---- ------- ------
pscx 3.2.0   chocolatey

The Install-Module command, as configured, will install only from the PSGallery.  However, if you use Install-Package you can use the -Source parameter to specify the particular package source you want to install from.  Note that the version of Pscx from Chocolatey is actually the MSI installer for Pscx.

Chocolatey is a very handy source of developer tools.  You can easily find and install various tools using Install-Package e.g.:

PS C:\> Install-Package dotPeek -Source chocolatey
PS C:\> Install-Package fiddler4 -Source chocolatey

If you are thinking this looks like apt-get or yum on Linux, you’re right. Windows is overdue for a simple package management capability like this.

I’ve covered the classes feature in a previous blog post.  However, there is a related feature that will make using .NET types much easier: the new “using namespace” statement.  Ever get tired of typing full type names over and over?  Now you don’t need to:

using namespace System.Collections.Generic
$list = New-Object 'List[int]'
$cust = New-Object 'Dictionary[string, hashtable]'

I’ve been waiting a long time for that feature!!

There are also a number of handy new commands:

  • Compress/Expand-Archive – finally!
  • Get/Protect/Unprotect-CmsMessage – protects secrets using PKI.  See my blog post on these commands.
  • Get-ItemPropertyValue
  • New-Guid
  • New-TemporaryFile
  • Get-Clipboard – this can even paste file lists
  • Set-Clipboard
  • Format-Hex (the one in PSCX is better IMO)
  • Out-File, Add-Content and Set-Content have a new parameter -NoNewline
  • Get-ChildItem now has a -Depth parameter you use with -Recurse
  • Convert-String – for changing text from one form to another
  • ConvertFrom-String – to extract information from text into objects
  • New-Item -ItemType now takes SymbolicLink, HardLink and Junction as additional arguments
  • Test-ModuleManifest
  • Debug-Job, Debug-Process, Debug-Runspace
  • Disable-RunspaceDebug, Enable-RunspaceDebug, Get-RunspaceDebug, Wait-Debugger
  • Enter-PSHostProcess, Exit-PSHostProcess, Get-PSHostProcessInfo
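A few of these in action (the paths are hypothetical):

```powershell
# Write content without a trailing newline
'ping' | Out-File C:\temp\out.txt -NoNewline

# Recurse, but only two levels deep
Get-ChildItem C:\project -Recurse -Depth 2

# Create a symbolic link (elevated session required)
New-Item -ItemType SymbolicLink -Path C:\temp\latest -Target C:\project\build

# Handy helpers
$tmpFile = New-TemporaryFile
$guid    = New-Guid
```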

For the Get-Clipboard FileList support, check this out.  I copied two files from different directories into the clipboard from Windows Explorer.  I was able to “get” those file items using Get-Clipboard:

PS C:\> Microsoft.PowerShell.Management\Get-Clipboard -Format FileDropList

    Directory: C:\Program Files (x86)\Windows Kits\10\Platforms\UAP\10.0.10030.0

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
darhsl       12/31/1600   5:00 PM                Platform.xml

    Directory: C:\Program Files (x86)\Windows Kits\10\Source\ucrt\conio

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
darhsl       12/31/1600   5:00 PM                pipe.cpp

Note that I prefixed Get-Clipboard with Microsoft.PowerShell.Management\Get-Clipboard to ensure I am using the new Get-Clipboard command and not the one in Pscx.

There are also a lot of enhancements to debugging.  You can break into the PowerShell debugger on any running script in the console by pressing Ctrl+Break (in ISE it is Ctrl+B).  You can remote edit a script using PSEdit from within a remote, interactive session (started with Enter-PSSession).  You can attach to any PowerShell engine process for debugging using Enter-PSHostProcess – this includes processes running on a remote machine!  Once you’re attached to the process, you can list all available runspaces using Get-Runspace, then start debugging the runspace you’re interested in using Debug-Runspace.
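The attach-and-debug flow looks roughly like this (the process id is hypothetical):

```powershell
# From a second console, attach to a PowerShell host process (local or remote)
Enter-PSHostProcess -Id 4240

# List the runspaces in that process, then break into the one of interest
Get-Runspace
Debug-Runspace -Id 1

# Detach when done
Exit-PSHostProcess
```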

Get-Command will now also show the “Source” of a command instead of the ModuleName.  The pain in V4 and below was that when you used Get-Command on an exe or external script, the default display did not show the path.  You had to pipe through Format-List Path to see the path to the command.  Not the case in V5; check it out:

PS C:\> Get-Command dnvm | Format-Table -AutoSize

CommandType    Name     Version Source
-----------    ----     ------- ------
ExternalScript dnvm.ps1         C:\Program Files\Microsoft DNX\Dnvm\dnvm.ps1

PowerShell V4 was interesting because of the DSC feature, but as a dev I wasn’t able to use DSC that much so V4 was a bit of a letdown for me.  However, V5 is packed with lots of goodies for developers!!  You can grab the April WMF5 Preview from the link on this PowerShell team blog post.  Or if you are using Windows 10 build 10074 or higher, you already have it.

Posted in PowerShell 5.0 | Leave a comment

MVP Virtual Conference

Passing this on from Microsoft.

Register to attend the Microsoft MVP Virtual Conference

Hi All – I wanted to let you know about a great free event that Microsoft and the MVPs are putting on, May 14th & 15th.  Join Microsoft MVPs from the Americas’ region as they share their knowledge and real-world expertise during a free event, the MVP Virtual Conference.

The MVP Virtual Conference will showcase 95 sessions of content for IT Pros, Developers and Consumer experts designed to help you navigate life in a mobile-first, cloud-first world.  Microsoft’s Corporate Vice President of Developer Platform, Steve Guggenheimer, will be on hand to deliver the opening keynote address.

Why attend MVP V-Conf? The conference will have 5 tracks (IT Pro English, Dev English, Consumer English, Portuguese mixed sessions & Spanish mixed sessions), so there is something for everyone! Learn from the best and brightest MVPs in the tech world today and develop some great skills!

Be sure to register quickly to hold your spot and tell your friends & colleagues.

The conference will be widely covered on social media, you can join the conversation by following @MVPAward and using the hashtag #MVPvConf.

Register now and feel the power of community!


Posted in Conference, Microsoft | Leave a comment

PowerShell V5 New Feature: Protect/Unprotect-CmsMessage

Windows PowerShell V5, due out sometime in 2015, sports a number of new features: OneGet, PowerShellGet, enhanced DSC, ConvertFrom-String, support for authoring classes in PowerShell script, Compress/Expand-Archive, support for creating symbolic links, hard links and junctions, etc.

One of the more obscure but useful features is the support for cryptographically protecting messages as documented in the IETF standard RFC 5652.  This involves the creation of a certificate, which I will show you how to do.  You can then protect and unprotect messages using that certificate.  However, where it gets interesting is when you export a public certificate from the original certificate.  You can give the public certificate to anybody and they can use it to encrypt (protect) a message.  That message cannot be decrypted (unprotected) by anyone except the individual who holds the original certificate.  The original certificate contains both the public and private key.  It is the private key that is used to unprotect the message.

This is the fundamental basis for asymmetric cryptography.  One key encrypts a message that only the other key can decrypt.  The public key can be distributed broadly but the private key needs to be held securely.

Here is how it works.  First, we need to create a certificate INF file that is configured for document encryption.  We will feed this file into a tool that will create our certificate:

[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = ""
MachineKeySet = false
KeyLength = 2048
HashAlgorithm = Sha1
Exportable = true
RequestType = Cert
ValidityPeriod = "Years"
ValidityPeriodUnits = "1000"


Save this content into a text file called DocumentEncryption.inf.  Then run the following command:

PS C:\> certreq -new DocumentEncryption.inf DocumentEncryption.cer
Installed Certificate:
  Serial Number: 106ace908d57cc9548f830cc67e672e0
  Thumbprint: 852dcc0c3384c5050e58ee5e655aee3981bc309f
  Microsoft Strong Cryptographic Provider


This not only creates the certificate .cer file but it also installs the certificate into the certificate store.  You can see it by listing the Cert: drive using the new dynamic parameter -DocumentEncryptionCert e.g.:

PS C:\> Get-ChildItem Cert:\CurrentUser\My\ -DocumentEncryptionCert

    Directory: Microsoft.PowerShell.Security\Certificate::CurrentUser\My

Thumbprint                                Subject
----------                                -------


At this point, you can round trip a protected message on this machine, under the user account that created and imported the certificate using certreq.exe:

PS C:\> $msg = Protect-CmsMessage -Content "This is a secret message!" -To *
PS C:\> $msg
-----BEGIN CMS-----
-----END CMS-----
PS C:\> $msg | Unprotect-CmsMessage
This is a secret message!


BTW the -To parameter will also take the certificate thumbprint as well as the path to the .CER file.  That demos the basic capability, but round tripping on the same machine isn’t all that interesting.  Where it gets interesting is when you provide others with a public key with which they can protect messages they send to you.  Those protected messages can only be decrypted by you.
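In other words, any of these forms should work (the thumbprint is the one from the certreq output above):

```powershell
# By subject, by thumbprint, or by path to the .cer file
Protect-CmsMessage -Content 'secret' -To *
Protect-CmsMessage -Content 'secret' -To 852dcc0c3384c5050e58ee5e655aee3981bc309f
Protect-CmsMessage -Content 'secret' -To .\DocumentEncryption.cer
```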

First let’s export the public key part of the certificate:

$cert = Get-ChildItem -Path cert:\CurrentUser\My\852DCC0C3384C5050E58EE5E655AEE3981BC309F 
Export-Certificate -Cert $cert -FilePath DocumentEncryption-Public.cer 

Now copy the DocumentEncryption-Public.cer file to another machine.  Let’s use it to protect a message:

PS C:\> $msg = Protect-CmsMessage -Content "I am he as you are he as you are me" `
                                  -To .\DocumentEncryption-Public.cer
PS C:\> $msg
-----BEGIN CMS-----
MIIBwgYJKoZIhvcNAQcDoIIBszCCAa8CAQAxggFKMIIBRgIBADAuMBoxGDAWBgNVBAMMD2Zvby5i
YXJAYmF6LmNvbQIQEGrOkI1XzJVI+DDMZ+Zy4DANBgkqhkiG9w0BAQcwAASCAQBl3BE6UDdXxNRe
/TATEgqasqVL4FZi2RVsm6s8RWUKH/GIUUe1EI2N3BeBHZP847DCAkiAKrd16Kds3yaCF4mcmKao
lCd0TiInUA5WenDnxO40VW85MJrLWM6sjhQB0bMCNa0UMRV9IzRAAr1lSKDKcWupMScCQvQQ9JHR
qCLRgPmWtGA+oMvfl5xs8FTS6oUvNOGh3MwMW7ZOMrk5y6vBltiI5TY34PRVZ/pYl+jnyjSi/tfP
vmGp/GqmK9OgIGpgRjRJ8QHnWJ4CXAvL3zj3LXXyPevHKXBum8EOwpiM//zF5kl6gPdYrrqqQJuG
OdQkNHXLZunZwtySAVL9n5TcMFwGCSqGSIb3DQEHATAdBglghkgBZQMEASoEEA9egmjPe/PsOo5j
Wadz5WGAMAObP28v4rN1iCEjqEuY9Yjhgu8/m8kD2eWdm7/KRbpFCADSF6k5crTyqMApMUug6Q==
-----END CMS-----


In case you were wondering, having the public key is not enough to decrypt the message – even on the same machine you used to encrypt the message with the public certificate:

PS C:\> $msg | Unprotect-CmsMessage -To .\DocumentEncryption-Public.cer
Unprotect-CmsMessage : The enveloped-data message does not contain the
specified recipient.
At line:1 char:8
+ $msg | Unprotect-CmsMessage
+        ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Unprotect-CmsMessage], Crypto
    + FullyQualifiedErrorId : System.Security.Cryptography.CryptographicExcept


If you would prefer to install the public key certificate on the second machine, you can do that easily:

PS C:\> Import-Certificate -FilePath DocumentEncryption-Public.cer `
                           -CertStoreLocation cert:\CurrentUser\My


Then you can use either the certificate thumbprint or the subject, e.g. *, as the argument to the -To parameter on Protect-CmsMessage.

Now let’s copy paste the protected message back to the original machine with the certificate containing the private key and unprotect it:

PS C:\> $msg = @'
>>> -----BEGIN CMS-----
>>> /TATEgqasqVL4FZi2RVsm6s8RWUKH/GIUUe1EI2N3BeBHZP847DCAkiAKrd16Kds3yaCF4mcmKao
>>> qCLRgPmWtGA+oMvfl5xs8FTS6oUvNOGh3MwMW7ZOMrk5y6vBltiI5TY34PRVZ/pYl+jnyjSi/tfP
>>> vmGp/GqmK9OgIGpgRjRJ8QHnWJ4CXAvL3zj3LXXyPevHKXBum8EOwpiM//zF5kl6gPdYrrqqQJuG
>>> Wadz5WGAMAObP28v4rN1iCEjqEuY9Yjhgu8/m8kD2eWdm7/KRbpFCADSF6k5crTyqMApMUug6Q==
>>> -----END CMS-----
>>> '@
204> $msg | Unprotect-CmsMessage
I am he as you are he as you are me


OK, so these two commands are working as advertised, but you may be asking what the practical application of this is.  Well, we’ve always had this issue of how to protect passwords that have to be embedded in scripts.  This asymmetric approach can help here, but I think only when the script runs on machines and/or under user accounts that are limited in access.  That is, I could envision checking the public certificate into version control.  Anybody could use that cert to protect passwords and embed the protected password text into a script.  Folks like contractors that have access to source control cannot determine the password from the public key and the protected password text plainly visible in scripts.  However, if you protect the private certificate by only installing it on the machines that run the scripts (build machines, test machines) *and* you can limit who can access those machines, you have protected your passwords at least a bit more securely.
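A sketch of that workflow (the certificate file is the public one exported earlier; the password is obviously fake):

```powershell
# Anywhere, with only the public certificate from version control:
$protected = Protect-CmsMessage -Content 'P@ssw0rd!' -To .\DocumentEncryption-Public.cer
# Embed the $protected BEGIN/END CMS block in the script.

# On the build/test machine that holds the private key:
$password = $protected | Unprotect-CmsMessage
```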

Disclaimer: I am not a security or certificate guru.  I’m interested in hearing how folks might use this asymmetric encryption functionality.  I’m also curious if the private key certificate can be installed onto a machine so that A) only a specific user can access the key and B) the key can’t be exported.  The INF file contains a field called Exportable that is set to true in the sample INF file above.  I need to experiment with it set to false to see if that would prevent the private key from being exported once it is installed on a specific user account on a specific machine.

Posted in PowerShell 5.0 | 7 Comments

BlackJack, NamedPipes and PowerShell Classes – Oh My!

In my last blog post, I introduced you to using .NET named pipes to implement BlackJack across different PowerShell processes and even across the network.  In this blog post, we will take a look at what it is like to convert the previous procedural implementation to an object-oriented implementation using the class support in the preview version of Windows PowerShell 5.0 – specifically the version in the Windows 10 Technical Preview.  Note: “preview” version means that the classes feature is likely to change between now and when PowerShell 5.0 ships.  Hopefully that means it gets better but there is always the possibility the feature gets pulled.

One of the major benefits of object-oriented programming is encapsulation i.e. you can put related code and state together into a single class definition rather than have it spread across your source code.  This makes it easier to fix bugs because certain types of bugs tend to impact all code that touches a specific data structure.  In procedural code you tend to look all over for that code but in an object-oriented implementation the code tends to be within the same class definition.  Here’s an example from the “procedural” version of the BlackJackDealer script.  These are the variables and functions that deal with cards, the deck and the hand:

$suits = 'Clubs','Diamonds','Hearts','Spades'
$ranks = 'Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King'

function GetShuffledDeck {
    $deck = 0..3 | Foreach {$suit = $_; 0..12 | Foreach { 
                      $num = if ($_ -eq 0) {11} elseif ($_ -ge 10) {10} else {$_ + 1}
                      [pscustomobject]@{Suit = $suits[$suit]; Rank = $ranks[$_]; Value = $num}
                  }}
    # Fisher-Yates shuffle
    for ($i = $deck.Length - 1; $i -gt 0; --$i) {
        $rndNdx = Get-Random -Maximum ($i+1)
        $temp = $deck[$i]
        $deck[$i] = $deck[$rndNdx]
        $deck[$rndNdx] = $temp
    }
    $deck
}

function GetValueOfHand($hand) {
    $sum = ($hand | Measure-Object Value -Sum).Sum
    if ($sum -gt 21) {
        # Count aces as 1 instead of 11 when the hand would otherwise bust
        $sum = ($hand | Foreach {if ($_.Value -eq 11) {1} else {$_.Value}} | Measure-Object -Sum).Sum
    }
    $sum
}

function IsHandBust($hand) {
    (GetValueOfHand $hand) -gt 21
}

function IsHandBlackJack($hand) {
    if ($hand.Length -ne 2) { return $false }
    (GetValueOfHand $hand) -eq 21
}

function DumpHand($hand) {
    $cards = $hand | Foreach {DumpCard $_}
    $OFS = ', '
    "$cards"
}

function DumpCard($card) {
    "$($card.Rank) of $($card.Suit)"
}

$cardNdx = -1
function DealCard {
    if ($cardNdx -lt 0) {
        WriteToPipeAndLog 'Deck empty, reshuffling deck' > $null
        $script:deck = GetShuffledDeck
        $script:cardNdx = $deck.Length - 1
    }
    $card = $deck[$cardNdx]
    $script:cardNdx -= 1
    $card
}

Note the script-level variables $suits, $ranks and $cardNdx.  With functions you have to be careful to remember to use script scope when you need to modify a script-scope variable e.g.:

$script:cardNdx = $deck.Length - 1

It's easy to forget to use the $script: prefix and that can lead to hard-to-find bugs.  It's also not obvious which of these functions use the script-scope variables.  Sure, you can use your editor's Find feature to determine that, but in more complex cases involving multiple dot-sourced scripts, using Find can be more challenging.  Ideally you'd like to have the variables and the functions that use them encapsulated together.
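The pitfall is easy to demonstrate.  In this sketch, the function without the $script: prefix silently creates a local variable instead of updating the script-level one:

```powershell
$count = 0

function BrokenIncrement {
    $count = $count + 1         # creates a function-local $count; script var unchanged
}

function WorkingIncrement {
    $script:count = $count + 1  # explicitly targets the script-scope variable
}

BrokenIncrement
$count    # still 0

WorkingIncrement
$count    # now 1
```

No error, no warning from BrokenIncrement; the assignment just quietly goes to a new local variable that evaporates when the function returns.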

BTW, modules provide encapsulation too and, at this point in time, they provide better support for it than PowerShell classes do. That is, variables in modules default to private but can be made public. In PowerShell classes, the equivalent is a property but, unfortunately, at this time properties can only be public. But since this post is about classes and not modules, let's press on.
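For comparison, here is a minimal sketch of module-based encapsulation (a hypothetical Counter.psm1); the variable stays private because only the functions are exported:

```powershell
# Contents of a hypothetical Counter.psm1
$count = 0   # private to the module unless explicitly exported

function Add-Count {
    $script:count++   # module functions share the module's "script" scope
}

function Get-Count {
    $count
}

# Importers see only the exported functions; $count itself stays hidden.
Export-ModuleMember -Function Add-Count, Get-Count
```

After Import-Module Counter, callers can run Add-Count and Get-Count but cannot read or write $count directly.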

Let’s look at the equivalent implementation using classes (and an enum).

# Updated for Windows 10 Preview Build 9879 - new 'hidden' keyword 
# and added support for semi-colon separated enum fields.
enum Suit { Clubs; Diamonds; Hearts; Spades }

class Card {
    [Suit]$Suit
    [string]$Rank
    [int]$Value

    Card($s, $r, $v) {
        $this.Suit = $s
        $this.Rank = $r
        $this.Value = $v
    }

    [string] ToString() {
        return "$($this.Rank) of $($this.Suit)"
    }
}

class Deck {
    hidden [Card[]]$Cards
    hidden [Pipe] $Pipe
    hidden [int]$Index

    Deck([Pipe]$p) {
        $this.Index = 0
        $ranks = 'Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King'
        $this.Cards = 0..3 | Foreach { $suit = $_; 0..12 | Foreach { 
                              $num = if ($_ -eq 0) {11} elseif ($_ -ge 10) {10} else {$_ + 1}
                              [Card]::new([Suit]$suit, $ranks[$_], $num)
                          }}
        $this.Pipe = $p
    }

    [void] Shuffle() {
        for ($i = $this.Cards.Length - 1; $i -gt 0; --$i) {
            $rndNdx = Get-Random -Maximum ($i+1)
            $temp = $this.Cards[$i]
            $this.Cards[$i] = $this.Cards[$rndNdx]
            $this.Cards[$rndNdx] = $temp
        }
    }

    [Card] DrawCard() {
        if ($this.Index -gt $this.Cards.Length - 1) {
            $this.Index = 0
            $this.Shuffle()
            Write-Host ($this.Pipe.WriteLine('Deck empty, reshuffling deck'))
        }
        return $this.Cards[$this.Index++]
    }

    [string] ToString() {
        $OFS = ', '
        return "$($this.Cards)"
    }
}

class Hand {
    hidden [Deck]$Deck
    hidden [Card[]]$Cards

    Hand([Deck]$d) {
        $this.Deck = $d
        $this.Cards = $d.DrawCard(), $d.DrawCard()
    }

    [Card] DrawCard() {
        $card = $this.Deck.DrawCard()
        $this.Cards += $card
        return $card
    }

    [int] GetValueOfHand() {
        $sum = ($this.Cards | Measure-Object Value -Sum).Sum
        # Demote aces from 11 to 1 until the hand is no longer bust
        for ($i = $this.Cards.Length - 1; ($i -ge 0) -and ($sum -gt 21); $i--) {
            if ($this.Cards[$i].Value -eq 11) {
                $this.Cards[$i].Value = 1
                $sum -= 10
            }
        }
        return $sum
    }

    [string] ToString() {
        $OFS = ', '
        return "$($this.Cards)"
    }

    [string] ToDealerString() {
        return $this.Cards[0].ToString() + ', hole card'
    }

    [bool] IsBlackJack() {
        if ($this.Cards.Length -ne 2) { return $false }
        return $this.GetValueOfHand() -eq 21
    }

    [bool] IsBusted() {
        return $this.GetValueOfHand() -gt 21
    }

    [bool] IsMandatoryDealerHit() {
        return $this.GetValueOfHand() -lt 17
    }
}
A couple of things to note here.  First, yes the object-oriented version is more lines of script.  However, more lines of script doesn’t always mean more complex or harder to maintain.  In fact, I’d argue just the opposite in this case.  The script is easier to understand from a perspective of what variables are impacted by what methods.  We can easily see we have three basic types here: Card, Deck and Hand.  Each type knows how to perform the operations required of it i.e. a Deck knows how to shuffle, a Hand knows how to draw a Card from the Deck, a Card knows its value, etc.

Note the enum definition for Suit.  Originally, the preview required a newline as the separator between enum fields, which made this definition of Suit take more lines than in my original version.  I would have loved to see the team add support for a separator such as a comma to tighten up the definition.  UPDATE 11/15/2014: Ask and you shall receive.  🙂  There is now semi-colon separator support for enum fields.  The following works in PowerShell on build 9879 of the Windows 10 Technical Preview.

enum Suit {Clubs;Diamonds;Hearts;Spades} # in build >= 5.0.9879

The second thing to note is that none of the "methods" use the function keyword.  They are sans any keywords like function or def.  Just a return type (or [void] for no return value), the method name and the parameters.  Nice and succinct.

The third and probably most important difference to note is that every method that has a return value *must* use the return keyword.  This is a major difference from typical PowerShell functions.  In functions, any output that is not captured or redirected is streamed back to the caller.  In this regard, copying script from the command line and pasting it into a function doesn’t result in any behavioral differences.  All command output that isn’t captured in a variable or redirected is “output” from the function.  Class methods however, act more like traditional programming language methods.  They do not automatically stream output back to the caller.  So in a class method, you have to explicitly return the data from the method using the return keyword.
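A quick sketch of the difference (Calc is a made-up example class):

```powershell
# A function streams any uncaptured output back to the caller
function Get-Sum($a, $b) {
    $a + $b
}

class Calc {
    [int] Add($a, $b) {
        $a + $b         # discarded! method bodies do not stream output
        return $a + $b  # only an explicit return gets a value to the caller
    }
}

Get-Sum 2 3               # 5
([Calc]::new()).Add(2, 3) # 5 - but only because of the return statement
```

Delete the return line from Add and it fails; delete the bare `$a + $b` line from Get-Sum and the function returns nothing.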

Another thing worth noting is how instances of your class get rendered.  Right now, sending an instance of a class to a Format-* command (or to Out-Default by default) results in the object having its properties displayed, just as if it were a PSCustomObject; if there are no properties, just the class name gets displayed.  Inside a double-quoted string e.g. "$hand", the class name is displayed.  However, you can change this behavior by implementing a ToString() method in your class e.g.:

[string] ToString() { return '...Whatever makes sense...' }

You can see that I have done this for the Card, Deck and Hand classes.
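Here is a minimal, self-contained illustration with a made-up Point class:

```powershell
class Point {
    [int]$X
    [int]$Y

    Point($x, $y) { $this.X = $x; $this.Y = $y }

    [string] ToString() { return "($($this.X), $($this.Y))" }
}

$p = [Point]::new(3, 4)
$p.ToString()        # (3, 4)
"The point is $p"    # The point is (3, 4) - string expansion calls ToString()
```

Without the ToString() override, the interpolated string would just render the class name.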

I also want to show you the Pipe class:

# The hidden keyword is new to builds >= 5.0.9879 and hides the associated 
# fields for development (Intellisense) purposes.
class Pipe {
    hidden $PipeServer
    hidden $PipeReader
    hidden $PipeWriter

    Pipe() {
        $this.PipeServer = new-object IO.Pipes.NamedPipeServerStream('BlackJack', 
                               [IO.Pipes.PipeDirection]::InOut)
        $this.PipeReader = new-object IO.StreamReader($this.PipeServer)
        $this.PipeWriter = new-object IO.StreamWriter($this.PipeServer)
    }

    [void] Dispose() {
        $this.PipeServer.Dispose()
    }

    [void] WaitForConnection() {
        $this.PipeServer.WaitForConnection()
        $this.PipeWriter.AutoFlush = $true
    }

    [string] ReadLine() {
        return $this.PipeReader.ReadLine()
    }

    [string] WriteLine([string]$msg) {
        $this.PipeWriter.WriteLine($msg)
        return $msg
    }
}
Note how it wraps up the StreamReader and StreamWriter as internal variables.

One feature I really want to see added to PowerShell classes is support for at least a private access modifier.  Like modules, I want to make many of my class properties visible only from inside the class, and to a lesser extent I want to do that for some methods as well. UPDATE 11/15/2014: I have been informed that having a true "private" modifier would make the debug experience bad. The team has come up with a compromise via a new keyword: hidden.  This effectively tells the Intellisense feature not to display the class member when an instance of the class is being accessed from "outside" the class.  You will still get Intellisense for these members from within the class.  And when you are debugging, you will be able to see the hidden fields.  That seems reasonable to me.

The nice thing about the code above is that the user of this Pipe class doesn’t have to deal with individual $PipeServer, $PipeReader or $PipeWriter objects, they just use the instance of this class – $pipe that is created in the main body of this script (see below) using the static new() method.  Using the new() method is how you create instances of your classes.  Note that class constructors can take parameters.  You can see this below in the call to the [Deck] constructor where I pass it the $pipe variable.  Here is the *complete* main body of the dealer script that uses classes:

$pipe = [Pipe]::new()
$deck = [Deck]::new($pipe)
$blackJackGame = [BlackJackGame]::new($pipe, $deck)

Pretty simple eh?  Most of the game logic has been encapsulated in the BlackJackGame class.  Here are links to the full implementations of BlackJackDealer with Class.ps1 and BlackJackPlayer with Class.ps1.

For more information on using classes in PowerShell V5, check out Dan Harman’s talk on this topic at the European PowerShell 2014 Summit.

Posted in .NET, PowerShell, PowerShell 5.0 | 2 Comments

Windows PowerShell and Named Pipes

A named pipe is a stream-based mechanism for inter-process communication (IPC).  The .NET Framework has two types that allow you to use named pipes: NamedPipeServerStream and NamedPipeClientStream.

MSDN describes named pipes like so:

Named pipes provide one-way or duplex pipes for communication between a pipe server and one or more pipe clients. Named pipes can be used for interprocess communication locally or over a network. A single pipe name can be shared by multiple NamedPipeClientStream objects.

Being a .NET feature, named pipes are easily usable from PowerShell giving you a mechanism to communicate between separate PowerShell processes on the same machine or between different machines.  Note: another way of communicating between separate PowerShell processes in a very decoupled way is to use the Microsoft Message Queue (MSMQ) but that’s a topic for another blog post.

For this demonstration, I’ve chosen to implement BlackJack.  First a disclaimer, I’m no expert in BlackJack.  The implementation is from memory and is just basic BlackJack. There’s no support for doubling down, splitting, insurance, etc.  The point is to show how you can use named pipes to communicate between two PowerShell processes even from two different machines.

Before we get into the code, here is what a game session looks like:


PS C:\> .\BlackJackDealer.ps1
BlackJack dealer started
Waiting for client connection
Connection established
Connected to Keith.
Starting new game -----------------------------------------
Dealer's hand is Queen of Diamonds, hole card
Keith hand is King of Diamonds, 5 of Spades
Keith drew a 8 of Clubs, updated hand King of Diamonds, 5 of Spades, 8 of Clubs
DEALER's hand is Queen of Diamonds, King of Hearts
Keith busts with King of Diamonds, 5 of Spades, 8 of Clubs
DEALER wins with Queen of Diamonds, King of Hearts


PS C:\> .\BlackJackPlayer.ps1 Keith-PC
Enter your name: Keith
BlackJack player connecting to dealer
Connected to dealer
Connected to Keith.
Starting new game -----------------------------------------
Deck empty, reshuffling deck
Dealer's hand is Queen of Diamonds, hole card
Keith hand is King of Diamonds, 5 of Spades
Enter H (hit me) or S (stand): h
Keith drew a 8 of Clubs, updated hand King of Diamonds, 5 of Spades, 8 of Clubs
DEALER's hand is Queen of Diamonds, King of Hearts
Keith busts with King of Diamonds, 5 of Spades, 8 of Clubs
DEALER wins with Queen of Diamonds, King of Hearts
Deal again? Y (yes) N (no):

And that's why I don't gamble.  🙂

Let's get to the implementation, which you can download in whole from my OneDrive via the two PowerShell scripts BlackJackDealer.ps1 and BlackJackPlayer.ps1.

First up is the dealer script:

$suits = 'Clubs','Diamonds','Hearts','Spades'
$ranks = 'Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King'

function GetShuffledDeck {
    $deck = 0..3 | Foreach {$suit = $_; 0..12 | Foreach { 
                      $num = if ($_ -eq 0) {11} elseif ($_ -ge 10) {10} else {$_ + 1}
                      [pscustomobject]@{Suit = $suits[$suit]; Rank = $ranks[$_]; Value = $num}
                  }}
    # Fisher-Yates shuffle
    for ($i = $deck.Length - 1; $i -gt 0; --$i) {
        $rndNdx = Get-Random -Maximum ($i+1)
        $temp = $deck[$i]
        $deck[$i] = $deck[$rndNdx]
        $deck[$rndNdx] = $temp
    }
    $deck
}

function GetValueOfHand($hand) {
    $sum = ($hand | Measure-Object Value -Sum).Sum
    if ($sum -gt 21) {
        # Count aces as 1 instead of 11 when the hand would otherwise bust
        $sum = ($hand | Foreach {if ($_.Value -eq 11) {1} else {$_.Value}} | Measure-Object -Sum).Sum
    }
    $sum
}

function IsHandBust($hand) {
    (GetValueOfHand $hand) -gt 21
}

function IsHandBlackJack($hand) {
    if ($hand.Length -ne 2) { return $false }
    (GetValueOfHand $hand) -eq 21
}

function DumpHand($hand) {
    $cards = $hand | Foreach {DumpCard $_}
    $OFS = ', '
    "$cards"
}

function DumpCard($card) {
    "$($card.Rank) of $($card.Suit)"
}

$cardNdx = -1
function DealCard {
    if ($cardNdx -lt 0) {
        WriteToPipeAndLog 'Deck empty, reshuffling deck' | Out-Null
        $script:deck = GetShuffledDeck
        $script:cardNdx = $deck.Length - 1
    }
    $card = $deck[$cardNdx]
    $script:cardNdx -= 1
    $card
}

function WriteToPipeAndLog($msg) {
    $pipeWriter.WriteLine($msg)
    $msg   # also emit the message locally as "log" output
}
$npipeServer = new-object System.IO.Pipes.NamedPipeServerStream('BlackJack', 
                   [System.IO.Pipes.PipeDirection]::InOut)
try {
    'BlackJack dealer started'
    'Waiting for client connection'
    $npipeServer.WaitForConnection()
    'Connection established'

    $pipeReader = new-object System.IO.StreamReader($npipeServer)
    $script:pipeWriter = new-object System.IO.StreamWriter($npipeServer)
    $pipeWriter.AutoFlush = $true

    $playerName = $pipeReader.ReadLine()
    WriteToPipeAndLog "Connected to $playerName."

    # Outer game loop
    while (1) {
        WriteToPipeAndLog 'Starting new game -----------------------------------------'
        $playerHand  = @(DealCard)
        $dealerHand  = @(DealCard)
        $playerHand += DealCard
        $dealerHand += DealCard

        WriteToPipeAndLog "Dealer's hand is $(DumpCard $dealerHand[0]), hole card"
        WriteToPipeAndLog "$playerName hand is $(DumpHand $playerHand)"

        $playerDealtBlackJack = IsHandBlackJack $playerHand
        $dealerDealtBlackJack = IsHandBlackJack $dealerHand

        if ($playerDealtBlackJack -and $dealerDealtBlackJack) {
            WriteToPipeAndLog "Both the Dealer and $playerName get BLACKJACK. The game is a push"
        }
        elseif ($playerDealtBlackJack) {
            WriteToPipeAndLog "$playerName gets BLACKJACK and wins!"
        }
        elseif ($dealerDealtBlackJack) {
            WriteToPipeAndLog "Dealer gets BLACKJACK and wins!"
        }
        else {
            # Let's play this hand
            $dealerBusts = $false
            $playerBusts = $false

            # Player's turn
            while (1) {
                $stand = $false
                $invalidKey = $false
                $pipeWriter.WriteLine('YOURMOVE')   # control token the player script keys off
                $command = $pipeReader.ReadLine()
                switch ($command) {
                    "H"     { }
                    "S"     { $stand = $true }
                    default { $invalidKey = $true }
                }

                if ($invalidKey) {
                    WriteToPipeAndLog "Sorry $playerName, didn't recognize command: $command"
                }
                elseif ($stand) {
                    WriteToPipeAndLog "$playerName stands with hand $(DumpHand $playerHand)"
                    break
                }
                else {
                    $newCard = DealCard
                    $playerHand += $newCard
                    WriteToPipeAndLog "$playerName drew a $(DumpCard $newCard), updated hand $(DumpHand $playerHand)"
                    if (IsHandBust $playerHand) {
                        $playerBusts = $true
                        break
                    }
                }
            }
            $pipeWriter.WriteLine('ROUNDOVER')   # player's turn is done

            # Dealer's turn
            WriteToPipeAndLog "DEALER's hand is $(DumpHand $dealerHand)"
            if (!$playerBusts) {
                do {
                    $dealerSum = GetValueOfHand $dealerHand
                    if ($dealerSum -gt 21) {
                        $dealerBusts = $true
                        break
                    }
                    elseif ($dealerSum -ge 17) {
                        WriteToPipeAndLog "DEALER stands with $(DumpHand $dealerHand)"
                        break
                    }

                    $newCard = DealCard
                    $dealerHand += $newCard
                    WriteToPipeAndLog "Dealer draws $(DumpCard $newCard), updated hand $(DumpHand $dealerHand)"

                    Start-Sleep -Seconds 1
                } while (1)
            }

            # Determine who won
            if ($playerBusts) {
                WriteToPipeAndLog "$playerName busts with $(DumpHand $playerHand)"
                WriteToPipeAndLog "DEALER wins with $(DumpHand $dealerHand)"
            }
            elseif ($dealerBusts) {
                WriteToPipeAndLog "DEALER busts with $(DumpHand $dealerHand)"
                WriteToPipeAndLog "$playerName wins with $(DumpHand $playerHand)"
            }
            else {
                $dealerSum = GetValueOfHand $dealerHand
                $playerSum = GetValueOfHand $playerHand
                if ($dealerSum -gt $playerSum) {
                    $msg = "DEALER wins with $(DumpHand $dealerHand)"
                }
                elseif ($playerSum -gt $dealerSum) {
                    $msg = "$playerName wins with $(DumpHand $playerHand)"
                }
                else {
                    $msg = 'The game is a push'
                }
                WriteToPipeAndLog $msg
            }
        }

        $pipeWriter.WriteLine('NEWDEAL')   # ask the player whether to deal again
        $command = $pipeReader.ReadLine()
        if ($command -eq 'EXIT') { break }
    }

    Start-Sleep -Seconds 2
}
finally {
    'Game exiting'
    $npipeServer.Dispose()
}
Note lines 62 – 71 of the dealer script are where I set up the server side of the named pipe.  It sits and waits at the WaitForConnection() call for a player to join the game.  The named pipe is a low-level, byte-oriented stream.  To make it simple to pass string messages and commands back and forth, I decorate the PipeStream with a StreamReader and StreamWriter.  I use those to write strings to and read strings from the client.  Once a player has joined, the game loop sends instructions to the player using the StreamWriter's WriteLine() method, and it uses the StreamReader's ReadLine() to receive the player's instructions as a string e.g. "H" for hit and "S" for stand.

And here is the simpler, player script:

param ($ComputerName = '.')

$npipeClient = new-object System.IO.Pipes.NamedPipeClientStream($ComputerName, 'BlackJack', [System.IO.Pipes.PipeDirection]::InOut,
                   [System.IO.Pipes.PipeOptions]::None, [System.Security.Principal.TokenImpersonationLevel]::Impersonation)
$pipeReader = $pipeWriter = $null
try {
    $playerName = Read-Host 'Enter your name'
    'BlackJack player connecting to dealer'
    $npipeClient.Connect()
    'Connected to dealer'

    $pipeReader = new-object System.IO.StreamReader($npipeClient)
    $pipeWriter = new-object System.IO.StreamWriter($npipeClient)
    $pipeWriter.AutoFlush = $true
    $pipeWriter.WriteLine($playerName)

    # Game loop
    while (1) {
        # Hand loop
        while (1) {
            # Display dealer messages until it is our move or the round is over
            while (($msg = $pipeReader.ReadLine()) -notmatch 'YOURMOVE|ROUNDOVER') {
                $msg
            }
            if ($msg -match 'ROUNDOVER') { break }
            $command = Read-Host 'Enter H (hit me) or S (stand)'
            $pipeWriter.WriteLine($command.ToUpper())
        }

        # Display the round results until the dealer asks about a new deal
        while (($msg = $pipeReader.ReadLine()) -notmatch 'NEWDEAL') {
            $msg
        }
        $res = Read-Host 'Deal again? Y (yes) N (no)'
        if ($res -eq 'N') {
            $pipeWriter.WriteLine('EXIT')
            break
        }
        else {
            $pipeWriter.WriteLine('AGAIN')
        }
    }
}
finally {
    'Game exiting'
    $npipeClient.Dispose()
}
Lines 3 – 15 in the player script are where I set up the client side of the named pipe and connect to the server. Note that [System.Security.Principal.TokenImpersonationLevel]::Impersonation is required to make the connection work between two different machines.

There is a fair amount to the logic of the game that has nothing to do with named pipes but the above should give you a quick primer on how to use named pipes in your application. In summary, it is pretty straightforward:

  1. Create NamedPipeServerStream
  2. Wrap the server’s PipeStream in StreamReader/StreamWriter objects if you want to read/write string messages.
  3. Have the server pipe WaitForConnection
  4. Create NamedPipeClientStream in client script
  5. Wrap the client’s PipeStream in StreamReader/StreamWriter objects if you want to read/write string messages.
  6. Call Connect() to connect to the server.
  7. Start using StreamWriter.WriteLine() and StreamReader.ReadLine() to pass messages back and forth between the server and the client.
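The steps above boil down to just a few lines.  Here is a minimal sketch, stripped of any game logic (run the server half in one PowerShell session and the client half in another; 'DemoPipe' is an arbitrary name):

```powershell
# --- Server session ---
$server = new-object System.IO.Pipes.NamedPipeServerStream('DemoPipe',
              [System.IO.Pipes.PipeDirection]::InOut)
$server.WaitForConnection()                       # blocks until a client connects
$reader = new-object System.IO.StreamReader($server)
$writer = new-object System.IO.StreamWriter($server)
$writer.AutoFlush = $true
$writer.WriteLine("Hello, $($reader.ReadLine())") # echo a greeting back
$server.Dispose()

# --- Client session ---
$client = new-object System.IO.Pipes.NamedPipeClientStream('.', 'DemoPipe',
              [System.IO.Pipes.PipeDirection]::InOut)
$client.Connect()
$reader = new-object System.IO.StreamReader($client)
$writer = new-object System.IO.StreamWriter($client)
$writer.AutoFlush = $true
$writer.WriteLine('client')
$reader.ReadLine()   # Hello, client
$client.Dispose()
```

Replace '.' with a remote computer name (and add the Impersonation argument shown earlier) to run the two halves on different machines.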

Stay tuned.  My next goal is to show you what this looks like using the preview version of PowerShell V5 classes.

Posted in .NET, PowerShell | 1 Comment

PSCX 3.2.0 Available

A new version of the PowerShell Community Extensions was released this morning on CodePlex.  PSCX 3.2.0 is also available on the PowerShell Resource Gallery Preview site which means you can use the new Install-Module command in WMF 5.0 Preview and Windows 10 Preview to install the module e.g.:

C:\PS> Install-Module Pscx -Scope CurrentUser

This new version fixes a number of reported bugs including an issue with directory listings under PowerShell v5.  The Import-VisualStudioVars command has been updated to work with Visual Studio 14 CTP.  You can now import Visual Studio environment variables based on the Visual Studio “version number” e.g. 140 for VS 14 CTP or the name/year 2010, 2012, 2013 e.g.:

C:\PS> Import-VisualStudioVars 140 -Architecture x86

Another minor feature we exposed in this release is a convenient way to convert decimal numbers to hex.  Today you do the following to convert a decimal number to hex:

C:\PS> "0x{0:X}" -f 5123123

With the update, you can do the same a bit more easily:

C:\PS> [hex]5123123

The major new addition for this release is the Edit-File command.  This command allows you to interactively open a file for editing as well as automate the editing of a file.  Here is how you would open a file for interactive editing:

C:\PS> Edit-File $profile.CurrentUserAllHosts

This starts notepad.exe by default and loads your profile.ps1 file.  You can change the text editor that is used by setting $Pscx:Preferences.TextEditor = ‘notepad2.exe’ in your profile.

However, the real power of Edit-File (alias e) is that you can use it to automate editing files.  Today, this is how you would automate editing a file in PowerShell:

C:\PS> (Get-Content .\Pscx.csproj) -replace '>\s*v3.5\s*<', '>v4.5.1<' | 
           Out-File .\Pscx.csproj -Force

This works fine but has a couple of issues.  First, in order to write back to the same file PowerShell is reading from, you have to read the whole file into memory.  That is why the parentheses are around the Get-Content command.  This is no problem for most text files but if you have a huge file, it could be a problem.  The bigger issue is that Out-File writes the text file using Unicode encoding by default.  But this Visual Studio project file does not use Unicode encoding.  It is UTF-8 with no byte order mark (BOM).  You could certainly specify the encoding as a parameter to Out-File but that requires you to know the file's encoding in the first place.  In addition, you may want to edit a series of files whose encodings vary.
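You can see the encoding issue for yourself with a throwaway file (scratch.txt here is just a scratch file):

```powershell
# Start with a known encoding
'v3.5 setting' | Out-File .\scratch.txt -Encoding utf8

# Round-trip the file through the "standard" editing idiom
(Get-Content .\scratch.txt) -replace 'v3.5', 'v4.5.1' | Out-File .\scratch.txt

# The file has silently become Unicode (UTF-16 LE) - roughly twice the size
(Get-Item .\scratch.txt).Length
```

Any tool that expects the original encoding (a build system, a diff tool, source control) will now see a very different file.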

Edit-File solves this problem by detecting the file's encoding and using that same encoding when it writes back to the file.  Edit-File also solves the issue of editing large files by defaulting to "line-by-line" processing of the file.  If the file size is less than 84,000 bytes, the file is processed in memory.  If the file size is larger, a temp file is used to hold intermediate results.  These few small features make for a powerful command.  Here is the Edit-File equivalent of the previous command:

C:\PS> Edit-File .\Pscx.csproj '>\s*v3.5\s*<' '>v4.5.1<' -Force

Here are some sample usages of the Edit-File command:

# Open text editor with no file
C:\PS> Edit-File 

# Opens foo.txt in text editor for interactive editing
C:\PS> Edit-File foo.txt 

# Edits the file, replacing instances of foo with bar
C:\PS> Edit-File foo.txt -Pattern foo -Replacement bar 

# If file is readonly, makes it writeable and then replaces foo with bar
C:\PS> Edit-File foo.txt foo bar -Force 

# Same as above but regex is now case-sensitive
C:\PS> Edit-File foo.txt foo bar -CaseSensitive 

# Can take array of patterns/replacements - array sizes must match
C:\PS> Edit-File foo.txt 'foo','ba(.)' 'oof','$1ab' 

# Edit-File takes pipeline input that it can PassThru
C:\PS> GCI . -r *.txt | Edit-File -Pattern foo -Replacement bar -PassThru | 
           Copy-Item -Dest {'E:\temp' + $_.FullName.Substring(2).Replace('\','_')}

# Uses -SingleString to eliminate script between PostBuildEvent tags
C:\PS> GCI . -r *.csproj | 
           Edit-File -Pattern '(?s)(<PostBuildEvent>).*?(</PostBuildEvent)' `
                     -Replacement '$1$2' -SingleString 

# Replaces any empty line with '#--'.
C:\PS> Edit-File foo.txt '(?m)^(?=\r$)' '#--' -SingleString 

One note on the SingleString parameter used in the last two examples above.  Using this parameter causes the whole file to be read into memory as a single string.  Therefore, you may not want to use this parameter on huge (GB) text files.  However, what this parameter enables is the Multiline and Singleline regular expression modes.  This is crucial if your regular expression needs to span multiple lines.
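For reference, here is how those two regex modes behave on plain strings, independent of Edit-File:

```powershell
$text = "<a>`r`nline1`r`nline2`r`n</a>"

# Without (?s), . does not match newlines, so the pattern cannot span lines
$text -replace '<a>.*</a>', '<a></a>'      # unchanged

# (?s) Singleline mode: . matches newlines too
$text -replace '(?s)<a>.*</a>', '<a></a>'  # <a></a>

# (?m) Multiline mode: ^ and $ match at every line boundary, not just the ends
"foo`nbar" -match '(?m)^bar$'              # True
```

That is why the PostBuildEvent example needs (?s), and why the empty-line example uses (?m) with a lookahead for the \r that .NET's $ does not consume.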

The rest of the changes are outlined in the release notes.  This is the first release of Edit-File so there may be issues.  If you run into any, please file them on the PSCX CodePlex site.  Thanks for using PSCX and enjoy!

Posted in PowerShell, PowerShell 5.0, PSCX | 6 Comments

Windows PowerShell DSC Resource Kit Wave 5–xWindowsOptionalFeature

When PowerShell 4.0 shipped, the major new feature was DSC or Desired State Configuration – a very convenient and declarative way to manage the configuration of your Windows servers.  However as a developer whose IT department doesn’t really allow me anywhere near their servers, I’m interested in DSC from a purely Windows client SKU perspective.  I stand up new test machines and VMs all the time with various Windows client SKUs.  I was disappointed to find that the initial set of DSC resources for Windows clients was missing the equivalent of the WindowsFeature resource that is supported for Windows Server SKU configuration.  Without this resource, you can’t use DSC to configure your machine with features such as IIS, IIS-ASPNET45, etc. 

Well that all changes with the DSC Resource Kit Wave 5.  This version has an updated xPSDesiredStateConfiguration resource that includes xWindowsOptionalFeature.  Add to that the xOneGet resource for installing utilities like dotPeek, SysInternals, and Fiddler and now DSC is going further towards getting my test machines and VMs setup quickly for doing development work.

Here's basically what I'm doing to configure, or at least start to configure, my test machines and VMs.  I'm sure over time I will continue to add to this DSC configuration to automate more of the steps I'm currently doing manually.

The first step is to go grab the DSC Resource Kit Wave 5.  Second, you will need the July "experimental" version of PowerShell (WMF 5) to use these new resources.  You don't want to install this on a production machine; this would be a great excuse to spin up a Hyper-V image with Windows 8.1 on it to experiment with.  To install the July version, go to the Requirements section of the website above and click on either the x64 MSU or x86 MSU link, depending on the bit-width of your OS installation.  Third, download the resource kit ZIP file via the link at the top of the web page.  After you have downloaded it, be sure to "unblock" the zip before extracting any files from it.  Now fire up PowerShell "as administrator" and execute the following PowerShell commands. Note: the installation of xOneGet will prompt you to install nuget.exe.  Answer "y" to allow nuget to be installed; the new Install-Module cmdlets require nuget.

C:\> Set-ExecutionPolicy RemoteSigned -Force
C:\> Enable-PSRemoting -Force
C:\> Install-Module xOneGet

Now open the DSC Resource Kit Wave 5 zip you downloaded and unblocked.  Copy the directory xPSDesiredStateConfiguration to your C:\Program Files\WindowsPowerShell\Modules directory.  There should be two directories in there now: xOneGet and xPSDesiredStateConfiguration.

Now open up the 5.0 PowerShell_ISE and copy this script and save it as $home\DevPCConfig.ps1. The first section installs all the Windows optional features required to do ASP.NET development on .NET 4.5.  There is a section in there to modify settings of the console host for both x64 and x86 PowerShell.  Remove that or tweak to your satisfaction.  The last section installs some tools that I use all the time: dotPeek, Fiddler4 and PerfView.  To see what packages are available run:

C:\> Find-Package
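Since I can't inline the whole DevPCConfig.ps1 here, this hypothetical sketch shows the general shape of such a configuration (feature names and property values are examples, not a copy of my script; check the resource's schema for the exact ones):

```powershell
# Hypothetical sketch of a DevPCConfig.ps1-style configuration - not the actual script
Configuration DevPCConfig
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node 'localhost'
    {
        # Enable Windows optional features needed for ASP.NET development
        xWindowsOptionalFeature IIS
        {
            Name   = 'IIS-WebServerRole'   # example feature name
            Ensure = 'Present'             # verify against the resource's schema
        }

        xWindowsOptionalFeature AspNet45
        {
            Name   = 'IIS-ASPNET45'
            Ensure = 'Present'
        }
    }
}

# Running the configuration function generates .\DevPCConfig\localhost.mof
DevPCConfig
```

The feature names you pass are the same ones Get-WindowsOptionalFeature -Online reports.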

Now, back to preparing to apply this configuration.  From the admin PowerShell console, cd to the same directory where you saved DevPCConfig.ps1 and execute the script.  Note: this doesn’t actually apply the configuration.  We will do that in the next step.

C:\Users\Keith> .\DevPCConfig.ps1

This will create the MOF file that drives the DSC engine.  You can see the file in .\DevPCConfig\localhost.mof.  Now let’s tell the DSC engine to apply this configuration to the machine by executing this command:

C:\Users\Keith> Start-DscConfiguration .\DevPCConfig -Wait -Verbose 

This will generate a lot of output, but it is informative the first time or two.  After that you can drop the -Verbose parameter. 

In an ideal world, I could use the community resource cPSGet to allow me to install my two favorite PowerShell modules but alas, bugs are preventing that from happening.  So as a manual step, I execute the following two commands to install these modules:

C:\> Install-Module Pscx -Scope CurrentUser 
C:\> Install-Module PSReadline -Scope CurrentUser

There is still plenty more I find myself tweaking on every new machine I set up.  Over time I’ll be able to “declare” more of that configuration in DevPCConfig.ps1 especially as more Microsoft and community resources become available.

Posted in PowerShell, PowerShell 5.0 | 2 Comments