Wednesday, February 27, 2013

Help Send Me To Teched?

Microsoft has given the UK MVPs a challenge – and I love a challenge. Microsoft UK is hoping you’ll take a look at System Center and Windows Server, and they have three sets of goodies for you:

The deal is this: if Microsoft gets enough clicks, thanks to your generous clicking of these URLs, then those MVPs who participate go into a draw, with first prize being a trip to TechEd Madrid this summer. It’s a show I’d love to go to – so please click now and click often.

AND, as an added incentive – if these URLs get more than 250 click-throughs, then MS will give me a new Windows Phone. Those of you who know me know how much I love my iPhone – but if I win the phone, I’ll give up my iPhone for three months and use the Windows Phone exclusively. So for all those who have been telling me to get a Windows Phone – here’s your chance!

Please click here, here AND here.

Click all! Click early!! Click often!!!

 

Thanks!

Wednesday, February 20, 2013

MSDN/TechNet Library–The Classic Skin is No More

For those of you who use the MSDN and TechNet library content, you may have noticed that Microsoft has changed the UI of these subsites: http://msdn.microsoft.com/library and http://technet.microsoft.com/library. These are essentially the same site, pointing to different databases. Some years ago, the sites were updated to offer several skins: lightweight, script-free and classic. I’ve been a user of Classic since forever and was very surprised to see that MS has decided to retire the classic skin.

In the MSDN forum, this decision generated a lot of discussion: see http://social.msdn.microsoft.com/Forums/en-US/libraryfeedback/thread/bbfb492b-4c85-4e8d-ab44-423c4050089e/ for the thread. A user, Victor Araya, posted on Jan 24th claiming to be the PM responsible for the user experience of these sites. He laid out his argument, trying to ‘address some of the concerns’. The feedback on that post was interesting in that not one of the users who followed up either agreed with him or liked the alternative skin. Not one. Comments included “the Lightweight display is a complete failure”, “Lightweight is simply no where near as useable as classic”, “the lightweight view is just washed out and unappealing to the point I really don't want to use it”, and “Microsoft really seems to be taking steps to alienate it's developers or just make our work harder”.

For me, the change means losing all the community content metrics and tag information. As a community content contributor to both the TechNet and MSDN libraries – in fact the largest contributor by a mile – I am sad to see all reference to the thousands of hours I’ve spent curating the content just thrown away. It’s like all that work has been for nothing. Heck, I didn’t even get a mail giving me a heads up that all reference to my work would vanish. Thanks, Victor and your team, for such a great job.

But perhaps the saddest comment comes from a very long-time MVP, Cindy Meister. She says: “You have to wonder what MSDN does with all the money people pay for their subscriptions.” Good point, Cindy.

So with such positive feedback – what did Microsoft do? Most companies would have read the feedback and at least gone into explain mode. But no – that is not how Microsoft reacted. Instead of engaging the community, we’ve not had a single further response from either ‘Victor’ or any other MS employee. I find the lack of response from Microsoft highly disappointing. And despite every poster asking MS to keep the classic skin, the classic skin is no more – it’s gone. And along with the skin itself, we have lost quite a lot of great information as well as the improved usability it offered.

It is sad that no one from Microsoft has taken the time or made the effort to follow up on the many negative comments. It’s as if they have made the decision and that’s that. No amount of sane and sensible paying-customer feedback will change their minds. So we suffer. IMHO, someone at Microsoft needs to listen to the community better. If I were Victor’s boss and read this thread, I’d be very tempted to let him follow his career objectives elsewhere and replace him with someone who gets the needs of the community.

What a sad day!

Windows PowerShell 2.0 Best Practices–A book by Ed Wilson

Ed Wilson, aka Microsoft’s The Scripting Guy, has written a number of PowerShell books for MS Press. This book, Windows PowerShell 2.0 Best Practices, is one I’ve been slowly reading through. Although the book is a couple of years old, the advice and guidance it contains is still excellent.

The book is divided into five sections: Introduction, Planning, Designing, Testing and Deploying, and Optimizing. In effect, the book is organised around the scripting lifecycle. The Planning section looks at identifying the opportunities for scripting within your organisation. The Designing section shows you how to design scripts that meet your business needs based on the features of PowerShell V2. As I said, the book is based on V2 – but there are a number of features that, at least in my experience, a lot of users simply do not know about. The fourth section covers both testing (something every script needs!) and deployment (how your users get your scripts). The final section looks at optimising your scripts.

The book, like many MS Press books, contains sidebars from folks in the industry. These sidebars provide the voice of experience and give weight to the ideas Ed is promoting. I like these as they provide a counterpoint to the book itself.

This is not an easy book to just skim through. Ed writes for adults, and the examples are rich – it took me literally months to finish, as I read a little of the book each night. I found that I had to read some pages several times to distill the key points the book is making.

If you are new to PowerShell, then this would be a good book to read as it provides great background to PowerShell V2 as well as a wealth of scripts you could use in your environment. If you have PowerShell skills, then this book can give you new perspectives on PowerShell in the enterprise as well as show you a number of tricks you can leverage in your own code.

I give this book 5 stars!

Sunday, February 10, 2013

PowerShell Remoting – The Double Hop Problem And A Solution

I’ve been doing quite a bit of work lately with remoting – running scripts and script blocks on other machines. As part of my series on developing a Hyper-V VM lab, I’ve scripted the installation and configuration of a mini network. One of the patterns I am using to do most of the VM configuration work is defining a script block (on one machine), and running it in the target VM. In the development of the scripts, I kept falling over errors due to what we call the double-hop problem.
What is the Double Hop Problem?
In remoting, a user on one machine (e.g. Win8.Cookham.Net – my laptop) uses Invoke-Command to run a script block in a VM (e.g. I want to run a block on server SRV1.Reskit.Org). Cookham.Net is my home network, complete with DC, etc., while Reskit.Org is my test lab domain/network. For most configuration this works fine, but in some cases it doesn’t. When I run a script block on SRV1, I do it by using Invoke-Command and specifying my (Reskit) domain administrator credentials.
The double-hop problem occurs when my target machine, i.e. SRV1, needs to go to another machine for something. For example, running Get-Certificate in a script block on SRV1 requires SRV1 to go off to DC1 to get the appropriate X.509 certificate. This second hop is where the problem lies.
When the second hop is attempted, SRV1 by default uses the credentials of the PowerShell process running on SRV1, NOT your user credentials. The problem is that those credentials are not likely to have (and in my case did NOT have) sufficient privileges to carry out the necessary action (i.e. getting the certificate from the CA on DC1). 
This problem is widely known, and the solution is the Credential Security Support Provider, also known as CredSSP. CredSSP was added to Windows as part of Windows Vista/Server 2008 and is leveraged by PowerShell. As should be obvious, CredSSP is a key component of Single Sign-On (SSO) as well as being rather useful in my VM building scenario.
The Solution – CredSSP
With CredSSP, you pass explicit credentials on the initial hop (from Win8 into SRV1), and when SRV1 needs to go to DC1, it uses those same credentials. And if you configure DC1 and other servers correctly, you can in theory go hopping further!
In order to make use of CredSSP, you need to enable CredSSP on both client and server systems, then explicitly specify that you want to use CredSSP when you run the Invoke-Command (or Enter-PSSession) cmdlet. In my case, I could conceivably run a script block against any of the servers in my VM farm, which could in theory double-hop to any other machine in my farm. Since all my VMs could in theory be both client and server, I run the following cmdlets on all the servers:
Enable-WSManCredSSP -Role Client -DelegateComputer '*.reskit.org' -Force
Enable-WSManCredSSP -Role Server -Force
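
With CredSSP enabled at both ends, you then ask for it explicitly on the first hop. Here is a minimal sketch, using my Reskit lab names and a hypothetical WebServer certificate template:

$RKAdminCred = Get-Credential 'Reskit\Administrator'
Invoke-Command -ComputerName SRV1.Reskit.Org -Credential $RKAdminCred `
  -Authentication CredSSP -ScriptBlock {
    # This call is the second hop (SRV1 to DC1) and now runs with the
    # delegated user credentials, not the local process credentials
    Get-Certificate -Template WebServer -CertStoreLocation Cert:\LocalMachine\My
}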
What Does This Do To My Host?
Using Enable-WSManCredSSP to enable the client role does two things. First, it sets the WS-Management setting WSMan:\localhost\Client\Auth\CredSSP to true. Second, it sets a local policy, Allow delegating fresh credentials, and updates that policy with the list of servers you are going to use delegation with. The server list can be a single server, a set of servers, or a wildcard set of servers. In the above example, I am going to allow the client to delegate credentials to any server in the Reskit.Org domain. These two settings allow the local client to negotiate the use of CredSSP when creating the session on the remote machine.
Using Enable-WSManCredSSP to enable the server role does just one thing: it sets the WS-Management setting WSMan:\localhost\Service\Auth\CredSSP to true. This allows the WinRM service on the remote machine to use the delegated credentials in the second hop.
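
Both settings live in the WSMan: drive, so a quick sketch to verify them after enabling:

Get-Item WSMan:\localhost\Client\Auth\CredSSP     # on the client: should be true
Get-Item WSMan:\localhost\Service\Auth\CredSSP    # on the server: should be true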
What about Group Policy?
Whilst researching this issue, I came across several web pages that talked about setting up Group Policy to enable CredSSP, and for a large environment, it might be appropriate to do that. However, just using Enable-WSManCredSSP does all you need. There is one small gotcha that I kept running into. When you enable the client role, as I note above, the Enable-WSManCredSSP cmdlet sets a local policy. The thing I kept hitting is that while the policy is set by the cmdlet, it takes a GP refresh on the client before the client can actually use CredSSP against the computers in the DelegateComputer list.
To get around this, in my configuration scripts, I just set the client/server roles on each system (remember, in my test lab any computer can in theory be involved in a second hop with any other computer), then force a GPUpdate (or do a reboot), which means that after the refresh/reboot the policy is in force!
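
A minimal sketch of that refresh step, assuming the same lab names and credentials as above:

# Force a Group Policy refresh on the remote box so the fresh-credentials
# delegation policy takes effect without a reboot
Invoke-Command -ComputerName SRV1.Reskit.Org -Credential $RKAdminCred -ScriptBlock {
    gpupdate /force
}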

Thursday, February 07, 2013

Working with Base64 Strings in PowerShell

Base64 is an encoding method that enables transfer of arbitrary binary data through restrictive networks. The most obvious case of this, to me anyway, is email. The SMTP protocol was designed to transfer 7-bit (aka ASCII) characters. If you want to transmit binary data over such a 7-bit transport, you need to encode it somehow – and that’s what Base64 does for you. There are loads of other uses for Base64.

For IT Pros, Base64 often turns up as encoded text that you need to decode. I got asked about this the other week in class. You can use .NET to convert a string from its current Unicode format into Base64 using System.Convert and System.Text.Encoding, and you can convert Base64-encoded strings back into Unicode by using the same .NET methods.

 

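A quick sketch of those calls, assuming the Unicode (UTF-16) encoding that PowerShell strings use internally:

# Encode a string as Base64, then decode it again
$String  = 'Hello World'
$Encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($String))
$Encoded                                              # SABlAGwAbABvACAAVwBvAHIAbABkAA==
[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($Encoded))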

That works – when you can remember the magic incantation(s) – but something simpler would be nice, my students mentioned. The obvious answer is to use PowerShell’s Extensible Type System (ETS) and add a couple of properties to System.String objects representing the encoded/decoded Base64. This is easy – just create a types.ps1xml file (mine is named My.Types.ps1xml) that looks like this:

<Types>
  <Type>
    <Name>System.String</Name>
    <Members>
      <ScriptProperty>
        <Name>ToBase64String</Name>
        <GetScriptBlock>
          [System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($this))
        </GetScriptBlock>
      </ScriptProperty>
      <ScriptProperty>
        <Name>FromBase64String</Name>
        <GetScriptBlock>
          [System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($this))
        </GetScriptBlock>
      </ScriptProperty>
    </Members>
  </Type>
</Types>

With that file saved, you can add it into your PowerShell environment by using the Update-TypeData cmdlet, specifying your PS1XML file (e.g. Update-TypeData c:\foo\My.Types.ps1xml). Once that’s complete, System.String is nicely extended.

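The two new properties then hang off every string. For example (the Base64 below is just ‘Hello World’ in UTF-16):

PS C:\> ('Hello World').ToBase64String
SABlAGwAbABvACAAVwBvAHIAbABkAA==
PS C:\> ('SABlAGwAbABvACAAVwBvAHIAbABkAA==').FromBase64String
Hello World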

I have added this bit of type-XML to my type extensions file, which I load in my $Profile – which means the ability to convert from/to Base64 is, as it were, now baked into PowerShell.

[Later]
Thanks to Bryan Price for catching two typos in this post.

 

Monday, February 04, 2013

Building A Hyper-V Test Lab on Windows 8 – Part 5 Configuring DC1

Introduction

This is the fifth part in a multi-part set of articles on building a test lab with Hyper-V and PowerShell. See the prior articles in this series.

Configuring the First DC

In the last article, I showed you how to create a domain controller in a new forest by taking a newly installed workgroup server and promoting it to be the first domain controller in a new domain/forest. Since the DC provides the AD environment for the rest of the VMs in my test lab(s), it has to become the DC before any other VMs are created. Once the server becomes the domain controller, a second script is used to configure the DC. Note that most of the other servers can be configured in just one step (although the jury’s still out with respect to Lync and Exchange).

In this article I present a second configuration script for the domain controller, snappily named Configure-DC1-2.ps1. Once the DC1 VM has been created and promoted to domain controller, you use this second script to finish off its configuration and setup.

From a workflow perspective, once the DC has been created, creating and configuring other new VMs can be done in parallel with configuring DC1 (i.e. running Configure-DC1-2). However, if you are creating VMs that rely on DHCP, you need to complete the DHCP configuration before those VMs are created.

In my lab’s case, configuring the first domain controller, DC1, is pretty simple, as the sketch after this list illustrates:

  • Set the VM to automatically log on as the domain admin – in lab environments, life really is too short to have to type credentials any more than is absolutely needed, so I set the registry settings to enable auto admin logon.
  • Install key Windows features – I include some simple features, including IIS (which is needed to enable the DC to be a CA).
  • Install and configure basic DHCP – most of my lab machines have fixed IP addresses, but having a small DHCP block seems a good thing! I allocate 20 IP addresses, but you could change that as needed. I also configure this scope with some basic options (IP, subnet mask, and DNS server). You could add a default gateway if you want to enable routing via the host.
  • Create a second administrator – I create one extra user, me (TFL), and add this user to the domain and enterprise admin groups. I have a further script, Configure-ReskitAD.ps1, as part of this series, that adds a richer AD environment in terms of a couple of OUs and more users.
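
Here is a minimal sketch of the first and third steps, run inside DC1. The password is a placeholder and the IP addresses are illustrative, not necessarily the exact values my script uses:

# Enable auto admin logon via the Winlogon registry key
$RegPath = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
Set-ItemProperty -Path $RegPath -Name AutoAdminLogon    -Value '1'
Set-ItemProperty -Path $RegPath -Name DefaultDomainName -Value 'Reskit'
Set-ItemProperty -Path $RegPath -Name DefaultUserName   -Value 'Administrator'
Set-ItemProperty -Path $RegPath -Name DefaultPassword   -Value 'Pa$$w0rd'

# Create a 20-address DHCP scope with basic options (subnet mask, DNS server/domain)
Add-DhcpServerv4Scope -Name Reskit -StartRange 10.10.10.100 `
  -EndRange 10.10.10.119 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -DnsServer 10.10.10.10 -DnsDomain Reskit.Org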

Once that has been done, I do two more things in the script (outside the configuration block):

  • Force a reboot of the VM – the very last thing the script block does before returning is to call Restart-Computer. The reboot is, in effect, asynchronous: after the script block exits, control passes back to the main script running on the Hyper-V box, which continues while DC1 reboots.
  • Take a snapshot of the DC (sketched below) – this is useful if I want to do some AD configuration but then back out of it. To cater for the async nature of the reboot (it happens in another process/VM), I use the parameters ‘-Wait -For PowerShell’, which wait until the system has rebooted and the user has logged on before proceeding to take the actual snapshot.
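
A sketch of that restart-then-snapshot logic, run from the Hyper-V host, with assumed lab names, credentials and snapshot label:

# Reboot DC1 and wait until PowerShell is available in the VM again
Restart-Computer -ComputerName DC1.Reskit.Org -Credential $RKAdminCred `
  -Wait -For PowerShell -Force
# Then snapshot the (now rebooted and logged-on) DC
Checkpoint-VM -Name DC1 -SnapshotName 'DC1 - after Configure-DC1-2'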

Using Remoting to Configure DC1

In the first two scripts, which create and start up the VM, the scripts contain a function definition that is then run against the local system, i.e. the host you are using to run Hyper-V. In my case, this was done on Windows 8 on my laptop and on one of my Server 2012 boxes. They both run Hyper-V well, so testing is easy both at home and on the road. As I noted previously, the remainder of the scripts I use to set up and configure the domain and servers use remoting. The pattern these scripts follow is as follows (see the sketch after this list):

  • Create a script block, $CONF or similar, containing PowerShell code to perform some configuration on a server. The PowerShell code is intended to run in the target VM.
  • Use Invoke-Command, and the appropriate credentials, to run that script block on a remote server (i.e. one of the Hyper-V VMs you are building/configuring).
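
A minimal sketch of that pattern, with assumed names (the feature added here is just an example):

# Script block containing the configuration to run inside the target VM
$CONF = {
    # Example configuration: add the Web Server (IIS) feature
    Add-WindowsFeature Web-Server
}
# Run it in the VM, using the Reskit domain admin credentials
$RKAdminCred = Get-Credential 'Reskit\Administrator'
Invoke-Command -ComputerName DC1.Reskit.Org -Credential $RKAdminCred -ScriptBlock $CONF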

This process is flexible and allows me to do things before invoking the script block. For example, to install SQL, Exchange or Lync, I need to have the product CD inserted in the D: drive. For Exchange, I have to load some pre-requisites onto the server – for example, bits of IIS, etc. that are not needed for other labs. So in that case, the configuration script file can create a couple of script blocks to divide up the work. In the longer term I will consolidate the multiple script blocks, but that’s work still to be done!

Snapshotting VMs

These scripts were designed to support me in writing and developing courseware. Since the development work can be error-prone, and lab instructions need to be tested, tested, and re-tested, I need to take snapshots before and after key configuration events. So what this script does, at the very end, is take a VM snapshot and label it as being created by this script. You can, of course, comment this out if you don’t need a snapshot!

Using the Scripts

When I am building out a new set of VMs, I open ALL these scripts in the ISE on the Hyper-V host. Whilst I am at home, that means running these via a terminal services window against one of my Hyper-V servers or my laptop. Once I have all the scripts open, I just work through them, tweaking the unattend.xml, building the base disk, building and then promoting DC1, finishing off DC1’s configuration, configuring a CA, configuring the IIS servers, etc.

Getting the Scripts

I have published the full set of deployment scripts to my web site at http://www.reskit.net/powershell/vmbuild.zip. Note that some of the scripts in this zip file are very much works in progress that are changing, and hopefully improving, as I publish these articles. I reserve the right to change any or all of them from time to time. I will try to blog any important changes.

I am also publishing the individual scripts over on my PowerShell Scripts Blog.

Recent Script Changes

Since starting this series, I’ve been tidying up the scripts. In some cases, I’ve moved parameters into hash tables to increase readability. The changes are now noted in the scripts themselves. One key change is that I set auto-admin logon for all servers and force a reboot of each server after configuring it. I’ve also added some judicious Hyper-V checkpointing to some of the scripts to simplify further testing and to suit my courses.

[Later]

In the original post, the reboot and snapshot were not handled in a tidy fashion. I re-coded this logic so that the configuration script block does NOT do the restart. Instead, I let the script block run to completion and exit back to the main Configure-DC1-2 script, where I then forcibly reboot the DC, wait until the reboot has completed, and then take the snapshot. It is just a little more elegant, and it ensures that the snapshot is taken after the reboot has completed and the auto-admin logon has occurred. This makes reverting to the snapshot that bit easier.

Future Scripts

The next couple of scripts, which I hope to get documented this week, include building and configuring general purpose servers and creating a Certificate Authority. I also have some utility scripts that I have added and will also be documenting.

Comments

Any comments? I’d love to hear from you – either as comments to this blog post, or via email.