Automation, A Framework for Success – Part 1

When I started in IT, automation was mainly a tool for reaction. If something went wrong, you could quickly write a VBScript to correct a problem that might have affected hundreds or thousands of systems. As tools like Group Policy, SCCM, ZENworks and others came along, you could do the same, just with a different approach. Though these are great tools, over the lifetime of a system or service there are bound to be issues that simply were not envisioned during implementation. It could be that the operating system is now out of date, the disk/storage layout is no longer relevant, you need to scale, and so on. For many of these situations you could create a one-time process that would automate the migration to a new environment.

Today, many people look at new tools like PowerShell DSC, Chef and Puppet and think they are just a new spin on what they already had with Group Policy and traditional scripting. I will admit the shoe does fit if you think about it in that constrained view. What you may not see is that these frameworks are much more. Yes, you can configure one server and keep it in a desired state just as you have in the past, but that is not the point! These new tools are designed for environments.

Think of an environment as anything from a single server running a host of services to a datacenter connected to service providers around the world running just one service. The pieces of the puzzle are exactly the same. What I am saying is: don't think of a desired state as something that applies only to a single system or category of systems. Think of desired state as it also applies to the environment. The scope of an environment can be anything: an application, a datacenter, a service, production, non-production.
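To make that concrete, here is a minimal sketch of the same desired-state idea applied to an environment rather than one server. The node names and the IIS role are hypothetical, chosen purely for illustration:

```powershell
# Hypothetical sketch: one configuration describing an environment,
# not a single server. Node names (Web01-Web03) are made up.
Configuration ProductionWeb
{
    # Every node in the environment gets the same baseline.
    Node @('Web01', 'Web02', 'Web03')
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        Service W3SVC
        {
            Name        = 'W3SVC'
            State       = 'Running'
            StartupType = 'Automatic'
        }
    }
}

# Compiling produces one MOF per node in the environment.
ProductionWeb -OutputPath 'C:\DSC\ProductionWeb'
```

The point is that the configuration describes the environment as a whole; adding a node to the environment is one line, not a new script.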

Over the next few weeks, I will fill you in on how to better understand this concept and achieve it no matter what your current environment looks like.

This post will be updated as we progress.

Posted in Azure Automation, Chef, DSC, PowerShell, Puppet | Tagged , | Comments Off

Updating Trusted Sites in a User Friendly Way

Everyone knows that when you edit trusted sites through Group Policy, you are locking down the configuration. Once that happens, it seems like the entire internet needs to become a trusted site for one reason or another.

The correct way of editing trusted sites is User Configuration > Administrative Templates > Windows Components > Internet Explorer > Internet Control Panel > Security Page > Site to Zone Assignment List, noted here –

But if you don't want to bog yourself down with every single site that any of your users plan to use, try doing it a little differently, though it may not be fully supported.

If you use a Registry Group Policy Preference (User Configuration > Preferences > Windows Settings > Registry), you can add sites to the trusted sites list and still allow users to edit the list, adding the other sites that are important to them.

Take this example of adding to Trusted Sites.
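The original screenshot is gone, so here is a sketch of the registry values such a preference item would set. The site `example.com` is a placeholder; the ZoneMap path and the zone value 2 (Trusted Sites) are the standard Internet Explorer zone-assignment locations:

```powershell
# Sketch of the registry values the Group Policy Preference item would set.
# example.com is a placeholder domain; zone value 2 is the Trusted Sites zone.
$domains = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'
$site    = Join-Path $domains 'example.com'

if (-not (Test-Path $site)) {
    New-Item -Path $site -Force | Out-Null
}

# A DWORD named after the scheme, set to 2, places the site in Trusted Sites.
New-ItemProperty -Path $site -Name 'https' -PropertyType DWord -Value 2 -Force | Out-Null
```

A preference item targeting these same values behaves the same way: the site lands in the zone, but the list itself is not locked.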


Adding sites this way is a little more raw than using the other methods, but you are better catering to your users' needs. Also consider that you have the ability to remove sites from this location, which lets you make sure users cannot add certain sites (or that those sites get removed upon refresh) while they still maintain some usability and control, making them more comfortable with the management you provide.




Posted in Group Policy | Tagged | Comments Off

Active Directory Administrative Center Search Crashing Fixed

For those of us using the Active Directory Administrative Center on Windows 8.1 through RSAT and connecting to an older domain, such as Windows 2008 R2 AD DS: any time you tried to search, the ADAC (Active Directory Administrative Center) would crash hard. This also broke the ability to run Get-ADUser like this.
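The original example was lost with the screenshot; a query of this sort is what failed (the filter is just an illustration, not the exact command from the post):

```powershell
# The kind of AD search that would fail against the older domain.
# The filter value is a placeholder for illustration.
Import-Module ActiveDirectory
Get-ADUser -Filter "Name -like '*smith*'" -Properties DisplayName
```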

Looking at the ADAC, you will now see the "Global Search" again and it will work. If your schema were more up to date, this would not have been an issue.

Search example


With Windows 8.1 Update, all of this is working for me now!

Hopefully I was not the only one experiencing this.


Posted in PowerShell | Comments Off

Snover's Puzzle Starting to Show with PowerShell V5 – SCCM

With the PowerShell V5 Preview, we get things such as OneGet and Network Switch Management. These are two features on different tracks that will meet each other as things become more obvious.

If you think about DSC and OneGet together, you have all of SCCM except one big feature: reporting. All reporting has ever been is a complex working of WMI. Look forward to, say, SCCM 2015 and here is what you might get.

SCCM Configuration Management – PowerShell DSC
SCCM Software and Updates Deployment – OneGet using SCCM Software Repo
SCCM Reporting – Cloud Service endpoint to upload Get-CIM*

This brings me to the data collection part and where I think all this could be going.

  • No-client SCCM deployments. With PowerShell on the endpoints (computers, switches, routers, storage controllers, and so on), why do you need an SCCM client? Some of these endpoints could be handled with simple data pulls and pushes from an engine like SMA. Though a network switch does not have the software to automatically push into a web service, PowerShell combined with managed workflows can collect, test and do whatever is needed on the endpoint.
  • For Windows, data collection could be done solely with a scheduled task. In SCCM now, the MOF extensions to WMI are used to help format the data (from what I can tell – not an expert). Using PowerShell and the CIM cmdlets, it would be possible to create a local workflow that can capture, transform and upload the data to a collection point.
  • Service discovery can be done at the beginning of the scheduled task using just a REST call as a locator, embedded in your internal or Microsoft Azure deployment. This endpoint would provide the client with the needed collections, configuration changes and anything else you can dream up.
  • Your OneGet repo can be secured and trusted by the client so that it would be available anywhere.
  • Customizations to the engine processes are easy enough for any knowledgeable PowerShell staff member.
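The capture/transform/upload idea above can be sketched with the standard CIM cmdlets. The collection endpoint URL is entirely hypothetical; Get-CimInstance and Invoke-RestMethod are real built-in cmdlets:

```powershell
# Sketch of the capture, transform and upload steps described above.
# The inventory endpoint URL is a hypothetical placeholder.
$inventory = [pscustomobject]@{
    ComputerName = $env:COMPUTERNAME
    OS           = (Get-CimInstance -ClassName Win32_OperatingSystem).Caption
    Disks        = Get-CimInstance -ClassName Win32_LogicalDisk |
                       Select-Object DeviceID, Size, FreeSpace
    Collected    = (Get-Date).ToUniversalTime()
}

# Push the JSON payload to a central collection point.
Invoke-RestMethod -Uri 'https://inventory.example.com/api/upload' `
                  -Method Post `
                  -Body ($inventory | ConvertTo-Json -Depth 3) `
                  -ContentType 'application/json'
```

Run from a scheduled task, this is the whole "client": no agent, just a script, the CIM cmdlets and a web service.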

You get the idea. Thanks for reading!

Posted in PowerShell | Comments Off

PowerShell DSC Need for v5 Official or v6 – Pull Test

As a person who has enjoyed PowerShell since the early days, just before the first release, I have watched the tool become an amazing asset over the last few increments. Features like Workflows and Desired State Configuration are such a change for the industry! Only a few of us small-time developers could reach this kind of level in the past, with huge VBScripts or little VB.Net programs. Now it is so easy to fan out and in that I cannot imagine a world without it.

DSC is a crowning feature, but with it comes the need for a pull server that can test all possible configurations on a server to create a resulting configuration. All resources in the pull server's catalog have a test that must be passed, so many of these configurations can be tested and stored as a snapshot of the new system's configuration. Now with OneGet you can test a server or system for installed software, which is a big thing.

Assume that I have a pull server, and on this server I have configurations stored for a few hundred or thousand systems. If a cmdlet could be created to read all of the configurations for all of the existing servers, it would be able to compare them against any running system it has access to and build a desired state for future reference. It would be entirely possible for the Local Configuration Manager on a system to generate the basics like services, WindowsFeatures and OneGet packages without needing a pull server to tell it how to test, other than to request the results. There might be a special switch on a Configuration block to tag it as a testing collection. With a testing configuration, the server could be intelligent and find the matching servers. This could work for AD, Exchange, Lync, web and many more roles that have a base configuration your server is set to identify, allowing you to baby-step your way up to a desired state with servers that are already running. Then you can take the next step to a fluid environment ready to handle anything.
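Today's DSC cmdlets already give a taste of the comparison half of this idea. Here is a sketch using the existing Get-DscConfiguration and Test-DscConfiguration cmdlets (the server names are placeholders; this is not the proposed discovery cmdlet itself, just the building blocks it could use):

```powershell
# Sketch: compare what nodes are actually running against their last
# applied configuration, using the existing DSC cmdlets.
# Server names are placeholders.
foreach ($computer in 'Server01', 'Server02') {
    $session = New-CimSession -ComputerName $computer

    # What the node is configured as right now...
    $current = Get-DscConfiguration -CimSession $session

    # ...and whether it still matches its last applied configuration.
    $inState = Test-DscConfiguration -CimSession $session

    [pscustomobject]@{
        ComputerName   = $computer
        InDesiredState = $inState
        ResourceCount  = @($current).Count
    }

    Remove-CimSession $session
}
```

The missing piece the post argues for is the reverse direction: generating a Configuration block from that discovered state.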

Though something like this may end up being entirely a third-party add-on, it would help the on-boarding process for DSC, even though adoption is already growing rapidly.

Posted in Uncategorized | Comments Off

Windows PowerShell Best Practices by Ed Wilson, Microsoft Press

If you have been around Microsoft scripting and automation technologies for any amount of time, you will recognize Ed Wilson as the center of the community. In this book he covers PowerShell in such a way that any beginner or expert will learn something new. While letting you feel like you are just learning best practices, the sidebars from many in the PowerShell community give you some real-world understanding of PowerShell and great ideas that will help you with your own projects. I especially enjoyed the section "Handling missing WMI providers", because it never fails that while trying to automate some process for a large company, you run into computers with some sort of WMI error that can keep you from finishing. Make sure to pay close attention to chapter 8, which covers how to design your script! If you design your script properly, it helps you focus on the problem you are trying to solve and fix it properly. Also, a well-written script is easier to share with others so they can be more productive as well. This is a foundational book on PowerShell for anyone just starting out or wanting to make better, more supportable scripts. I think this is a great book for anyone using PowerShell.

Check it out yourself at


Posted in PowerShell | Tagged | Comments Off

Windows Update Home Brew

Last night was an odd occurrence: many of our servers were to skip a round of Windows Updates because in a few days we have a very large go-live of a new product. Management just did not want to take any chances that it would cause something to go wrong at the last minute. As one of the primary Active Directory administrators, I let them know that the Domain Controllers needed to be patched anyway. They only run Microsoft software, and it is critical to more than just a few applications. Beyond that, I did not want to explain to a different team why our security report all of a sudden turned all shades of red. My real problem started when all of the updates in SCCM had their deadlines moved a month ahead, and I could not depend on getting all that set up in time. I told them I would just handle it.

Since I had to wake up at 2AM for the maintenance window, I did a quick Bing search for Windows Update and PowerShell, knowing there had to be something these days. What I found was an article by the Windows Scripting Guy about this script/module from the TechNet Repository. I bookmarked it and went off to sleep.

When 2AM rolled around, I got the module, started looking around and found all sorts of goodness. There are two commands that I found very useful to get the job done: Get-WUInstall, which starts an install process, and Invoke-WUInstall, which creates a scheduled task on the target system to run Get-WUInstall. If you have a good background in the Windows Update process, you will know that the actual execution must be run from the local system; due to security, it is not possible to directly invoke the Windows Update process through a second hop. After running a few quick tests, I realized that I needed the Windows Update module on the remote system, plus the ability to execute the PowerShell, to get the job done. By 2:30 I had created the code below, and my first Domain Controller was under way.
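The original snippet did not survive, so here is a reconstructed sketch of what that loop may have looked like, assuming the PSWindowsUpdate module had already been copied to each Domain Controller:

```powershell
# Reconstructed sketch of that night's loop (the original code was not preserved).
# Assumes the PSWindowsUpdate module is already present on each Domain Controller.
$domainControllers = Get-ADDomainController -Filter * |
    Select-Object -ExpandProperty HostName

foreach ($dc in $domainControllers) {
    # Invoke-WUInstall creates a scheduled task on the target that runs
    # Get-WUInstall locally, avoiding the second-hop restriction.
    Invoke-WUInstall -ComputerName $dc -Script {
        Import-Module PSWindowsUpdate
        Get-WUInstall -AcceptAll -AutoReboot | Out-File C:\PSWindowsUpdate.log
    } -Confirm:$false

    # Stagger the reboots so the DCs do not all go down at once.
    Start-Sleep -Seconds 60
}
```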

This is a pretty raw way to get it done, but understand that there was a little crunch time here. The Start-Sleep just makes sure I am not rebooting all of the Domain Controllers at the same time. After running this on one system and confirming it worked, I just let it do its thing. It allowed me to update 32 Domain Controllers in 3 different states in about 45 minutes, manually, on the fly, right out of a dead sleep. And if you needed to get many more systems done the same way, it is completely possible.

After returning to bed and waking again a few hours later to start the day, I did a quick check of the PSWindowsUpdate.log file on all of the systems to confirm things were good. It was easy to delete the PSWU folder, remove the scheduled task and move the PSWindowsUpdate.log file to an archive location with standard PowerShell.

The module also contains Get-WURebootStatus to let me know if there was a system that needed an extra reboot or had issues rebooting. Thanks, Michal Gajda – this module is great!

One nice little thing to remember is that Invoke-WUInstall does not have the sole role of kicking off Windows Updates. With the Script parameter, you can make it do whatever you want.

Posted in PowerShell | Comments Off

Workflow : Check the Checkpoint

I was testing some workflow processes the other day and ran into something I thought was odd; I hope I can explain it well here.

The issue is that when you create a workflow, if you do not plan to use the -PSPersist switch when calling the workflow, make sure you spend plenty of time figuring out where you are going to put your Checkpoint-Workflow activities. Take the below script for example.
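The original script was lost with the post's formatting; here is a reconstruction consistent with the behavior described below (the variable values and sleep durations are inferred from the text):

```powershell
# Reconstruction of the example script (the original was not preserved),
# consistent with the outputs described in the discussion that follows.
workflow Test1
{
    Start-Sleep -Seconds 30
    $i = 1
    $i                    # first output

    Start-Sleep -Seconds 30
    $i = 2
    $i                    # second output

    Checkpoint-Workflow   # first explicit checkpoint

    Start-Sleep -Seconds 30
    $i = 3
    $i
}
```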

You would think that if I ran the workflow by calling Test1 -AsJob and immediately thereafter called Suspend-Job -Id .., I would get no output. That is NOT the case! When I call Suspend-Job -Id .., the job goes into a "Suspending" state until it reaches the Checkpoint-Workflow activity. This gives me an output of 1 and 2 after 60 seconds of execution.

If I instead call the workflow with Test1 -PSPersist:$True -AsJob and immediately call Suspend-Job -Id .., then, oddly, I actually get an output of 1. This is odd because the job checkpoints to disk after every activity, allowing for a suspension, but note that it did not stop after the currently running activity; it stopped just before the next activity, the second Start-Sleep. The $i = 1 and $i lines were allowed to execute before suspension.

The reason you need to take real care with where you put your checkpoint activities is that the data or objects gathered in one step may require the completion of another step or activity. In addition, Step A may be useless if Step B does not run within a few minutes. This means that using -PSPersist could be a bad thing if the workflow were interrupted just after Step A.

When designing your workflow, be careful and plan your activities and checkpoints at the same time; it can be difficult after you get it up and running and a few versions down the road.


Posted in PowerShell | Comments Off

Workflow – Persist to Disk

When using workflows, it is common to rely on persistence to allow for external interaction and recovery. The PSPersist parameter on a workflow can be used by the end user to ensure recovery. As the creator, you have the ability to control the persistence of each activity with the PSPersist parameter, or by calling Checkpoint-Workflow or Suspend-Workflow at any point to force a checkpoint.

Using PSPersist is the most common way for a user to interact with the persistence process. When calling a workflow, you can specify -PSPersist like this:
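The invocation the post showed is gone; it would have looked something like this (Test1 is a placeholder workflow name):

```powershell
# Invoking a workflow (Test1 is a placeholder name) with a
# checkpoint taken to disk after every activity:
Test1 -PSPersist $True -AsJob
```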

The PSPersist parameter takes two options, $True or $False. Used when invoking a workflow, this is all or nothing. I cannot see why anyone would call it with $False, because that is equal to the default behavior, which does not add any extra checkpoints beyond the ones already added by the creator. When you specify $True, the workflow creates a checkpoint to disk after every activity; when doing this, consider the disk resources needed to create the checkpoints. When resuming a workflow, you do not get to pick which checkpoint to use; it resumes from the last checkpoint taken. Depending on what you are doing in your workflow, the disk space used in the process can become considerable, because all variables are stored.
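The workflow discussed in the next paragraph was lost from the post; it appears to have been something along these lines (reconstructed sketch; the path and workflow name are placeholders):

```powershell
# Reconstructed sketch of the workflow under discussion; the path and
# workflow name are placeholders.
workflow Get-FileInventory
{
    # Get-ChildItem runs here as an activity; its entire result set is
    # captured in $FileNames and therefore saved in any checkpoint.
    $FileNames = Get-ChildItem -Path C:\ -Recurse -ErrorAction SilentlyContinue
    $FileNames.Count
}
```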

When you invoke this workflow with -PSPersist $True, you will be persisting the entire file and folder structure to disk after Get-ChildItem is called. Though it looks like you are just calling a simple Get-ChildItem, this cmdlet is actually an activity whose execution result is stored in $FileNames. If this is something you want to do, just make sure there is ample storage space for the checkpoint.

If you are creating workflows yourself, the PSPersist parameter can be used on each activity with the same $True and $False options. Again, if you specify $False, no checkpoint is taken when the activity completes; $True forces a checkpoint when the activity is done. If a Checkpoint-Workflow activity comes just after the activity, a checkpoint is taken either way, but it can be taken twice if you also use PSPersist $True.

The reasons you would want to use workflow persistence are recovery and suspension. Up to this point I have mainly spoken about the ability to control checkpoints. If you take checkpoints properly and something goes very wrong, like a network or power outage, you can call Resume-Job on the job and the workflow will continue from where it last took a checkpoint. Suspend-Workflow is more interesting: calling Suspend-Workflow executes a checkpoint and stops the workflow. This is useful if you need some external intervention, like approvals. A good example would be a process that requires a lot of validation before a series of actions is taken. When you call the workflow and specify all of your parameters, you can have the workflow call Suspend-Workflow if it finds something that does not match up in the logic somewhere. This allows you to confirm that the correct information was typed in and that you really want to carry through, or to just remove the job altogether.

A workflow checkpoint is stored in the user's profile.


Checkpoint to Disk

Feel free to reboot or do whatever you need to at this point; the workflow will remain here until you decide to resume or remove it. On my test computer, the only snag was that I needed the PSWorkflow module loaded to properly see the suspended workflow. Once a workflow is suspended, or you are recovering from a failure to a checkpoint, the workflow is a PSWorkflowJob.

Workflow Get-Job

Thanks for reading!

Posted in PowerShell | Tagged , , | Comments Off

Upgrading to Windows Server 2012 GUI mode issue

Much like me, I assume many of you are going to upgrade your Windows 2008 and Windows 2008 R2 systems to Windows Server 2012. There is a known issue after the upgrade that may impact you if you start to remove and add back the GUI features, noted at this URL.
If you upgrade from a Full installation of Windows Server 2008 or Windows Server 2008 R2 to Windows Server 2012 in Server with a GUI mode, and then switch Windows Server 2012 to Server Core mode, conversion back to Server with a GUI mode will fail.
To avoid this, delete these registry keys with the following commands:
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers\{bc2eeeec-b77a-4a52-b6a4-dffb1b1370cb}
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers\{57e0b31d-de8c-4181-bcd1-f70e880b49fc}
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers\{8c9dd1ad-e6e5-4b07-b455-684a9d879900}
After running these commands, restart the upgrade.
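If you prefer to stay in PowerShell, the same cleanup can be expressed with the registry provider (equivalent sketch of the reg delete commands above):

```powershell
# The same cleanup as the reg delete commands, via the registry provider.
$publishers = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers'
$keys = '{bc2eeeec-b77a-4a52-b6a4-dffb1b1370cb}',
        '{57e0b31d-de8c-4181-bcd1-f70e880b49fc}',
        '{8c9dd1ad-e6e5-4b07-b455-684a9d879900}'

foreach ($key in $keys) {
    # Remove each publisher key, ignoring ones already absent.
    Remove-Item -Path (Join-Path $publishers $key) -Recurse -ErrorAction SilentlyContinue
}
```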

Posted in Uncategorized | Comments Off