Archive for the ‘AD’ Category

HOWTO: Set Logon as a Service Dynamically via GPO

March 16, 2015 Leave a comment

I recently ran into a situation where a client has a group per server for Administrators, Remote Desktop Users, and, ideally, Service Accounts.  This may or may not be the best way of dealing with this, but it does solve a need by moving user access into AD instead of configuration on the local servers.  It’s a little easier to centralize, and it can be managed by administrators who have access to AD but not to the servers themselves (eg: HelpDesk users).  The problem, as indicated below, is that the rights for the service accounts/groups have been set manually on systems as they are built or as the need arises.  This has resulted in inconsistencies, as one might expect.  So I found a way to standardize and bring it all “back up to code”, as it were.



You have a need to set a user or group to have “Log on as a Service” or “Log on as a Batch Job” rights.  This can be done via the Local Security Policy (secpol.msc) or via GPO.  However, there are two obvious issues with this:

1) Using SECPOL.MSC means you’re editing the local security policy.  While this may be the only way to accomplish some things, it is decentralized and hard to maintain consistently. 

2) Using the GPO method only allows you to set a fixed set of user(s) or group(s) on the affected machines.

However, if you need a 1:1 relationship between a dynamically named group and the system, GPO’s and the Local Security Policy leave something to be desired.  There is no functionality within a GPO to say “apply GRP-%SERVERNAME%-SVC” and have it granted this right on each machine as needed – at least not for the Log on as a Service right.  Using other methods you can add members to existing groups with existing rights, but you cannot dynamically specify a group in this GPO location, affect the Local Security Policy, or set the right for that local group. 


  • Have each server/system have a group such as GRP-SERVER01-SVC group identifying service accounts.  This would be a company policy scenario, and would ensure that administration and auditing of local group memberships was ONLY done via Active Directory, and could be done via delegated rights by users who may not have rights to login to the server. 
  • Have the group apply only to the named server.  Eg: GRP-SERVER01-SVC should have rights on SERVER01, but not SERVER02 or SERVER03
  • If possible, one should also be able to add to the local group a GRP-ALLSERVERS-SVC for a service account that might be globally allowed. Eg: DOMAIN\svcAutomation, DOMAIN\svcBackup, etc. 
  • Centrally manageable
  • Automatic and dynamic – it updates and standardizes over time. 
  • OPTIONAL – also do similar for the pre-existing local groups of “Administrators” and “Remote Desktop Users” for a corresponding GRP-%COMPUTERNAME%-ADM and GRP-%COMPUTERNAME%-RDP as appropriate.


1) Obtain the file “NTRIGHTS.EXE” from the Windows 2003 Resource Kit found at

Unpack/install the Resource Kit and copy the file where appropriate. 

2) Copy the file centrally to a location that is accessible by the MACHINE account, not a user.  A great example would be to place the file in \\DOMAIN\NETLOGON, as this allows Read/Execute.

3) Create a script that will run in that location that contains the following:


@echo off 

net localgroup "Service Accounts" /add /Comment:"Used for allowing Service Accounts local rights" >> \\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG

\\SERVER\INSTALLS\BIN\ntrights +r SeServiceLogonRight -u "Service Accounts" -m \\%COMPUTERNAME% >> \\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG 


4) If required, this script can be called via PSEXEC and executed against a list of computers:


This MUST be run with the –u / –p switches to specify the user to run as, along with –h for “highest privileges”.  The –c switch must also be used to copy the batch file to the local system so it can run. 
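A hypothetical invocation might look like the following (the server list path and account name are placeholders, not from the original post):

```bat
REM Hypothetical example - run the batch file on every computer listed in serverlist.txt.
REM -c copies the batch file to each target, -h runs with highest privileges,
REM and -u specifies the account (PsExec prompts for the password if -p is omitted).
psexec @C:\BIN\serverlist.txt -u DOMAIN\AdminUser -h -c \\SERVER\INSTALLS\BIN\SET_LOGONASSERVICE.BAT
```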

You will see entries in the log similar to:

Granting SeServiceLogonRight to Service Accounts on \\NW-ADCS1... successful 

Granting SeServiceLogonRight to Service Accounts on \\NW-DC1... successful 

Granting SeServiceLogonRight to Service Accounts on \\NW-DC2... successful 

5) We now have a local group called “Service Accounts” and this local group has the rights “Logon as a Service”. 

We can verify this by running “SECPOL.MSC” on one of the servers and checking the rights assignments:


Sure enough, the local “Service Accounts” group is listed.

6) We can now handle the remainder of this via normal GPO’s using Group Policy Preferences Local Groups, with DYNAMIC naming. 

Open the GPO editor and create a new GPO and name it something obvious such as “LOCAL_RESTRICTED_GROUPS”, and then edit it.



7) Navigate to COMPUTER CONFIGURATION -> PREFERENCES -> CONTROL PANEL SETTINGS -> LOCAL USERS AND GROUPS.  Right click and select NEW -> LOCAL GROUP

8) Now we modify the properties for this group:


We will choose UPDATE for an action, as the group should already exist based on our previous work. 

The group name will be “SERVICE ACCOUNTS”. 

Click ADD to add members


This is where the magic comes in.  If you press the “…” beside the NAME, you can search for the group/user based on a traditional ADUC type search.  But we don’t want that.  Instead, place your cursor in the NAME field.  Press the F3 key:


We get a list of VARIABLES!  We want to use ComputerName so that we can reference the group as GRP-%COMPUTERNAME%-SVC and each computer will get its own group.  Click SELECT.


Note the variable shows %ComputerName% as expected.  Modify that as needed to have the GRP- and -SVC prefix and suffix.


Click OK to close this window.

I’ve chosen to also add an -ADM and -RDP group for Administrators and Remote Desktop Users, as this is another use case.


Close and save the GPO

9) Link your GPO appropriately:


Here I have a GROUPS-TEST OU and I have placed my NW-VEEAM01 server in this OU, along with the 3 associated groups.   This will limit impact during testing.

10) On the system in question, check the current group memberships:


11) On the system in question, run a “gpupdate /force”

12) Again on the system in question, confirm the updated group membership:


There you have it.  The ADM/RDP groups were easy as they not only pre-exist, but are pre-defined.  The complication really was the “Service Accounts” group, which does not pre-exist, has no special rights by default, and has no built-in direct way of adding those rights via the command line. 

The recommendation would be to run the SET_LOGONASSERVICE.BAT as part of the server build process/scripts, or have it pre-done in your deployment image/WIM/VM Template.  Equally, a PSEXEC run against all servers in the domain could force set this group on a periodic basis to ensure the rights existed.  Additional error checking could be built in to check if the command was successful, check if the domain group exists, create it if required, etc. 
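If one wanted to sketch that error checking (a hypothetical extension of the batch file above; server names and paths are the same placeholders as before):

```bat
@echo off
REM Hypothetical sketch - only create the local group if it does not already exist.
REM "net localgroup <name>" with no verb returns a non-zero errorlevel when the group is missing.
net localgroup "Service Accounts" >nul 2>&1
if errorlevel 1 (
    net localgroup "Service Accounts" /add /Comment:"Used for allowing Service Accounts local rights"
)
REM Grant the right, and log a failure marker if ntrights returns an error.
\\SERVER\INSTALLS\BIN\ntrights +r SeServiceLogonRight -u "Service Accounts" -m \\%COMPUTERNAME%
if errorlevel 1 echo %COMPUTERNAME%: ntrights FAILED >> \\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG
```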

Some post comments:

  • Remember that the local group has a SID.  If it is deleted and recreated with the same name, that won’t be enough – the Log on as a Service right will still be assigned to the old SID.
  • As the batch file creates the group with a description and we didn’t tell the GPO to do so, the GPO will create a new group if required, but with no description.  This is your identifier that something is off, and hopefully that helps you troubleshoot.

PoSH: Get-PatchingScheduleInfo.ps1 for SCCM

November 20, 2014 Leave a comment

In my previous two posts in my “Automate the Server Patching with SCCM 2012” series, I covered how we get the dates for patching and how we gather the server details:

PoSH: Get-PatchDate.ps1 for SCCM
Get the dates for patching windows based on day of month and a set business logic.

PoSH: Get-PatchDetails.ps1 for SCCM
Obtain information from AD Groups and Computer objects.

Next up, I’m going to use the Description field of the Patching Group in AD, along with the schedule detail from Get-PatchDate.ps1 to build another object we can pull from later as we start to do things like:

  • Send an e-mail with the details of the patching to the server owners for review
  • Let those owners know when the window(s) will occur
  • Use the same information when we start telling SCCM to create Deployment Packages with Maintenance and Deadline windows using the schedule information.

First, the code:

# Created By: Avram Woroch
# Purpose:
#   To obtain Patching Schedule information, which is contained in the Description field 
#    of the Patch Group object in AD.  We are assuming a group name of:
#       SRV-S0-Patching-<SET> (eg: SRV-S0-Patching-Prod1a)
#    Also we are assuming a Description field that contains 3 fields, delimited by ^ in the format of:
#       <Whatever>^<PatchWindowStart>^<PatchWindowEnd>
#    We don't store the patch day of month here, as we may need to do one-off patching
#   We are then left with an object called $objPatchScheduleList which contains:
#       $PatchGroupName $PatchingDate $WindowStart $WindowEnd
# Usage:
#    Get-PatchingScheduleInfo.ps1

# MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
$PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}

# Create a custom object that contains the columns that we want to export
$objPatchScheduleList = @()
Function Add-ToObject{ $Script:objPatchScheduleList += New-Object PSObject -Property @{ PatchingGroup = $args[0]; PatchingDate = $args[1]; WindowStart = $args[2]; WindowEnd = $args[3]; } }

$PatchingDate = ""

# Loop through each of the groups
ForEach ($Group in $PatchGroups) {
     # Get the group again, including its Description
     $PatchGroup = Get-ADGroup $Group -Properties description | Select Name,Description
     # Store the resulting group name 
     $PatchGroupName = $PatchGroup.Name
     # Split the group name to get the unique portion we commonly refer to it as - eg: PROD1A
     $PatchGroupTemp = $PatchGroupName -split "-"
     $PatchGroupSet = $PatchGroupTemp[3]
     $PatchGroupSet = $PatchGroupSet.Substring(0,$PatchGroupSet.Length-1)
     # Expand to the matching $PatchDay<SET> variable set earlier by Get-PatchDate.ps1
     $PatchingDateTemp = '$PatchDay'+$PatchGroupSet
     $PatchingDate = $ExecutionContext.InvokeCommand.ExpandString($PatchingDateTemp)
     # Create a $Desc array and use -split to use the delimiter to break apart the variables
     if ($PatchGroup.Description) {$Desc = $PatchGroup.Description -split "\^"}
     # WindowStart is Field1 after -split
     $WindowStart = $Desc[1]
     # WindowEnd is Field2 after -split
     $WindowEnd = $Desc[2]
     # Send those details out to the object defined earlier 
     Add-ToObject $PatchGroupName $PatchingDate $WindowStart $WindowEnd
}

This isn’t a lot different from the Get-PatchDetails, and the same sort of logic is used.  Build an object that we can reference later using existing data, and split apart some fields to make them more readily usable later on.

Our output is going to look like:

PS C:\bin> $ObjPatchScheduleList | ft -autosize

PatchingDate        WindowStart PatchingGroup                WindowEnd
------------        ----------- -------------                ---------
11/06/2014 00:00:00 08:00       SRV-S0-Patching-Dev1A        11:00    
11/06/2014 00:00:00 13:00       SRV-S0-Patching-Dev1B        16:00    
11/15/2014 00:00:00 09:00       SRV-S0-Patching-Prod1a       10:00    
11/15/2014 00:00:00 11:00       SRV-S0-Patching-Prod1b       16:00    
11/16/2014 00:00:00 21:00       SRV-S0-Patching-Prod2a       22:00    
11/16/2014 00:00:00 23:00       SRV-S0-Patching-Prod2b       23:59    
11/17/2014 00:00:00 08:00       SRV-S0-Patching-Prod3a       11:00    
11/17/2014 00:00:00 08:00       SRV-S0-Patching-Prod3b       11:00  

As you can see I’ve populated this with dummy information, but I can revise later. 

Some things I think of now as I look at it, but want to stop messing with it because it works:

  • I probably should store the “Short Patch Name” – eg: “PROD3B” in a column, might make the rest of the work later on a few less steps
  • I know I’m going to have situations where the WindowEnd is the next day in the AM – eg: 22:30-04:30.  I don’t yet know how I’m going to factor for that.  Probably some logic that says “if $WindowEnd < $WindowStart, $WindowEndDate = $PatchingDate+1”.  We’ll see.  I may find out that WindowEnd is better suited as WindowDuration with the # of hours.  But I wanted to make it easy to have in the Group Description field.
  • I have this feeling I might want to use actual AD Schema, but I’m not sure if it’s as maintainable as just telling someone to “Edit the Description”.  It also means that the Description becomes pretty critical, and someone modifying it without knowing that it’s used for this might break it.  In that event, one might run this script nightly and export the object to a CSV, so if someone ever DID mess up the Descriptions, you could VERY easily refer back to what they were at the time.  There’s many other ways you could deal with that though…
    A sample of the Computer object, with its Description:
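The overnight-window idea from the list above could be sketched like this (a minimal sketch; $WindowEndDate is a hypothetical new column, and it assumes $PatchingDate holds a [datetime]):

```powershell
# Hypothetical sketch - not part of the script yet.
# If the end time parses as "earlier" than the start time, the window crosses
# midnight, so the end date is the day after the patching date.
$WindowStartTime = [datetime]::ParseExact($WindowStart, "HH:mm", $null)
$WindowEndTime   = [datetime]::ParseExact($WindowEnd, "HH:mm", $null)
$WindowEndDate   = $PatchingDate
if ($WindowEndTime -lt $WindowStartTime) { $WindowEndDate = $PatchingDate.AddDays(1) }
```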


Next up – we send an e-mail with this detail!

Categories: AD, PowerShell, SCCM2012, Scripting

Windows Patching – What happens when you aren’t paying attention.

November 19, 2014 Leave a comment

Yesterday, I posted some details about MS14-068 and MS14-066, and of course today I have had to do some investigating into a few sites that have a variety of patching systems.  Some are using SCCM, some WSUS, some have policies and procedures, some don’t.  But I noticed a potential ‘perfect storm’ of situations that could cause some of them grief – and it was more than just one.

Let me draw you a picture of what is a pretty common environment:

  • WSUS exists for updates, because that’s “the responsible thing to do”
  • WSUS was likely configured some time ago, and no one likes it because it’s not sexy or fancy, so it doesn’t get any love.  Thus, it is probably running on Windows 2008 or 2008 R2.
  • Someone at some point *did* ensure that WSUS was upgraded or installed with WSUS 3.0 SP2

This all sounds pretty good, on the face of it.  Now let’s introduce some real world into this environment….

  • Someone decreed that they shall “only install Critical and Security Updates” – Updates, Update Rollups, Feature Packs, etc, would not be installed.
  • Procedures state that you will install updates that are a month old or older – so you’re staying 30 days out, which is reasonable – let someone else go on Day 0.
  • Those same procedures state that you will look at the list, and select the Critical and Security Updates from the last month, and approve them.
  • Nothing is stated for what to do about the current month’s patches – they are left as “unapproved” – but also not “declined”

Alright, so still pretty “common” and at face value, not that bad.  A year or two goes by, and now you introduce Windows 2012 and Windows 2012 R2 to the mix.  This itself is not a problem, but it’s where you start to see the cracks.  Without even having to look at the environment, I know already the things I want to be looking for….

  • Because the current month’s updates are not being “Declined”, they’re showing up in the list as “missing”.  If you have 10 updates, and 8 are approved and 2 are not, you will only ever possibly show 90% patched.  WSUS/WU knows the remaining two are “available” but not installed.  You want to decline those so only the 8 updates show up, with 100% success.  Otherwise, how do you know at a glance if a missing update is an approved one that SHOULD be there, or one from this month?  Your reporting is bad.  See:


  • Because the process counts on someone approving “last months” updates and not “all previous updates”, there’s almost certainly going to be some weird “gap” where there is a period of a few months that isn’t approved and isn’t installed for some reason.  But the “assumption” is that they’re all healthy.  Because the previous point doesn’t “decline” any updates, the reports for completion are untrustworthy – and/or never reviewed anyways.


  • Next, Windows 2012+ has been introduced.  There’s a KB that is required to be installed on the WSUS server *and* a rebuild of the WSUS agent package on the client to ensure compatibility.  See MS KB2734608.  Because this is an “Update” and neither Critical nor Security, it is not applied to either the WSUS server or the clients.



  • In order for the Windows 2012/2012R2 WU/WSUS behavior to actually be changed, you need GPO’s that Windows 2012/2012R2 understands.  In order for that to be true, you need 2012+ ADMX files in your GPO environment – preferably in your GPO “Central Store”.  But because Windows 2012 and 2012 R2 were likely “added to the domain” with no testing, studying, certification, or reading, this wasn’t done.  Equally, even if it WAS done, most likely someone is still editing the GPO’s on a 2008/2008R2 based Domain Controller – which wipes out the ADMX based changes and replaces them with ADM files and the subset of options that they understand.  You’ll never know this happened though, and even if you jump up and down and tell people not to do it, they will.


  • No one is ever doing a WSUS cleanup, so Expired, Superseded, etc updates are still present.  Which isn’t helping anyone.


So to make that detail a little shorter:

  • Choosing Critical and Security Updates only is causing you to miss out on *required* updates.  Stop being “fancy” – just select them all please.
  • Because you’re choosing “date ranges” of updates, you’re missing some from time to time.  Stop being “fancy” – select “from TODAY-## to END”
  • If you introduce a new OS to your environment, you need to ensure your AD and GPO’s support them.

On top of the Updates and Update Rollups above that cause those issues, let’s take a quick look at some of the other things that are NOT considered Critical or Security Updates:

November 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2:

That’s just ONE Update Rollup.  None of those fixes look like ANYTHING I’d want to happen to my servers.  </Sarcasm>  So why WOULDN’T I want to install those?  Yes, there may be features you’re not using.  Perhaps you don’t use DeDuplication or DFS-R.  Won’t it be fun later when you install those Roles/Features, and WSUS scans that server, and says “all good, nothing to update” for you?  Tons of fun!

So, long story short – please stop being fancy.  You’re introducing complexity and gaps into your environment, and actually making things harder.  This means more work for you and your staff and co-workers, who likely don’t have enough time and resources as it is.

Don’t pay technical debt…

PoSH: Get-PatchDetails.ps1 for SCCM

November 19, 2014 1 comment

In my continuing saga to automate SCCM 2012 Server patching, I’ve now progressed to being able to get the list of details for all the servers.  What we do here is first make some assumptions:

  • Patching Groups have a common and standardized naming:
        SRV-S0-Patching-{VARIABLE} where the {VARIABLE} is DEV1/PROD1/PROD2/PROD3
  • Computer Descriptions are standardized with a ^ delimiter character, with 3 fields:
        <ContactEmail>^<SupportHours>^<Role>
  • Each of the Patching Groups contains the servers that belong to each group

This script then does the following:

  • Obtains all the patch groups
  • Loops through the groups to get all the Computer Members
  • Loops through each Computer and gets its Description
  • Splits the Description into separate distinct fields
  • Puts this list into an array object so it can be used and processed later
    # Created By: Avram Woroch / / @AvramWoroch
    # Purpose:
    #   To collect AD based ComputerName, ContactEmail, Role, and PatchGroup 
    #   ContactEmail and Role are collected by using the ^ delimited AD Computer Object
    #   Description field in the format of:
    #     <ContactEmail>^<SupportHours>^<Role>
    # Usage:
    #    Get-PatchDetails.ps1
    # MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
    $PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}
    # Create a custom object that contains the columns that we want to export
    $objServerlist = @()
    Function Add-ToObject{ $Script:objServerlist += New-Object PSObject -Property @{ ComputerName = $args[0]; ContactEmail = $args[1]; Role = $args[2]; Group = $args[3]; } }
    # Loop through each of the groups
    ForEach ($Group in $PatchGroups) {
       # Look for all the Group Members in said group
       $Servers = Get-ADGroupMember "$Group"
       # Loop through each of those servers
       ForEach ($Server in $Servers) {
         # Search computers and get their Name and Description
         $ServersWithDesc = Get-ADComputer $Server -Properties description | Select Name,Description
         # Store the resulting server name 
         $ComputerName = $ServersWithDesc.Name
         # Create a $Desc array and use -split to use the delimiter to break apart the variables
         $Desc = $ServersWithDesc.Description -split "\^"
         # Email is Field0 after -split
         $ContactEmail = $Desc[0]
         # Role is Field2 after -split
         $Role = $Desc[2]
         # Send those details out to the object defined earlier 
         Add-ToObject $ComputerName $ContactEmail $Role $Group.Name
       }
    }
    # Uncomment to have the script display the array created - useful for troubleshooting or human interaction
    # $objServerlist | ft -autosize

The resulting output looks like:

PS C:\bin> C:\BIN\Get-PatchGroupDetail.ps1

ContactEmail    ComputerName   Group                   Role                                    
------------    ------------   -----                   ----                                    
SysAdminTeam    SERVD311       SRV-S0-Patching-Dev1A   CITRIX XenApp 6                         
SysAdminTeam    SERVD611       SRV-S0-Patching-Dev1B   SCOM 2012 Dev Server                    

From here we now have an array of details we can use and search through for upcoming steps. 

Some things I’ve learned through this process:

On to the next steps – making this all generate some HTML formatted e-mails to server/application owners about the upcoming patching!

Categories: AD, PowerShell, SCCM2012, Scripting

CVE-2014-6324, MS14-068, and you!

November 19, 2014 4 comments

By now, you’ve almost certainly heard of the Microsoft Update being released out of band, MS14-068, related to CVE-2014-6324, for an in-the-wild Kerberos exploit with some pretty serious ramifications.

Definitely check out this Microsoft Technet Blog post:

The relevant portions to me are:

Today Microsoft released update MS14-068 to address CVE-2014-6324, a Windows Kerberos implementation elevation of privilege vulnerability that is being exploited in-the-wild in limited, targeted attacks. The goal of this blog post is to provide additional information about the vulnerability, update priority, and detection guidance for defenders. Microsoft recommends customers apply this update to their domain controllers as quickly as possible.


The exploit found in-the-wild targeted a vulnerable code path in domain controllers running on Windows Server 2008R2 and below. Microsoft has determined that domain controllers running 2012 and above are vulnerable to a related attack, but it would be significantly more difficult to exploit. Non-domain controllers running all versions of Windows are receiving a “defense in depth” update but are not vulnerable to this issue.

Now, don’t take that to mean my stance is “Meh, don’t patch!”.  Quite the opposite.  As per the article:

Update Priority

  1. Domain controllers running Windows Server 2008R2 and below
  2. Domain controllers running Windows Server 2012 and higher
  3. All other systems running any version of Windows

So get those DC’s patched _now_, and calmly plan to update the remaining servers.


But I’ve heard from a number of colleagues/twitter/posts today that this introduces chaos, makes a busy week worse, etc.  Certainly it is critical and important, but I’m not getting the frustration:

  • It immediately only applies to 2008R2 DC’s and lower.  Most small to mid size enterprises I know don’t have more than a couple dozen at best, and often far fewer.  So patch them.
  • You likely don’t have 2012R2 DC’s – for many reasons.  Too many legacy systems that don’t like 2012/2012R2 DC’s, you haven’t had time to get around to it, you haven’t tested, you’re afraid of them, whatever. 
  • They’re DC’s, they’re redundant.  Just patch the bloody things.

But I think it’s that last part that makes people lose their minds.  Folks, if you can’t reboot a DC in your environment, you’ve built a very poor system (or “have” one – maybe you inherited it – it’s still your job to make it better!).  Yes, you should minimize the downtime, so do it in a period of lower activity if you can, but if you have to wait for… 2:00AM on a Sunday, there’s a problem with what you’ve built.  I can probably even guess what these problems are:

  • Even though you likely have Windows Server Datacenter and virtualization (Hyper-V or VMware) for unlimited VM’s, someone is probably all freaked out about “server sprawl” – so you have fewer servers that you could have.
  • Which means you likely aren’t separating out roles
  • So your DC’s are likely serving double exponential duty also serving DNS, and DHCP, and PKI, and RADIUS, and, and, and. 
  • Failover/maintenance has never been tested.  So you have “redundant systems” and maybe tested the failover in a CONTROLLED fashion – but never tested the equivalent of a “power cord yank”.

Stop doing this. 

It doesn’t require a $5000 1U server to run a role any more.  Stop building like it’s 2003.  Server Sprawl is only a problem if you have lousy automation and processes for consistency.  Managing 53 or 153 servers shouldn’t be significantly different.  You SHOULD be able to reboot servers and services at any point in time without concern.  If you cannot, then even if you have multiple, you DO realize you have identified a failure point, right?

If your answer is something along the lines of “But we don’t know the impact it will have…” – seriously?  Why not?  You tested, right?  Your monitoring software will alert you of services or functions that fail when a dependent service fails?  You might have even built in rules to self-heal or scripts to try “the obvious fix”?

Probably not though.  Everyone’s too busy paying 28% “Technical Debt” on the big fancy expensive toys and software they bought that they didn’t get enough people to install completely or got button mashed until it “kinda worked” then the next fire stole the body away.  You know that “Cloud” thing everyone’s talking about and how all the CEO/CIO/Directors/Management “want it” but “don’t know what it does”?  It’s about automation, scale, and self-healing, with growth and shrinking elasticity.  Instead of “wanting it”, it’s time to “build it”. 

Or, we can just keep doing like we’ve always done – chasing the next hot thing, and killing symptoms instead of root causes.  That’s probably what will happen…


All that said, MS14-066, which addresses the SChannel issues, needs to be applied as well.  But as per many online sources, and KB 2992611, there are issues with this update that have resulted in it getting a re-issue.  Microsoft has a blog post about this as well:

Specific details you care about:

Update 16-11-2014: KB 2992611 has information on known issues.

Update 18-11-2014: V2 of the bulletin was released.  Details from the update:

Reason for Revision: V2.0 (November 18, 2014): Bulletin revised to announce the reoffering of the 2992611 update to systems running Windows Server 2008 R2 and Windows Server 2012. The reoffering addresses known issues that a small number of customers experienced with the new TLS cipher suites that were included in the original release. Customers running Windows Server 2008 R2 or Windows Server 2012 who installed the 2992611 update prior to the November 18 reoffering should reapply the update. See Microsoft Knowledge Base Article 2992611 for more information

So if you’ve already patched, you’ll need to re-patch. 

I wonder if this can be taken to be true:

As of writing, the MSRC and other security assets do not report any attacks in the wild since the issue was responsibly disclosed to Microsoft.  However it is only a matter of time….

Given the issues, and how this is introducing interoperability issues, it may be advisable to give some thought to how fast this update gets rushed into production.

Hope the above information helps, and sorry for my little detour into rant-ville.  I feel better now though, if it matters.

Categories: AD, WSUS

2008R2_LAB: Configure Monowall Firewall as a VM for a Windows 2008 R2 environment

August 5, 2013 Leave a comment

In order to set up an isolated Lab network, we need a way to handle the “isolation” part.  By doing so, we can allow the VM’s to still have internet access and/or access to the company LAN, but have no direct inbound access to them other than the vSphere console.  We also ensure that the internal LAN for the labs can be used without conflict with existing LAN’s.  For example, DHCP and PXE booting would then be safe to use.  To do so, we’ll use a M0n0wall appliance, as this works well on VMware Workstation, vSphere, etc.  This example will cover building this for a VMware vSphere environment, vs VMware Workstation – but the concepts carry across.

Information you will require to complete this task:

· The user the lab is for – eg: David Lock – we need this for the initials to use

· An existing PVLAN configured on the Lab vSphere host – eg: DL_PVLAN – or, a VMnet in VMware Workstation.

· The VLAN ID of the PVLAN – eg: 4005 – representing Subnet 5

· The Subnet to use for the LAN interface of the lab – eg:

· The IP address to use for the LAN interface of the lab – eg:

1) You will need to download the M0n0wall appliance from –  Note the specific link you want is:


For the GENERIC-PC-1.34-VM.ZIP

Select any appropriate mirror site to download from, and click the link.  Save the file when prompted, to a location such as C:\TEMP.



Unpack the zip file to a folder.  You’ll be left with a VMDK (disk) and a VMX (configuration) file. 

2) From the vSphere Client, browse to INVENTORY -> DATASTORES AND DATASTORE CLUSTERS. 


Find the datastore in use by the lab in question, right click and choose BROWSE DATASTORE.




Name the VM folder with the name of the VM.  DL-MONOWALL, for example.  Click OK.


Browse into the new folder on the left hand side.  Ensure it has the OPEN FOLDER icon. 




Browse to and select the VMDK file and click OPEN.

Repeat for the VMX file. 

From the DATASTORE BROWSER, right click on the VMX file and choose ADD TO INVENTORY.


Name the VM and choose the appropriate LAB folder for the user:


Eg: EDM -> LABS -> DL-VM’s and name “DL-MONOWALL”.  Click NEXT.


Choose the HOST/CLUSTER for the VM to live on and click NEXT.


Complete the installation by clicking FINISH.

3) In vCenter Client, choose INVENTORY -> VM’S AND TEMPLATES.


Locate the VM you just created, in the appropriate LABS -> DL-VM’S folder.  Right click and choose OPEN CONSOLE. 

4) This is the point where deploying from  the downloaded files or cloning an existing Lab Monowall VM would be similar.



Highlight both NIC’s and choose REMOVE.  Click OK.

Choose VM -> EDIT SETTINGS: again



The first NIC we will use an internal LAN VM Port Group (such as VMNET_0111):


Click NEXT.



Repeat the above for the second NIC, but in that case choose the appropriate LAB network (eg: DL_PVLAN). 


Click OK when completed.

Choose POWER ON:



Choose Option 1) INTERFACES so we can reverse the LAN/WAN ports from EM0/EM1 to EM1/EM0.


You will be asked if you want to set up VLAN’s (answer no).  Enter the LAN interface of “em1” and WAN interface of “em0”.  Press ENTER when finished.  When prompted, type Y to proceed with a reboot.

Choose Option #2 to change the LAN IP address:


Enter the IP address of 192.168.<VLANID>.1.  The DL_PVLAN for example is VLAN 4005, so we will use “5”.  The subnet mask is /24, and we will not enable DHCP.  Press ENTER to continue.

NOTE: If you need to find the XX_PVLAN VLAN ID, you can do this by browsing to the Lab Host, clicking on the CONFIGURATION tab, and choosing NETWORKING.  Locate the PVLAN VM Port Group:


Here you can see that DL_PVLAN is 4005, SL_PVLAN is 4006, etc.  Subtract “4000” from the VLAN ID to obtain the subnet ID – thus, 4005 would be “5”, 4016 would be “16”, and so on.
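The VLAN-to-address scheme above is simple enough to sketch in a few lines.  This is a hypothetical helper (the function name is made up; the VLAN IDs are the lab examples from this post):

```python
# Derive the Monowall LAN address from a lab PVLAN ID: subtract 4000
# from the VLAN ID to get the subnet ID, then plug it into 192.168.<subnet>.1.
def lan_ip_for_vlan(vlan_id: int) -> str:
    subnet = vlan_id - 4000          # e.g. 4005 -> 5, 4016 -> 16
    return f"192.168.{subnet}.1"

print(lan_ip_for_vlan(4005))  # 192.168.5.1 (DL_PVLAN)
print(lan_ip_for_vlan(4006))  # 192.168.6.1 (SL_PVLAN)
```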


Choose Option #3 to reset the password to “mono”.

Choose Option #5 to reboot the VM.

Now we have a working lab Monowall firewall. 


If you happen to be doing this work in VMware Workstation, then for the NICs in Step 4, the WAN NIC (VMnic0) would be on a BRIDGED VMnet and the LAN NIC would be on a HOST-ONLY network.

Some additional HOWTO’s to follow:

  • COMPLETE – HOWTO: Configure Monowall Firewall as a VM for a Windows 2008 R2 environment
  • HOWTO: Creating the first AD DC in a Windows 2008 R2 environment
  • HOWTO: Configuring DNS in a Windows 2008 R2 environment
  • HOWTO: Configuring DHCP in a Windows 2008 R2 environment
  • HOWTO: Configuring a Member Server to join a Windows 2008 R2 environment
  • HOWTO: Configuring WSUS in a Windows 2008 R2 environment
  • HOWTO: Configuring WDS in a Windows 2008 R2 environment
  • HOWTO: Installation and use of GPMC in a Windows 2008 R2 environment

HOWTO: Exchange 2010 ActiveSync reporting and policy filtering

March 16, 2013 Leave a comment

Recently we came across an issue with our Exchange 2010 environment related to ActiveSync and Apple iOS devices prior to firmware v6.1.2.  As such, we needed a way to not only get a report of users with device relationships by version/device, but also a means to set up a block for those devices if needed.  It turns out that Exchange has a built-in process for this by way of ActiveSync policies, whose state can be “Granted”, “Denied” or “Quarantined”.  In the case of a Quarantine, the user will get a message on their phone and will no longer be able to access the system.  However, upon remedying their issue, they will automatically be “Granted”, by nature of the new OS/firmware no longer matching the Quarantine policy search.  This works exceptionally well for us, and I will document the steps I’ve used over the last few days to make this all work.

1) Obtain a report of iOS users of all device types and version:

Get-ActiveSyncDevice | Where-Object {$_.DeviceOS -like "*iOS*"} | Select-Object UserDisplayName,DeviceType,DeviceOS,WhenChanged | Export-Csv e:\IOS_USERS.CSV

This should be relatively self-explanatory.  We’re getting ActiveSyncDevices where the DeviceOS column/field is anything containing *iOS*, and then outputting only the UserDisplayName,DeviceType,DeviceOS,WhenChanged fields, and then exporting it to a CSV file.  This CSV file can then be sorted and filtered as desired.

2) As we only had iOS v6.x devices, we needed to put in place Quarantine policies.  We could not, however, simply use “*iOS 6*” or “iOS 6.1*”, as this would also match the approved v6.1.2 version.  Also, while it MAY be possible to Quarantine “*iOS*” and then Grant “*iOS 6.1.2*”, this would result in v6.1.2 being the ONLY approved version, and when v6.1.3, v6.2 or v7.0 comes out, new policies would need to be put in place.  By creating only policies that match the existing v6.0, v6.1.0 and v6.1.1 devices for Quarantine, we avoid that issue:

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.0" -Characteristic DeviceOS -AccessLevel Quarantine

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.1 10B141" -Characteristic DeviceOS -AccessLevel Quarantine

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.1.1 10B145" -Characteristic DeviceOS -AccessLevel Quarantine

As you can see, it took three policies to get us the desired results.
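The reason exact build strings work better than wildcards can be sketched quickly.  This is a simplified stand-in for the QueryString match (treated here as a plain substring test – an assumption for illustration, not the documented matching algorithm); the quarantined strings are the ones from the three rules above, while the 10B146/11A465 build numbers are illustrative:

```python
# Quarantine rules pinned to the exact vulnerable builds; a device is
# quarantined only if one of these strings appears in its DeviceOS value.
QUARANTINE_RULES = ["iOS 6.0", "iOS 6.1 10B141", "iOS 6.1.1 10B145"]

def is_quarantined(device_os: str) -> bool:
    return any(rule in device_os for rule in QUARANTINE_RULES)

print(is_quarantined("iOS 6.0 10A5376e"))   # True  - vulnerable build is caught
print(is_quarantined("iOS 6.1 10B141"))     # True
print(is_quarantined("iOS 6.1.2 10B146"))   # False - patched build stays allowed
print(is_quarantined("iOS 7.0 11A465"))     # False - future builds need no new rules
```

A broad rule like "iOS 6.1" would have matched 6.1.1 and 6.1.2 alike, which is exactly the problem the exact-build approach avoids.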

3) To determine which devices are quarantined:

Get-ActiveSyncDevice | Where-Object {$_.DeviceAccessState -eq "Quarantined"} | Select-Object UserDisplayName,DeviceUserAgent,DeviceOS,DeviceAccessState | Format-Table -AutoSize

UserDisplayName                DeviceUserAgent                DeviceOS         DeviceAccessState
---------------                ---------------                --------         -----------------
<domain>/Calgary/users/xxxxx   Apple-iPad2C2/1002.141         iOS 6.1 10B141   Quarantined
<domain>/edmonton/users/xxxxx  Apple-iPad3C3/1001.537600005   iOS 6.0 10A5376e Quarantined
<domain>/edmonton/users/xxxxx  Apple-iPhone4C1/1001.537600005 iOS 6.0 10A5376e Quarantined


This will show the UserDisplayName, their DeviceUserAgent (useful for determining the type of device) and what DeviceOS they were running.  It is worth noting that following the update from a user, and the removal from Quarantine, a re-run of the above command will not show the user as removed; they simply are no longer Quarantined and so no longer appear in the list.  I confirmed this with my own device, as I upgraded from iOS 6.0.2 to iOS 6.1.2.

4) There also exists the ability to set the ActiveSyncOrganizationSettings to allow for an “administrator e-mail” account(s).  This lets us put in e-mail address(es) that can get an instant notification of when a device gets quarantined or blocked.  This way, we know as soon as the user knows.  While it is unlikely we would do so, we could even proactively contact the user after seeing the alert, to ask if they need assistance.

[PS] C:\Windows\system32>Set-ActiveSyncOrganizationSettings -AdminMailRecipients,

[PS] C:\Windows\system32>Get-ActiveSyncOrganizationSettings

RunspaceId                : 6b2980bc-0bd2-403b-a7d8-f8db66f969e8

DefaultAccessLevel        : Allow

UserMailInsert            :

AdminMailRecipients       : {,}

OtaNotificationMailInsert :

Name                      : Mobile Mailbox Settings

OtherWellKnownObjects     : {}

AdminDisplayName          :

ExchangeVersion           : 0.10 (

DistinguishedName         : CN=Mobile Mailbox Settings,CN=xxxxx,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=<DOMAIN>,DC=<DOMAIN>

Identity                  : Mobile Mailbox Settings

Guid                      : 5bbce140-80e4-494f-a7f1-900c0xxxxxx

ObjectCategory            : <domain>/Configuration/Schema/ms-Exch-Mobile-Mailbox-Settings

ObjectClass               : {top, msExchMobileMailboxSettings}

WhenChanged               : 3/13/2013 9:06:38 PM

WhenCreated               : 7/19/2011 4:19:40 PM

WhenChangedUTC            : 3/14/2013 3:06:38 AM

WhenCreatedUTC            : 7/19/2011 10:19:40 PM

OrganizationId            :

OriginatingServer         : <DC>.<DOMAIN_NAME>

IsValid                   : True

5) Finally, in the report from Step 1, it should be noted that users/mailboxes/devices that have not been properly/fully removed will still show up.  For example, even if Bob Smith’s account is disabled, that mailbox and its devices will show up.  Equally, I noted that my iPhone 4 was still showing, as I never did anything to remove the device.  More confusing still, my iPhone 5 (of which I have only one) showed up twice – once for iOS 6.0.2 and once for iOS 6.1.2.

I did attempt to purge my iOS 6.1.2 device to test what would happen, and upon my phone’s next sync, it emptied my mail folders, then refreshed and redownloaded all my mail and current calendar appointments.  When I checked to ensure that my sync folders were still accurate, all of my settings were intact.  No interaction on my part was needed to reconnect; I was not prompted for credentials or settings, etc.  As such, it seems that any device that is considered old, out of date or suspect is fair game to delete, and if it is in fact still active, it will simply recreate the relationship.
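To spot such stale or duplicate partnerships in the Step 1 report, the CSV can be collapsed to the most recent record per user/device type.  This is a hypothetical sketch – the column names match the Step 1 export, but the sample data (and the assumption that WhenChanged exports in M/D/Y 12-hour format) are made up for illustration:

```python
import csv, io
from datetime import datetime

# Made-up sample standing in for e:\IOS_USERS.CSV from Step 1.
SAMPLE = """UserDisplayName,DeviceType,DeviceOS,WhenChanged
Bob Smith,iPhone,iOS 6.0 10A5376e,1/10/2013 9:00:00 AM
Bob Smith,iPhone,iOS 6.1.2 10B146,3/14/2013 3:06:38 AM
"""

def latest_devices(rows):
    # Keep only the most recently changed record per (user, device type),
    # so older duplicate partnerships drop out of the report.
    latest = {}
    for row in rows:
        key = (row["UserDisplayName"], row["DeviceType"])
        seen = datetime.strptime(row["WhenChanged"], "%m/%d/%Y %I:%M:%S %p")
        if key not in latest or seen > latest[key][0]:
            latest[key] = (seen, row)
    return [row for _, row in latest.values()]

for row in latest_devices(csv.DictReader(io.StringIO(SAMPLE))):
    print(row["UserDisplayName"], row["DeviceOS"])  # only the newer 6.1.2 record remains
```

Against the real export you would point `csv.DictReader` at the file instead of the inline sample.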

The last largely outstanding task is to find a way to *customize* the Quarantine message.  Each policy/filter should be able to have its own and, according to documentation, should be reachable via the ECP (eg: https://mail.<>/ECP), but I was having no luck getting it to do more than show “loading”.  Another day, perhaps…

Categories: ActiveSync, AD, Exchange, PowerShell