Archive

Archive for the ‘AD’ Category

HOWTO: Set Logon as a Service Dynamically via GPO

March 16, 2015

I recently ran into a situation where a client has a group per server for Administrators, Remote Desktop Users, and – ideally – Service Accounts.  This may or may not be the best way of dealing with things, but it does solve a need by moving user access into AD instead of configuration on local servers.  It is a little easier to centralize, and it can be managed by administrators who have access to AD but not to the servers themselves (eg: HelpDesk users).  The problem, as indicated below, is that setting the rights for the service accounts/groups has been done manually on systems as they are built or as the need arises.  This has resulted in inconsistencies, as one might expect.  So I found a way to standardize and bring it all “back up to code”, as it were.


PROBLEM:

You have a need to set a user or group to have “Log on as a Service” or “Log on as a Batch Job” rights.  This can be done via the Local Security Policy (secpol.msc) or via GPO.  However, there are two obvious issues with this:

1) Using SECPOL.MSC means you’re editing the local security policy.  While this may be the only way to accomplish the task, it is decentralized and difficult to maintain.

2) Using the GPO method only allows you to assign a fixed set of user(s) or group(s) to the affected machines.

However, if you need a 1:1 relationship between a dynamically named group and each system, GPO’s and the Local Security Policy leave something to be desired.  There is no functionality within a GPO to say “apply GRP-%SERVERNAME%-SVC” and have the right granted per machine – at least not for the Log on as a Service right.  Using other methods you can add members to existing groups that already hold the right, but you cannot dynamically specify a group in THIS GPO location, affect the Local Security Policy, or set the right for such a local group.

REQUIREMENT:

  • Have each server/system have a group such as GRP-SERVER01-SVC identifying service accounts.  This would be a company policy scenario, and would ensure that administration and auditing of local group memberships is done ONLY via Active Directory, and can be delegated to users who may not have rights to log in to the server.
  • Have the group apply only to the named server.  Eg: GRP-SERVER01-SVC should have rights on SERVER01, but not on SERVER02 or SERVER03
  • If possible, one should also be able to add to the local group a GRP-ALLSERVERS-SVC for service accounts that might be globally allowed.  Eg: DOMAIN\svcAutomation, DOMAIN\svcBackup, etc.
  • Centrally manageable
  • Automatic and dynamic – updates and standardizes over time
  • OPTIONAL – also do the same for the pre-existing local groups “Administrators” and “Remote Desktop Users”, with corresponding GRP-%COMPUTERNAME%-ADM and GRP-%COMPUTERNAME%-RDP groups as appropriate.

PROCESS:

1) Obtain the file “NTRIGHTS.EXE” from the Windows 2003 Resource Kit found at https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=17657

Unpack/install the Resource Kit and copy the file where appropriate. 

2) Copy the file centrally to a location that is accessible by the MACHINE account, not a user.  A great example would be to place the file in \\DOMAIN\NETLOGON, as this allows Read/Execute.

3) Create a script that will run in that location that contains the following:

====== SET_LOGONASSERVICE.BAT - BEGIN ======

@echo off

rem Create the local group (output logged centrally)
net localgroup "Service Accounts" /add /comment:"Used for allowing Service Accounts local rights" >> \\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG

rem Grant that group the Log on as a Service right on this machine
\\SERVER\INSTALLS\BIN\ntrights +r SeServiceLogonRight -u "Service Accounts" -m \\%COMPUTERNAME% >> \\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG

====== SET_LOGONASSERVICE.BAT - END ======

4) If required, this script can be called via PSEXEC and executed against a list of computers:

C:\bin>psexec @SERVER.LST -u DOMAIN\$USER$ -p  -h -d -C -f \\SERVER\SHARE\BIN\SET_LOGONASSERVICE.BAT

This MUST be run with the -u / -p switches to specify the user to run as, along with -h (“highest privileges”).  The -C switch must also be used, to copy the batch file to the local system so it can run.

You will see entries in the log similar to:

Granting SeServiceLogonRight to Service Accounts on \\NW-ADCS1... successful 

Granting SeServiceLogonRight to Service Accounts on \\NW-DC1... successful 

Granting SeServiceLogonRight to Service Accounts on \\NW-DC2... successful 

5) We now have a local group called “Service Accounts”, and this local group has the “Log on as a Service” right.

We can verify this by running “SECPOL.MSC” on one of the servers and checking the rights assignments:

[screenshot]

Sure enough, the local “Service Accounts” group is listed.
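
If you’d rather verify from a shell than the GUI, one approach (a quick sketch – note the exported INF lists rights by SID, so expect to see the group’s SID rather than its name):

# Export only the user-rights area of the local policy, then look for the right
secedit /export /cfg C:\Temp\rights.inf /areas USER_RIGHTS
Select-String -Path C:\Temp\rights.inf -Pattern "SeServiceLogonRight"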

6) We can now handle the remainder of this via normal GPO’s – Group Policy Preferences for Local Users and Groups – using DYNAMIC naming.

Open the GPO editor and create a new GPO and name it something obvious such as “LOCAL_RESTRICTED_GROUPS”, and then edit it.

7) Browse to COMPUTER CONFIGURATION -> PREFERENCES -> CONTROL PANEL SETTINGS -> LOCAL USERS AND GROUPS:

[screenshot]

Right click and select NEW -> LOCAL GROUP

8) Now we modify the properties for this group:

[screenshot]

We will choose UPDATE for an action, as the group should already exist based on our previous work. 

The group name will be “SERVICE ACCOUNTS”. 

Click ADD to add members

[screenshot]

This is where the magic comes in.  If you press the “…” beside the NAME field, you can search for the group/user with a traditional ADUC-type search.  But we don’t want that.  Instead, place your cursor in the NAME field and press the F3 key:

[screenshot]

We get a list of VARIABLES!  We want to use ComputerName so that we can reference the group as GRP-%COMPUTERNAME%-SVC and each computer will get its own group.  Click SELECT.

[screenshot]

Note the variable shows %ComputerName% as expected.  Modify it as needed to add the GRP- prefix and -SVC suffix.

[screenshot]

Click OK to close this window.

I’ve chosen to also add an -ADM and an -RDP group for Administrators and Remote Desktop Users, as this is another use case.

[screenshot]

Close and save the GPO

9) Link your GPO appropriately:

[screenshot]

Here I have a GROUPS-TEST OU and I have placed my NW-VEEAM01 server in this OU, along with the 3 associated groups.   This will limit impact during testing.

10) On the system in question, check the current group memberships:

[screenshot]

11) On the system in question, run a “gpupdate /force”

12) Again on the system in question, confirm the updated group membership:

[screenshot]

There you have it.  The ADM/RDP groups were easy, as they not only pre-exist but are pre-defined.  The complication really was the “Service Accounts” group, which does not pre-exist and has no special rights by default, nor any built-in, direct way of granting them via the command line.

The recommendation would be to run SET_LOGONASSERVICE.BAT as part of the server build process/scripts, or have it pre-done in your deployment image/WIM/VM template.  Equally, a PSEXEC run against all servers in the domain could force-set this group on a periodic basis to ensure the rights exist.  Additional error checking could be built in – check that the command succeeded, check whether the group exists and create it if required, etc.
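
As a rough illustration of that error checking, here is a minimal PowerShell sketch – it assumes the same share and log paths as the batch file above, and that ntrights.exe exits non-zero on failure (an assumption worth verifying):

# Minimal sketch: create the local group only if it is missing, then grant the right
$log = '\\SERVER\INSTALLS\BIN\logs\SET_LOGONASSERVICE.LOG'

# 'net localgroup <name>' exits non-zero when the group does not exist
net localgroup "Service Accounts" > $null 2>&1
if ($LASTEXITCODE -ne 0) {
    net localgroup "Service Accounts" /add /comment:"Used for allowing Service Accounts local rights" | Out-File $log -Append
}

# Grant the right; ntrights.exe is assumed to exit non-zero on failure
& '\\SERVER\INSTALLS\BIN\ntrights.exe' +r SeServiceLogonRight -u "Service Accounts" -m "\\$env:COMPUTERNAME"
if ($LASTEXITCODE -ne 0) {
    "$(Get-Date) ERROR granting SeServiceLogonRight on $env:COMPUTERNAME" | Out-File $log -Append
}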

Some post comments:

  • Remember that the local group has a SID.  If it is deleted and recreated with the same name, that won’t be enough – the Log on as a Service right will still be assigned to the old SID
  • As the batch file creates the group with a description and we didn’t tell the GPO to do so, the GPO will create a new group if required, but with no description.  That is your indicator that something is off, and hopefully it helps you troubleshoot.

PoSH: Get-PatchingScheduleInfo.ps1 for SCCM

November 20, 2014

In my previous two posts in my “Automate the Server Patching with SCCM 2012” series, I covered how we get the dates for patching and how we gather the details from AD:

PoSH: Get-PatchDate.ps1 for SCCM
Get the dates for patching windows based on day of month and a set business logic.

PoSH: Get-PatchDetails.ps1 for SCCM
Obtain information from AD Groups and Computer objects.

Next up, I’m going to use the Description field of the Patching Group in AD, along with the schedule detail from Get-PatchDate.ps1, to build another object we can pull from later as we start to do things like:

  • Send an e-mail with the details of the patching to the server owners for review
  • Let those owners know when the window(s) will occur
  • Use the same information when we start telling SCCM to create Deployment Packages with Maintenance and Deadline windows using the schedule information.

First, the code:

#
# Created By: Avram Woroch
# Purpose:
#   To obtain Patching Schedule information, which is contained in the Description field 
#    of the Patch Group object in AD.  We are assuming a group name of:
#       SRV-S0-PATCHING-PROD1A, SRV-S0-PATCHING-PROD2B, etc.  
#    also we are assuming a Description field that contains 3 fields, delimited by ^ in the format of:
#       <Whatever>^<PatchWindowStart>^<PatchWindowEnd>
#    We don't store the patch day of month here, as we may need to do one-off patching
#   We are then left with an $Object called $objPatchScheduleList which contains:
#       $PatchGroupName $PatchingDate $WindowStart $WindowEnd
# Usage:
#    Get-PatchingScheduleInfo.ps1

# MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
$PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}

# Create a custom object that contains the columns that we want to export
$objPatchScheduleList = @()
Function Add-ToObject{ $Script:objPatchScheduleList += New-Object PSObject -Property @{ PatchingGroup = $args[0]; PatchingDate = $args[1]; WindowStart = $args[2]; WindowEnd = $args[3]; } }

$PatchingDate = ""

# Loop through each of the groups
ForEach ($Group in $PatchGroups) 
{
     # Get the group, along with its Description property
     $PatchGroup = Get-ADGroup -Properties description $Group | Select Name,Description
     # Store the resulting group name
     $PatchGroupName = $PatchGroup.Name
     # Split the group name to get the unique portion we commonly refer to it as - eg: PROD1A
     $PatchGroupTemp = $PatchGroupName -split "-"
     $PatchGroupSet = $PatchGroupTemp[3]
     $PatchGroupSet = $PatchGroupSet.Substring(0,$PatchGroupSet.Length-1)
     # Build the name of the matching $PatchDay<SET> variable (populated by Get-PatchDate.ps1) and expand it
     $PatchingDateTemp = '$PatchDay'+$PatchGroupSet
     $PatchingDate = $ExecutionContext.InvokeCommand.ExpandString($PatchingDateTemp)
     # Create a $Desc array and use -split to use the delimiter to break apart the variables
     if ($PatchGroup.Description) {$Desc = $PatchGroup.Description -split "\^"}
     # WindowStart is Field1 after -split
     $WindowStart = $Desc[1]
     # WindowEnd is Field2 after -split
     $WindowEnd = $Desc[2]
     # Send those details out to the object defined earlier
     Add-ToObject $PatchGroupName $PatchingDate $WindowStart $WindowEnd
} 
$objPatchScheduleList 

This isn’t a lot different from Get-PatchDetails, and the same sort of logic is used: build an object that we can reference later using existing data, and split apart some fields to make them more readily usable later on.

Our output is going to look like:

PS C:\bin> $ObjPatchScheduleList | ft -autosize

PatchingDate        WindowStart PatchingGroup                WindowEnd
------------        ----------- -------------                ---------
11/06/2014 00:00:00 08:00       SRV-S0-Patching-Dev1A        11:00    
11/06/2014 00:00:00 13:00       SRV-S0-Patching-Dev1B        16:00    
11/15/2014 00:00:00 09:00       SRV-S0-Patching-Prod1a       10:00    
11/15/2014 00:00:00 11:00       SRV-S0-Patching-Prod1b       16:00    
11/16/2014 00:00:00 21:00       SRV-S0-Patching-Prod2a       22:00    
11/16/2014 00:00:00 23:00       SRV-S0-Patching-Prod2b       23:59    
11/17/2014 00:00:00 08:00       SRV-S0-Patching-Prod3a       11:00    
11/17/2014 00:00:00 08:00       SRV-S0-Patching-Prod3b       11:00  

As you can see I’ve populated this with dummy information, but I can revise later. 

Some things I think of now as I look at it, but want to stop messing with it because it works:

  • I probably should store the “Short Patch Name” – eg: “PROD3B” – in a column; it might save a few steps in the work later on
  • I know I’m going to have situations where the WindowEnd is the next day in the AM – eg: 22:30-04:30.  I don’t yet know how I’m going to factor for that.  Probably some logic that says “if $WindowEnd < $WindowStart, $WindowEndDate = $PatchingDate + 1 day” – see the sketch below.  I may find out that WindowEnd is better suited as a WindowDuration with the # of hours, but I wanted the value to be easy to keep in the Group Description field
  • I have this feeling I might want to use actual AD schema, but I’m not sure it’s as maintainable as just telling someone to “Edit the Description”.  It also means the Description becomes pretty load-bearing, and someone modifying it without knowing it is used for this might break things.  To guard against that, one might run this script nightly and export the object to a CSV, so if someone ever DID mess up the Descriptions, you could VERY easily refer back to what they were at the time.  There are many other ways you could deal with that, though…
    A sample of the Computer object, with its Description:

[screenshot]
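
For the overnight-window problem mentioned above, the fix will probably look something like the following – a minimal sketch, assuming WindowStart/WindowEnd stay as HH:mm strings and casting $PatchingDate to a date first:

# If the end time is "before" the start time, the window rolls into the next day
$start = [datetime]::ParseExact($WindowStart, 'HH:mm', $null)
$end   = [datetime]::ParseExact($WindowEnd,   'HH:mm', $null)
$WindowEndDate = [datetime]$PatchingDate
if ($end -lt $start) { $WindowEndDate = $WindowEndDate.AddDays(1) }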

Next up – we send an e-mail with this detail!

Categories: AD, PowerShell, SCCM2012, Scripting

Windows Patching – What happens when you aren’t paying attention.

November 19, 2014

Yesterday, I posted some details about MS14-068 and MS14-066 (https://vnetwise.wordpress.com/2014/11/19/cve-2014-6324-ms14-068-and-you/), and of course today I have had to do some investigating into a few sites that have a variety of patching systems.  Some are using SCCM, some WSUS; some have policies and procedures, some don’t.  But I noticed a potential ‘perfect storm’ of situations that could cause some of them grief – and at more than just one site.

Let me draw you a picture of what is a pretty common environment:

  • WSUS exists for updates, because that’s “the responsible thing to do”
  • WSUS was likely configured some time ago, and no one likes it because it’s not sexy or fancy, so it doesn’t get any love.  Thus, it is probably running on Windows 2008 or 2008 R2.
  • Someone at some point *did* ensure that WSUS was upgraded or installed with WSUS 3.0 SP2

This all sounds pretty good, on the face of it.  Now let’s introduce some real world into this environment….

  • Someone decreed that they shall “only install Critical and Security Updates” – Updates, Update Rollups, Feature Packs, etc., would not be installed.
  • Procedures state that you will install updates that are a month old or more – so you’re staying 30 days out, which is reasonable – let someone else go on Day 0.
  • Those same procedures state that you will look at the list, select the Critical and Security Updates from the last month, and approve them.
  • Nothing is stated about the current month’s patches – they are left “unapproved”, but also not “declined”.

Alright, so still pretty “common” and, at face value, not that bad.  A year or two goes by, and now you introduce Windows 2012 and Windows 2012 R2 into the mix.  This itself is not a problem, but it’s where you start to see the cracks.  Without even having to look at the environment, I already know the things I want to be looking for…

  • Because the current month’s updates are not being “Declined”, they show up in the list as “missing”.  If you have 10 updates and 8 are approved and 2 are not, you will only ever show 90% patched – WSUS/WU knows the remaining two are “available”, but “I don’t have them”.  You want to decline those so that only the 8 approved updates are counted and you can show 100% success.  Otherwise, how do you know at a glance whether a missing update is an approved one that SHOULD be there, or one from this month?  Your reporting is bad.  See: https://vnetwise.wordpress.com/2014/03/24/howto-tweaking-wsus-so-it-only-reports-on-updates-you-care-about/ – and for a scripted way to do the declining, see the sketch after this list.


  • Because the process counts on someone approving “last month’s” updates and not “all previous updates”, there’s almost certainly going to be some weird “gap” – a period of a few months that isn’t approved and isn’t installed for some reason.  But the “assumption” is that everything is healthy.  And because the previous point means nothing gets “declined”, the completion reports are untrustworthy – and/or never reviewed anyway.


  • Next, Windows 2012+ has been introduced.  There’s a KB that is required to be installed on the WSUS server, *and* a rebuild of the WSUS agent package on the clients, to ensure compatibility.  See MS KB2734608 (http://support.microsoft.com/kb/2734608).  Because this is an “Update” and neither Critical nor Security, it is applied to neither the WSUS server nor the clients.


  • In order for the Windows 2012/2012R2 WU/WSUS behavior to actually be changed, you need GPO’s that Windows 2012/2012R2 understands.  For that to be true, you need 2012+ ADMX files in your GPO environment, preferably in your GPO “Central Store” (again – https://vnetwise.wordpress.com/2014/03/20/howto-dealing-with-windows-2012-and-2012-r2-windows-update-behavior-and-the-3-day-delay/).  But because Windows 2012 and 2012 R2 were likely “added to the domain” with no testing, studying, certification, or reading, this wasn’t done.  Equally, even if it WAS done, most likely someone is still editing the GPO’s on a 2008/2008R2-based Domain Controller – which wipes out the ADMX-based changes and replaces them with ADM files and the subset of options that they understand.  You’ll never know this happened though, and even if you jump up and down and tell people not to do it, they will.


  • No one is ever doing a WSUS cleanup, so Expired, Superseded, etc., updates are still present – which isn’t helping anyone.
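
As a rough illustration of the “declining” I keep harping on – a minimal sketch using the WSUS administration .NET API (it assumes the WSUS console/assemblies are installed where it runs; the server name, SSL flag, and port are placeholders – 80 is the WSUS 3.0 default, 8530 on 2012):

# Decline superseded updates that are not already declined, so reports only count what matters
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.UpdateServices.Administration')
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer('WSUSSERVER', $false, 80)
foreach ($update in $wsus.GetUpdates()) {
    if (-not $update.IsDeclined -and $update.IsSuperseded) {
        $update.Decline()
        "Declined: $($update.Title)"
    }
}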


So to make that detail a little shorter:

  • Choosing Critical and Security Updates only is causing you to miss out on *required* updates.  Stop being “fancy” – just select them all please.
  • Because you’re choosing “date ranges” of updates, you’re missing some from time to time.  Stop being “fancy” – select “from TODAY-## to END”
  • If you introduce a new OS to your environment, you need to ensure your AD and GPO’s support them.

On top of the Updates and Update Rollups above that cause those issues, let’s take a quick look at some of the other things that are NOT considered Critical or Security Updates:

Take the November 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2.  That’s just ONE Update Rollup, and none of the fixes it contains look like ANYTHING I’d want to happen to my servers.  </Sarcasm>  So why WOULDN’T I want to install those?  Yes, there may be features you’re not using.  Perhaps you don’t use Deduplication or DFS-R.  Won’t it be fun later when you install those Roles/Features, and WSUS scans that server and says “all good, nothing to update”?  Tons of fun!

So, long story short – please stop being fancy.  You’re introducing complexity and gaps into your environment, and actually making things harder.  That means more work for you and your staff and co-workers, who likely don’t have enough time and resources as it is.

Don’t pay technical debt….

PoSH: Get-PatchDetails.ps1 for SCCM

November 19, 2014

In my continuing saga to automate SCCM 2012 Server patching, I’ve now progressed to being able to get the list of details for all the servers.  What we do here is first make some assumptions:

  • Patching Groups have a common and standardized naming:
        SRV-S0-Patching-{VARIABLE} where the {VARIABLE} is DEV1/PROD1/PROD2/PROD3
  • Computer Descriptions are standardized with a ^ delimiter character, with 3 fields:
        {ContactEmails}^{ignored}^{ServerRole}
  • Each of the Patching Groups contains the servers that belong to each group

This script then does the following:

  • Obtains all the patch groups
  • Loops through the groups to get all the Computer Members
  • Loops through each Computer and gets its Description
  • Splits the Description into separate distinct fields
  • Puts this list into an array object so it can be used and processed later

#
# Created By: Avram Woroch / Avram@netwise.ca / @AvramWoroch
# Purpose:
#   To collect AD-based ComputerName, ContactEmail, Role, and PatchGroup
#   ContactEmail and Role are collected by using a ^ delimited AD Computer Object
#   Description field in the format of:
#     <ContactEmail>^<SupportHours>^<Role>
# Usage:
#    Get-PatchDetails.ps1
#

# MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
$PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}

# Create a custom object that contains the columns that we want to export
$objServerList = @()
Function Add-ToObject{ $Script:objServerList += New-Object PSObject -Property @{ ComputerName = $args[0]; ContactEmail = $args[1]; Role = $args[2]; Group = $args[3]; } }

# Loop through each of the groups
ForEach ($Group in $PatchGroups)
{
   # Look for all the Group Members in said group
   $Servers = Get-ADGroupMember $Group
   # Loop through each of those servers
   ForEach ($Server in $Servers)
   {
     # Search computers and get their Name and Description
     $ServersWithDesc = Get-ADComputer -Properties Description $Server | Select Name,Description
     # Store the resulting server name
     $ComputerName = $ServersWithDesc.Name
     # Create a $Desc array and use -split to break the delimited fields apart
     $Desc = $ServersWithDesc.Description -split "\^"
     # Email is Field0 after -split
     $ContactEmail = $Desc[0]
     # Role is Field2 after -split
     $Role = $Desc[2]
     # Send those details out to the object defined earlier
     Add-ToObject $ComputerName $ContactEmail $Role $Group.Name
   }
}
# Display the array that was created - useful for troubleshooting or human interaction
$objServerList

The resulting output looks like:

PS C:\bin> C:\BIN\Get-PatchGroupDetail.ps1

ContactEmail    ComputerName   Group                   Role                                    
------------    ------------   -----                   ----                                    
SysAdminTeam    SERVD311       SRV-S0-Patching-Dev1A   CITRIX XenApp 6                         
SysAdminTeam    SERVD611       SRV-S0-Patching-Dev1B   SCOM 2012 Dev Server                    

From here we now have an array of details we can use and search through for upcoming steps. 
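
For example, a trivial illustration of querying it (the group name is assumed):

# Everyone to notify for a given patch group
$objServerList | Where-Object { $_.Group -eq 'SRV-S0-Patching-Dev1A' } | Select-Object ComputerName,ContactEmail,Role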

Some things I’ve learned through this process:

On to the next steps – making this all generate some HTML formatted e-mails to server/application owners about the upcoming patching!

Categories: AD, PowerShell, SCCM2012, Scripting

CVE-2014-6324, MS14-068, and you!

November 19, 2014

By now, you’ve almost certainly heard of the Microsoft update released out of band, MS14-068, related to CVE-2014-6324 – an in-the-wild Kerberos exploit with some pretty serious ramifications.

Definitely check out this Microsoft Technet Blog post: http://blogs.technet.com/b/srd/archive/2014/11/18/additional-information-about-cve-2014-6324.aspx

The relevant portions to me are:

Today Microsoft released update MS14-068 to address CVE-2014-6324, a Windows Kerberos implementation elevation of privilege vulnerability that is being exploited in-the-wild in limited, targeted attacks. The goal of this blog post is to provide additional information about the vulnerability, update priority, and detection guidance for defenders. Microsoft recommends customers apply this update to their domain controllers as quickly as possible.

And:

The exploit found in-the-wild targeted a vulnerable code path in domain controllers running on Windows Server 2008R2 and below. Microsoft has determined that domain controllers running 2012 and above are vulnerable to a related attack, but it would be significantly more difficult to exploit. Non-domain controllers running all versions of Windows are receiving a “defense in depth” update but are not vulnerable to this issue.

Now, don’t take that to mean my stance is “Meh, don’t patch!”.  Quite the opposite.  As per the article:

Update Priority

  1. Domain controllers running Windows Server 2008R2 and below
  2. Domain controllers running Windows Server 2012 and higher
  3. All other systems running any version of Windows

So get those DC’s patched _now_, and calmly plan to update the remaining servers.


But I’ve heard from a number of colleagues/twitter/posts today that this introduces chaos, makes a busy week worse, etc.  Certainly it is critical and important, but I’m not getting the frustration:

  • It immediately only applies to 2008R2 DC’s and lower.  Most small-to-mid-size enterprises I know don’t have more than a couple dozen at best, and often far fewer.  So patch them.
  • You likely don’t have 2012R2 DC’s – for many reasons.  Too many legacy systems that don’t like 2012/2012R2 DC’s, you haven’t had time to get around to it, you haven’t tested, you’re afraid of them, whatever. 
  • They’re DC’s, they’re redundant.  Just patch the bloody things.

But I think it’s that last part that makes people lose their minds.  Folks, if you can’t reboot a DC in your environment, you’ve built a very poor system (or “have” one – maybe you inherited it – it’s still your job to make it better!).  Yes, you should minimize the downtime, so do it in a period of lower activity if you can, but if you have to wait for… 2:00AM on a Sunday, there’s a problem with what you’ve built.  I can probably even guess what these problems are:

  • Even though you likely have Windows Server Datacenter and virtualization (Hyper-V or VMware) for unlimited VM’s, someone is probably all freaked out about “server sprawl” – so you have fewer servers that you could have.
  • Which means you likely aren’t separating out roles
  • So your DC’s are likely serving double exponential duty also serving DNS, and DHCP, and PKI, and RADIUS, and, and, and. 
  • Failover/maintenance has never been tested.  So you have “redundant systems” and maybe tested the failover in a CONTROLLED fashion – but never tested the equivalent of a “power cord yank”.

Stop doing this.

It doesn’t require a $5000 1U server to run a role any more.  Stop building like it’s 2003.  Server sprawl is only a problem if you have lousy automation and processes for consistency.  Managing 53 or 153 servers shouldn’t be significantly different.  You SHOULD be able to reboot servers and services at any point in time without concern.  If you cannot, then even if you have multiples, you DO realize you have identified a failure point, right?

If your answer is something along the lines of “But we don’t know the impact it will have…” – seriously?  Why not?  You tested, right?  Your monitoring software will alert you of services or functions that fail when a dependent service fails?  You might have even built in rules to self-heal or scripts to try “the obvious fix”?

Probably not though.  Everyone’s too busy paying 28% “Technical Debt” on the big fancy expensive toys and software they bought – which they didn’t get enough people to install completely, or which got button-mashed until it “kinda worked” before the next fire stole the bodies away.  You know that “Cloud” thing everyone’s talking about, and how all the CEO/CIO/Directors/Management “want it” but “don’t know what it does”?  It’s about automation, scale, and self-healing, with elastic growth and shrinking.  Instead of “wanting it”, it’s time to “build it”.

Or, we can just keep doing like we’ve always done – chasing the next hot thing, and killing symptoms instead of root causes.  That’s probably what will happen…


All that said – MS14-066, which addresses the SChannel issues, needs to be patched for as well.  But as per many online sources (http://windowsitpro.com/security/ms14-066-months-problem-patch, KB 2992611, http://www.infoworld.com/article/2849292/operating-systems/more-patch-problems-reported-with-the-ms14-066-kb-2992611-winshock-mess.html) there are issues with this update that have resulted in it getting a re-issue.  Microsoft has a blog post about this as well:

http://blogs.technet.com/b/rmilne/archive/2014/11/13/critical-schannel-vulnerability-ms14-066.aspx

Specific details you care about:

Update 16-11-2014: KB 2992611 has information on known issues.

Update 18-11-2014: V2 of the bulletin was released.  Details from the update:

Reason for Revision: V2.0 (November 18, 2014): Bulletin revised to announce the reoffering of the 2992611 update to systems running Windows Server 2008 R2 and Windows Server 2012. The reoffering addresses known issues that a small number of customers experienced with the new TLS cipher suites that were included in the original release. Customers running Windows Server 2008 R2 or Windows Server 2012 who installed the 2992611 update prior to the November 18 reoffering should reapply the update. See Microsoft Knowledge Base Article 2992611 for more information

So if you’ve already patched, you’ll need to re-patch. 

I wonder if this can be taken to be true:

As of writing, the MSRC and other security assets do not report that there are attacks in the wild since the issue was responsibly disclosed to Microsoft. However it is only a matter of time….

Given the known issues, and the interoperability problems this update is introducing, it may be advisable to give some thought to how fast it gets rushed into production.

Hope the above information helps, and sorry for my little detour into rant-ville.  I feel better now though, if it matters.

Categories: AD, WSUS

2008R2_LAB: Configure Monowall Firewall as a VM for a Windows 2008 R2 environment

August 5, 2013

In order to set up an isolated lab network, we need a way to handle the “isolation” part.  This lets the VM’s still have internet access and/or access to the company LAN, while having no direct inbound access to them other than the vSphere console.  It also ensures that the internal LAN for the labs can be used without conflicting with existing LAN’s – for example, DHCP and PXE booting are then safe to use.  To do this, we’ll use a M0n0wall appliance, as it works well on VMware Workstation, vSphere, etc.  This example covers building it for a VMware vSphere environment rather than VMware Workstation – but the concepts carry across.

Information you will require to complete this task:

  • The user the lab is for – eg: David Lock – we need this for the initials to use
  • An existing PVLAN configured on the Lab vSphere host – eg: DL_PVLAN – or a VMnet in VMware Workstation
  • The VLAN ID of the PVLAN – eg: 4005 – representing Subnet 5
  • The Subnet to use for the LAN interface of the lab – eg: 192.168.5.0/24
  • The IP address to use for the LAN interface of the lab – eg: 192.168.5.1/24

1) Download the M0n0wall appliance from http://m0n0.ch/wall/downloads.php.  Note the specific link you want is: http://m0n0.ch/wall/download.php?file=generic-pc-1.34-vm.zip

[screenshot]

For the GENERIC-PC-1.34-VM.ZIP

Select any appropriate mirror site to download from, and click the link.  Save the file when prompted, to a location such as C:\TEMP.

[screenshot]

[screenshot]

Unpack the zip file to a folder.  You’ll be left with a VMDK (disk) and a VMX (configuration) file. 

2) From the vSphere Client, browse to INVENTORY -> DATASTORES AND DATASTORE CLUSTERS. 

[screenshot]

Find the datastore in use by the lab in question, right click and choose BROWSE DATASTORE.

[screenshot]

Click on CREATE A NEW FOLDER.

[screenshot]

Name the VM folder with the name of the VM.  DL-MONOWALL, for example.  Click OK.

[screenshot]

Browse into the new folder on the left hand side.  Ensure it has the OPEN FOLDER icon. 

[screenshot]

Click UPLOAD FILES TO THIS DATASTORE.

[screenshot]

Browse to and select the VMDK file and click OPEN.

Repeat for the VMX file. 

From the DATASTORE BROWSER, right click on the VMX file and choose ADD TO INVENTORY.

[screenshot]

Name the VM and choose the appropriate LAB folder for the user:

[screenshot]

Eg: EDM -> LABS -> DL-VM’s and name “DL-MONOWALL”.  Click NEXT.

[screenshot]

Choose the HOST/CLUSTER for the VM to live on and click NEXT.

[screenshot]

Complete the installation by clicking FINISH.

3) In vCenter Client, choose INVENTORY -> VM’S AND TEMPLATES.

[screenshot]

Locate the VM you just created, in the appropriate LABS -> DL-VM’S folder.  Right click and choose OPEN CONSOLE. 

4) This is the point where deploying from  the downloaded files or cloning an existing Lab Monowall VM would be similar.

Choose VM -> EDIT SETTINGS:

[screenshot]

Highlight both NIC’s and choose REMOVE.  Click OK.

Choose VM -> EDIT SETTINGS again:

[screenshot]

Choose ETHERNET ADAPTER and click NEXT.

For the first NIC, we will use an internal LAN VM Port Group (such as VMNET_0111):

[screenshot]

Click NEXT.

[screenshot]

Click FINISH.

Repeat the above for the second NIC, but in that case choose the appropriate LAB network (eg: DL_PVLAN). 

[screenshot]

Click OK when completed.

Choose POWER ON:

[screenshot]

[screenshot]

Choose Option 1) INTERFACES so we can reverse the LAN/WAN ports from EM0/EM1 to EM1/EM0.

[screenshot]

You will be asked if you want to set up VLAN’s (no).  Enter the LAN interface of “em1” and WAN interface of “em0”.  Press ENTER when finished.  When prompted, type Y to proceed with a reboot.

Choose Option #2 to change the LAN IP address:

[screenshot]

Enter the IP address of 192.168.<SUBNETID>.1.  The DL_PVLAN, for example, is VLAN 4005, representing subnet 5, so we will use 192.168.5.1.  The subnet mask is /24, and we will not enable DHCP.  Press ENTER to continue.

NOTE: If you need to find the XX_PVLAN VLAN ID, you can do this by browsing to the Lab Host, clicking on the CONFIGURATION tab, and choosing NETWORKING.  Locate the PVLAN VM Port Group:

[screenshot]

Here you can see that DL_PVLAN is 4005, SL_PVLAN is 4006, etc.  Subtract 4000 from the VLAN ID to obtain the subnet ID – thus, 4016 would be 192.168.16.0/24, etc.

[screenshot]

Press Option #3 to reset the password to “mono”

Choose Option #5 to reboot the VM.

Now we have a working lab Monowall firewall. 

[screenshot]

If you happen to be doing this work in VMware Workstation, then in Step 4 the WAN NIC would go on a BRIDGED VMnet and the LAN NIC on a HOST-ONLY network.

Some additional HOWTO’s to follow:

  • COMPLETE – HOWTO: Configure Monowall Firewall as a VM for a Windows 2008 R2 environment
  • HOWTO: Creating the first AD DC in a Windows 2008 R2 environment
  • HOWTO: Configuring DNS in a Windows 2008 R2 environment
  • HOWTO: Configuring DHCP in a Windows 2008 R2 environment
  • HOWTO: Configuring a Member Server to join a Windows 2008 R2 environment
  • HOWTO: Configuring WSUS in a Windows 2008 R2 environment
  • HOWTO: Configuring WDS in a Windows 2008 R2 environment
  • HOWTO: Installation and use of GPMC in a Windows 2008 R2 environment

HOWTO: Exchange 2010 ActiveSync reporting and policy filtering

March 16, 2013

Recently we came across an issue with our Exchange 2010 environment related to ActiveSync and Apple iOS devices prior to firmware v6.1.2.  As such, we needed a way to not only get a report of users with device relationships by version/device, but also a means to set up a block for those devices if needed.  It turns out that Exchange has a built-in process for this by way of ActiveSync device access rules, whose state can be “Granted”, “Denied”, or “Quarantined”.  In the case of a Quarantine, the user will get a message on their phone and will no longer be able to access the system.  However, upon remedying their issue, they will automatically be “Granted” again, by nature of the new OS/firmware no longer matching the Quarantine rule.  This works exceptionally well for us, and I will document the steps I’ve used over the last few days to make this all work.

1) Obtain a report of iOS users of all device types and version:

Get-ActiveSyncDevice | where {$_.DeviceOS -like "*iOS*"} | select UserDisplayName,DeviceType,DeviceOS,WhenChanged | export-csv e:\IOS_USERS.CSV

This should be relatively self-explanatory.  We’re getting ActiveSyncDevices where the DeviceOS column/field is anything containing *iOS*, and then outputting only the UserDisplayName,DeviceType,DeviceOS,WhenChanged fields, and then exporting it to a CSV file.  This CSV file can then be sorted and filtered as desired.

2) As we only had iOS v6.x devices, we needed to put Quarantine rules in place.  We could not, however, simply match “*iOS 6*” or “iOS 6.1*”, as this would also match the approved v6.1.2 version.  Also, while it MAY be possible to Quarantine “*iOS*” and then Grant “*iOS 6.1.2*”, this would make v6.1.2 the ONLY approved version, and when v6.1.3, v6.2, or v7.0 comes out, new rules would need to be put in place.  By creating rules that Quarantine only the existing v6.0, v6.1.0, and v6.1.1 builds, we avoid that issue:

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.0" -Characteristic DeviceOS -AccessLevel Quarantine

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.1 10B141" -Characteristic DeviceOS -AccessLevel Quarantine

New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.1.1 10B145" -Characteristic DeviceOS -AccessLevel Quarantine

As you can see, it took 3 rules to get us the desired results.

3) To determine which devices are quarantined:

Get-ActiveSyncDevice | where-object {$_.DeviceAccessState -eq "Quarantined"} | select UserDisplayName,DeviceUserAgent,DeviceOS,DeviceAccessState | format-table -autosize

UserDisplayName                  DeviceUserAgent                DeviceOS          DeviceAccessState
---------------                  ---------------                --------          -----------------
<domain>/Calgary/users/xxxxx     Apple-iPad2C2/1002.141         iOS 6.1 10B141    Quarantined
<domain>/edmonton/users/xxxxx    Apple-iPad3C3/1001.537600005   iOS 6.0 10A5376e  Quarantined
<domain>/edmonton/users/xxxxx    Apple-iPhone4C1/1001.537600005 iOS 6.0 10A5376e  Quarantined


This will show the UserDisplayName, their DeviceUserAgent (useful for determining the type of device), and what DeviceOS they were running.  It is worth noting that after a user updates and is removed from Quarantine, a re-run of the above command will not show the user as removed – they simply are no longer Quarantined and do not show up in the list.  I confirmed this with my own device, as I upgraded from iOS 6.0.2 to iOS 6.1.2.

4) There is also the ability to set the ActiveSyncOrganizationSettings to allow for “administrator e-mail” account(s).  This lets us put in e-mail address(es) that get an instant notification when a device is quarantined or blocked.  This way, we know as soon as the user knows.  While it is unlikely we would do so, we could even proactively contact the user after seeing the alert, to ask if they need assistance.

[PS] C:\Windows\system32>Set-ActiveSyncOrganizationSettings -AdminMailRecipients helpdesk@netwise.ca, avram@netwise.ca

[PS] C:\Windows\system32>Get-ActiveSyncOrganizationSettings

RunspaceId                : 6b2980bc-0bd2-403b-a7d8-f8db66f969e8

DefaultAccessLevel        : Allow

UserMailInsert            :

AdminMailRecipients       : {helpdesk@netwise.ca, avram@netwise.ca}

OtaNotificationMailInsert :

Name                      : Mobile Mailbox Settings

OtherWellKnownObjects     : {}

AdminDisplayName          :

ExchangeVersion           : 0.10 (14.0.100.0)

DistinguishedName         : CN=Mobile Mailbox Settings,CN=xxxxx,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=<DOMAIN>,DC=<DOMAIN>

Identity                  : Mobile Mailbox Settings

Guid                      : 5bbce140-80e4-494f-a7f1-900c0xxxxxx

ObjectCategory            : <domain>/Configuration/Schema/ms-Exch-Mobile-Mailbox-Settings

ObjectClass               : {top, msExchMobileMailboxSettings}

WhenChanged               : 3/13/2013 9:06:38 PM

WhenCreated               : 7/19/2011 4:19:40 PM

WhenChangedUTC            : 3/14/2013 3:06:38 AM

WhenCreatedUTC            : 7/19/2011 10:19:40 PM

OrganizationId            :

OriginatingServer         : <DC>.<DOMAIN_NAME>

IsValid                   : True

5) Finally, in the report from Step 1, it should be noted that users/mailboxes/devices that have not been properly/fully removed will still show up.  For example, even if Bob Smith’s account is disabled, his mailbox and devices will show up.  Equally, I noted that my iPhone 4 was still showing, as I never did anything to remove the device.  More confusing: my iPhone 5 (of which I only have one) showed up twice – once for iOS 6.0.2 and once for iOS 6.1.2.

I did attempt to purge my iOS 6.1.2 device to test what would happen.  Upon my phone’s next sync, it emptied my mail folders, then refreshed and redownloaded all my mail and current calendar appointments.  When I checked to ensure that my sync folders were still accurate, all of my settings were intact.  No interaction on my part was needed to reconnect; I was not prompted for credentials or settings, etc.  As such, it seems that any device considered old, out of date, or suspect is fair game to delete – if it is in fact still active, it will simply recreate the relationship.
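
Building on that, here is a minimal sketch for finding (and, once reviewed, purging) stale partnerships – the 90-day threshold is an arbitrary assumption:

# List device partnerships that have not synced in 90+ days; review before un-commenting the removal
Get-ActiveSyncDevice | ForEach-Object {
    $stats = Get-ActiveSyncDeviceStatistics -Identity $_.Identity
    if ($stats.LastSuccessSync -lt (Get-Date).AddDays(-90)) {
        "{0} last synced {1}" -f $_.DeviceId, $stats.LastSuccessSync
        # Remove-ActiveSyncDevice -Identity $_.Identity -Confirm:$false
    }
}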

The last largely outstanding task is to find a way to *customize* the Quarantine message.  Each rule/filter should be able to have its own and, according to documentation, this should be reachable via the ECP (eg: https://mail.<domain.name>/ECP), but I was having no luck getting it to do more than show “loading”.  Another day, perhaps…….
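
In the meantime, one partial option from the shell: the organization settings we touched in Step 4 also carry a UserMailInsert field, which appends custom text to the notification mail – org-wide though, not per rule (the wording below is just an example):

# Org-wide (not per-rule) text appended to the quarantine/block notification e-mail
Set-ActiveSyncOrganizationSettings -UserMailInsert "Your device has been quarantined. Update iOS to 6.1.2 or later and access will resume automatically."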

Categories: ActiveSync, AD, Exchange, PowerShell

HOWTO: Exchange 2010 ActiveSync Group enable and disable PowerShell scripting

March 14, 2013

Within Exchange 2010, various ActiveSync rules exist for device partnerships with users.  A common management request is to ensure that users in a particular group are allowed to use ActiveSync.  At the company I work for, this group is “ActiveSyncAllowed”.  One request I had recently was to build a report of users who are:

  • In “ActiveSyncAllowed” but DO NOT HAVE ActiveSync enabled.
  • In “ActiveSyncAllowed” and DO HAVE ActiveSync enabled.
  • Not in “ActiveSyncAllowed” but DO HAVE ActiveSync enabled.

I was not able to find a good way to *report* on this, as the reporting needs to track exceptions and error levels to ensure that it worked, didn’t work, etc.  I was, however, able to find a process that, instead of reporting on this, simply *does* the work – ensuring that users in “ActiveSyncAllowed” have ActiveSync enabled, and that users not in the group have it disabled.  Very quick and dirty.

I found this detail via a blog post at LDAP389, an Active Directory blog – http://www.ldap389.info/en/2012/04/19/powershell-enable-disable-activesync-ad-group-rbac-exchange-scheduled-task/

Specifically, the script can be found at: http://www.ldap389.info/wp-content/uploads/2012/04/ManageActivesyncusers.txt

And the script itself, in case the link breaks:

===== CheckActiveSyncGroup.ps1 =====

# With this remoting session you do not need to install the Exchange Management Shell
# on the server; change the FQDN of the CAS server as appropriate
$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://<CASSERVER>.<ADDOMAINNAME>/PowerShell/
Import-PSSession $s -AllowClobber

# Import the AD module installed on the server
Import-Module ActiveDirectory

# Change the DN of the AD group that grants the ActiveSync access
$groupDN = "CN=ActiveSyncAllowed,CN=users,DC=<DOMAIN>,DC=<DOMAIN_SUFFIX>"

$members = Get-ADGroupMember -Identity $groupDN -Recursive | Get-ADUser -Properties mail
$allcas = Get-Mailbox -ResultSize:unlimited | Get-CASMailbox
$users = $allcas | Where-Object {$_.ActiveSyncEnabled -eq $true}

# Disable ActiveSync for anyone who has it enabled but is not in the group
foreach ($user in $users)
{
    $is = ""
    $is = $members | Where-Object {($_.DistinguishedName -eq $user.DistinguishedName)}
    if (!$is) {
        Set-CASMailbox -Identity $user.DistinguishedName -ActiveSyncEnabled $false -Confirm:$false
        # Log file is created in folder C:\BIN, change if necessary
        (Get-Date).ToString() + ' ' + [string]$user.PrimarySmtpAddress | Out-File C:\BIN\disable.txt -Append
    }
}

# Enable ActiveSync for group members who do not already have it enabled
foreach ($member in $members)
{
    $is2 = ""
    $is2 = $allcas | Where-Object {$_.DistinguishedName -eq $member.DistinguishedName}
    if (!$is2.ActiveSyncEnabled) {
        Set-CASMailbox -Identity $member.DistinguishedName -ActiveSyncEnabled $true -Confirm:$false
        # Log file is created in folder C:\BIN, change if necessary
        (Get-Date).ToString() + ' ' + [string]$member.mail | Out-File C:\BIN\enable.txt -Append
    }
}

===== CheckActiveSyncGroup.ps1 =====

The output you get from this is in two files:

C:\BIN\ENABLE.TXT

3/14/2013 3:40:25 PM

3/14/2013 3:44:03 PM avram@netwise.ca

3/14/2013 3:44:04 PM robin@netwise.ca

3/14/2013 3:44:04 PM

3/14/2013 3:48:20 PM

C:\BIN\DISABLE.TXT

3/14/2013 3:40:25 PM

3/14/2013 3:44:03 PM

3/14/2013 3:48:20 PM testuser@netwise.ca

As you can see, it adds a line on each run; if there was nothing to do, it just puts a date/timestamp.  If there was, it puts not only the time/date but also the e-mail address of the account it enabled or disabled.

Due to the PS modules in use, this is a little counter-intuitive: I ran this ON a DC, and NOT on an Exchange server or a system with the Exchange Management Shell installed.  There are probably another half dozen ways to skin this cat, but this one works very well.

This script could be put in place to run every hour, or nightly, and there is no reason that the ENABLE/DISABLE files could not be set to overwrite and then e-mail them as attachments after the task runs.  Lots of options.

Also, it is worth noting that this generally shows how to ensure that GroupA – and only GroupA – has RightsA.  A similar script could be written for OWA or IMAP or POP3 access for Exchange, or to set rights on folders, etc., as sketched below.
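
For example (same pattern, different Set-CASMailbox switches – a sketch, not something I have run here):

# The same group-sync logic, but gating OWA / POP3 / IMAP instead of ActiveSync
Set-CASMailbox -Identity $user.DistinguishedName -OWAEnabled $false -Confirm:$false
Set-CASMailbox -Identity $user.DistinguishedName -PopEnabled $false -Confirm:$false
Set-CASMailbox -Identity $user.DistinguishedName -ImapEnabled $false -Confirm:$false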

Hopefully this helps someone later with some powershell options.

Categories: ActiveSync, AD, Exchange, PowerShell

Active Directory authentication for ESXi v5.x

May 27, 2012

So you’ve got your nice little vSphere/ESXi v5.x environment going, and you’d like to add it to Active Directory.  This makes sense: if you do, you can control authentication to the host via AD and get some decent logging – like finding out who shut down the host or updated the SNMP parameters, etc.

[screenshot]

Click on your host.  Click on the CONFIGURATION tab.  Click on AUTHENTICATION SERVICES.  Then finally click PROPERTIES.

[screenshot]

Change the drop down from LOCAL to ACTIVE DIRECTORY.  Enter the FQDN of the domain, NOT the NetBIOS name (ie: NETWISE.CA vs NETWISE).  Click JOIN DOMAIN.  Enter the username/password and click JOIN DOMAIN.

[screenshot]

When it has joined, you’ll see it gray out the options, and the button changes to LEAVE DOMAIN.  Click OK.

Okay so let’s do something with this.  Let’s use PuTTY and SSH into the host (I’ve previously configured it both to allow SSH and allow Root via SSH) using an AD account.

[screenshot]

Well that’s odd.  Let’s try the fully qualified name of the account, using the DOMAIN\user format.

[screenshot]

Nope, no better.  Well that’s not terribly useful.

So we’ll go google this, of course.  And then we’ll find this link – http://v-front.blogspot.co.uk/2012/01/undocumented-parameters-for-esxi-50.html.  It turns out that vSphere/ESXi wants to use a group called “ESX Admins” to provide access.  I guess that’s great and all, but maybe I have a small environment and I just want Domain Admins to have rights.  Or I have a large environment where each cluster gets its own rights and groups, and I want an “ESX Admins – ClusterA” group, etc.

So let’s go back to the vSphere Client, back to the host and the CONFIGURATION tab, and click on ADVANCED SETTINGS.  Then browse to CONFIG -> HOSTAGENT -> PLUGINS -> HOSTSVC.  You can see that “Config.HostAgent.plugins.hostsvc.esxAdminsGroup” defaults to “ESX Admins”.  I’m going to change that to “Domain Admins”, which my user is a member of, and press OK.

[screenshot]
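
(If you’d rather script that than click through Advanced Settings, here’s a hedged PowerCLI sketch – it assumes a PowerCLI version with the Get/Set-AdvancedSetting cmdlets, and the host name is a placeholder:)

# Change the AD admin group on a host via PowerCLI (after Connect-VIServer)
Get-VMHost esx01 | Get-AdvancedSetting -Name 'Config.HostAgent.plugins.hostsvc.esxAdminsGroup' |
    Set-AdvancedSetting -Value 'Domain Admins' -Confirm:$false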

So we’ll try SSH again and…..

[screenshot]

That probably should have been expected.  We haven’t restarted anything, so the change hasn’t taken effect.  From a root SSH session, I’ve chosen to run “services.sh restart” to restart the services.  Other options would be to simply restart the host (a little drastic) or use the Restart Management Agents option from the DCUI.  Whatever gets the services restarted will do.

[screenshot]

And now, it works.

Except there is still a small problem.  There is no “sudo” to run root commands.  And you’re not root.  So you can’t really do anything.

[screenshot]

So you still have to run “su -” to get to a root prompt.  This isn’t a total loss though.  While you won’t know who ran commands as root, you CAN trace back through your logs, from the entry that concerns you to the last person who su’d to root, and generally know who ran the commands.  It’s not exact, but it is better than nothing.  So you still need to share the root password with your admins, I’m afraid.  But you could in theory make it so that root cannot SSH into the host directly, so users must enter as themselves and then su to root.

The same blog where I found the link about changing the group also has a link to a VMware Community Forum post asking for this feature to be fixed or enhanced – http://communities.vmware.com/thread/344466.  I’d really recommend that as many people as possible go add their support.

BTW, here’s the other reason you might want the host to have AD authentication:

[screenshot]

[screenshot]

It doesn’t quite work as you’d expect.  You CAN use your Windows credentials, but you CANNOT check the box that says “Use Windows session credentials”.  This is because it wants to pass your session through to the host, and the host doesn’t know how to deal with that, I guess.  vCenter does.  So if you do the same thing but type in your username and password:

[screenshot]

It will work just fine.

So, while you can’t do a lot from the command line with AD credentials and no root password, you CAN get into the console and manage everything through the vSphere client.  So as far as Tier 1 support staff getting in and checking on performance, which VM’s are up, etc. (assuming they cannot via vSphere directly – maybe you have some ESXi development boxes or something), you could at least let those staff into the console this way and monitor who did what.  If you need to do any sort of triage on the host, then you call in the next tier up, who has the root password.

Categories: AD, ESXi, vSphere

Enterprise CA PKI for Domains – 2 Tier, with Root & Subordinate

May 19, 2012

I’ve been fighting with wanting a PKI working in my environment for a while.  For all the typical reasons:

  • Required for 802.1x WiFi authentication
  • Required for 802.1x Wired authentication
  • Required for RADIUS
  • Required for IPSEC between domain systems
  • Required for internal web sites that have SSL based traffic without a 3rd party SSL including:
    • Actual web sites – ie: those hosted and created on IIS/Apache
    • Application/utility web sites – ie: Open Manage (OMSA, IT Assistant, OME, etc), Veeam Enterprise Manager, etc.
    • Hardware with administration pages – ie: Juniper ScreenOS/SSG, Dell PowerConnect switches, Dell iDRAC, Digi CM32, etc.

I finally got around to finding a way to make it work.  I don’t claim that this is 100% functional at this point, but it’s working.  I also don’t suggest that this is ideal – there is likely a better way.  By all means, let me know if there is!  Thanks in advance.  I cobbled together a little here and a little there and made it functional for me.  I’m sure there is more to do…..

With that said – the meat of this post…..

Would you like to know why companies don’t have PKI infrastructures?  Because it is FRUSTRATING AS HELL.  Wizardry might be easier.  Seriously, I could make dragons teleport in and breathe flaming acid on Orcs more easily.

Okay, so here is the VERY high level of what you want to know:

GENERAL PKI /  CA INFRASTRUCTURE:

1) Best link to get started:  http://security-24-7.com/windows-2008-r2-certification-authority-installation-guide/

a. READ IT VERY FREAKING CAREFULLY.  There is a lot of stuff that, if you skim, you’ll miss – some of it CRITICAL.  Or if you think you know better, you’ll mess it up – and realize why at step 43.  For example:

· “Log off the Subordinate CA”.  “Log onto the Root CA, do blah blah”.  “Log off the root CA”.  “Log onto the DC and edit the GPO to include the certificate”, “Log back onto the Subordinate CA”

WHAT YOU MISS IF YOU SKIM:

o You’re logging OFF the first box, so that after you modify the GPO, you’re re-processing the GPO when you login.  If you don’t, the stuff you added to the GPO isn’t there!  Go figure.

o If you don’t log off the DC after editing the GPO, there’s a good chance you didn’t close the GPO.  Or save it.  Or set it up to push out the next pass.

· Think about your naming.  You will NEVER be able to change it.

· I have NO idea how to do any sort of dual server clustering of your Subordinate CA’s.  You will almost certainly want some HA of it, especially in a corporate environment

2) After you do the above… now what?  There are no good details on how to TEST it.  You also haven’t added the Web Enrollment pieces, the Network Device Enrollment Service, etc.  We’ll get to that.
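
(For basic smoke tests, a hedged starting point from an elevated prompt on the Subordinate CA – these are standard certutil verbs; the certificate file name is a placeholder:)

# Is the CA service answering?
certutil -ping
# Dump basic CA configuration
certutil -CAInfo
# For an issued certificate, verify the chain and that the AIA/CDP URLs are reachable
certutil -verify -urlfetch issuedcert.cer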

3) http://blogs.technet.com/b/askds/archive/2011/04/11/designing-and-implementing-a-pki-series-wrapup-and-downloadable-copies.aspx – this is a great resource.  Not just this link and this 5-part series, but the blog in general.  Read it a few times.  Digest the information.  Let it sink in.  Read it again.  Then realize it is NOT digested.

4) Go find a computer.  Log in as a member of ENTERPRISE ADMINS.  Start MMC, add the CERTIFICATES snap-in, and choose COMPUTER ACCOUNT -> LOCAL COMPUTER.

a. Browse to COMPUTER and RIGHT CLICK.  Choose ALL TASKS -> REQUEST NEW CERTIFICATE.  Click NEXT.

[screenshot]

b. Look at that – we have some options.  All we care about (and all that you’ll have at this point) is COMPUTER and *maybe* IPSEC.  Click COMPUTER and NEXT.

[screenshot]

c. OOOH!  Look at that!

[screenshot]

Click FINISH.

d. I bring to you – certificates!  It works!

[screenshot]

5) Sooner or later you’re going to want to create a certificate request.  I can’t seem to do mine via the MMC, because I was getting the error shown here: http://pdconsec.net/blogs/davidr/archive/2008/08/13/No_2D00_Certificate_2D00_Template_2D00_In_2D00_Request.aspx

Rather than fight with it, I have accepted Dave’s solution.  It works fine for me; I’ll cope for now.  This is a “to be fixed” item for me, though.

6) http://blogs.technet.com/b/askds/archive/2010/05/25/enabling-cep-and-ces-for-enrolling-non-domain-joined-computers-for-certificates.aspx – you’re probably going to set it up like me, so that it uses Domain Credentials and a trusted computer to do enrollment.  Think about that for a second.  That works GREAT if your computers are all Windows, all domain-joined, and all local.  Now how are you planning on getting a certificate onto your Juniper router at a remote site that isn’t connected with a VPN because it’s in a DMZ?  How is your user going to request a cert for DirectAccess when he’s off-network and trying to set up for the first time?  There are a LOT of VERY LONG-TERM design issues to think about here.  For now, I’m stumbling.

Categories: AD, ADCS, GPO, PKI, SSL