Reimaged Computers Can’t Register Their DNS Records

This one took me a while to solve. The desktop guys kept coming to me stating that when they re-imaged a computer, it either didn’t ping or it had the wrong IP address. I found out later that they had changed their imaging methodology: before re-imaging any computer, they now delete the computer account first. I would guess that this netjoin hardening change is the reason.

When I went into DNS management, I could clearly see an “Account Unknown” entry in the ACL of the DNS record, which makes sense: the computer account registered the DNS record, but that computer account no longer existed. Until the DNS record is scavenged or deleted manually, the newly imaged computer is unable to update its own DNS record.

This led me down a path of many dead ends. I wrote a script to compare DHCP leases to DNS records. However, I soon found out that DHCP is not always a reliable source for the current IP address either: if someone moves from location to location, they leave a trail of leases behind, and only the most recent lease reflects the current IP. I then looked into making DHCP the owner and updater of all dynamic DNS records, but this too caused issues, such as duplicate DNS records.

I then tried to find any DNS records with “Account unknown” in the ACL, but the script ended up too complex and just didn’t work. It was back to basics: I only cared about recently deleted computer accounts, so why not just look for recently deleted computer accounts and then delete the DNS records for those accounts?

That’s exactly what dns_orphan_fix.ps1 does. It looks back 60 minutes for any deleted computer accounts and then attempts to delete the DNS records for those accounts. I run it from Task Scheduler every 30 minutes, so each deleted account gets processed twice, but I shouldn’t miss any deleted computer accounts this way. There is a $dryrun option you can flip to $true to make sure the script will operate the way you expect in your environment before setting it to $false to actually delete DNS records.
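
A minimal sketch of the approach, assuming the AD Recycle Bin is enabled and the RSAT ActiveDirectory and DnsServer modules are available ($zone and $dnsServer are placeholders for your environment):

# dns_orphan_fix.ps1 (sketch): delete A records left behind by recently deleted computer accounts
$dryrun    = $true                  # flip to $false to actually delete records
$zone      = "corp.example.com"     # placeholder: your AD-integrated zone
$dnsServer = "dc01"                 # placeholder: a DNS server hosting the zone
$cutoff    = (Get-Date).AddMinutes(-60)

# Find computer accounts deleted in the last 60 minutes (requires the AD Recycle Bin)
$deleted = Get-ADObject -IncludeDeletedObjects -Properties whenChanged -Filter {
    objectClass -eq "computer" -and isDeleted -eq $true -and whenChanged -ge $cutoff
}

foreach ($obj in $deleted) {
    # A deleted object's Name looks like "HOSTNAME<LF>DEL:<guid>"; keep only the hostname
    $hostname = ($obj.Name -split "`n")[0]

    $records = Get-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone `
        -Name $hostname -RRType A -ErrorAction SilentlyContinue

    foreach ($rec in $records) {
        if ($dryrun) {
            Write-Output "DRYRUN: would delete $($rec.HostName) -> $($rec.RecordData.IPv4Address)"
        }
        else {
            Remove-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone -InputObject $rec -Force
            Write-Output "Deleted $($rec.HostName)"
        }
    }
}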

  • Soli Deo Gloria

Adding .NET Framework 3.5 – Error Code 0x800f0954

Here we go again: another server, another error. Why can’t things just work properly? A consultant e-mailed me that they couldn’t load .NET Framework 3.5 on Windows Server 2019. “Easy peasy lemon squeezy,” I thought. Well, of course, it wasn’t that easy. Attempts to load this feature ended with error code 0x800f0954. What in Hades is error code 0x800f0954?

Time to hit the Google and, wow, there are a bunch of random articles on this error code. I already had a hunch this had something to do with WSUS. We use SCCM in our environment, and SCCM sets the WSUS server in the client registry to a WSUS server without any binaries (an empty WSUS server, if you will). I usually fix that by deleting the whole registry key HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU; the SCCM client will recreate it periodically. Unfortunately, this did not work. Where to look next? Our old friend C:\windows\logs\cbs\cbs.log, where we find:

2023-11-01 13:12:37, Info CBS External EvaluateApplicability, package: Package_8_for_KB5031005~31bf3856ad364e35~amd64~~10.0.4069.1, package applicable State: Installed, highest update applicable state: Installed, resulting applicable state:Installed
2023-11-01 13:12:37, Info CBS External EvaluateApplicability, package: Package_for_DotNetRollup~31bf3856ad364e35~amd64~~10.0.4069.1, package applicable State: Installed, highest update applicable state: Installed, resulting applicable state:Installed
2023-11-01 13:12:37, Info CBS DLWD: Expecting search returns 1 update, actual:0 [HRESULT = 0x800f0954 - CBS_E_INVALID_WINDOWS_UPDATE_COUNT_WSUS]
2023-11-01 13:12:37, Info CBS DWLD:Failed to do Windows update search [HRESULT = 0x800f0954 - CBS_E_INVALID_WINDOWS_UPDATE_COUNT_WSUS]
2023-11-01 13:12:37, Info CBS FC: WindowsUpdateDownloadFromUUP returns. [0x800F0954]
2023-11-01 13:12:37, Error CBS FC: CFCAcquirerWUClient::Download(136): Result = 0x800F0954
2023-11-01 13:12:37, Error CBS FC: CFCAcquirerWrapper::Execute(147): Result = 0x800F0954
2023-11-01 13:12:37, Info CBS Exec: Failed to download FOD from WU, retry onece. [HRESULT = 0x800f0954 - CBS_E_INVALID_WINDOWS_UPDATE_COUNT_WSUS]

It IS a WSUS problem, but why didn’t deleting the WindowsUpdate registry key help? It appears the Windows Update service only reads this registry key when it starts; if you change or delete the key while the service is running, you have to restart the service so it takes note of the change. Oh, and I do like the misspelling of “retry onece” in the logs.
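
Putting that together, the working sequence amounts to something like this (a sketch from an elevated PowerShell prompt; NET-Framework-Core is the .NET 3.5 feature name on Server):

# Remove the WSUS policy key, restart Windows Update so it re-reads policy,
# then retry the feature install now that it can reach Windows Update
Remove-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -Recurse -Force
Restart-Service -Name wuauserv
Install-WindowsFeature -Name NET-Framework-Core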

It also didn’t matter if I pointed PowerShell or DISM directly at the binaries in the SxS folder: it wasn’t having any of that without being able to reach out to Windows Update. Odd.

What’s so frustrating is that I cannot find this error code in any lookup tool such as helpmsg or CMTrace. It’s not documented anywhere I can find. If the program had spit out the whole error message instead of just a random hex code, I could have saved 30 minutes of my life for something really important, like fixing someone’s Office 365 mailbox after they deleted all of the e-mails out of it (oof).

  • Soli Deo Gloria

Get an Extra Month of Internet Service on the Calyx Institute Network

If you use my referral link, you can get an extra month of Internet service on the Calyx Institute network and I get an extra month of Internet service as well.

They use the T-Mobile network and your hotspot will have unlimited data.

Your mileage will vary based on location, but I get around 250Mbps using the hotspot. If you work from home, I highly suggest having a backup Internet option in case your main Internet goes out.

  • Soli Deo Gloria

ERROR_SXS_ASSEMBLY_MISSING Chaos

Tried to add the IIS and MSMQ features to a server and kept getting error 0x80073701: missing assembly file. Off to C:\windows\logs\cbs\cbs.log we go:

2023-09-06 07:02:33, Error CSI 00000009 (F) STATUS_SXS_ASSEMBLY_MISSING #2625634# from CCSDirectTransaction::OperateEnding at index 0 of 1 operations, disposition 2[gle=0xd015000c]
2023-09-06 07:02:33, Error CSI 0000000a (F) HRESULT_FROM_WIN32(ERROR_SXS_ASSEMBLY_MISSING) #2625476# from Windows::ServicingAPI::CCSITransaction::ICSITransaction_PinDeployment(Flags = 0, a = dbbb65b179c955b3c0186aa84fa6e087, version 10.0.17763.3165, arch amd64, nonSxS, pkt {l:8 b:31bf3856ad364e35}, cb = (null), s = (null), rid = 'Package_4455_for_KB5022286~31bf3856ad364e35~amd64~~10.0.1.7.5022286-8227_neutral', rah = (null), manpath = (null), catpath = (null), ed = 0, disp = 0)[gle=0x80073701]

On Google, I found this post, but I will save you the time: downloading said update, expanding it to a CAB file, and then adding the CAB file via DISM did absolutely nothing to fix the problem. Neither did running sfc /scannow or dism /online /cleanup-image /restorehealth.

The fix is to remove the keys referencing the bad KB from the registry under HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages, then try re-adding the roles from Server Manager. I suggest using Baretail to watch C:\windows\logs\cbs\cbs.log while you are doing this to see if additional errors come up (you may need to repeat this fix for multiple KBs; in my case, I would fix one and another KB would pop up).

Before running this script, run regedit using the psexec -s -i cmd trick so it runs under the SYSTEM account, then go to HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages, right-click on Packages, and grant SYSTEM full control. Trying to adjust permissions and take ownership of the registry keys within the script was a nightmare, so I went back to basics: I removed that logic and just set the permissions manually in the Registry Editor.

You’ll need to run the PowerShell script using the same SYSTEM trick above to avoid any permission issues removing the keys:

# Define the root path to search in
$rootPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages"

# Get all child items (keys) under the root path
$keys = Get-ChildItem -Path $rootPath

# Filter the keys based on the presence of the desired values in the name
$filteredKeys = $keys | Where-Object { $_.Name -like '*KB5022286*' -or $_.Name -like '*KB5027222*' }

# Loop through each matching key and remove it
$filteredKeys | ForEach-Object {
    # Extract the key's path
    $keyPath = $_.Name -replace 'HKEY_LOCAL_MACHINE', 'HKLM:'

    # Remove the key
    Remove-Item -Path $keyPath -Recurse -Force
}

Write-Output "Operation completed."

Now for the “root cause analysis”, a buzzword we love to throw around in IT: it appears someone completely cleared out the contents of C:\Windows\SoftwareDistribution on the server, and DISM could no longer find the source files for these KBs. However, other KBs pointed at this same (empty) folder and they worked just fine. Perhaps these specific KBs actually updated the core IIS files within the OS, and that’s why DISM was querying them during the IIS/MSMQ role add?

Perhaps a better solution is to copy the SoftwareDistribution folder from a server running the same server OS where the downloads have not been cleared out. I’m not sure the GUIDs would match up between two different servers, but it might be worth trying the next time this comes up. If you try this route yourself, you’ll need to temporarily disable and stop the Windows Update service on both servers, as it likes to lock files in this folder.
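
If you do experiment with that, the rough sequence might look like this (an untested sketch; \\donor is a placeholder for the healthy server):

# Stop and disable Windows Update so nothing locks the folder (do the same on the donor)
Stop-Service -Name wuauserv
Set-Service -Name wuauserv -StartupType Disabled

# Mirror the donor server's SoftwareDistribution folder over ours
robocopy \\donor\c$\Windows\SoftwareDistribution C:\Windows\SoftwareDistribution /MIR /R:1 /W:1

# Re-enable and restart Windows Update
Set-Service -Name wuauserv -StartupType Manual
Start-Service -Name wuauserv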

If you were also curious: Windows Update keeps working just fine after removing the bad KBs from HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages.

  • Soli Deo Gloria

Windows Server 2019 installation has failed

Recently, I’ve been doing in-place upgrades from Server 2012R2 to Server 2019 and having some issues. They usually sit at 32% or 37% for 4 to 6 hours before the setup continues on. At the very end, I’ll sometimes get a “Windows Server 2019 installation has failed” error message and nothing else. I then go and look for setuperr.log on the disk and always see these errors:

2023-08-21 15:07:33, Error [0x0808fe] MIG Plugin {D12A3141-A1FF-4DAD-BF67-1B664DE1CBD6}: WSLicensing: Error reading Server Info hr=0x80070490
2023-08-21 15:07:38, Error CSetupAutomation::Resurrect: File not found: C:\$WINDOWS.~BT\Sources\Panther\automation.dat[gle=0x00000002]
2023-08-21 15:07:38, Error SP CSetupPlatform::ResurrectAutomation: Failed to resurrect automation: 0x80070002[gle=0x00000002]
2023-08-21 15:07:38, Error SP CMountWIM::DoExecute: Failed to mount WIM file C:\$WINDOWS.~BT\Sources\SafeOS\winre.wim. Error 0x80070522[gle=0x00000522]
2023-08-21 15:07:38, Error SP Operation failed: Mount WIM file C:\$WINDOWS.~BT\Sources\SafeOS\winre.wim, index 1 to C:\$WINDOWS.~BT\Sources\SafeOS\SafeOS.Mount. Error: 0x80070522[gle=0x000000b7]
2023-08-21 15:07:38, Error SP ExecuteOperations: Failed execution phase Pre-Finalize. Error: 0x80070522
2023-08-21 15:07:38, Error MOUPG MoSetupPlatform: ExecuteCurrentOperations reported failure!
2023-08-21 15:07:38, Error MOUPG MoSetupPlatform: Using action error code: [0x80070522]
2023-08-21 15:07:38, Error MOUPG CDlpActionPreFinalize::ExecuteRoutine(545): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CDlpActionImpl > > >::Execute(441): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CDlpTask::ExecuteAction(3259): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CDlpTask::ExecuteActions(3413): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CDlpTask::Execute(1644): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CSetupManager::ExecuteTask(2478): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CSetupManager::ExecuteTask(2441): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CSetupManager::ExecuteInstallMode(883): Result = 0x80070522
2023-08-21 15:07:39, Error MOUPG CSetupManager::ExecuteDownlevelMode(390): Result = 0x80070522
2023-08-21 15:07:39, Error SP CDeploymentBase::CleanupMounts: Unable to unmount the directory C:\$WINDOWS.~BT\Sources\SafeOS\SafeOS.Mount. Error: 0xC142011C[gle=0xc142011c]
2023-08-21 15:07:41, Error MOUPG CSetupManager::Execute(282): Result = 0x80070522
2023-08-21 15:07:41, Error MOUPG CSetupHost::Execute(400): Result = 0x80070522

Using a trick from Sami Laiho, we can look up error codes using net helpmsg <4 digit number> or winrm helpmsg <hexcode>. 0x80070522 comes out to “A required privilege is not held by the client”, which is very odd. What I can tell you is that the failure to mount winre.wim is a red herring and has absolutely nothing to do with the actual problem. I suspect that when an error earlier in the pipeline cannot be ignored, a generic error is spit out regardless of what actually happened.
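
As a worked example, the low word of 0x80070522 is Win32 error 0x522, which is 1314 in decimal, so both lookups land on the same message:

PS> winrm helpmsg 0x80070522
A required privilege is not held by the client.

PS> net helpmsg 1314    # 0x522 hex = 1314 decimal
A required privilege is not held by the client.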

If you comb through setupact.log, nothing will stand out as a problem, and setupdiag.exe only works on Windows 10 and 11, so we are on our own for figuring this problem out.

Through trial and error, I figured out what was going on, so I will list the prep steps I now do. These took the process from 4 to 6 hours down to about 15 minutes, and I haven’t had any in-place setup failures since (a consolidated sketch of the scriptable steps follows the list).

  1. Block GPO inheritance on an OU and then move the server computer account to that OU.
  2. Delete everything in C:\windows\system32\GroupPolicy and then restart the server. You may have to turn on showing hidden items to see this folder.
  3. Run secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose to reset local group policy back to in-box defaults.
  4. Make sure all built-in Microsoft services are functioning such as the print spooler (we can disable it after the upgrade).
  5. Run dism /online /cleanup-image /restorehealth to repair any CBS store corruption.
  6. Remove any extraneous roles or programs.
  7. Run psexec -s -i cmd and then launch setup.exe. This runs the setup under the SYSTEM account, which has more permissions than just local Administrator. PSEXEC is part of the Sysinternals Suite, which you can download free from Microsoft.
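
Here is the consolidated sketch of the scriptable steps (2, 3, and 5), run from an elevated PowerShell prompt; note the reboot between the GroupPolicy wipe and the rest:

# Step 2: wipe local group policy files (hidden folder), then restart
Remove-Item -Path "$env:windir\System32\GroupPolicy\*" -Recurse -Force
Restart-Computer

# After the reboot:
# Step 3: reset local security policy back to in-box defaults
secedit /configure /cfg $env:windir\inf\defltbase.inf /db defltbase.sdb /verbose

# Step 5: repair any CBS store corruption
dism /online /cleanup-image /restorehealth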

Depending on what’s in your environment, you may have more to do after the in-place setup is done. As an example: I am upgrading Lansweeper scanning servers, which require .NET Framework 4.8. The in-place upgrade removes .NET Framework, so I need to reinstall it; luckily, I was able to determine that from the Application event log. Another one is the SCCM client: all of the custom WMI classes it uses get reset when the in-place upgrade is complete, so I need to uninstall the SCCM client (ccmsetup /uninstall) and then reinstall it.

Don’t forget to move your computer account back to its original OU and re-disable any services that need to stay off for security reasons.

Back to the 4 to 6 hour delay: I believe that is because I had disabled Internet Explorer on the servers using an SRP rule in a domain GPO. I was told that, for security reasons, Internet Explorer couldn’t run on our servers anymore, and there is no group policy setting to disable Internet Explorer on servers; that only exists for Windows 10, and only on later builds. It seems that Microsoft assumes the operating system is in a particular state, and if it’s not, setup has a hard time performing the in-place upgrade.

  • Soli Deo Gloria

Case of Operator Failure

We are retiring an old file server at work. One of the file shares held a bunch of text files logging people’s logoffs: username, date, time. The problem was that I had no idea where the script creating them was running from. RSOP.MSC didn’t show any scripts that would do this in the Logoff section of any applied GPOs, and searches of SYSVOL with Agent Ransack came up with nothing.

I decided to use an old Procmon trick from Sami Laiho: https://4sysops.com/archives/using-process-monitor-procmon-remotely/. Basically, this allows us to run Procmon remotely and in another user session, so we can trace events during user logons and logoffs. I did the remote Procmon trick, logged on and off, and then took a look at the PML file in Procmon. I searched for the name of the share and, behold, I found powershell.exe running a file called logout_oldadmin.ps1.
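
The trick boils down to something like this (a sketch; the article above has the full details, and the server name and paths here are placeholders):

# Start a capture on the remote server under SYSTEM, detached from any session
psexec \\fileserver -s -d C:\tools\procmon.exe /AcceptEula /Quiet /BackingFile C:\temp\logoff.pml

# ...log a test user on and off...

# Stop the capture, then copy C:\temp\logoff.pml locally and open it in Procmon
psexec \\fileserver -s C:\tools\procmon.exe /AcceptEula /Terminate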

Doing an e-mail search on the script name, I found some old gpresult HTML reports in old e-mail messages, which led me to the GPO that was firing this script off. The question is: why didn’t I see this in RSOP.MSC, and why didn’t Agent Ransack find the script?

RSOP.MSC hasn’t been supported since 2006, and Microsoft even warns it may not show all of the group policies. Instead, you are supposed to use gpresult /h report.html. Mea culpa.

The Agent Ransack issue… I had a date filter set and didn’t realize it. Oof! Today’s lesson: if you don’t find what you are looking for, use a different tool. When I dropped to a CMD session, mapped a drive to SYSVOL, and used “dir”, I could see the files.

  • Soli Deo Gloria

Case of the Print Server “Access Denied”

It’s been a while since I’ve done one of these “Case of” blog posts. Back in my desktop engineering days, you could find users doing all sorts of wacky stuff on their computers, and the stories of how I found and fixed those problems made for some interesting posts. Now that I am a sysadmin working mainly on servers, it’s only me and a few IT people making changes on the servers.

Another sysadmin asked me to look at why users couldn’t connect to any printers on a particular print server. We have 10 of these print servers, all pretty much identically configured and running Windows Server 2019. I won’t bore you with the many hours I spent restarting the server, installing a new printer and sharing it out, fiddling with random registry settings, running Procmon, and just trying other random off-the-wall stuff. The one weird thing was that I could install printers from the troubled server under my login, but not under that of a regular user. Yes, I did make that regular user a local administrator: it made no difference.

I surmised that someone in the IT department had been messing with the server, so I used Beyond Compare’s registry comparison feature to compare the problem server with a print server that was working properly.

With the problem server on the left and a working server on the right, you could see that the server role for print services was missing.

I went into Server Manager and sure enough: the print services role was missing on the troubled server. I thought I had hit the jackpot after re-adding the role, but alas, I still could not add a printer from the print server using a regular user account.

I started doing a “stare and compare” between the two servers and then I noticed something interesting: on the working server, the “View Server” permission for our staff group was checked; on the broken server, it was unchecked.

Upon checking that box, I could now add a printer from the troubled print server without any problems.

I couldn’t stop there, so I had to look up what the View Server permission was, and this is what Microsoft says:

View Server

The View Server permission assigns the ability to view the print server. Without the View Server permission, users cannot see the printers that are managed by the server. By default, this permission is given to members of the Everyone group.

Despite what Microsoft says, I could, as a regular user, go to \\bad_print_server\ and see all of the printers shared on the server; however, I couldn’t install any of them. It seems View Server really means “install printers from this server”.
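
To illustrate (a sketch; the server and printer names are placeholders), this is roughly what a regular user experienced before the box was re-checked:

# Enumerating the shared printers worked fine...
Get-Printer -ComputerName bad_print_server

# ...but connecting to one failed with Access Denied until View Server was granted
Add-Printer -ConnectionName \\bad_print_server\HR-Laser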

  • Soli Deo Gloria

Admin Assistant Freeware

Admin Assistant is a handy piece of software for sysadmins. It used to be known as AutoAdministrator back in 2014. Recently, I needed the ability to do a mass shutdown of servers, so I went searching for this tool again. It sits somewhere between PDQ Deploy and psexec in terms of functionality.

What I love about this program is that it’s free: free of ads and free of “let me get your e-mail address before you download this so I can spam you”. You can just download it and go. It does exactly what it says it does, and it does it very well.

  • Soli Deo Gloria

Windows 11 Install without Meeting Requirements

Windows 11 released to the world last week and, as I predicted, the TPM, Secure Boot, and CPU requirements can all be bypassed. Depending on how you are trying to install Windows 11, you have several options. If you are doing an in-place upgrade from within Windows 10 itself, you can do a Google search for AllowUpgradesWithUnsupportedTPMOrCPU. Creating this registry value will cause the Windows 11 setup program to ignore the CPU check and will allow you to proceed with TPM 1.2; however, you still need a TPM chip.
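
For reference, that value lives under the Setup\MoSetup key; a quick way to set it before running setup (from an elevated PowerShell prompt):

# Microsoft-documented bypass for the CPU check (TPM 1.2 minimum still required)
New-Item -Path "HKLM:\SYSTEM\Setup\MoSetup" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SYSTEM\Setup\MoSetup" -Name "AllowUpgradesWithUnsupportedTPMOrCPU" -Type DWord -Value 1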

To bypass all requirements, you need to run the Windows 11 install from a bootable USB stick. Copy the following into Notepad and save it as bypass.reg on the USB stick:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig]
"BypassTPMCheck"=dword:00000001
"BypassSecureBootCheck"=dword:00000001
"BypassRAMCheck"=dword:00000001
"BypassStorageCheck"=dword:00000001
"BypassCPUCheck"=dword:00000001

Boot to the Windows 11 setup using the USB stick. During the setup, you will get an error that your PC is not supported. Click back to the main screen. At this point, you can hit SHIFT-F10 to get a CMD prompt, type regedit, and then go to File > Import and import the bypass.reg above. You can now proceed with installing Windows 11.

Techpowerup did a really nice write-up on the process here:

https://www.techpowerup.com/287584/windows-11-tpm-requirement-bypass-it-in-5-minutes

Rufus now has a beta version with a “Windows 11 Extended Support” option that will create bootable media with all of these restrictions removed: https://github.com/pbatard/rufus/releases/. Note that you can only do clean installs using the bootable USB stick method; the upgrade option does not work from the bootable media.

Update: this now works for in-place upgrades as well! https://www.ghacks.net/2022/03/04/rufus-3-18-adds-support-for-windows-11-inplace-upgrade-bypasses/

  • Soli Deo Gloria