Windows Server Automation

TLDR
  • Windows Server backups were local only. No offsite copy.
  • Built an 8-line batch file that syncs the backup drive to S3 daily, organized by date.
  • aws s3 sync only transfers the delta, keeping costs and transfer time low.
  • S3 lifecycle policies handle retention automatically.

The gap

Our Windows Server backups were running fine. They'd dump to a dedicated backup volume on the server every night, and we had a good retention policy. But everything was local. If that backup drive died, or the server had a catastrophic failure, we had nothing offsite. That's the kind of thing you don't think about until it happens, and then it's too late.

I wanted the simplest possible solution: take whatever is on the backup drive and sync it to S3 once a day. No agents, no backup software licenses, no complex configuration. Just a scheduled task running a batch file.

The script

The whole thing is eight lines. It grabs the current date using WMIC (because getting a formatted date in a batch file is weirdly hard), builds an S3 path with the date as the folder name, and runs aws s3 sync.

@echo off
rem Pull the local date/time from WMIC as a long YYYYMMDDHHMMSS... string.
for /f "tokens=2 delims==" %%G in ('wmic os get localdatetime /value') do set datetime=%%G

rem Substring out the parts we need: chars 0-3 year, 4-5 month, 6-7 day.
set year=%datetime:~0,4%
set month=%datetime:~4,2%
set day=%datetime:~6,2%

rem Sync the backup volume into a date-named folder in the bucket.
aws s3 sync X:\. s3://bucket-name/%year%-%month%-%day%/

The aws s3 sync command is the key. It only uploads files that are new or have changed since the last sync. So even though the backup volume might be hundreds of gigs, the daily sync only transfers the delta. That keeps the transfer time and S3 costs reasonable.

Why a batch file

I could have written this in PowerShell. I could have used a proper backup agent. But the batch file approach has one thing going for it: there's nothing to break. No module dependencies, no framework updates, no service that needs to be running. The AWS CLI is the only dependency, and it's already on the server for other things. The batch file runs, syncs the files, and exits. If it fails, Task Scheduler logs the failure and we check it on Monday.
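For reference, wiring the script into Task Scheduler is a one-liner from an elevated prompt. This is a sketch; the task name, script path, and start time are placeholders, not the values from our server:

```bat
rem Register a daily task that runs the sync script at 2:00 AM as SYSTEM.
rem (Task name and path are examples; adjust to your environment.)
schtasks /Create /TN "S3BackupSync" ^
    /TR "C:\Scripts\Upload-BackupsToS3.bat" ^
    /SC DAILY /ST 02:00 /RU SYSTEM
```

Running it as SYSTEM avoids tying the task to a user account whose password might expire.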

Date-based folders

Organizing by date in S3 gives us point-in-time recovery without any backup software tracking it. Need to restore from last Tuesday? Go to the 2022-06-14 folder. The WMIC date parsing is the ugliest part of the script, but it works reliably. WMIC returns LocalDateTime as a YYYYMMDDHHMMSS string (followed by fractional seconds and a UTC offset), so you just substring out the year, month, and day.

S3 lifecycle policies

On the S3 side, I set up a lifecycle policy to transition objects to S3 Glacier after 30 days and delete them after 90 days. That keeps the cost down without having to manage retention in the script. The script just keeps syncing, and S3 handles the aging.
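The policy itself is a standard S3 lifecycle configuration. A sketch of the rule described above; the rule ID is a placeholder, and the empty prefix applies it to the whole bucket:

```json
{
  "Rules": [
    {
      "ID": "backup-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 90 }
    }
  ]
}
```

A rule like this can be applied from the console, or with aws s3api put-bucket-lifecycle-configuration pointing at the JSON file.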

It's not the most sophisticated backup strategy, but it gave us an offsite copy with almost zero setup cost. And honestly, the simplest solutions are usually the ones that keep running without attention, which is exactly what you want from a backup system.


View the script on GitHub

Upload-BackupsToS3.bat for daily Windows Server backup sync to S3.
