Automated Windows Disk Provisioning
Introduction
When deploying Windows virtual machines using VMware Aria Automation, additional disks are typically attached during provisioning but still require manual preparation inside the guest operating system.
Administrators must normally:
- Bring disks online
- Initialise the disk
- Create partitions
- Assign drive letters
- Format filesystems
- Apply consistent volume labels
These manual steps introduce operational overhead and configuration drift.
This guide demonstrates how to automate disk provisioning using Salt so that Windows disks are automatically prepared during VM deployment.
This approach aligns with the architecture pattern used throughout this site:
- Aria Automation defines infrastructure and workload intent.
- Salt enforces the operating system configuration.
- Configuration logic and configuration data remain separated.
Disk layout is declared using Salt pillar data and applied automatically when the system converges.
When the VM finishes deploying, the disks are already initialised, formatted, and mounted.
Architecture Overview
This implementation separates responsibilities between Aria Automation and Salt as follows:
Aria Automation is responsible for:
- Provisioning the virtual machine
- Attaching additional disks
- Setting the Salt `role` custom grain
Salt is responsible for:
- Applying the operating system baseline
- Evaluating the selected storage layout
- Initialising and mounting RAW disks inside Windows
At a high level, disk provisioning follows this workflow:
- Aria Automation provisions a Windows virtual machine
- Additional disks are attached during deployment
- Salt applies the `bringup` state
- The Windows profile includes the storage state
- The storage state reads pillar data and configures the disks
Key characteristics of this approach:
- Declarative configuration
- Role-based layouts
- Optional host-specific overrides
- Safe to re-run via the Salt CLI or RaaS GUI
- Clear separation between infrastructure provisioning and OS configuration
flowchart TD
A[Aria Automation Deployment] --> B[VM Provisioned]
B --> C[Additional Disks Attached]
C --> D[Salt Minion Installed]
D --> E[bringup/init.sls]
E --> F[profiles.windows]
F --> G[windows.storage]
G --> H[Disks Initialised and Mounted]
In this guide, the `windows.storage` state refers to the `states/windows/storage/init.sls` file.
Repository Structure

State Repository
We add the following new files to the state repository:
states/
└─ windows/
   └─ storage/
      ├─ init.sls
      └─ files/
         ├─ create_raw_disk.ps1
         └─ check_raw_disk_capacity.ps1
| Component | Purpose |
|---|---|
| `init.sls` | Main Salt state responsible for orchestrating disk provisioning |
| `create_raw_disk.ps1` | Initialises and formats a disk |
| `check_raw_disk_capacity.ps1` | Ensures enough RAW disks are available before changes begin |
This modular structure keeps Salt state logic clean while allowing complex disk operations to be handled in PowerShell.
Pillar Repository
A matching storage folder is added to the pillar repository:
storage/
├─ lookup.sls
├─ roles.yaml
└─ map.yaml
| Component | Purpose |
|---|---|
| `lookup.sls` | Selects role-based or host-specific storage data |
| `roles.yaml` | Defines standard disk layouts by role |
| `map.yaml` | Defines host-specific overrides |
Integrating Storage Automation with Bringup
This guide assumes Aria Automation will trigger a single Salt entry point during provisioning.
If you already use a central bringup/init.sls orchestration state, the storage automation can be
added to that existing pattern.
If you do not yet use that model, create one now. This gives Aria Automation a stable Salt entry point to call whenever a new VM is provisioned.
Create:
bringup/init.sls
# salt://bringup/init.sls
{% set osfam = grains.get('os_family', '') %}
{% set roles = grains.get('requested_roles', []) %}

include:
  - profiles.common
  - env
{% if osfam in ['RedHat', 'Debian', 'Suse'] %}
  - profiles.linux
{% elif osfam == 'Windows' %}
  - profiles.windows
{% endif %}
{# Simple role fan-out (no nested validation loops) #}
{% for r in roles %}
  - roles.{{ r|lower }}
{% endfor %}
This state acts as the single orchestration layer for VM configuration.
It:
- applies common baseline configuration
- applies the correct OS profile
- expands any requested application roles
Add Storage to the Windows Profile
To ensure disk provisioning runs automatically for Windows systems, update the Windows profile to include the new storage state.
Amend:
profiles/windows.sls
Example:
# salt://profiles/windows.sls

include:
  - windows.storage
This causes the storage state to run automatically whenever the Windows profile is included by bringup/init.sls.
Because the storage state only takes action when matching storage pillar data is present, it is safe to include as part of the standard Windows baseline.
Resulting Execution Path
When a VM is deployed via Aria Automation, the state flow becomes:
flowchart TD
A[Aria Automation Deployment] --> B[bringup/init.sls]
B --> C[profiles.common]
B --> D[env/init.sls]
B --> E[profiles.windows]
E --> F[windows.storage]
F --> G[Stage helper scripts]
F --> H[Validate RAW disk capacity]
F --> I[Initialise and mount disks]
Implement the Storage State
Create the file:
states/windows/storage/init.sls
{% set storage = salt['pillar.get']('storage', {}) %}
{% set disks = storage.get('disks', []) %}

{% if disks %}
stage-create-disk-script:
  file.managed:
    - name: C:\salt\scripts\create_raw_disk.ps1
    - source: salt://windows/storage/files/create_raw_disk.ps1
    - makedirs: True

stage-check-disk-script:
  file.managed:
    - name: C:\salt\scripts\check_raw_disk_capacity.ps1
    - source: salt://windows/storage/files/check_raw_disk_capacity.ps1
    - makedirs: True

check-raw-disk-capacity:
  cmd.run:
    - name: >
        powershell.exe -NoProfile -ExecutionPolicy Bypass
        -File C:\salt\scripts\check_raw_disk_capacity.ps1
        -DriveLettersCsv "{{ disks | map(attribute='drive_letter') | join(',') }}"
    - shell: cmd
    - require:
      - file: stage-create-disk-script
      - file: stage-check-disk-script

{% for disk in disks %}
add-disk-{{ disk.drive_letter }}:
  cmd.run:
    - name: >
        powershell.exe -NoProfile -ExecutionPolicy Bypass
        -File C:\salt\scripts\create_raw_disk.ps1
        -DriveLetter {{ disk.drive_letter }}
        -DriveLabel "{{ disk.label }}"
        -FileSystem "{{ disk.get('filesystem', 'NTFS') }}"
        -PartitionStyle "{{ disk.get('partition_style', 'GPT') }}"
    - shell: cmd
    - unless: >
        powershell.exe -NoProfile -ExecutionPolicy Bypass -Command
        "if (Get-Volume -DriveLetter '{{ disk.drive_letter }}'
        -ErrorAction SilentlyContinue) { exit 0 } else { exit 1 }"
    - require:
      - cmd: check-raw-disk-capacity
{% endfor %}
{% endif %}
This state performs three tasks:
- Stages helper PowerShell scripts on the minion
- Verifies enough RAW disks exist
- Creates and mounts disks defined in pillar
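The idempotence comes from the per-disk `unless` guard: a disk is only provisioned when its drive letter does not already exist as a volume. The decision can be sketched in Python (a stand-in for the PowerShell check, with `existing_letters` representing what `Get-Volume` reports on the minion):

```python
def disks_to_provision(declared, existing_letters):
    """Return the declared disks whose drive letter is not yet a volume.

    `declared` is the pillar storage:disks list; `existing_letters` is a
    hypothetical stand-in for the volumes already present in Windows.
    """
    return [d for d in declared if d["drive_letter"] not in existing_letters]


declared = [
    {"drive_letter": "F", "label": "Data"},
    {"drive_letter": "G", "label": "Logs"},
]

# F: already exists, so only G: would be provisioned on a re-run
print(disks_to_provision(declared, {"C", "F"}))
```

Because already-present volumes fall out of the list, re-running the state converges without reformatting existing drives.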
Disk Provisioning Script
Create:
states/windows/storage/files/create_raw_disk.ps1
param(
    [Parameter(Mandatory = $true)]
    [ValidatePattern('^[A-Z]$')]
    [string]$DriveLetter,

    [Parameter(Mandatory = $true)]
    [string]$DriveLabel,

    [Parameter(Mandatory = $false)]
    [string]$FileSystem = 'NTFS',

    [Parameter(Mandatory = $false)]
    [string]$PartitionStyle = 'GPT'
)

$ErrorActionPreference = 'Stop'

$existing = Get-Volume -DriveLetter $DriveLetter -ErrorAction SilentlyContinue
if ($existing) {
    Write-Output "Drive $DriveLetter already exists. Nothing to do."
    exit 0
}

$disk = Get-Disk |
    Where-Object {
        $_.PartitionStyle -eq 'RAW' -and
        -not $_.IsBoot -and
        -not $_.IsSystem
    } |
    Sort-Object Number |
    Select-Object -First 1

if (-not $disk) {
    throw "No suitable RAW disk found to provision."
}

if ($disk.IsOffline -or $disk.OperationalStatus -eq 'Offline') {
    Set-Disk -Number $disk.Number -IsOffline $false
}

if ($disk.IsReadOnly) {
    Set-Disk -Number $disk.Number -IsReadOnly $false
}

$disk = Get-Disk -Number $disk.Number

Initialize-Disk -Number $disk.Number -PartitionStyle $PartitionStyle -PassThru |
    New-Partition -UseMaximumSize -DriveLetter $DriveLetter |
    Format-Volume -FileSystem $FileSystem -NewFileSystemLabel $DriveLabel -Confirm:$false -Force

Write-Output "Provisioned disk $($disk.Number) as $DriveLetter`: with label '$DriveLabel'."
This script:
- selects the next available RAW disk
- brings it online if necessary
- initializes and formats the disk
- assigns the requested drive letter
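The selection rule in the `Get-Disk | Where-Object | Sort-Object` pipeline can be sketched in Python (field names here are illustrative stand-ins for the disk properties being filtered):

```python
def pick_raw_disk(disks):
    """Select the lowest-numbered RAW disk that is neither the boot nor the
    system disk, mirroring the PowerShell pipeline above."""
    candidates = [
        d for d in disks
        if d["partition_style"] == "RAW" and not d["is_boot"] and not d["is_system"]
    ]
    return min(candidates, key=lambda d: d["number"], default=None)


disks = [
    {"number": 0, "partition_style": "GPT", "is_boot": True, "is_system": True},
    {"number": 1, "partition_style": "RAW", "is_boot": False, "is_system": False},
    {"number": 2, "partition_style": "RAW", "is_boot": False, "is_system": False},
]

print(pick_raw_disk(disks)["number"])  # → 1, the lowest-numbered RAW disk
```

Sorting by disk number keeps the mapping deterministic: when several RAW disks are attached, repeated runs assign them to drive letters in the same order.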
Disk Capacity Validation
Create:
states/windows/storage/files/check_raw_disk_capacity.ps1
param(
    [Parameter(Mandatory = $true)]
    [string]$DriveLettersCsv
)

$ErrorActionPreference = 'Stop'

$driveLetters = $DriveLettersCsv.Split(',') | ForEach-Object { $_.Trim() }

$requiredCount = 0
foreach ($driveLetter in $driveLetters) {
    $existing = Get-Volume -DriveLetter $driveLetter -ErrorAction SilentlyContinue
    if (-not $existing) {
        $requiredCount++
    }
}

$availableDisks = Get-Disk |
    Where-Object {
        $_.PartitionStyle -eq 'RAW' -and
        -not $_.IsBoot -and
        -not $_.IsSystem -and
        $_.Number -ne 0
    }

$availableCount = @($availableDisks).Count

Write-Output "Requested new disks needed: $requiredCount"
Write-Output "Available RAW disks: $availableCount"

if ($availableCount -lt $requiredCount) {
    throw "Not enough RAW disks available."
}

Write-Output "Sufficient disks available."
This script validates that enough RAW disks are available before provisioning begins. This prevents partial configuration where only some of the requested drives are created.
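The arithmetic behind the check is simple: only requested drive letters that do not already exist as volumes consume a RAW disk. A minimal Python sketch of the same logic:

```python
def enough_raw_disks(requested_letters, existing_letters, raw_disk_count):
    """Mirror the capacity check: letters that already exist as Windows
    volumes need no new disk, so they are excluded from the requirement."""
    required = sum(1 for letter in requested_letters if letter not in existing_letters)
    return raw_disk_count >= required


# Three drives requested, F: already formatted, two RAW disks attached
print(enough_raw_disks(["F", "G", "H"], {"C", "F"}, 2))  # → True
```

Failing fast here, before any disk is touched, is what keeps a short-attached deployment from ending up half-configured.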
Configure Storage Pillar Data

The storage state is data-driven. It reads a storage definition from pillar and applies the declared disk layout.

Storage Lookup
Create:
storage/lookup.sls
{% import_yaml 'storage/roles.yaml' as storage_roles %}
{% import_yaml 'storage/map.yaml' as storage_map %}
{% set minion_id = grains['id'] %}
{% set role_name = grains.get('role') %}
{% set minion_storage = storage_map.get(minion_id, {}) %}
{% set role_storage = storage_roles.get(role_name, {}) if role_name else {} %}
{% if minion_storage.get('storage') %}
{{ minion_storage | yaml }}
{% else %}
{{ role_storage | yaml }}
{% endif %}
Storage Lookup Precedence
Storage configuration is resolved using the following order:
- Host-specific definition in `storage/map.yaml`
- Role-based definition in `storage/roles.yaml`
- No storage configuration
This allows standard role layouts to be applied automatically while still supporting exceptions.
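The same precedence can be sketched in Python, as a stand-in for the Jinja logic in `lookup.sls` (plain dictionaries replace the imported YAML files):

```python
def resolve_storage(minion_id, role, host_map, role_map):
    """Host-specific entry wins outright; otherwise fall back to the role
    layout; otherwise return no storage configuration at all."""
    host_entry = host_map.get(minion_id, {})
    if host_entry.get("storage"):
        return host_entry
    return role_map.get(role, {}) if role else {}


role_map = {"mssql": {"storage": {"disks": [{"drive_letter": "F", "label": "Data"}]}}}
host_map = {"sql-special-01": {"storage": {"disks": [{"drive_letter": "F", "label": "SQLData"}]}}}

# The host override fully replaces the role layout
print(resolve_storage("sql-special-01", "mssql", host_map, role_map))
```

Note that the host entry is returned whole rather than merged, matching the behaviour described for `map.yaml` below: an override replaces the role layout entirely.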
Role-Based Storage Layouts
Create:
storage/roles.yaml
Example entries:
mssql:
  storage:
    disks:
      - drive_letter: F
        label: Data
        filesystem: NTFS
        partition_style: GPT
      - drive_letter: G
        label: Logs
        filesystem: NTFS
        partition_style: GPT
      - drive_letter: H
        label: TempDB
        filesystem: NTFS
        partition_style: GPT

iis:
  storage:
    disks:
      - drive_letter: F
        label: WebData
        filesystem: NTFS
        partition_style: GPT
These layouts apply automatically to hosts with the matching role grain.
Optional Host Overrides
Create:
storage/map.yaml
Example entry:
sql-special-01:
  storage:
    disks:
      - drive_letter: F
        label: SQLData
      - drive_letter: G
        label: SQLLogs
If a host entry exists in map.yaml, it fully replaces the role-based layout.
Pillar Top File
Ensure the storage lookup is included for Windows systems. The example below exposes the above storage mapping to all Windows minions.
Example:
base:
  '*':
    - fileserver
  'os:Windows':
    - match: grain
    - storage.lookup
Testing the Storage State
During development or troubleshooting, the storage state can be applied directly from the Salt master; appending `test=True` to the command performs a dry run that reports what would change without touching the disks.
Example:
salt 'sql-prod-01' state.apply windows.storage
This is useful for validating:
- pillar rendering
- disk detection logic
- PowerShell script behaviour
In Aria Automation environments, this state would normally run automatically during the bringup process.
Integrating This Approach with Aria Automation
In this model Aria Automation provisions the virtual machine while Salt configures the guest operating system after deployment.
A typical deployment workflow is:
- The cloud template defines the machine role
- The cloud template attaches the required disks
- The Automation Config resource deploys and configures the Salt minion
- Aria Automation triggers the `bringup` state
- The Windows profile includes `windows.storage`
- The guest operating system is configured to match the declared layout
Define the Role in the Blueprint
In the blueprint, define an input for the server role.
Example:
inputs:
  role:
    type: string
    default: mssql
Then apply the role as a Salt grain during deployment.
Example Salt configuration step:
salt-config:
  type: Cloud.SaltStack
  properties:
    grains:
      role: ${input.role}
    stateFiles:
      - bringup
This ensures the deployed VM has a role grain that Salt can use when selecting the storage layout.
Define Disks in the Blueprint
Add the required disks to the VM resource.
Example:
resources:
  sqlvm:
    type: Cloud.vSphere.Machine
    properties:
      image: windows
      flavor: medium
      attachedDisks:
        - source: '${resource.dataDisk.id}'
        - source: '${resource.logDisk.id}'
        - source: '${resource.tempdbDisk.id}'
Example disk resources:
resources:
  dataDisk:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: 200
  logDisk:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: 100
  tempdbDisk:
    type: Cloud.vSphere.Disk
    properties:
      capacityGb: 50
These disks will appear in Windows as RAW disks, which the Salt storage state will detect and configure.
Example End-to-End Result
Blueprint defines:
role: mssql
Pillar role layout:
mssql:
  storage:
    disks:
      - drive_letter: F
        label: Data
      - drive_letter: G
        label: Logs
      - drive_letter: H
        label: TempDB
Blueprint attaches three disks.
When the VM deploys:
- The disks appear in Windows as RAW disks
- Salt applies `bringup`
- `profiles.windows` includes `windows.storage`
- The disks are initialised automatically
Result:
F: Data
G: Logs
H: TempDB
No manual disk configuration is required.
Benefits of This Approach
Using Aria Automation together with Salt provides:
- Automatic disk initialisation during VM deployment
- Consistent storage layouts for workload roles
- Declarative disk configuration managed through pillar data
- Separation of infrastructure provisioning and OS configuration
- Idempotent configuration enforcement
The blueprint defines the infrastructure topology, while Salt ensures the operating system converges to the declared configuration.
The result is a VM that is fully configured and ready for use immediately after deployment.
Operational Model
Typical workflow:
- User deploys a blueprint
- Blueprint assigns a server role
- Blueprint attaches disks
- Salt automatically configures the operating system
- Disk layout is applied according to role or host-specific pillar data
No manual guest disk preparation is required.
Summary
This approach provides a clean and scalable way to automate Windows disk provisioning in Aria Automation environments using Salt.
It works well because:
- Aria Automation controls the infrastructure lifecycle
- Salt controls guest operating system convergence
- Storage intent is data-driven through pillar
- Standard layouts can be reused across roles
- Exceptions can be handled with host-specific overrides
By integrating the storage state into the Windows profile and executing it through the bringup
process, disk configuration becomes part of the normal VM lifecycle rather than a manual Day-2 task.