About Google Cloud Hyperdisk


Google Cloud Hyperdisk is the newest generation of network block storage service in Google Cloud. Designed for the most demanding mission-critical applications, Hyperdisk offers a scalable, high-performance storage service with a comprehensive suite of data persistence and management capabilities. With Hyperdisk you can provision, manage, and scale your Compute Engine workloads without the cost and complexity of a typical on-premises storage area network (SAN).

Hyperdisk storage capacity is partitioned and made available to virtual machine (VM) instances as individual volumes. Hyperdisk volumes are decoupled from VMs, so you can attach, detach, and move volumes between VMs. Data stored on Hyperdisk volumes persists across VM reboots and deletions.

Hyperdisk volumes have the following features:

  • A Hyperdisk volume is mounted as a disk on a VM using an NVMe or SCSI interface, depending on the machine type of the VM.
  • Hyperdisk volumes feature substantially better performance than Persistent Disk. With Hyperdisk, you get dedicated IOPS and throughput with each volume, as compared to Persistent Disk where performance is shared between volumes of the same type.
  • Hyperdisk lets you scale storage performance and capacity to match your workload needs. IOPS and throughput can be adjusted up or down, but capacity can only be increased.
  • To access static data from multiple VMs, you can attach the same disk in read-only mode to hundreds of VMs.
  • The maximum total Hyperdisk capacity is 512 TiB for VMs with 32 or more vCPUs, and 257 TiB for VMs with 1 to 31 vCPUs.
  • Synchronous replication is available with Hyperdisk Balanced High Availability (Preview), which synchronously replicates data between two zones in the same region. This provides high availability (HA) for the disk data in the event of a single zonal failure.
  • Hyperdisk Extreme, Hyperdisk Balanced, and Hyperdisk ML are designed for sub-millisecond latencies.

The maximum number of Hyperdisk volumes you can attach to a VM varies by Hyperdisk type. See Hyperdisk limits per VM for details.

When to use Hyperdisk

Hyperdisk volumes use the NVMe or SCSI storage interface, depending on the VM machine type.

  • For most workloads, use Hyperdisk Balanced. Hyperdisk Balanced is a good fit for a wide range of use cases such as line-of-business (LOB) applications, web applications, and medium-tier databases that don't require the performance of Hyperdisk Extreme. Use Hyperdisk Balanced for applications where multiple VMs in the same zone simultaneously require write access to the same disk.

  • To protect your applications from a zonal outage, use Hyperdisk Balanced High Availability (Preview) to synchronously replicate the disk data across two zones in the same region. You can also use Hyperdisk Balanced High Availability for applications where multiple VMs in the same region simultaneously require write access to the same disk.

  • For machine learning workloads that require the highest throughput, use Hyperdisk ML. Hyperdisk ML offers the highest available throughput and, as a result, the fastest data load times. Faster data load times mean shorter accelerator idle times and lower compute costs. For large inference and training workloads, you can attach a single Hyperdisk ML volume to multiple VMs in read-only mode.

  • For performance-critical applications, use Hyperdisk Extreme if Extreme Persistent Disk isn't supported or doesn't provide enough performance. Hyperdisk Extreme disks feature higher maximum IOPS and throughput along with sub-millisecond latencies, and offer high performance for the most demanding workloads, such as high-performance databases.

  • For scale-out analytics workloads like Hadoop and Kafka, cold storage, and data drives for cost-sensitive apps, use Hyperdisk Throughput. Hyperdisk Throughput lets you flexibly provision capacity and throughput as needed. Hyperdisk Throughput offers increased efficiency and reduced TCO compared to Standard Persistent Disk volumes. For details, see Throughput for Hyperdisk Throughput.

For detailed performance information, see Performance limits.

Feature summary

The following table summarizes the differences between the various Hyperdisk types.

Hyperdisk type | Customizable throughput | Customizable IOPS | Shareable between VMs | Boot disk support
Hyperdisk Balanced | Yes | Yes | Yes | Yes*
Hyperdisk Balanced High Availability (Preview) | Yes | Yes | Yes | Yes*
Hyperdisk Extreme | No | Yes | No | No
Hyperdisk ML | Yes | No | Yes, in read-only mode | No
Hyperdisk Throughput | Yes | No | No | No
* You can't use a Hyperdisk Balanced or Hyperdisk Balanced High Availability disk as a boot disk if the disk is in multi-writer mode, even if the disk isn't attached to multiple VMs.

How Hyperdisk storage works

Hyperdisk volumes are durable network storage devices that your VMs can access, similar to Persistent Disk volumes. The data on each Hyperdisk is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.

Hyperdisk volumes use Titanium to achieve higher IOPS and throughput rates. Titanium offloads processing from the host CPU onto silicon devices deployed throughout the data center.

Hyperdisk volumes are located independently from your VMs, so you can detach or move Hyperdisk volumes to keep your data, even after you delete your VMs. Hyperdisk performance is decoupled from size, so you can update the performance of existing Hyperdisk volumes, resize them, or attach additional Hyperdisk volumes to a VM to meet your performance and storage space requirements.

Share Hyperdisk volumes between VMs

You can share a Hyperdisk volume between multiple VMs by simultaneously attaching the same volume to multiple VMs.

The following scenarios are supported:

  • Concurrent read-write access to a single volume from multiple VMs. Recommended for clustered file systems and highly available workloads like SQL Server Failover Cluster Instances. Supported for Hyperdisk Balanced and Hyperdisk Balanced High Availability (Preview) volumes.

  • Concurrent read-only access to a single volume from multiple VMs. This is more cost effective than having multiple disks with the same data. Recommended for accelerator-optimized machine learning workloads. Supported for Hyperdisk ML volumes.

You can't attach a Hyperdisk Throughput or Hyperdisk Extreme volume to more than one VM.

To learn about disk sharing, see Share a disk between VMs.

Encryption for Hyperdisk volumes

By default, Compute Engine protects your Hyperdisk volumes with Google-managed encryption. You can also encrypt your Hyperdisk volumes with customer-managed encryption keys (CMEK).

For more information, see About disk encryption.

Confidential Computing with Hyperdisk volumes

You can add hardware-based encryption to a Hyperdisk Balanced disk by enabling Confidential mode for the disk when you create it. You can use Confidential mode only with Hyperdisk Balanced disks that are attached to Confidential VMs.

For more information, see Confidential mode for Hyperdisk Balanced volumes.

Limitations for Hyperdisk

  • You can't create a machine image from a Hyperdisk volume.
  • You can't back up a disk in multi-writer mode with snapshots or images. You must disable multi-writer mode first.
  • You can't create an image from a Hyperdisk Extreme, Hyperdisk Throughput, or Hyperdisk Balanced High Availability volume.
  • You can't create an instant snapshot from a Hyperdisk volume.
  • You can't attach Hyperdisk Throughput or Hyperdisk Extreme volumes to more than one VM.
  • Hyperdisk Extreme, Hyperdisk ML and Hyperdisk Throughput volumes can't be used as boot disks.
  • You can attach a Hyperdisk ML volume to at most 100 VMs during each 30-second interval.
  • You can't create a Hyperdisk ML disk in read-write mode from a snapshot or a disk image. You must create the disk in read-only mode.
  • If you enable read-only mode for a Hyperdisk ML volume, you can't re-enable read-write mode.
  • If you create a Hyperdisk Balanced volume in Confidential mode, see additional limitations.

Hyperdisk limits per disk

The following table shows the maximum and minimum values you can use for a single Hyperdisk volume.

For details about throughput and IOPS for Hyperdisk, see About IOPS and throughput provisioning for Hyperdisk.

Property | Hyperdisk Balanced | Hyperdisk Balanced HA (Preview) | Hyperdisk Extreme | Hyperdisk ML | Hyperdisk Throughput
Min disk size | 4 GiB | 4 GiB | 64 GiB | 4 GiB | 2 TiB
Max disk size | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 32 TiB
Min IOPS | 3,000 IOPS | 3,000 IOPS | 2 IOPS per GiB of capacity | 16 IOPS per MBps of provisioned throughput | 4 random IOPS or 8 sequential IOPS per MBps of throughput
Max IOPS* | 500 IOPS per GiB of disk capacity, but not more than 160,000 | 500 IOPS per GiB of disk capacity, but not more than 100,000 | 1,000 IOPS per GiB of capacity, but not more than 350,000 | 16 IOPS per MBps of provisioned throughput | 4 random IOPS or 8 sequential IOPS per MBps of throughput
Min throughput | 140 MBps | 140 MBps | 256 KiB/s per provisioned IOPS | The greater of 0.12 MBps per GiB and 400 MBps per disk | The greater of 10 MBps per TiB or 20 MBps per disk
Max throughput* | IOPS divided by 4, but not more than 2,400 MBps | IOPS divided by 4, but not more than 1,200 MBps | 256 KiB/s per provisioned IOPS, but not more than 4,800 MBps | 1,600 MBps per GiB, but not more than 1,200,000 MBps | The lesser of 90 MBps per TiB or 600 MBps per disk
Frequency for changes | Every 4 hours for capacity and performance | Every 4 hours for capacity and performance | Every 4 hours for capacity and performance | Every 4 hours for capacity, every 6 hours for throughput | Every 6 hours for capacity, every 4 hours for performance
* Maximum achievable IOPS and throughput are ultimately limited by the machine type of the VM to which the disk is attached. Those limits are described later on this page.
Hyperdisk Balanced and Hyperdisk Balanced HA volumes with sizes of 4 or 5 GiB can only be provisioned with exactly 2,000 or 2,500 IOPS, respectively.

Hyperdisk limits per VM

This section describes the capacity limits that apply to using Hyperdisk volumes with a VM. The limits discussed don't apply to any Local SSD disks attached to the same VM.

Maximum total capacity per VM

The maximum total disk capacity (in TiB) across all Hyperdisk and Persistent Disk types that you attach to a VM depends on the number of vCPUs the VM has. The capacity limits are as follows:

  • For machine types with less than 32 vCPUs:

    • 257 TiB for all Hyperdisk or all Persistent Disk
    • 257 TiB for a mixture of Hyperdisk and Persistent Disk
  • For machine types with 32 or more vCPUs:

    • 512 TiB for all Hyperdisk
    • 512 TiB for a mixture of Hyperdisk and Persistent Disk
    • 257 TiB for all Persistent Disk

You can attach a combination of Hyperdisk and Persistent Disk volumes to a single VM but the total disk capacity for Persistent Disk can't exceed 257 TiB.
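These capacity rules can be sketched as a small helper (the function name is hypothetical; the values come from the limits above):

```python
def max_total_capacity_tib(num_vcpus, persistent_disk_only=False):
    """Maximum total attached disk capacity (TiB) per VM, per the limits above."""
    # VMs with 32 or more vCPUs get 512 TiB, unless every attached disk is
    # Persistent Disk, in which case the 257 TiB Persistent Disk cap applies.
    if num_vcpus >= 32 and not persistent_disk_only:
        return 512
    return 257
```

For example, a 64-vCPU VM mixing Hyperdisk and Persistent Disk can attach up to 512 TiB in total, although the Persistent Disk portion of that total is still capped at 257 TiB.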

Maximum number of disks per VM, across all types

The maximum number of individual Persistent Disk and Hyperdisk volumes you can attach to a VM is 128. So, if you have attached 28 Hyperdisk volumes to a VM, you can still attach up to 100 more Persistent Disk volumes to the same VM.

Maximum Hyperdisk volumes per VM

The maximum number of Hyperdisk volumes that you can attach to a VM depends on the number of vCPUs that the VM has, as described in the following table:

Number of vCPUs | Max Hyperdisk, all types* | Max Hyperdisk Balanced | Max Hyperdisk Balanced HA (Preview) | Max Hyperdisk Extreme | Max Hyperdisk ML | Max Hyperdisk Throughput
1 to 3 | 20 | 16 | 0 | 0 | 20 | 20
4 to 7 | 24 | 16 | 16 | 0 | 24 | 24
8 to 15 | 32 | 32 | 16 | 0 | 32 | 32
16 to 31 | 48 | 32 | 32 | 0 | 48 | 48
32 to 63 | 64 | 32 | 32 | 0 | 64 | 64
64 or more | 64 | 32 | 32 | 8# | 64 | 64
* Z3 VMs support a maximum of 32 Hyperdisk volumes.
The maximum number of Hyperdisk Balanced and Hyperdisk Balanced High Availability disks that you can attach to a VM varies from 16 to 32 volumes, depending on the machine series and machine type. Refer to the specific machine series documentation to learn about its disk limits.
# N2 VMs require a minimum of 80 vCPUs to use Hyperdisk Extreme.

Summary of Hyperdisk per-VM limits

Overall, for an individual VM instance, there are the following limits for using Hyperdisk:

  • A limit for the total number of all Persistent Disk and Hyperdisk volumes that you can attach to a VM, including the boot disk.
  • A limit for the combined total capacity of all disks attached to a VM.
  • A limit for the total number of Hyperdisk volumes that you can attach to a VM.
  • A limit for the maximum number of each type of Hyperdisk volume that you can attach to a single VM.

When multiple limits apply, the most specific limit is enforced. For example, suppose you have a VM with 96 vCPUs, and you want to use a combination of Hyperdisk and Persistent Disk volumes. The following limits apply:

  • Maximum number of Persistent Disk and Hyperdisk volumes that you can attach to the VM: 128
  • Maximum number of Hyperdisk volumes, across all types: 64
  • Maximum number of Hyperdisk Throughput volumes: 64
  • Maximum number of Hyperdisk ML volumes: 64
  • Maximum number of Hyperdisk Extreme volumes: 8
  • Maximum number of Hyperdisk Balanced or Hyperdisk Balanced High Availability volumes: 32

The following examples illustrate these limits.

  • Maximum number of a single type of Hyperdisk per VM: You can only attach 8 Hyperdisk Extreme volumes to the VM. This is true even if you don't attach any other Persistent Disk or Hyperdisk volumes to the VM.

  • Maximum number of Hyperdisk volumes per VM: If you attach 8 Hyperdisk Extreme volumes to the VM, you can attach at most 56 other Hyperdisk volumes to the VM. This makes the combined number of Hyperdisk volumes equal to 64, which is the maximum number of Hyperdisk volumes that you can attach to a VM.

  • Maximum number of disks or volumes per VM, across all types: If you attach a combined total of 64 Hyperdisk volumes to the VM, then you can't attach any more Hyperdisk volumes. However, because the maximum number of disks of all types is 128, you can still attach up to 64 Persistent Disk volumes to the VM.
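For a 96-vCPU VM, the interaction of these limits can be sketched as follows (illustrative constants and function name, taken from the list above; the actual enforcement is done by Compute Engine):

```python
# Per-VM attachment limits for a 96-vCPU VM, from the list above.
MAX_DISKS_ALL_TYPES = 128        # Persistent Disk + Hyperdisk combined
MAX_HYPERDISK_ALL_TYPES = 64
MAX_PER_TYPE = {"extreme": 8, "balanced": 32, "throughput": 64, "ml": 64}

def remaining_hyperdisk_slots(attached):
    """How many more volumes of each Hyperdisk type can still be attached."""
    total = sum(attached.values())
    return {
        disk_type: min(limit - attached.get(disk_type, 0),
                       MAX_HYPERDISK_ALL_TYPES - total)
        for disk_type, limit in MAX_PER_TYPE.items()
    }

# With 8 Hyperdisk Extreme volumes attached, the most specific limit wins:
# no more Extreme volumes, and at most 56 other Hyperdisk volumes.
slots = remaining_hyperdisk_slots({"extreme": 8})
```
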

Machine type support

This section lists the machine types that each Hyperdisk type supports.

Hyperdisk Balanced

Hyperdisk Balanced supports these machine types:

Hyperdisk Balanced High Availability (Preview)

Hyperdisk Balanced High Availability supports these machine types:

Hyperdisk Extreme

Hyperdisk Extreme supports these machine types:

  • A3 with 4 or more GPUs
  • C3 with 88 or more vCPUs
  • C3D with 60 or more vCPUs
  • C4 with 96 or more vCPUs
  • C4A with 64 or more vCPUs
  • M1 with 80 or more vCPUs
  • M2 (all machine types)
  • M3 with 64 or more vCPUs
  • N2 with 80 or more vCPUs; Custom N2 machine types aren't supported.
  • X4
  • Z3 (all machine types)

Hyperdisk ML

Hyperdisk ML supports these machine types:

Hyperdisk Throughput

Hyperdisk Throughput supports these machine types:

Hyperdisk performance limits

The following tables list the per-VM Hyperdisk performance limits for the supported machine types.

For Persistent Disk performance limits, see performance limits for Persistent Disks.

The maximum IOPS rate is for read IOPS or write IOPS. If performing both read and write IOPS at the same time, the combined rate cannot exceed this limit.

C4 Hyperdisk performance limits

For the C4 machine series, the following tables list the steady-state and maximum read/write IOPS, and the steady-state and maximum read/write throughput, for each machine type.

Hyperdisk Balanced

Hyperdisk Balanced offers up to 320,000 IOPS and 10,000 MBps of throughput on C4 VMs with 192 vCPUs. Because the per-disk limit for Hyperdisk Balanced is 160,000 IOPS and 2,400 MBps of throughput, you must attach multiple Hyperdisk Balanced volumes to achieve this level of performance per VM.

Machine type Max disk count Steady state IOPS - Read/write Maximum IOPS - Read/write Steady state throughput - Read/write Maximum throughput - Read/write
C4 with 2 vCPUs 8 6,200 50,000 120 MBps 400 MBps
C4 with 4 vCPUs 16 12,500 50,000 240 MBps 400 MBps
C4 with 8 vCPUs 32 25,000 50,000 480 MBps 800 MBps
C4 with 16 vCPUs 32 50,000 100,000 1,000 MBps 1,600 MBps
C4 with 32 vCPUs 32 100,000 100,000 1,600 MBps 1,600 MBps
C4 with 48 vCPUs 32 160,000 160,000 2,400 MBps 2,400 MBps
C4 with 96 vCPUs 64 240,000 240,000 4,800 MBps 4,800 MBps
C4 with 192 vCPUs 128 320,000 320,000 10,000 MBps 10,000 MBps
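Assuming the per-disk limits stated above, the number of Hyperdisk Balanced volumes needed to reach a per-VM target can be estimated as a simple ceiling division (the function name is hypothetical):

```python
import math

def min_balanced_volumes(target_iops, target_mbps,
                         per_disk_iops=160_000, per_disk_mbps=2_400):
    """Fewest volumes whose combined per-disk limits cover the per-VM target."""
    return max(math.ceil(target_iops / per_disk_iops),
               math.ceil(target_mbps / per_disk_mbps))

# Reaching 320,000 IOPS and 10,000 MBps on a C4 with 192 vCPUs is gated
# by throughput: at least 5 volumes are needed.
min_balanced_volumes(320_000, 10_000)
```
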

Hyperdisk Extreme

Hyperdisk Extreme offers up to 500,000 IOPS and 10,000 MBps on C4 VMs with 192 vCPUs. Because the per-disk limit for Hyperdisk Extreme is 350,000 IOPS and 5,000 MBps of throughput, you must attach multiple Hyperdisk Extreme volumes to achieve this level of performance per VM.

Machine type Max disk count Steady state IOPS - Read/write Maximum IOPS - Read/write Steady state throughput - Read/write Maximum throughput - Read/write
C4 with 2 vCPUs 0 0 0 0 0
C4 with 4 vCPUs 0 0 0 0 0
C4 with 8 vCPUs 0 0 0 0 0
C4 with 16 vCPUs 0 0 0 0 0
C4 with 32 vCPUs 0 0 0 0 0
C4 with 48 vCPUs 0 0 0 0 0
C4 with 96 vCPUs 8 350,000 350,000 5,000 MBps 5,000 MBps
C4 with 192 vCPUs 8 500,000 500,000 10,000 MBps 10,000 MBps

Performance limits for other VMs

Performance information for all other machine series is listed in the following tables.

Hyperdisk Balanced

Hyperdisk Balanced offers up to 240,000 IOPS and 10,000 MBps of throughput, depending on the machine series. Because the per-disk limit for Hyperdisk Balanced is 160,000 IOPS and 2,400 MBps of throughput, you must attach multiple Hyperdisk Balanced volumes to achieve the highest performance levels per instance.

Machine type Maximum IOPS - Read/write Maximum throughput - Read/write
A3 High and Mega with 8 GPUs (using two volumes) 160,000 4,800 MBps
C4A with 1 vCPU 25,000 400 MBps
C4A with 2 or 4 vCPUs 50,000 800 MBps
C4A with 8 vCPUs 50,000 1,000 MBps
C4A with 16 vCPUs 60,000 1,600 MBps
C4A with 32 vCPUs 120,000 2,400 MBps
C4A with 48 vCPUs 160,000 3,300 MBps
C4A with 64 vCPUs 240,000 4,400 MBps
C4A with 72 vCPUs 240,000 5,000 MBps
C3 with 4 vCPUs 25,000 400 MBps
C3 with 8 vCPUs 50,000 800 MBps
C3 with 22 vCPUs 120,000 1,800 MBps
C3 with 44 vCPUs 160,000 2,400 MBps
C3 with 88 vCPUs 160,000 4,800 MBps
C3 with 176 or more vCPUs* 160,000 10,000 MBps
C3D with 4 vCPUs 25,000 400 MBps
C3D with 8 vCPUs 50,000 800 MBps
C3D with 16 to 30 vCPUs 75,000 1,200 MBps
C3D with 60 or more vCPUs 160,000 2,400 MBps
H3 with 88 vCPUs 15,000 240 MBps
M1 with 40 vCPUs 60,000 1,200 MBps
M1 with 80 vCPUs 100,000 2,400 MBps
M1 with 96 and 160 vCPUs 100,000 4,000 MBps
M2 with 208 vCPUs 100,000 2,400 MBps
M2 with 416 vCPUs 100,000 4,000 MBps
M3 with 32 vCPUs 160,000 2,400 MBps
M3 with 64 or more vCPUs 160,000 4,800 MBps
N4 with 2 and 4 vCPUs 15,000 240 MBps
N4 with 8 vCPUs 15,000 480 MBps
N4 with 16 vCPUs 80,000 1,200 MBps
N4 with 32 vCPUs 100,000 1,600 MBps
N4 with 48 or more vCPUs 160,000 2,400 MBps
X4 with 960 or more vCPUs 160,000 4,800 MBps

* Includes bare metal instances.

Hyperdisk Balanced HA

Machine type Maximum IOPS - Read/write Maximum throughput - Read/write
C3 with 4 vCPUs 25,000 400 MBps
C3 with 8 vCPUs 50,000 600 MBps
C3 with 22 vCPUs 100,000 600 MBps
C3 with 44 vCPUs 100,000 1,200 MBps
C3 with 88 or more vCPUs 100,000 2,500 MBps
M3 with 32 vCPUs 100,000 1,900 MBps
M3 with 64 or more vCPUs 100,000 2,500 MBps

Hyperdisk Extreme

Hyperdisk Extreme offers up to 500,000 IOPS and 10,000 MBps on C3 VMs with 176 vCPUs. Because the per-disk limit for Hyperdisk Extreme is 350,000 IOPS and 5,000 MBps of throughput, you must attach multiple Hyperdisk Extreme volumes to achieve this level of performance per VM.

Machine type | Maximum IOPS - Read/write | Maximum throughput (MBps) - Read/write
A3 High VMs with 1 GPU | N/A | N/A
A3 High VMs with 2 GPUs | N/A | N/A
A3 High VMs with 4 GPUs | 350,000 | 5,000
A3 High and Mega VMs with 8 GPUs (using two volumes) | 400,000 | 8,000
C3 with 88 vCPUs* | 350,000 | 5,000
C4A with 64 or more vCPUs | 350,000 | 5,000
C3 with 176 or more vCPUs*† | 500,000 | 10,000
C3D with 60 or more vCPUs* | 350,000 | 5,000
M1 VMs | 100,000 | 4,000
M2 VMs | 100,000 | 4,000
M3 VMs with 64 vCPUs* | 350,000 | 5,000
M3 VMs with 128 vCPUs* | 450,000 | 7,200
N2 VMs with 80 or more vCPUs | 160,000 | 5,000
X4 with 960 or more vCPUs | 400,000 | 10,000
Z3 with 88 vCPUs | 350,000 | 5,000
Z3 with 176 vCPUs | 350,000 | 5,000

* If using Hyperdisk Extreme with a VM that uses Microsoft Windows, refer to the known issues for Windows VM instances.
† Includes bare metal instances (Preview).

Hyperdisk ML

You can provision up to 1,200,000 MBps of throughput for each Hyperdisk ML disk. The provisioned throughput is distributed across each VM the disk is attached to, up to the maximum throughput level supported by the VM type.

For example, the maximum throughput for an A3 VM with 8 GPUs is 4,000 MBps (4 GBps). If you provision a Hyperdisk ML volume with 500,000 MBps of throughput, you must attach the disk to at least 125 A3 VMs to consume the provisioned 500,000 MBps of throughput. However, to use the same disk with A2 VMs with 1 GPU, you would need 278 A2 VMs because A2 VMs have a lower maximum throughput limit.

For more information about how throughput is distributed across VMs, see Throughput for Hyperdisk ML.
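The VM counts in the example above follow directly from dividing the provisioned throughput by each VM's per-VM cap (a sketch; the actual distribution is managed by the service):

```python
import math

def vms_needed(provisioned_mbps, per_vm_max_mbps):
    """Minimum number of attached VMs required to consume all of a
    Hyperdisk ML volume's provisioned throughput."""
    return math.ceil(provisioned_mbps / per_vm_max_mbps)

vms_needed(500_000, 4_000)   # A3 with 8 GPUs: 125 VMs
vms_needed(500_000, 1_800)   # A2 with 1 GPU: 278 VMs
```
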

The following table outlines the maximum throughput each VM attached to a Hyperdisk ML volume can reach.

Each Hyperdisk ML volume gets 16 IOPS per MBps of provisioned throughput, and this can't be changed. The IOPS limits apply to sequential and random I/O.

Machine type | vCPU count | Maximum throughput (MBps) per VM | Max IOPS per instance
A2 with 1 GPU | 12 | 1,800 | 28,800
A2 with 2 GPUs | 24 | 2,400 | 38,400
A2 with 4 GPUs | 48 | 2,400 | 38,400
A2 with 8 GPUs | 96 | 2,400 | 38,400
A2 with 16 GPUs | 96 | 2,400 | 38,400
A3 High with 1 GPU | 26 | 1,800 | 28,800
A3 High with 2 GPUs | 52 | 2,400 | 38,400
A3 High with 4 GPUs | 104 | 2,400 | 38,400
A3 High and Mega with 8 GPUs (in read-only mode) | 208 | 4,000 | 64,000
A3 High and Mega with 8 GPUs (in read-write mode) | 208 | 2,400 | 38,400
C3 | 4 | 400 | 6,400
C3 | 8 | 800 | 12,800
C3 | 22 | 1,800 | 28,800
C3 | 44 or more | 2,400 | 38,400
C3D | 4 | 400 | 6,400
C3D | 8 | 800 | 12,800
C3D | 16 | 1,200 | 19,200
C3D | 30 | 1,200 | 19,200
C3D | 60 or more | 2,400 | 38,400
G2 | 4 | 800 | 12,800
G2 | 8 | 1,200 | 19,200
G2 | 12 | 1,800 | 28,800
G2 | 16 or more | 2,400 | 38,400

For A3 VMs with 8 GPUs, performance depends on whether the disk is attached to the VM in read-only or read-write mode.

Hyperdisk Throughput

You can provision at most 600 MBps of throughput per Hyperdisk Throughput volume, but if you attach multiple Hyperdisk Throughput volumes to the same VM, then the throughput limits stated in the following table apply.

For example, as shown in the table, a C3 VM with 22 vCPUs can provide a total maximum throughput of 1,200 MBps, regardless of how many Hyperdisk Throughput volumes you attach.

If you attach one Hyperdisk Throughput volume to a VM, the maximum throughput for the VM is 600 MBps. To reach a higher per-VM limit, you must attach two or more Hyperdisk Throughput volumes, as shown in the last column of the table.
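The "minimum number of disks" column in the table below is simply the per-VM throughput limit divided by the 600 MBps per-volume maximum, rounded up (the function name is hypothetical):

```python
import math

def min_throughput_volumes(per_vm_limit_mbps, per_volume_max_mbps=600):
    """Fewest Hyperdisk Throughput volumes needed to reach a per-VM
    throughput limit, given the 600 MBps per-volume maximum."""
    return math.ceil(per_vm_limit_mbps / per_volume_max_mbps)

min_throughput_volumes(1_200)   # 2 disks
min_throughput_volumes(3_000)   # 5 disks
```
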

Machine type | vCPU count | Maximum read/write throughput (MBps) per VM* | Minimum number of disks needed to reach maximum throughput
A3 High with 1 GPU | 26 | 1,200 | 2
A3 High with 2 GPUs | 52 | 2,400 | 4
A3 High with 4 GPUs | 104 | 2,400 | 4
A3 High and Mega with 8 GPUs | 208 | 3,000 | 5
C3 | 4 | 240 | 1
C3 | 8 | 800 | 2
C3 | 22 | 1,200 | 2
C3 | 44 | 1,800 | 3
C3 | 88 or more | 2,400 | 4
C3D | 4 | 400 | 1
C3D | 8 | 800 | 2
C3D | 16 or 30 | 1,200 | 2
C3D | 60 or more | 2,400 | 4
G2 | 4 | 240 | 1
G2 | 8 or 12 | 800 | 2
G2 | 16 or 24 | 1,200 | 2
G2 | 32 | 1,800 | 3
G2 | 48 or more | 2,400 | 4
H3 | 88 | 240 | 1
M3 | 32 | 1,800 | 3
M3 | 64 or 128 | 2,400 | 4
N2 | 1-3 | 200 | 1
N2 | 4-7 | 240 | 1
N2 | 8-15 | 800 | 2
N2 | 16-31 | 1,200 | 2
N2 | 32-47 | 1,800 | 3
N2 | 48-63 | 2,400 | 4
N2 | 64-127 | 3,000 | 5
N2 | 128 or more | 2,400 | 4
N2D | 2 | 200 | 1
N2D | 4 | 240 | 1
N2D | 8 | 800 | 2
N2D | 16 | 1,200 | 2
N2D | 32 | 1,800 | 3
N2D | 48 to 63 | 2,400 | 4
T2D | 1-2 | 200 | 1
T2D | 4 | 240 | 1
T2D | 8 | 800 | 2
T2D | 16 | 1,200 | 2
T2D | 32 | 1,800 | 3
T2D | 48 or more | 2,400 | 4
Z3 | 88 or more | 2,400 | 4
* Assuming at least 128K sequential IO or at least 256K random IO.

Hyperdisk regional availability

Hyperdisk can be used in the following regions or zones:

Hyperdisk Balanced

The following table lists the specific zones within each region that support Hyperdisk Balanced.

Region | Available zones
Changhua County, Taiwan—asia-east1 | asia-east1-a, asia-east1-b, asia-east1-c
Tokyo, Japan—asia-northeast1 | asia-northeast1-a, asia-northeast1-b, asia-northeast1-c
Osaka, Japan—asia-northeast2 | asia-northeast2-a, asia-northeast2-b, asia-northeast2-c
Seoul, South Korea—asia-northeast3 | asia-northeast3-a, asia-northeast3-b, asia-northeast3-c
Jurong West, Singapore—asia-southeast1 | asia-southeast1-a, asia-southeast1-b, asia-southeast1-c
Jakarta, Indonesia—asia-southeast2 | asia-southeast2-a, asia-southeast2-c
Mumbai, India—asia-south1 | asia-south1-a, asia-south1-b, asia-south1-c
Delhi, India—asia-south2 | asia-south2-a, asia-south2-b
Sydney, Australia—australia-southeast1 | australia-southeast1-a, australia-southeast1-b, australia-southeast1-c
Melbourne, Australia—australia-southeast2 | australia-southeast2-b, australia-southeast2-c
Warsaw, Poland—europe-central2 | europe-central2-a, europe-central2-b
Madrid, Spain—europe-southwest1 | europe-southwest1-a, europe-southwest1-c
St. Ghislain, Belgium—europe-west1 | europe-west1-b, europe-west1-c, europe-west1-d
London, England—europe-west2 | europe-west2-a, europe-west2-b, europe-west2-c
Frankfurt, Germany—europe-west3 | europe-west3-a, europe-west3-b, europe-west3-c
Eemshaven, Netherlands—europe-west4 | europe-west4-a, europe-west4-b, europe-west4-c
Zurich, Switzerland—europe-west6 | europe-west6-b, europe-west6-c
Milan, Italy—europe-west8 | europe-west8-a, europe-west8-c
Paris, France—europe-west9 | europe-west9-a, europe-west9-b, europe-west9-c
Turin, Italy—europe-west12 | europe-west12-a, europe-west12-b, europe-west12-c
Montréal, Québec—northamerica-northeast1 | northamerica-northeast1-b, northamerica-northeast1-c
Toronto, Ontario—northamerica-northeast2 | northamerica-northeast2-a, northamerica-northeast2-b, northamerica-northeast2-c
Council Bluffs, Iowa—us-central1 | us-central1-a, us-central1-b, us-central1-c, us-central1-f
Moncks Corner, South Carolina—us-east1 | us-east1-b, us-east1-c, us-east1-d
Ashburn, Virginia—us-east4 | us-east4-a, us-east4-b, us-east4-c
Columbus, Ohio—us-east5 | us-east5-a, us-east5-b
Dallas, Texas—us-south1 | us-south1-b
The Dalles, Oregon—us-west1 | us-west1-a, us-west1-b
Los Angeles, California—us-west2 | us-west2-b, us-west2-c
Las Vegas, Nevada—us-west4 | us-west4-a, us-west4-b, us-west4-c
Osasco, São Paulo, Brazil—southamerica-east1 | southamerica-east1-b, southamerica-east1-c
Santiago, Chile—southamerica-west1 | southamerica-west1-b, southamerica-west1-c
Doha, Qatar—me-central1 | me-central1-b, me-central1-c
Dammam, Saudi Arabia—me-central2 | me-central2-a, me-central2-c
Tel Aviv, Israel—me-west1 | me-west1-a, me-west1-c

Hyperdisk Balanced HA

Hyperdisk Balanced High Availability is available in the following regions:

  • Jurong West, Singapore—asia-southeast1
  • St. Ghislain, Belgium—europe-west1
  • Eemshaven, Netherlands—europe-west4
  • Council Bluffs, Iowa—us-central1
  • Moncks Corner, South Carolina—us-east1
  • Ashburn, Virginia—us-east4

Hyperdisk Extreme

  • Changhua County, Taiwan—asia-east1
  • Tokyo, Japan—asia-northeast1
  • Osaka, Japan—asia-northeast2
  • Seoul, South Korea—asia-northeast3
  • Mumbai, India—asia-south1
  • Delhi, India—asia-south2
  • Jurong West, Singapore—asia-southeast1
  • Jakarta, Indonesia—asia-southeast2
  • Sydney, Australia—australia-southeast1
  • Madrid, Spain—europe-southwest1
  • St. Ghislain, Belgium—europe-west1
  • London, England—europe-west2
  • Frankfurt, Germany—europe-west3
  • Eemshaven, Netherlands—europe-west4
  • Zurich, Switzerland—europe-west6
  • Milan, Italy—europe-west8
  • Paris, France—europe-west9
  • Turin, Italy—europe-west12-a and europe-west12-b
  • Tel Aviv, Israel—me-west1
  • Montréal, Québec—northamerica-northeast1
  • Toronto, Ontario—northamerica-northeast2
  • Osasco, São Paulo, Brazil—southamerica-east1
  • Council Bluffs, Iowa—us-central1
  • Moncks Corner, South Carolina—us-east1
  • Ashburn, Virginia—us-east4
  • Columbus, Ohio—us-east5-b
  • The Dalles, Oregon—us-west1
  • Los Angeles, California—us-west2
  • Salt Lake City, Utah—us-west3
  • Las Vegas, Nevada—us-west4

Hyperdisk ML

Region | Available zones
Changhua County, Taiwan—asia-east1 | asia-east1-a, asia-east1-b, asia-east1-c
Tokyo, Japan—asia-northeast1 | asia-northeast1-a, asia-northeast1-b, asia-northeast1-c
Seoul, South Korea—asia-northeast3 | asia-northeast3-a, asia-northeast3-b
Jurong West, Singapore—asia-southeast1 | asia-southeast1-a, asia-southeast1-b, asia-southeast1-c
Mumbai, India—asia-south1 | asia-south1-b, asia-south1-c
St. Ghislain, Belgium—europe-west1 | europe-west1-b, europe-west1-c
London, England—europe-west2 | europe-west2-a, europe-west2-b
Frankfurt, Germany—europe-west3 | europe-west3-b
Eemshaven, Netherlands—europe-west4 | europe-west4-a, europe-west4-b, europe-west4-c
Zurich, Switzerland—europe-west6 | europe-west6-b, europe-west6-c
Tel Aviv, Israel—me-west1 | me-west1-b, me-west1-c
Council Bluffs, Iowa—us-central1 | us-central1-a, us-central1-b, us-central1-c, us-central1-f
Moncks Corner, South Carolina—us-east1 | us-east1-a, us-east1-b, us-east1-c, us-east1-d
Ashburn, Virginia—us-east4 | us-east4-a, us-east4-b, us-east4-c
Columbus, Ohio—us-east5 | us-east5-a, us-east5-b
The Dalles, Oregon—us-west1 | us-west1-a, us-west1-b, us-west1-c
Salt Lake City, Utah—us-west3 | us-west3-b
Las Vegas, Nevada—us-west4 | us-west4-a, us-west4-b, us-west4-c

Hyperdisk Throughput

  • Zone: Mumbai, India—asia-south1-a
  • Region: Jurong West, Singapore—asia-southeast1
  • Region: Eemshaven, Netherlands—europe-west4
  • Region: Council Bluffs, Iowa—us-central1
  • Region: Moncks Corner, South Carolina—us-east1
  • Region: Ashburn, Virginia—us-east4

About IOPS and throughput provisioning for Hyperdisk

Unlike Persistent Disk, where performance scales automatically with size, Hyperdisk lets you provision performance directly. To provision performance, you select the target performance level for a given volume. Individual volumes have full performance isolation—each volume gets the performance provisioned to it.

About IOPS for Hyperdisk

You can modify the provisioned IOPS for Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme volumes, but not for Hyperdisk Throughput or Hyperdisk ML volumes.

To reach maximum IOPS and throughput levels offered by Hyperdisk volumes, you must consider the following workload parameters:

  • I/O size: Maximum IOPS limits assume that you are using an I/O size of 4 KB or 16 KB. Maximum throughput limits assume that you are using an I/O size of at least 64 KB.
  • Queue length: Queue length is the number of pending requests for a volume. To reach maximum performance limits, you must tune your queue length according to the I/O size, IOPS, and latency sensitivity of your workload. Optimal queue length varies for each workload, but typically should be larger than 256.
  • Working set size: Working set size is the amount of data of a volume being accessed within a short period of time. To achieve optimal performance, working set sizes must be greater than or equal to 32 GiB.
  • Multiple attached disks: Hyperdisk volumes share the per-VM maximum IOPS and throughput limits with all Persistent Disk and Hyperdisk volumes attached to the same VM. With multiple attached disks, each disk's performance limit is proportional to its share of the total IOPS provisioned across all attached Hyperdisk volumes. When monitoring the performance of your Hyperdisk volumes, take into account any I/O requests that you are sending to other volumes that are attached to the same VM.

However, the IOPS for Hyperdisk volumes are ultimately capped by per-VM limits for the VM to which your volumes are attached. To review these limits, see Hyperdisk performance limits.

For more information about how to improve performance, see Optimize performance of Hyperdisk.

IOPS for Hyperdisk Balanced and Hyperdisk Balanced High Availability

If you don't specify a disk size or a value for IOPS when creating a Hyperdisk Balanced or Hyperdisk Balanced High Availability volume, the default IOPS value is 3,600 IOPS. If you specify a size for the disk, then the default value depends on the size:

  • 6 GiB or less: 500 IOPS per GiB of disk size
  • Larger than 6 GiB: The lesser of 3,000 + 6 IOPS per GiB of disk size, or 160,000 IOPS for Hyperdisk Balanced and 100,000 IOPS for Hyperdisk Balanced High Availability
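
As a rough sketch, the defaults above can be expressed as a small helper. The function name is illustrative, not part of any Google Cloud API; the formulas are taken directly from the preceding list:

```python
def default_balanced_iops(size_gib=None, max_iops=160_000):
    """Default provisioned IOPS for a Hyperdisk Balanced volume.

    Pass max_iops=100_000 for Hyperdisk Balanced High Availability.
    """
    if size_gib is None:
        return 3_600                    # default when neither size nor IOPS is given
    if size_gib <= 6:
        return 500 * size_gib           # 500 IOPS per GiB for disks of 6 GiB or less
    return min(3_000 + 6 * size_gib, max_iops)
```

For example, a 100 GiB Hyperdisk Balanced volume defaults to 3,000 + 6 * 100 = 3,600 IOPS.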

You can provision custom IOPS levels for your Hyperdisk Balanced and Hyperdisk Balanced High Availability volumes. The provisioned IOPS must follow these rules:

  • Minimum: The lesser of 3,000 IOPS or 500 IOPS per GiB of disk capacity.
  • Maximum: 500 IOPS per GiB of disk capacity, but not more than 160,000 for Hyperdisk Balanced and 100,000 for Hyperdisk Balanced High Availability.
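
The allowed range for custom IOPS can be sketched the same way (hypothetical helper, following the two rules above):

```python
def balanced_iops_range(size_gib, max_iops=160_000):
    """Allowed provisioned IOPS range (min, max) for a given disk size.

    Pass max_iops=100_000 for Hyperdisk Balanced High Availability.
    """
    lo = min(3_000, 500 * size_gib)     # lesser of 3,000 or 500 IOPS per GiB
    hi = min(500 * size_gib, max_iops)  # 500 IOPS per GiB, capped per disk type
    return lo, hi
```

For a 100 GiB Hyperdisk Balanced volume, for example, you can provision anywhere from 3,000 to 50,000 IOPS.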

IOPS for Hyperdisk ML

You can't specify a custom IOPS value for a Hyperdisk ML volume. The IOPS for a Hyperdisk ML volume scales with provisioned throughput, at a rate of 16 IOPS per MBps. For example, a Hyperdisk ML volume with 1,000 MBps of throughput can have at most 16,000 IOPS. However, IOPS is ultimately limited by the machine type of the VM to which the Hyperdisk ML volumes are attached.
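
The scaling rule amounts to a one-line calculation (illustrative function name, not an API):

```python
def hyperdisk_ml_iops(throughput_mbps):
    """IOPS ceiling for a Hyperdisk ML volume: 16 IOPS per MBps of throughput."""
    return 16 * throughput_mbps
```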

IOPS for Hyperdisk Extreme

If you don't specify a value for IOPS when creating a Hyperdisk Extreme volume, a default value is used, which is the lesser of 100 IOPS per GiB of disk capacity or the maximum IOPS for the machine type. You can provision custom IOPS levels for your Hyperdisk Extreme volumes. The provisioned IOPS must follow these rules:

  • At least 2 IOPS per GiB of disk capacity, but not more than 1,000 IOPS per GiB of capacity
  • At most 350,000 IOPS per volume; the exact maximum depends on the machine type
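
These rules can be sketched as follows. The function names are illustrative, and the machine-type maximum must come from the Hyperdisk performance limits tables:

```python
def extreme_default_iops(size_gib, machine_max_iops):
    """Default IOPS: the lesser of 100 IOPS per GiB or the machine type's maximum."""
    return min(100 * size_gib, machine_max_iops)

def extreme_iops_range(size_gib, machine_max_iops=350_000):
    """Allowed custom IOPS: 2-1,000 IOPS per GiB, capped at 350,000 per volume
    (or lower, depending on the machine type)."""
    return 2 * size_gib, min(1_000 * size_gib, machine_max_iops)
```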

IOPS for Hyperdisk Throughput

For Hyperdisk Throughput volumes, the IOPS scales with the provisioned throughput, at a rate of 4 IOPS per MBps for random I/O, or 8 IOPS per MBps for sequential I/O. However, IOPS is ultimately limited by the machine type of the VM to which your Hyperdisk Throughput volumes are attached.
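
The scaling rule reduces to the following (illustrative helper, not an API):

```python
def hyperdisk_throughput_iops(throughput_mbps, sequential=False):
    """IOPS for a Hyperdisk Throughput volume: 4 IOPS per MBps for random I/O,
    8 IOPS per MBps for sequential I/O."""
    return (8 if sequential else 4) * throughput_mbps
```

For example, a volume provisioned with 600 MBps supports up to 2,400 random IOPS or 4,800 sequential IOPS, subject to the per-VM limits.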

About throughput for Hyperdisk

You can modify the provisioned throughput for the following Hyperdisk volumes:

  • Hyperdisk Balanced
  • Hyperdisk Balanced High Availability
  • Hyperdisk Throughput
  • Hyperdisk ML

You can't modify the provisioned throughput for Hyperdisk Extreme volumes.

To reach maximum throughput levels offered by Hyperdisk volumes, you must consider the following workload parameters:

  • I/O size: Maximum throughput limits assume that you are using the following:
    • Hyperdisk Throughput: a sequential I/O size of at least 128 KB, or a random I/O size of at least 256 KB.
    • Hyperdisk Balanced or Hyperdisk Balanced High Availability: an I/O size of at least 64 KB.
  • Queue length: Queue length is the number of pending requests for a volume. To reach maximum performance limits, you must tune your queue length according to the I/O size, IOPS, and latency sensitivity of your workload. Optimal queue length varies for each workload, but typically should be larger than 256.
  • Shared disks: If supported, you can attach the same disk to multiple VMs to reach the provisioned throughput for the disk. For example, suppose you have a Hyperdisk ML volume with a provisioned throughput limit of 50,000 MBps. If you attach the disk to one A3 VM, the maximum throughput achievable is the limit for an A3 VM: 4,000 MBps. To reach the disk's provisioned throughput limit, attach the same disk to at least 13 A3 VMs (50,000 ÷ 4,000 = 12.5, rounded up).
  • Multiple attached disks: If you attach more than one Hyperdisk volume to your VM, and the total throughput provisioned for all Hyperdisk volumes exceeds the limits documented for the machine type, the total disk performance won't exceed the limit for the machine type. For example, to reach the maximum throughput for an A3 VM (3,000 MBps), you must attach at least 5 Hyperdisk Throughput volumes to the VM because each Hyperdisk Throughput volume has a maximum throughput limit of 600 MBps.
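
The shared-disk arithmetic can be sketched as follows. This is an illustrative helper, not an API; the per-VM limit must come from the Hyperdisk performance limits tables for your machine type:

```python
import math

def vms_to_reach_throughput(provisioned_mbps, per_vm_limit_mbps):
    """Minimum number of VM attachments needed so that the per-VM throughput
    limits, summed across VMs, cover a shared disk's provisioned throughput."""
    return math.ceil(provisioned_mbps / per_vm_limit_mbps)
```

For instance, a shared volume provisioned with 12,000 MBps, attached to VMs that each allow 4,000 MBps, needs at least 3 attachments to be fully utilized.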

For more information, see Optimize performance of Hyperdisk.

Throughput for Hyperdisk Balanced and Hyperdisk Balanced High Availability

If you don't specify a value for throughput or the disk size, then the default value for throughput is 290 MBps. If you specify a size for the disk, then the default value depends on the size:

  • 6 GiB or less: 140 MBps
  • Larger than 6 GiB: The lesser of ((6 * disk size in GiB) / 4) + 140, or 2,400 MBps for Hyperdisk Balanced or 1,200 MBps for Hyperdisk Balanced High Availability
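
As a rough sketch, these defaults can be expressed as a helper function (the name is illustrative; the formulas are from the list above):

```python
def default_balanced_throughput(size_gib=None, max_mbps=2_400):
    """Default provisioned throughput (MBps) for a Hyperdisk Balanced volume.

    Pass max_mbps=1_200 for Hyperdisk Balanced High Availability.
    """
    if size_gib is None:
        return 290                       # default when no size is specified
    if size_gib <= 6:
        return 140
    return min((6 * size_gib) / 4 + 140, max_mbps)
```

For example, a 100 GiB volume defaults to (6 * 100) / 4 + 140 = 290 MBps.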

You can provision custom throughput levels for your Hyperdisk Balanced and Hyperdisk Balanced High Availability volumes. The provisioned throughput for each disk must follow these rules:

  • Minimum: The greater of 140 MBps or the provisioned IOPS divided by 256.
  • Maximum: The lesser of the provisioned IOPS divided by 4, or 2,400 MBps for Hyperdisk Balanced and 1,200 MBps for Hyperdisk Balanced High Availability.
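
These bounds can be sketched as follows, assuming the per-type caps of 2,400 MBps (Hyperdisk Balanced) and 1,200 MBps (Hyperdisk Balanced High Availability) act as upper bounds on the maximum (illustrative helper, not an API):

```python
def balanced_throughput_range(provisioned_iops, max_mbps=2_400):
    """Allowed provisioned throughput (MBps) given the volume's provisioned IOPS.

    Pass max_mbps=1_200 for Hyperdisk Balanced High Availability.
    """
    lo = max(140, provisioned_iops / 256)   # floor of 140 MBps, or IOPS / 256
    hi = min(max_mbps, provisioned_iops / 4)
    return lo, hi
```

For example, a volume with 160,000 provisioned IOPS must be provisioned with between 625 MBps and 2,400 MBps of throughput.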

Throughput for Hyperdisk ML

You can provision between 400 MBps and 1,200,000 MBps of throughput for a single Hyperdisk ML volume. The provisioned throughput is shared among all attached VMs: the total throughput consumed by all VMs that the disk is attached to can't exceed the provisioned throughput.

The following rules apply to the provisioned throughput:

  • Number of VM attachments: If the volume is attached to more than 20 VMs, then you must provision at least 100 MBps of throughput for each VM. For example, if you attach a disk to 500 VMs, the disk must be provisioned with at least 50,000 MBps (50 GBps) of throughput.

  • Default throughput: The default throughput is the greater of 24 MBps per GiB of disk size, or 400 MBps.

    For example, a 10 GiB Hyperdisk ML volume is provisioned by default with 400 MBps of throughput, because 24 MBps * 10 = 240, which is less than the 400 MBps minimum.

  • Throughput limits per GiB: You can provision between 0.12 MBps and 1,600 MBps per GiB of disk capacity. However, the provisioned throughput must be between 400 MBps and 1,200,000 MBps.

    For example, for a 10 GiB Hyperdisk ML volume, the highest throughput for the disk is 16,000 MBps (1,600 MBps * 10). Because of the size of the disk, you can't provision the maximum throughput of 1,200,000 MBps.

    On the other hand, for a 20 TiB (20,480 GiB) disk, the highest throughput is 1,200,000 MBps, because 1,600 MBps * 20,480 = 32,768,000 MBps, which exceeds the 1,200,000 MBps per-disk maximum.
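
The rules above can be collected into a pair of sketch functions (illustrative names, not an API; the formulas are from the preceding list):

```python
def default_ml_throughput(size_gib):
    """Default Hyperdisk ML throughput: the greater of 24 MBps per GiB or 400 MBps."""
    return max(400, 24 * size_gib)

def validate_ml_throughput(size_gib, throughput_mbps, num_vms=1):
    """Check a Hyperdisk ML throughput value against the provisioning rules."""
    lo = max(400, 0.12 * size_gib)           # per-GiB floor and 400 MBps minimum
    hi = min(1_200_000, 1_600 * size_gib)    # per-GiB ceiling and per-disk maximum
    if num_vms > 20:
        lo = max(lo, 100 * num_vms)          # at least 100 MBps per attached VM
    return lo <= throughput_mbps <= hi
```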

Throughput for Hyperdisk Throughput

You can provision throughput levels for Hyperdisk Throughput volumes. The provisioned throughput must follow these rules:

  • At most 600 MBps per volume.
  • At least 10 MBps per TiB of capacity, but not more than 90 MBps per TiB of capacity.

If you don't specify a throughput value, Compute Engine provisions the disk with 90 MBps per TiB of disk capacity, up to a maximum of 600 MBps.
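
These rules can be sketched as follows (hypothetical helpers, following the rules above):

```python
def default_throughput_volume_mbps(size_tib):
    """Default for a Hyperdisk Throughput volume: 90 MBps per TiB, capped at 600 MBps."""
    return min(90 * size_tib, 600)

def throughput_volume_range(size_tib):
    """Allowed provisioned throughput (MBps): 10-90 MBps per TiB, at most 600 MBps."""
    return 10 * size_tib, min(90 * size_tib, 600)
```

For example, a 2 TiB volume can be provisioned with 20 to 180 MBps, and defaults to 180 MBps.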

For Hyperdisk Throughput volumes, throughput doesn't automatically scale with the size of the volume or number of provisioned vCPUs. You must specify the throughput level you want for each Hyperdisk Throughput disk.

Throughput for Hyperdisk Extreme

For Hyperdisk Extreme volumes, throughput scales with the number of IOPS you provision at a rate of 256 KBps of throughput per I/O. However, throughput is ultimately capped by per-VM limits that depend on the number of vCPUs on the VM to which your Hyperdisk Extreme volumes are attached.

Throughput for Hyperdisk Extreme volumes is not full duplex. The maximum throughput limits listed in Hyperdisk performance limits apply to the sum total of read and write throughput.
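
The IOPS-to-throughput scaling can be sketched as follows. This is an illustrative calculation that assumes decimal units (1,000 KB = 1 MB); the result is still subject to the per-VM caps described above:

```python
def extreme_throughput_mbps(provisioned_iops):
    """Hyperdisk Extreme throughput scales at 256 KBps per provisioned I/O per
    second; convert KBps to MBps assuming 1,000 KB = 1 MB."""
    return provisioned_iops * 256 / 1_000
```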

Pricing

You are billed for the total provisioned capacity of your Hyperdisk volumes until you delete them. You are charged per GiB per month. Additionally, you are billed for the following:

  • Hyperdisk Balanced charges a monthly rate for the provisioned IOPS and provisioned throughput (in MBps) in excess of the baseline values of 3,000 IOPS and 140 MBps throughput.
  • Hyperdisk Extreme charges a monthly rate based on the provisioned IOPS.
  • Hyperdisk ML charges a monthly rate based on the provisioned throughput (in MBps). There is no additional charge for attaching multiple VMs to a single Hyperdisk ML volume.
  • Hyperdisk Throughput charges a monthly rate based on the provisioned throughput (in MBps).

Because the data for synchronously replicated disks is written to two locations, the cost of Hyperdisk Balanced High Availability storage (Preview) is twice the cost of Hyperdisk Balanced storage.

For more pricing information, see Disk pricing.

Hyperdisk and committed use discounts

Hyperdisk volumes are not eligible for:

  • Resource-based committed use discounts (CUDs)
  • Sustained use discounts (SUDs)

Hyperdisk and preemptible VM instances

Hyperdisk can be used with Spot VMs (or preemptible VMs). However, there are no discounted spot prices for Hyperdisk.

What's next?