If you need to move your Compute Engine boot disk data outside of your Compute Engine project, you can export a boot disk image to Cloud Storage as a tar.gz file. If you need to create a persistent disk image to use when you create new persistent disks on Compute Engine, read Creating a custom image.
You can back up or share a custom image by exporting the image to Cloud Storage. This method is ideal for sharing individual images with projects that do not have access to your images. Alternatively, you can share images by granting the Compute Engine image user role on the image or on the project that contains it.
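For example, the image user role can be granted on a single image with the gcloud CLI. The following is only a sketch; the image name and member shown are placeholders:

gcloud compute images add-iam-policy-binding my-image \
    --member='user:user@example.com' \
    --role='roles/compute.imageUser'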
The following diagram shows some typical workflows for the creation and reuse of a custom image.
Before you begin
- Read the images page.
- If the project that you want to export the image from has a trusted image policy defined, add projects/compute-image-import and projects/compute-image-tools to the allowed list of publishers. For one way to do this with an organization policy, see the example at the end of this section.
- To find out how to meet requirements before exporting images, see Prerequisites for importing and exporting VM images.
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
- Set a default region and zone.
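For example, default values can be set with gcloud config; the region and zone shown here are only illustrative:

gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a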
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
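If your project enforces a trusted image policy, one way to add the export tooling projects to the allowed list of publishers is to update the compute.trustedImageProjects organization policy constraint. The following is only a sketch; it assumes you have permission to manage organization policies on the project, and you should keep any publishers that are already allowed in addition to these values:

gcloud resource-manager org-policies allow compute.trustedImageProjects \
    projects/compute-image-import projects/compute-image-tools \
    --project=PROJECT_ID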
Limitations and restrictions
For projects that are protected with VPC Service Controls, use one of the following methods:
- export from the same project where the image resides
- export the image manually
Export an image to Cloud Storage
You can export your images using the Google Cloud console, the Google Cloud CLI, or REST.
Console
In the Google Cloud console, go to the Images page.
Click the name of the image that you want to export to go to the image details page. You can't export public images provided by Google. You can only export images that you previously created or imported.
From the image details page, click Export to open the Export Image page.
From the Export image page, choose the Export format of the image.
Choose the Cloud Storage location to export your image to by clicking Browse.
Choose an existing Cloud Storage location to export your image. Or, follow the directions to create a new Cloud Storage bucket, and then enter a name for the new Cloud Storage bucket.
Once you choose a Cloud Storage location, choose a filename for the exported image. You can use the default filename, or you can choose your own filename.
After choosing a Cloud Storage location and entering a filename for the image, click Select.
From the Export image page, click Export. After choosing Export, the Google Cloud console displays the Image export history, where you can view the image export process. For additional details about the image export process, click the Cloud Build ID to go to the Image export details page where you can view and download the image export log.
Go to the Storage page to access your exported image.
gcloud
The preferred way to export an image to Cloud Storage is to use the gcloud compute images export command. This command uses Daisy to chain together the multiple steps that are required to export an image. It assumes that you have already created an image, for example, with the gcloud compute images create command.
Using the Google Cloud CLI, run:
gcloud compute images export \
    --destination-uri DESTINATION_URI \
    --image IMAGE_NAME
Replace the following:
- DESTINATION_URI: the Cloud Storage URI destination for the exported image file.
- IMAGE_NAME: the name of the disk image to export.
By default, images are exported in the Compute Engine format, which is a disk.raw file that is tarred and gzipped. To export images in other formats supported by the QEMU disk image utility, use the --export-format flag. Valid formats include vmdk, vhdx, vpc, vdi, and qcow2.
Example
For example, the following command exports an image named my-image from my-project to a Cloud Storage bucket named my-bucket. By default, the image is exported as a disk.raw file and is compressed into the tar.gz file format.
gcloud compute images export \
    --destination-uri gs://my-bucket/my-image.tar.gz \
    --image my-image \
    --project my-project
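If you need one of the other supported formats instead of the default tar.gz, the same command accepts the --export-format flag described earlier. For example, reusing the names from the preceding example:

gcloud compute images export \
    --destination-uri gs://my-bucket/my-image.vmdk \
    --image my-image \
    --export-format vmdk \
    --project my-project

When the export completes, you can confirm that the object exists by listing the bucket, for example with gcloud storage ls gs://my-bucket/.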
For flags, see the gcloud compute images export reference documentation.
REST
Send a POST request to the Cloud Build API.
POST https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/builds

{
  "timeout": "7200s",
  "steps": [
    {
      "args": [
        "-timeout=7000s",
        "-source_image=SOURCE_IMAGE",
        "-client_id=api",
        "-format=IMAGE_FORMAT",
        "-destination_uri=DESTINATION_URI"
      ],
      "name": "gcr.io/compute-image-tools/gce_vm_image_export:release",
      "env": [
        "BUILD_ID=$BUILD_ID"
      ]
    }
  ],
  "tags": [
    "gce-daisy",
    "gce-daisy-image-export"
  ]
}
Replace the following:
- PROJECT_ID: the project ID for the project that contains the image that you want to export.
- SOURCE_IMAGE: the name of the image to be exported.
- IMAGE_FORMAT: the format of the exported image. Valid formats include vmdk, vhdx, vpc, vdi, and qcow2.
- DESTINATION_URI: the Cloud Storage URI location that you want to export the image file to. For example, gs://my-bucket/my-exported-image.vmdk.
For additional args values that can be provided, see the optional flags section of the VM image export GitHub page.
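As a sketch of how you might send this request from the command line, assuming the request body above is saved in a local file named request.json (the file name is illustrative) and that you authenticate with the credentials you provided to the gcloud CLI:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/builds"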
Example response
The following sample response resembles the output that is returned:
{ "name": "operations/build/myproject-12345/operation-1578608233418", "metadata": { "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata", "build": { "id": "3a2055bc-ccbd-4101-9434-d376b88b8940", "status": "QUEUED", "createTime": "2019-10-02T18:59:13.393492020Z", "steps": [ { "name": "gcr.io/compute-image-tools/gce_vm_image_export:release", "env": [ "BUILD_ID=3a2055bc-ccbd-4101-9434-d376b88b8940" ], "args": [ "-timeout=7056s", "-source_image=my-image", "-client_id=api", "-format=vmdk", "-destination_uri=gs://my-bucket/my-exported-image.vmdk" ] } ], "timeout": "7200s", "projectId": "myproject-12345", "logsBucket": "gs://123456.cloudbuild-logs.googleusercontent.com", "options": { "logging": "LEGACY" }, "logUrl": "https://console.cloud.google.com/cloud-build/builds/3a2055bc-ccbd-4101-9434-d376b88b8940?project=123456" } }
There are a couple of ways you can monitor your build:
- Run a projects.builds.get request using the returned build-id.
- Review the logs hosted at the provided logUrl.
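For example, a projects.builds.get request could look like the following, where PROJECT_ID and BUILD_ID are placeholders for your project and the returned build ID:

curl \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/builds/BUILD_ID"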
Export an image from a project using a custom Compute Engine service account
During an image export, a temporary virtual machine (VM) instance is created in your project. The image export tool on this temporary VM must be authenticated.
A service account is an identity that is attached to a VM. Service account access tokens can be accessed through the instance metadata server and used to authenticate the image export tool on the VM.
By default, the export process uses the project's default Compute Engine Service Agent. However, if the default Compute Engine service account is disabled in your project or if you want to use a custom Compute Engine service account, then you need to create a service account and specify it for the export process.
You can export your images using either the Google Cloud CLI or REST.
gcloud
Create a service account and assign the minimum roles. For more information about creating service accounts, see Creating and managing service accounts.
At minimum, the specified Compute Engine service account needs to have the following roles assigned:
- roles/compute.storageAdmin
- roles/storage.objectAdmin
For more information, see Grant required roles to the Compute Engine service account.
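A minimal sketch of creating such a service account and granting the two roles with the gcloud CLI; the account name image-export-sa is only an example:

gcloud iam service-accounts create image-export-sa \
    --project=PROJECT_ID

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:image-export-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:image-export-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"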
Use the gcloud compute images export command to export the image.

gcloud compute images export \
    --destination-uri DESTINATION_URI \
    --image IMAGE_NAME \
    --compute-service-account SERVICE_ACCOUNT_EMAIL
Replace the following:
- DESTINATION_URI: the Cloud Storage URI destination for the exported image file.
- IMAGE_NAME: the name of the disk image to export.
- SERVICE_ACCOUNT_EMAIL: the email address associated with the Compute Engine service account created in the previous step.
Example
For example, the following command exports an image named my-image from my-project to a Cloud Storage bucket named my-bucket with a service account that has the email image-export-service-account@proj-12345.iam.gserviceaccount.com. By default, the image is exported as a disk.raw file and is compressed into the tar.gz file format.
gcloud compute images export \
    --destination-uri gs://my-bucket/my-image.tar.gz \
    --image my-image \
    --project my-project \
    --compute-service-account image-export-service-account@proj-12345.iam.gserviceaccount.com
For flags, see the gcloud compute images export reference documentation.
REST
Create a service account and assign the minimum roles. For more information about creating service accounts, see Creating and managing service accounts.
At minimum, the specified Compute Engine service account needs to have the following roles assigned:
- roles/compute.storageAdmin
- roles/storage.objectAdmin
For more information, see Grant required roles to the Compute Engine service account.
In the API, create a POST request to the Cloud Build API.

POST https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/builds

{
  "timeout": "7200s",
  "steps": [
    {
      "args": [
        "-timeout=7000s",
        "-source_image=SOURCE_IMAGE",
        "-client_id=api",
        "-format=IMAGE_FORMAT",
        "-destination_uri=DESTINATION_URI",
        "-compute_service_account=SERVICE_ACCOUNT_EMAIL"
      ],
      "name": "gcr.io/compute-image-tools/gce_vm_image_export:release",
      "env": [
        "BUILD_ID=$BUILD_ID"
      ]
    }
  ],
  "tags": [
    "gce-daisy",
    "gce-daisy-image-export"
  ]
}
Replace the following:
- PROJECT_ID: the project ID for the project that contains the image that you want to export.
- SOURCE_IMAGE: the name of the image to be exported.
- IMAGE_FORMAT: the format of the exported image. Valid formats include vmdk, vhdx, vpc, vdi, and qcow2.
- DESTINATION_URI: the Cloud Storage URI location that you want to export the image file to. For example, gs://my-bucket/my-exported-image.vmdk.
- SERVICE_ACCOUNT_EMAIL: the email address associated with the Compute Engine service account created in the previous step.
For additional args values that can be provided, see the optional flags section of the VM image export GitHub page.
Export an image using Shared VPC
Before you export an image that uses a Shared VPC, you must add the compute.networkUser role to the Cloud Build service account. For more information, see Grant required roles to the Cloud Build service account.
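As a sketch, assuming the Cloud Build service account uses the PROJECT_NUMBER@cloudbuild.gserviceaccount.com format and the subnet is defined in a Shared VPC host project, the role could be granted like this:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/compute.networkUser"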
You can export your image using either the Google Cloud CLI or REST.
gcloud
Use the gcloud compute images export command to export your image.
gcloud compute images export \
    --image IMAGE_NAME \
    --destination-uri DESTINATION_URI \
    --project PROJECT_ID \
    --network NETWORK \
    --subnet SUBNET \
    --zone ZONE
Replace the following:
- IMAGE_NAME: the name of the image to export.
- DESTINATION_URI: the Cloud Storage URI location that you want to export the image file to.
- PROJECT_ID: the ID of the project where the image is located.
- NETWORK: the full path to a Shared VPC network. For example, projects/HOST_PROJECT_ID/global/networks/VPC_NETWORK_NAME.
- SUBNET: Optional. The full path to a Shared VPC subnetwork. For example, projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME. Whether you specify this field depends on the VPC network mode:
  - If the VPC network uses legacy mode, don't specify a subnet.
  - If the VPC network uses auto mode, specifying the subnet is optional.
  - If the VPC network uses custom mode, you must specify the subnet.
- ZONE: Optional. The zone to use for the export. This zone must match the region of the subnet. For example, if the SUBNET is in the us-west1 region, the export zone must be one of the following: us-west1-a, us-west1-b, or us-west1-c. If you specified a SUBNET, you must also specify the zone.
For example, the following command exports an image named example-image from my-project to a Cloud Storage bucket named my-bucket. In this example, the Virtual Private Cloud network (my-shared-vpc) uses a custom subnet (my-custom-subnet). By default, the image is exported as a disk.raw file and is compressed into the tar.gz file format.
Sample command
gcloud compute images export \
    --image example-image \
    --destination-uri gs://my-bucket/my-image.tar.gz \
    --project my-project \
    --network projects/my-vpc-project/global/networks/my-shared-vpc \
    --subnet projects/my-vpc-project/regions/us-west1/subnetworks/my-custom-subnet \
    --zone us-west1-c
REST
Add the image to Cloud Storage.
In the API, create a POST request to the Cloud Build API.

POST https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/builds

{
  "timeout": "7200s",
  "steps": [
    {
      "args": [
        "-timeout=7000s",
        "-source_image=SOURCE_IMAGE",
        "-client_id=api",
        "-format=IMAGE_FORMAT",
        "-destination_uri=DESTINATION_URI",
        "-network=NETWORK",
        "-subnet=SUBNET",
        "-zone=ZONE"
      ],
      "name": "gcr.io/compute-image-tools/gce_vm_image_export:release",
      "env": [
        "BUILD_ID=$BUILD_ID"
      ]
    }
  ],
  "tags": [
    "gce-daisy",
    "gce-daisy-image-export"
  ]
}
Replace the following:
- PROJECT_ID: the project ID for the project that contains the image that you want to export.
- SOURCE_IMAGE: the name of the image to be exported.
- IMAGE_FORMAT: the format of the exported image. Valid formats include vmdk, vhdx, vpc, vdi, and qcow2.
- DESTINATION_URI: the Cloud Storage URI location that you want to export the image file to. For example, gs://my-bucket/my-exported-image.vmdk.
- NETWORK: the full path to a Shared VPC network. For example, projects/HOST_PROJECT_ID/global/networks/VPC_NETWORK_NAME.
- SUBNET: the full path to a Shared VPC subnetwork. For example, projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME. Whether you specify this field depends on the VPC network mode:
  - If the VPC network uses legacy mode, do not specify a subnet.
  - If the VPC network uses auto mode, specifying the subnet is optional.
  - If the VPC network uses custom mode, you must specify the subnet.
- ZONE: the zone to use for the export. This zone must match the region of the subnet. For example, if the SUBNET is in the us-west1 region, the export zone must be one of the following: us-west1-a, us-west1-b, or us-west1-c. In most cases, specifying a zone is optional. However, if SUBNET is specified, the zone must also be specified.
For additional args values that can be provided, see the optional flags section of the VM image export GitHub page.
Create and export an image manually
If the gcloud compute images create and gcloud compute images export commands do not meet your requirements, you can create and export an image manually from a Compute Engine instance. This process has discrete steps to first create an image and then export it. In the following example, note that the created disk is called image-disk.
To create and export an image:
Optional: Stop the instance that the disk is attached to before you create the snapshot. Stopping the instance ensures the integrity of the disk contents in the snapshot.
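If you choose to stop the instance first, you can do so with the gcloud CLI; the instance and zone names here are placeholders:

gcloud compute instances stop VM_NAME --zone=ZONE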
Create a snapshot of the disk. Name the snapshot image-snapshot.

gcloud compute disks snapshot DISK_NAME \
    --snapshot-names image-snapshot
Replace DISK_NAME with the name of the disk that you want to use to create the snapshot. You can find the name of the disk by listing disks.

Use the image-snapshot snapshot to create a new disk named image-disk by running the following command:

gcloud compute disks create image-disk \
    --source-snapshot image-snapshot
Create a temporary disk named temporary-disk to hold your tar file, and specify the SIZE of the disk to be at least 50% larger than the image disk. You can detach and delete the disk afterwards.

gcloud compute disks create temporary-disk \
    --size SIZE
where SIZE is the size, in gigabytes or terabytes, of the temporary disk. For example, specify 100GB to create a 100-gigabyte disk.

Create an instance and enable storage-rw scope on the instance. Also, attach the image-disk and the temporary-disk to the instance as secondary disks with specific device-name attributes. Replace VM_NAME with the name of the instance to create.

gcloud compute instances create VM_NAME \
    --scopes storage-rw \
    --disk name=image-disk,device-name=image-disk \
    --disk name=temporary-disk,device-name=temporary-disk
Note that you're passing in service account scopes so that you can upload your file to Cloud Storage in later steps.
Review the details about starting a new instance if necessary.
Connect to your instance. Replace VM_NAME with the name of the instance to connect to.

gcloud compute ssh VM_NAME
Format and mount the temporary disk. Formatting the disk deletes the contents of the temporary disk.
sudo mkdir /mnt/tmp
sudo mkfs.ext4 -F /dev/disk/by-id/google-temporary-disk
sudo mount -o discard,defaults /dev/disk/by-id/google-temporary-disk /mnt/tmp
Optional: Mount the image disk and make additional changes before you create the tar file. For example, you might want to delete any existing files from the /home directory if you do not want them to be part of your image. Mount the disk partitions that you need to modify, modify the files on the disk that you need to change, and then unmount the disk when you are done.

Create a directory where you can mount your disk or partition.
sudo mkdir /mnt/image-disk
Use the ls command to determine which disk or disk partition you need to mount.

ls /dev/disk/by-id/
The command prints a list of disk IDs and partitions. For example, the following disk has a partition table with one partition. The google-image-disk ID points to the full disk from which you want to create an image. The google-image-disk-part1 ID points to the first partition on this disk. Mount the partition if you need to make changes to the disk, then create the image from the full disk.

google-image-disk
google-image-disk-part1
Mount the disk or the partition. If your disk has a partition table, mount the individual partitions for your disk. For example, mount google-image-disk-part1.

sudo mount /dev/disk/by-id/google-image-disk-part1 /mnt/image-disk
Alternatively, if your disk is raw formatted with no partition table, mount the full google-image-disk disk.

sudo mount /dev/disk/by-id/google-image-disk /mnt/image-disk
Modify the files in the /mnt/image-disk directory to configure the files on the disk. As an example, you might remove the /mnt/image-disk/home/[USER]/.ssh/authorized_keys file to protect your SSH keys from being shared.

After you have finished modifying files, unmount the disk.
sudo umount /mnt/image-disk/
Create a tar file of your image.

When you finish customizing the files on the image disk, create a raw disk file on your temporary disk. The name of the raw disk image must be 'disk.raw':

sudo dd if=/dev/disk/by-id/google-image-disk of=/mnt/tmp/disk.raw bs=4096
Then create the tar.gz file:

cd /mnt/tmp
sudo tar czvf myimage.tar.gz disk.raw
This command creates an image of the instance in the following location:
/mnt/tmp/myimage.tar.gz
Upload the image into Cloud Storage.

To upload the tar file to Cloud Storage, use the Google Cloud CLI that comes preinstalled on your instance.

Create a bucket using the gcloud CLI. Make sure to review the bucket and object naming guidelines before you create your bucket. Then, create your bucket using the following command. Replace BUCKET_NAME with the name of the bucket to create.

me@example-instance:~$ gcloud storage buckets create gs://BUCKET_NAME
Copy your file to your new bucket. Replace BUCKET_NAME with the name of the bucket to copy the file to.

me@example-instance:~$ gcloud storage cp /mnt/tmp/myimage.tar.gz gs://BUCKET_NAME
You have exported your file into Cloud Storage. You can now share the image with other people, or use the tar file to add a new image to a Google Cloud console project.
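For example, the exported file can be used to create a new custom image in another project; a sketch, where the image name is illustrative and BUCKET_NAME is the bucket you uploaded to:

gcloud compute images create my-imported-image \
    --source-uri gs://BUCKET_NAME/myimage.tar.gz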
What's next
- Share images using the image user role.
- Learn about the import methods available for Compute Engine.