
Google debuts new Cloud Storage archive class for long-term data retention

A rendering shows a close-up of the Google Cloud Transfer Appliance's faceplate.
Image Credit: Google

What good’s the cloud without storage? Enterprises need places to stow their data before it’s analyzed, ingested, exported, or otherwise transformed, and fortunately, Google has their back. Today at its annual Cloud Next conference in San Francisco, the company announced new storage tools, pricing, and products for customers of all sizes.

First on the agenda was a new archive class designed for long-term data retention. It eliminates the need for a separate retrieval process, Google says, while providing “immediate,” low-latency access to content. Both access and management are performed via the familiar set of Google Cloud Storage APIs, through which objects can be tiered down to save on costs, and data is stored geo-redundantly across multiple regions.
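
For developers, that tiering would look roughly like the sketch below, which uses the Python Cloud Storage client library. The bucket and object names are placeholders, and the "ARCHIVE" storage-class string assumes the launched product exposes that identifier through the existing API.

```python
# Minimal sketch, assuming the google-cloud-storage Python client and a
# pre-existing bucket; resource names here are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-archive-demo")

# Tier an existing object down to the archive class through the same
# object API used for other storage classes (this rewrites the object).
blob = bucket.blob("logs/2019/q1.tar.gz")
blob.update_storage_class("ARCHIVE")

# Or tier automatically: a lifecycle rule that archives objects after a year.
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()

# No separate retrieval step: reads are immediate.
data = blob.download_as_bytes()
```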

Pricing will start at $0.0012 per GB per month ($1.23 per TB per month) when the archive class launches later this year. That’s significantly cheaper than Microsoft’s Azure Archive Blob Storage, which costs $0.002 per GB per month, and cheaper still than Amazon S3 Glacier, which is priced at $0.004 per GB per month.
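
The arithmetic behind those figures is easy to sanity-check. The snippet below uses the per-GB rates quoted above and assumes 1 TB = 1,024 GB, which is how $0.0012 per GB works out to roughly $1.23 per TB; real bills would also include retrieval, operation, and early-deletion fees not modeled here.

```python
# Back-of-the-envelope comparison using the per-GB monthly rates cited above.
RATES_PER_GB_MONTH = {
    "GCS archive class": 0.0012,
    "Azure Archive Blob Storage": 0.002,
    "Amazon S3 Glacier": 0.004,
}

def monthly_cost(tb: float, rate_per_gb: float) -> float:
    """Monthly storage cost for `tb` terabytes at a per-GB rate (1 TB = 1,024 GB)."""
    return tb * 1024 * rate_per_gb

for name, rate in RATES_PER_GB_MONTH.items():
    print(f"{name}: ${monthly_cost(100, rate):,.2f}/month for 100 TB")
# GCS archive class: $122.88/month for 100 TB
# Azure Archive Blob Storage: $204.80/month for 100 TB
# Amazon S3 Glacier: $409.60/month for 100 TB
```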

For customers with more conventional storage needs, there’s Cloud Filestore, Google’s managed high-performance file storage service. It’s now generally available, and the company says premium instances now deliver read performance of up to 1.2 GBps throughput and 60,000 input/output operations per second (IOPS).
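
Because Filestore shares are mounted over NFS, probing one requires nothing vendor-specific. The crude sketch below times a 1 GiB sequential write against an assumed mount point at /mnt/filestore (a placeholder path); a real benchmark would use a dedicated tool such as fio.

```python
# Rough sequential-write probe against an NFS-mounted Filestore share.
# /mnt/filestore is a placeholder; adjust to your actual mount point.
import os
import time

PATH = "/mnt/filestore/throughput_probe.bin"
CHUNK = b"\0" * (4 * 1024 * 1024)  # write in 4 MiB chunks
CHUNKS = 256                       # 256 * 4 MiB = 1 GiB total

start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(CHUNKS):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # make sure the data actually reached the server
elapsed = time.monotonic() - start

print(f"~{CHUNKS * 4 / elapsed:.0f} MiBps sequential write")
os.remove(PATH)
```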

Customers paying for Regional Persistent Disk, which delivers storage synchronously replicated across two zones in the same region, will be pleased to learn that Google has increased the standard throughput limit to 240 MBps per instance for both reads and writes, about 33% faster than before. The number of Persistent Disks that can be attached to a virtual machine, meanwhile, has increased to 128. (Google says all machine types with at least one vCPU will be allowed to attach up to 128 Persistent Disks, while shared-core and burst machine types will be limited to 16.)
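
As a sketch of what attaching an additional disk looks like programmatically, here is the google-cloud-compute Python client (which postdates this announcement; the gcloud CLI was the usual route at the time). The project, zone, instance, and disk names are all placeholders.

```python
# Sketch: attach an extra Persistent Disk to a running VM with the
# google-cloud-compute client. All resource names are placeholders.
from google.cloud import compute_v1

client = compute_v1.InstancesClient()

disk = compute_v1.AttachedDisk(
    source="projects/my-project/zones/us-central1-a/disks/data-042",
    device_name="data-042",
)

op = client.attach_disk(
    project="my-project",
    zone="us-central1-a",
    instance="my-vm",
    attached_disk_resource=disk,
)
op.result()  # block until the attach operation completes
```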

Also new is Bucket Policy Only for Google Cloud Storage (in beta), which lets admins enforce Cloud Identity and Access Management (IAM) policies at the storage bucket level, so object-level access control lists no longer apply. (A new organization policy setting ensures that newly created buckets use Bucket Policy Only.) And Google has made custom Cloud IAM roles and permissions in Cloud Storage Transfer Service generally available, allowing admins to grant individual users or roles the Cloud IAM permissions to create, read, update, and delete transfer jobs.
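
In the Python client library, flipping a bucket to Bucket Policy Only is a one-property change (newer client versions expose it under the later name "uniform bucket-level access"); the bucket name and member below are placeholders.

```python
# Sketch: enforce IAM at the bucket level, then grant a bucket-scoped role.
# Bucket name and member are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-secure-bucket")

# Enable Bucket Policy Only (a.k.a. uniform bucket-level access), so
# object ACLs no longer apply and Cloud IAM alone governs access.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()

# Grant a user read access to objects via the bucket's IAM policy.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {"user:analyst@example.com"}}
)
bucket.set_iam_policy(policy)
```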

Last but not least, this week marks the beta launch of V4 signatures, a request-signing mechanism compatible with other providers’ implementations that enables customers to access multiple object stores using the same application code.
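
Generating a V4-signed URL with the Python client looks like the sketch below; the bucket and object names are placeholders.

```python
# Sketch: create a time-limited V4-signed download URL for an object.
import datetime

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-archive-demo").blob("logs/2019/q1.tar.gz")

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)  # shareable link, valid for 15 minutes
```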

“Storage provides the foundation for enterprise infrastructures, and that’s particularly true with cloud, where readily available data storage that’s cost-efficient is a must,” said Google Cloud directors of product management Dominic Preuss and Dave Nettleton. “At Google Cloud, we think you should have a range of straightforward options to store your data and reliably access it when and how you need it, with the performance you need.”