What is Object Storage?

Object Storage is an alternative way to store, organize, and access units of data. It strikes a reasonable balance between performance and functionality on one hand, and simplicity and scalability on the other. Object Storage provides a minimal set of operations: store, retrieve, copy, and delete objects. These basic operations are performed via REST APIs that allow programmers to work with the objects. The HTTP interface of Object Storage systems gives users fast and easy access to their data from anywhere in the world.
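The four basic operations above map naturally onto HTTP verbs. The sketch below illustrates this mapping; the `/v1/{container}/{key}` path style and the COPY verb follow common Swift-style conventions, and the names are hypothetical, not the exact VPSA endpoints:

```python
def build_request(operation, container, key):
    """Map a basic object operation to an (HTTP method, path) pair."""
    ops = {
        "store":    ("PUT",    f"/v1/{container}/{key}"),
        "retrieve": ("GET",    f"/v1/{container}/{key}"),
        "copy":     ("COPY",   f"/v1/{container}/{key}"),
        "delete":   ("DELETE", f"/v1/{container}/{key}"),
    }
    return ops[operation]

# Example: fetching an object is a plain HTTP GET on its container/key path.
method, path = build_request("retrieve", "photos", "beach.jpg")
```

A real client would add authentication headers and send the request over HTTPS to the Object Storage endpoint.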

Object Storage vs. Block and File Storage

Object Storage is much more scalable than file storage because it is vastly simpler. Objects are not organized in hierarchical folders, but in a flat organization of containers or buckets. Each object is assigned a unique ID or key, and objects are retrieved by their keys regardless of where they are physically stored. Access is via APIs at the application level, rather than via the OS at the file system level. As a result, Object Storage requires less metadata and less management overhead than file systems, which means it can be scaled out with almost no limits. Object Storage is also easier to use than block storage: it overcomes the limitation of fixed-size LUNs, and removes file system limitations such as folder size or path name length. Unlike block or file storage, Object Storage does not use RAID for data protection; it simply keeps a number of copies of each object.
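The flat namespace can be pictured as a simple key-value lookup. In this minimal sketch (all keys and payloads are made up), keys may look like file paths, but no folder hierarchy actually exists:

```python
# A flat bucket: every object is addressed by its full key, nothing more.
bucket = {
    "reports/2023/q1.pdf": b"q1 data",
    "reports/2023/q2.pdf": b"q2 data",
    "logo.png": b"png data",
}

def get_object(bucket, key):
    # Retrieval is a single flat key lookup -- no directory traversal.
    return bucket.get(key)

# What looks like "listing a folder" is just a prefix filter on the keys:
keys_2023 = sorted(k for k in bucket if k.startswith("reports/2023/"))
```

Because lookup never walks a directory tree, the namespace can grow without the metadata overhead a file system would incur.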

VPSA Object Storage (ZIOS) is Zadara’s object storage service. It is provided on Zadara clouds, side by side with the VPSA that provides block and file storage services.

VPSA Object Storage Components

Provisioning Portal

The Zadara Provisioning Portal is your gateway to the Zadara Storage ecosystem. Through it you can create, view, and modify your VPSA configurations on any of the multiple Clouds that Zadara Storage offers.

Virtual Controller

A Virtual Controller (VC) is a Virtual Machine with dedicated CPUs and RAM, which runs the VPSA Object Storage IO stack and control stack. The number of VCs in a configuration is determined by the number of drives assigned, starting from a minimal configuration of 2 VCs and growing to hundreds. Each VC supports up to 12 drives. VCs are automatically provisioned as needed.
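Provisioning is automatic, but the stated limits (a minimum of 2 VCs, up to 12 drives per VC) imply a rough sizing rule, sketched below. This is illustrative only; the actual orchestrator may weigh additional factors:

```python
import math

def required_vcs(drive_count, drives_per_vc=12, minimum=2):
    """Rough estimate of the VC count needed for a given number of drives."""
    return max(minimum, math.ceil(drive_count / drives_per_vc))
```

For example, a 25-drive configuration would need at least 3 VCs, while any configuration of 24 drives or fewer still runs the 2-VC minimum.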

Two services run in each VC: the Proxy Layer and the Storage Layer. The Proxy Layer is the interface to the users or applications using the data objects. The Storage Layer is responsible for storing the objects on the drives and updating the metadata in the databases.

The VCs also provide a web management interface and REST API endpoints for management and control, as well as authentication and load balancing services.

Dedicated Drives

The Zadara Storage Cloud Orchestrator assigns dedicated drives to each VPSA. The drives are provisioned from different Storage Nodes (SNs) for maximum redundancy and performance. Each drive is exposed as a separate iSCSI target from its SN and is LUN-masked only to the VPSA's VCs. Your QoS is guaranteed, because neighbors with provisioned drives adjacent to yours cannot access your drives, impact your performance, or compromise your privacy and security.

VPSA Object Storage Administration

VPSA Object Storage Hierarchy

The Object Storage system organizes data in a hierarchy, as follows:

  • Account (also referred to as Tenant). Represents the top level of the hierarchy, and is usually created by the service provider. The account admin owns all resources in that account. The account defines a namespace for containers, so containers in two different accounts may have the same name. Accounts are also used to control users' access to objects and containers.
  • Container (also referred to as Bucket). Defines a namespace for objects, so objects in two different containers may have the same name. Any number of containers can be created within an account. In addition to containing objects, a container can be used to control access to objects, and each container can be assigned the storage policy it uses.
  • Object. Stores data content, such as documents, images, and so on.
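The hierarchy can be modeled as nested namespaces. In this sketch (account, container, and object names are hypothetical), both accounts hold a container named "backups" without conflict, because container names are scoped per account:

```python
# Account > Container > Object, as nested dictionaries.
store = {
    "acme":   {"backups": {"db.dump": b"..."}},
    "globex": {"backups": {"site.tar": b"..."}},  # same container name, different account
}

def get(store, account, container, key):
    # Resolving an object always walks account -> container -> key.
    return store[account][container][key]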

VPSA Object Storage Users and Roles

There are 3 types of roles assigned to VPSA Object Storage (ZIOS) users:

  • ZIOS Admin is responsible for the administration of the VPSA Object Storage. The user (registered in the Zadara Provisioning Portal) who orders the VPSA Object Storage becomes its administrator. By default, the VPSA Object Storage is created with one account (the ZIOS administrator account), and the ZIOS Administrator is a member of this account. ZIOS Administrators can add other users with the same role. The ZIOS Administrator is a super-user with privileges to create accounts and users of any role, define policies, add/remove drives and assign drives to policies, and perform container and object operations across accounts. The ZIOS Administrator is also responsible for the VPSA Object Storage settings (such as IP addresses and SSL certificates), and has access to metering and usage information.
  • Account Admin can create an account (using the Self Account Creation Wizard) and manage their own account. They can perform any user management and container/object operations within that account.
  • Member can perform object storage operations according to the permissions granted by the account administrator, within the limits of that account. These operations include creating/deleting/listing containers and creating/deleting/listing objects.
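The key difference between the roles is scope: only the ZIOS Admin may operate across accounts. A minimal sketch of such a check (role names and the scope table are illustrative, not the actual VPSA authorization model):

```python
ROLE_SCOPE = {
    "zios_admin":    "all_accounts",  # may operate across accounts
    "account_admin": "own_account",
    "member":        "own_account",   # further limited by granted permissions
}

def can_operate(role, user_account, target_account):
    """Return whether a user's role permits acting on the target account."""
    return ROLE_SCOPE[role] == "all_accounts" or user_account == target_account
```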

User authentication is done against an internal VPSA Object Storage Users database.


VPSA Object Storage Architecture

The VPSA Object Storage (ZIOS) architecture is a scale-out cluster of Virtual Controllers that together provide the service. The number of VCs is determined automatically, as needed to serve the capacity and performance requirements of the system.

VPSA Object Storage Structure


This figure shows a high-level logical view of VPSA Object Storage (ZIOS). It is a virtual object store cluster with two distinct layers:

  • “Storage Layer” that manages individual disks
  • “Proxy - REST API Layer” that provides REST API front-end of the Object Storage.

The typical VC runs both functions and is referred to as a “Proxy+Storage” VC. It is possible to add VCs with the Proxy layer only; these are referred to as “Proxy” VCs.

Each VPSA Object Storage is typically composed of several Proxy+Storage VCs and, optionally, one or more Proxy VCs, with each VC having dedicated CPU/RAM/networking. Proxy+Storage VCs consume raw physical drives (SAS/SATA/SSD) exposed from Storage Nodes (SNs). Both Proxy+Storage and Proxy VCs run the Object Storage stack, which provides Amazon S3 and Swift REST API interfaces.

Capacity and performance can be scaled up/down independently, by adding/removing disks and Proxy VCs respectively. A VPSA Object Storage typically has a set of load balancers to distribute REST API traffic across the Proxy REST API Layers. Each VPSA Object Storage is natively multi-tenant: multiple accounts can be created within it, each with multiple users who can work with the object interface (GET/PUT objects).
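A load balancer spreading REST traffic across the Proxy layer can be as simple as round-robin. The sketch below illustrates the idea only; VC names are made up, and the actual balancing policy in VPSA Object Storage may differ:

```python
import itertools

# Three hypothetical Proxy VCs behind the load balancer.
proxies = ["proxy-vc-0", "proxy-vc-1", "proxy-vc-2"]
rr = itertools.cycle(proxies)            # round-robin over the Proxy layer

# Which VC serves each of the next six incoming requests.
assigned = [next(rr) for _ in range(6)]
```

Adding a Proxy VC to the list immediately increases the pool serving REST API requests, which is how performance scales independently of capacity.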

A single Zadara Storage Cloud can host several virtual object stores, which makes it truly disruptive and unique: each VPSA Object Storage has its own fully provisioned CPU/RAM/networking/disk resources and runs the object stack in isolated Virtual Machines (i.e. no resources are shared across VPSAs), thereby providing complete performance and fault isolation.

Virtual Controller

VPSA Object Storage Virtual Controller (VC) provides multi-tenant, protected object storage.

Virtual Controller Responsibilities:

  • Query the Cloud Controller and Storage Nodes for resource assignments and changes.
  • Provide data protection for objects: 2-way protection, 3-way protection and Erasure Coding protection, with objects distributed across the disks of multiple SNs.
  • Provide an authentication/authorization framework through which individual accounts/users are managed, each able to work with the objects within their own account.
  • Provide Amazon S3 and Swift APIs on the object front-end, with support for internal and external HTTPS termination.
  • Provide the capability to scale capacity up/down by adding/removing drives, with corresponding automatic addition/removal of Proxy+Storage VCs.
  • Provide the capability to scale REST API performance by adding/removing Proxy VCs.
  • Automatically reconfigure/redistribute object data across the available disks on addition/removal of disks, and on failure/recovery.
  • Provide a management GUI and REST API to manipulate the system entities and to work with the object store.
  • Provide metering visibility into object request flows and capacity utilization trends.
  • Provide billing based on capacity/throughput usage for each of the tenants.
  • Provide an internal load balancing service.
  • Provide an HA architecture for VC failure resiliency.

The Ring

A ring represents a mapping between the names of entities stored on disk and their physical locations. There are separate rings for accounts and containers, and one object ring per storage policy. When any component needs to perform an operation on an object, container, or account, it interacts with the appropriate ring to determine the entity's location in the cluster.
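The name-to-location mapping can be sketched with consistent hashing, in the spirit of the Swift-style ring described above. This is a simplification under stated assumptions: a real ring uses many more partitions and a placement table that accounts for weights, zones, and replica counts, whereas the modulo step here merely stands in for that table:

```python
import hashlib

PART_POWER = 4                 # 2**4 = 16 partitions (real rings use far more)
devices = ["disk-0", "disk-1", "disk-2"]   # hypothetical device names

def partition(name):
    # The top PART_POWER bits of the md5 of the entity name select a partition.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h >> (128 - PART_POWER)

def device_for(name):
    # A real ring maps each partition to its replica devices via a lookup
    # table; a simple modulo stands in for that table here.
    return devices[partition(name) % len(devices)]
```

Because the mapping is deterministic, any component that hashes the same name finds the same location without consulting a central directory.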

The object rings are stored in each policy. The account and container rings are stored in a dedicated policy named the Metadata Policy.

One of the Virtual Controllers (called the Ring Master) runs the rings, in addition to its other responsibilities. If the Ring Master fails, another VC (called the Ring Slave) takes its place.

VPSA Object Storage Fault Domains

To ensure that the Object Storage survives the loss of a complete Storage Node, data is distributed between Fault Domains. “Object Storage Fault Domains” are manually populated for the cloud's Storage Nodes by the cloud admin.

Object Storage VCs are created in “VC-Sets” according to the desired policy protection type (2-way/3-way/Erasure Coding protection). Each VC in a Set is created in a different Fault Domain.

Drives are added to the Object Storage in sets as well, and are allocated only to VCs within the same Fault Domain.
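The VC-Set rule above amounts to picking each replica's VC from a different Fault Domain. A minimal sketch of that placement (domain and VC names are hypothetical):

```python
# Hypothetical Fault Domains, each holding the VCs created within it.
fault_domains = {
    "fd-1": ["vc-a"],
    "fd-2": ["vc-b"],
    "fd-3": ["vc-c"],
}

def place_replicas(copies):
    """Pick one VC from each of `copies` distinct Fault Domains."""
    if copies > len(fault_domains):
        raise ValueError("not enough Fault Domains for this protection level")
    return [vcs[0] for vcs in list(fault_domains.values())[:copies]]
```

With 2-way protection this yields two VCs in two different domains, so losing an entire Storage Node (and its Fault Domain) still leaves a surviving copy.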