Configuring Storage Pools

Understanding Storage Pools

Storage Pools are virtual entities that manage storage provisioning. A Pool aggregates the capacity of one or more RAID Groups into a single construct with a set of QoS attributes.

Volumes are thinly provisioned, allocating capacity from the Pool only when needed. The Pool has an underlying block virtualization layer which maps virtual address space to physically allocated Pool space and manages sharing of Pool physical chunks between Volumes, Snapshots and Clones.

Snapshots and Clones consume zero capacity when they are created, because they share the same data chunks as the originating Volume. Whenever you modify data in the Volume, or in one of the Clones, the affected data chunk is copied on write (COW): the new data is written to a new pool region, without affecting the data set of any other object that shares the original chunk.
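As an illustration of this sharing model, here is a minimal sketch in Python. It is an illustrative data structure, not the VPSA's actual internals: a clone copies only the chunk mapping, and a write redirects the writer's mapping to a freshly allocated chunk.

```python
# Illustrative model of chunk sharing and copy-on-write (NOT the VPSA's
# actual internals; names and structures are assumptions for illustration).

class Pool:
    def __init__(self):
        self.chunks = {}      # chunk_id -> data
        self.next_id = 0

    def alloc(self, data):
        cid = self.next_id
        self.next_id += 1
        self.chunks[cid] = data   # allocate a new physical chunk
        return cid

class Volume:
    def __init__(self, pool, chunk_map=None):
        self.pool = pool
        self.chunk_map = dict(chunk_map or {})   # logical offset -> chunk_id

    def clone(self):
        # Zero-capacity clone: only the mapping is copied; data chunks
        # remain shared with the source Volume.
        return Volume(self.pool, self.chunk_map)

    def write(self, offset, data):
        # Copy-on-write: the new data goes to a freshly allocated chunk and
        # only this Volume's mapping is redirected, so sharers are unaffected.
        self.chunk_map[offset] = self.pool.alloc(data)

    def read(self, offset):
        return self.pool.chunks[self.chunk_map[offset]]

pool = Pool()
vol = Volume(pool)
vol.write(0, b"original")
cl = vol.clone()               # consumes no data capacity
vol.write(0, b"modified")      # COW happens here
print(cl.read(0), vol.read(0)) # b'original' b'modified'
```

The clone keeps reading the original chunk even after the source Volume is rewritten, which is why snapshots and clones start at zero cost and grow only as shared chunks diverge.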

The Pool’s attributes define the way Volumes, Snapshots and Clones are provisioned.

imageAFA Tiers

Note

From version 20.12, VPSA Flash Array supports a mixture of media types within one storage pool, as tiers. Tiering optimizes cost by automating data placement: the system tracks segments, keeping high-frequency activity on SSD storage and low-frequency activity on HDD.

The high tier (also known as tier 0) is the more performant tier of the VPSA, and comprises in-array SSD/NVMe drives.

The low tier (also known as tier 1) is the capacity-oriented tier of the VPSA. It can be implemented via in-array SATA/NLSAS drives, or by connectivity to a remote object storage container.

Supported configurations:

  • Tier 0 is SSD and Tier 1 (low tier) is HDD.

  • Tier 0 is SSD and Tier 1 (low tier) is Remote Object Storage.

Inline data reduction is supported irrespective of the actual data location. Data storage placement in the VPSA pool is aligned dynamically, based on an internal heat index. Scanning, promotion and demotion of the storage location between tiers is embedded in the VPSA garbage collection cycle.

The VPSA tier manager keeps track of heat scores for the hottest LSA chunks in each tiered pool.

The heat score is calculated according to:

  • Frequency of chunk read-write operations

  • Chunk deduplication references

The VPSA attempts to stabilize SSD utilization at a steady state around 80%.

SSD utilization determines promotion and demotion behavior:

  • Below steady state – All data is retained in SSD and no demotions take place.

  • At the stabilization target (around steady state) – Placement is decided per chunk; both promotions and demotions take place.

  • Above steady state – Demotions are increased and promotions are blocked.

Chunk promotion: Chunks can be promoted to a higher tier:

  • By the low tier defragger

  • On host reads

  • By the tier manager, based on periodic assessment of the chunks with the highest heat scores

Chunk tier demotion: Chunks can be demoted to a lower tier:

  • By the SSD defragger

  • By the tier manager, when SSD utilization is at the steady state and higher
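Putting the rules above together, a hedged sketch of the placement decision. STEADY_STATE is the ~80% target from the text; the tolerance band is an assumption, and the real VPSA decision logic (heat scores, garbage-collection integration) is internal.

```python
# Hedged sketch of the tiering placement policy described above. STEADY_STATE
# is the ~80% target from the text; BAND is an assumed tolerance, and the
# actual VPSA decision logic is internal.

STEADY_STATE = 0.80
BAND = 0.02   # assumed tolerance around the stabilization target

def tier_action(ssd_utilization, chunk_is_hot):
    """Return 'promote', 'demote' or 'keep' for a chunk."""
    if ssd_utilization < STEADY_STATE - BAND:
        # Below steady state: all data is retained in SSD, no demotions.
        return "keep"
    if ssd_utilization > STEADY_STATE + BAND:
        # Above steady state: promotions are blocked, cold chunks demoted.
        return "keep" if chunk_is_hot else "demote"
    # Around steady state: the heat score decides placement.
    return "promote" if chunk_is_hot else "demote"
```

For example, `tier_action(0.90, chunk_is_hot=False)` returns `"demote"`, while below the target everything stays in SSD.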


imageAFA Understanding Pool’s Capacity

The introduction of data reduction makes the pool capacity management more complex. Data reduction efficiency depends on the nature of the data, therefore it is harder to predict the drive capacity needed for each workload.

Capacity metrics to consider:

Physical View

Raw Capacity - Sum of all drives capacities in the Pool

Usable Capacity - Total capacity of all RAID groups in the Pool

Note

The system keeps approximately 0.5% of each RAID Group's capacity as its internal spare.

Used by Volumes - Capacity used to store the Volumes data

Used by metadata - Capacity used to store the Pool’s metadata

Used by data copies - Capacity used to store the Snapshots and Clones

Used Capacity - The total size of all data written in the Pool

Used Capacity = Used by Volumes + Used by metadata + Used by data copies

Free Capacity - Available Capacity in the Pool that can be used for new Data and Metadata writes

Free Capacity = Usable Capacity – Used Capacity

%Full = “Used Capacity” / “Usable Capacity”

Note

Capacity alerts are based on Free Capacity
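The physical-view formulas above can be expressed as a small helper. Names are illustrative; all quantities are in the same unit (e.g. GB).

```python
# The physical-view formulas above as a small helper. Names are illustrative;
# all quantities are in the same unit (e.g. GB).

def physical_view(usable, used_by_volumes, used_by_metadata, used_by_copies):
    used = used_by_volumes + used_by_metadata + used_by_copies  # Used Capacity
    free = usable - used                                        # Free Capacity
    pct_full = used / usable                                    # %Full
    return used, free, pct_full

print(physical_view(1000, 400, 50, 50))   # (500, 500, 0.5)
```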

Virtual View

Provisioned Capacity - Sum of Pool’s Volumes and Clones capacities as seen by the hosts

Allocated Capacity - Pool’s allocated address space of all Volumes, Snapshots and Clones

Allocation Limit - Maximum capacity of the Pool's address space; depends on the pool type (see the pool type tables below).

Free Address Space = Allocation Limit – Allocated Capacity

Note

Address Space alerts are based on Free Address Space

Effective Capacity - Amount of data written to the pool by all Volumes that is accessible by hosts, excluding space taken by Snapshots

image20a

Data Reduction Saving

Thin Provision Ratio = Provisioned Capacity / Effective Capacity

Data Reduction Ratio = Effective Capacity / Used by Volumes

Data Reduction Saving = Effective Capacity - Used by Volumes

Data Reduction Percentage = 1 - (1 / Data Reduction Ratio)

e.g.

Data reduction ratio 2:1 , Data Reduction Percentage 50%

Data reduction ratio 5:1 , Data Reduction Percentage 80%

Data reduction ratio 20:1 , Data Reduction Percentage 95%
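The savings formulas above, in code; a trivial helper where the ratio is given as N for an N:1 data reduction ratio.

```python
# The data reduction percentage formula above; ratio is given as N for an
# N:1 data reduction ratio.

def data_reduction_percentage(ratio):
    return 1 - 1 / ratio

for r in (2, 5, 20):
    print(f"{r}:1 -> {data_reduction_percentage(r):.0%}")
# prints: 2:1 -> 50%   5:1 -> 80%   20:1 -> 95%
```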


Pool Capacity Monitoring

The Flash Array VPSA Dashboard shows the capacity consumption and data reduction saving.

vpsa-flash-dashboard-capacity

The upper bar shows the current capacity provisioned to the hosts by all pools vs. the effective capacity written by the hosts vs. the physical space needed to store the data.

The lower chart shows the trend over time of the physical capacity used and available.

The Pools table on the Pools page shows two bars per pool:

vpsa-flash-pools

  • The Physical capacity bar shows the usable vs. used capacities.

  • The Virtual capacity bar shows the allocated capacity vs. the allocation limit.

Creating and Managing Pools


Creating a Pool

Note

By default, when a new VPSA is created, a default pool is automatically created for each type of drive selected for this VPSA.

If the default pool does not meet your needs, you can delete it and follow the process described here to create your own pools.

To create a new Storage Pool, press either the Create button on the Pools page or the Create Pool button on the RAID Groups page. There are two methods to create a pool:

  1. Create a Pool from RAID Groups

  2. Create a Pool from drives, and let the system automatically create the needed RAID Groups.

You can toggle between the two by clicking Use Drive Selection / Use RAID Group Selection at the lower left corner of the dialog.

To create a Pool from RAID Groups, you will see the following dialog:

image20

Select the Pool attributes:

  • Display Name – You can modify this anytime later.

  • RAID Group(s) selection – Check the box(es) of one or more RAID Groups from which protected storage capacity will be allocated for this Pool.

  • Capacity – The Pool’s physical capacity shown in GB. By default the capacity is the aggregated capacities of all the selected RAID Groups, but you do not have to allocate full RAID Groups. If you define a capacity smaller than is available in the selected RAID groups the capacity will be evenly distributed between the RAID Groups.

    Note

    The actual usable capacity of the Pools is a little less than the requested size, as the system reserves some space for the Pool’s metadata (typically up to 100GB).

  • Type – The imageSA supports Transactional, Repository and Archive Pool types. The imageAFA supports IOPS-Optimized, Balanced Pool and Throughput-Optimized Pool types. These Pool types use different chunk sizes for the mapping of virtual LBAs to Physical Drive addresses. The following tables describe the tradeoffs for each type and the recommended use cases:

imageSA Storage Array Pool types:

Transactional Pool

  • Chunk size: 256KB

  • Pros: Faster COW operation; space efficiency on random writes to Snapshots

  • Cons: Increased metadata size

  • Use case: Transactional workloads with Snapshots

  • Limit: Transactional Pools have a maximum size of 20TB

Repository Pool

  • Chunk size: 1MB

  • Pros: Smaller metadata size; sequential workload performance is similar to Transactional

  • Cons: Slower COW operation; less space efficient

  • Use case: Repository type workloads; large Pools; many snapshots to keep

  • Limit: Repository Pools have a maximum size of 100TB

Archive Pool

  • Chunk size: 2MB

  • Pros: Allows large Pools; sequential workload performance is the same

  • Cons: Slower with frequent data modifications; limited snapshot frequency (1 hour minimum)

  • Use case: Relatively static data; archive type workloads; very large pools/volumes (>100TB)

  • Limit: Archive Pools have a maximum size of 200TB


imageAFA Flash Array Pool types:

IOPS-Optimized Pool

  • Thin Provision chunk size: 1MB

  • Deduplication chunk size: 16KB

  • Pros: Better deduplication; lower COW overhead in cases of small block I/O

  • Cons: Increased metadata size

  • Use case: Analytics; small block IOPS workloads; high IOPS; database (OLTP); deduplication-friendly data

  • Limit: IOPS-Optimized Pools have a maximum size of 100TB

Balanced Pool

  • Thin Provision chunk size: 2MB

  • Deduplication chunk size: 32KB

  • Pros: Smaller metadata size; allows large pools; better sequential workload throughput and better compression ratio compared to IOPS-Optimized pools

  • Cons: Higher COW overhead for I/Os < 16KB in comparison to IOPS-Optimized pools; lower deduplication efficiency compared to IOPS-Optimized pools

  • Use case: Large pools/volumes (>100TB); general purpose; file system; relatively static data; archive type workloads; workloads with an average IO block size of 32KB

  • Limit: Balanced Pools have a maximum size of 200TB

Throughput-Optimized Pool

  • Thin Provision chunk size: 4MB

  • Deduplication chunk size: 64KB

  • Pros: Optimal capacity consolidation for backup and media storage; better sequential workload throughput and better compression ratio compared to the other pool types

  • Cons: Higher COW overhead for I/Os < 32KB in comparison to the other pool types; higher latency for small block writes (< 64KB) due to read-modify-write (RMW); less space-efficient for small files (recommended for file sizes >= 1MB); low deduplication efficiency (recommendation is to turn deduplication off)

  • Use case: Backup repositories; media repositories; large file archives; archive type workloads; sequential workloads such as video streaming; workloads with an average IO block size > 128KB

  • Limit: Throughput-Optimized Pools have a maximum size of 500TB


When there are a number of pools in a given VPSA, there is a limit on the aggregated total size of all pools. The following table lists the maximum capacity per pool type (in TB) for each VPSA Flash Array engine:

Engine                  H100    H200    H300    H400
IOPS-Optimized            60     100     100     100
Balanced                 100     160     200     200
Throughput-Optimized     140     220     400     500

Note

In most cases, the maximum provisioned capacity is the same as the maximum usable capacity for that engine and pool type configuration.

An H400 engine with a Balanced pool supports a maximum provisioned capacity of 250TB when the overall data reduction ratio for the array is 1.5:1 or higher.

  • Cached – Check this box to use SSD to Cache Server’s reads and writes.

    • All Pools that are marked as “Cached” share the VPSA Cache.

    • Flash cache usually improves the performance of volumes based on HDD pools. However, this depends on the specific workload and on the size of the cache relative to the size of the active data set.

    • If the Pool consists of SSD drives this option will be disabled.

  • Striped – This check box is enabled only when you select two or more RAID Groups. Striping over RAID-1 or RAID-6 creates RAID-10 or RAID-60 configurations respectively. Use striping to improve performance of random workloads since the IOs will be distributed and all drives will share the workload.

imageAFA

  • Flash Array pools are always striped, and the Striped check box is hidden.

  • Additional Storage Class: Adding a storage class defines a low tier for the pool as HDD or remote Object Storage. Adding a storage class opens the SSD Cool Off configuration option.

    vpsa-flash-create-pool

  • SSD Cool Off: Set a goal for data retention in SSD, with a value of 0 to 720 hours (30 days). The default is 0 (disabled). This hints to the system that within the cool-off period there will be repeated access to a data chunk in the pool. When SSD utilization is around the steady state, the tiering manager references the cool-off period in its decision on tier placement for the data chunk.

To create a Pool from Drives, you will see the following dialog:

image20b

The parameters are the same as above. Check the boxes of drives that will be allocated for this Pool.
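The capacity sizing described in the Capacity attribute above can be sketched as follows. The even split across RAID Groups and the metadata reserve (up to ~100GB, per the note) come from this section; the function name and the fixed reserve value are assumptions for illustration.

```python
# Illustrative sizing check for pool creation: the requested capacity is
# split evenly across the selected RAID Groups, and some space is reserved
# for the Pool's metadata (up to ~100GB, per the note above). The function
# name and the fixed reserve are assumptions for illustration.

def plan_pool(requested_gb, raid_groups, metadata_reserve_gb=100):
    per_rg_gb = requested_gb / len(raid_groups)   # even distribution
    usable_gb = requested_gb - metadata_reserve_gb
    return per_rg_gb, usable_gb

print(plan_pool(2000, ["rg-1", "rg-2"]))   # (1000.0, 1900)
```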


Expanding Pool Capacity

To expand the Pool, press the Expand button on the Pools page.

image21

You can use capacity from any RAID Group to expand a Pool. If the RAID Group from which the new capacity is added doesn't match the protection type or drive type of the existing capacity, a warning message pops up asking you to confirm the mismatch. Keep in mind that continuing with mismatched types may impact the Pool's performance and protection QoS.


imageAFA

Expand in Storage Class: Choose SSD or HDD to list the storage class resource details and availability for expansion.

imageAFA Shrinking Pool Capacity

Note

Pool shrink is only supported in Flash Array VPSAs.

If the Pool's capacity is not fully used, you can shrink it by removing one RAID Group at a time from the Pool. The VPSA evacuates the selected RAID Group and returns it to the VPSA for reuse, or the RAID Group can be deleted and its drives removed from the VPSA. To shrink the Pool, press the Shrink button on the Pools page.

vpsa-flash-shrink-pool

Storage Class to Shrink: Choose SSD or HDD to list the storage class RAID Group details and availability for shrinking.

Select the RAID Group to remove from the Pool. Check the physical size expected after the shrinking operation is completed, and press Shrink. The operation might take a while, depending on the amount of data to be copied to other drives. The system will generate an Event once done.


imageSA

It is possible to enable Caching on non-cached Pools.

One use case for leveraging this capability is to enable caching only after the initial copy of the data into the VPSA. The initial copy typically generates a sequential write IO workload, where non-cached Pools are most efficient. Once the initial copy is completed enable caching on the Pool if you expect a more random type of IO workload.


imageSA Disabling SSD cache on a pool

image22

By default, every Pool is cached by the VPSA's SSD cache, but it is also possible to disable caching on cached Pools. The Enable Cache/Disable Cache buttons toggle depending on the current caching state of the Pool.


Viewing Pool properties

The Pools details are shown in the following South Panel tabs:

Properties


imageSA Each Pool of Storage Array includes the following properties:

Property

Description

ID

An internally assigned unique ID.

Name

User assigned name. Can be modified anytime.

Comment

User free text comment. Can be used for labels, reminders or any other purpose

Status

  • Normal

  • Creating

  • Deleting

  • Partial/Failed – At least one of the underlying RAID groups has failed, or the Pool metadata cannot be initialized at Start Of the Day.

Capacity

Total available capacity for user data & system metadata.

Available Capacity

Available (free) capacity to be used for User data. VPSA reserves 2% of the total Pool capacity for system metadata. If the VPSA needs more capacity for the metadata (very rare scenario), it will be consumed from the available capacity.

Metadata Capacity

Capacity reserved for the Pool's metadata.

Capacity State

  • Normal

  • Alert

  • Protected

  • Emergency

See Managing Pool Capacity Alerts for more details.

Mode

  • Simple – There are one or more concatenated RAID Groups.

  • Stripe – There are two or more striped RAID Groups.

  • Mixed – There are two or more concatenated and striped RAID Groups.

Type

  • Transactional Workloads

  • Repository Storage

  • Archival Storage

Stripe Size

Applicable only for Pools of Striped mode (i.e. when data is striped between 2 or more RAID groups). The Stripe size is always 64KB.

Cached

Yes/No – Indicates whether the Pool utilizes SSD for read/write caching

Cache COW Writes

Yes/No – Indicates whether flash cache is used for internal snapshot Copy-On-Write operations. Enabled by default. Disable only in rare cases where frequent snapshots cause an extreme load of metadata operations. Consult Zadara support.

Raid Group(s)

RAID Group name, or “Multiple (X)” where X denotes the number of RAID Groups in the Pool.

Created

Date & time when the object was created.

Modified

Date & time when the object was last modified.

imageAFA Each Pool of Flash Array includes the following properties:

Property

Description

General

ID

An internally assigned unique ID.

Name

User assigned name. Can be modified anytime.

Comment

User free text comment. Can be used for labels, reminders or any other purpose

Status

  • Normal

  • Creating

  • Deleting

  • Partial/Failed – At least one of the underlying RAID groups has failed, or the Pool metadata cannot be initialized at Start Of the Day.

Type

  • IOPS-Optimized

  • Balanced

  • Throughput-Optimized

Raid Group(s)

RAID Group name, or “Multiple (X)” where X denotes the number of RAID Groups in the Pool.

Created

Date & time when the object was created.

Modified

Date & time when the object was last modified.

Physical Capacity

Usable Capacity

Total capacity of all RAID groups in the Pool

Used Capacity

The total size of all data written in the Pool. Used Capacity = Used by Volumes + Used by Metadata + Used by Data Copies

Used by Volumes

Capacity used to store the Volumes data

Used by Data Copies

Capacity used to store Snapshots and Clones

Used by Metadata

Capacity used to store the Pool’s metadata

Free Capacity

Available Capacity in the Pool that can be used for new Data and Metadata writes

Physical Capacity State

  • Normal

  • Alert

  • Protected

  • Emergency

See Managing Pool Capacity Alerts for more details.

Virtual Capacity

Provisioned Capacity

Sum of Pool’s Volumes and Clones capacities as seen by the hosts

Allocated Capacity

Pool’s allocated address space of all Volumes, Snapshots and Clones

Effective Capacity

Amount of data written to the pool by all Volumes that is accessible by hosts, excluding space taken by Snapshots

Virtual Capacity State

  • Normal

  • Alert

  • Protected

  • Emergency

See Managing Pool Capacity Alerts for more details.

Capacity Savings

Data Reduction Ratio

Capacity savings by all data reduction techniques. Data Reduction Ratio = Effective Capacity / Used by Volumes

Deduplication Ratio

Capacity savings by deduplication

Compression Ratio

Capacity savings by compression

Thin Provision Ratio

Capacity savings by thin provisioning technique. Thin Provision Ratio = Provisioned Capacity / Effective Capacity


RAID Groups

In the RAID Groups View, this tab lists the RAID Groups allocated to the selected Pool. Each RAID Group includes the following information:

  • Name

  • Protection (RAID-1, RAID-5 or RAID-6)

  • Status

  • Contributed Capacity

In the Segments View, this tab shows the structure of the pool, made of concatenated or striped segments.

image136


Tiers imageAFA

The Tiers tab displays details of the tier types allocated to the selected pool.

vpsa-flash-pool-tiers

Each tier type includes the following information:

  • Type of tier.

  • Members: RAID Groups or Drives and their capacities, that are members of each tier.

  • Capacity: Measurements in units for Usable capacity, Used By Volumes, Currently Inactive, and Free.

  • Status: Operational status of the tier.

  • Utilization: Progress bar and measurements in percentages: Used By Volumes, Currently Inactive, and Free.

Volumes and Dest Volumes

These two tabs display the provisioned Volumes and the Provisioned Remote Mirroring Destination Volumes. Please note that the Dest Volumes are not displayed in the main Volumes page since most operations are not applicable to them. Displaying the list of the Dest Volumes in the Pools South Panel provides a complete picture of the Objects that consume capacity from the Pool. Each Volume includes the following information:

  • Name

  • Capacity (virtual, not provisioned)

  • Status

  • Data Type (Block or File-System)


Recycle Bin

By default, when you delete a volume it moves to the Pool's Recycle Bin for 7 days before it is permanently deleted. From the Recycle Bin, an administrator can purge (permanently delete) or restore a volume.


Logs

Displays all event logs associated with this Pool.


Metering

The Metering Charts provide live metering of the IO workload associated with the selected Pool.

The charts display the metering data as it was captured in the past 20 “intervals”. An interval length can be set to one of the following: 1 Second, 10 seconds, 1 Minute, 10 Minutes, or 1 Hour. The Auto button lets you see continuously-updating live metering info (refreshed every 3 seconds).

Pool Metering includes the following charts:

Chart

Description

IOPS

The number of read and write SCSI commands issued to the Pool, per second.

Bandwidth (MB/s)

Total throughput (in MB) of read and write SCSI commands issued to the Pool, per second.

IO Time (ms)

Average response time of all read and write SCSI commands issued to the Pool, per selected interval.


Capacity Alerts

The Capacity Alerts tab lists the configurable attributes of the Pool Protection Mechanism. See Managing Pool Capacity Alerts for more details. You can modify the following attributes:

  • Physical Pool Alert Mode Threshold - “Alert me when it is estimated that the Pool will be at full physical capacity in X Minutes.”

    • Default Value: 360 minutes

  • Physical Pool Protection Mode Threshold - “Do not allow new Volumes, Shares, or Snapshots to be created when it is estimated that the Pool will be at full physical capacity in X Minutes.”

    • Default Value: 60 minutes

  • Physical Pool Calculation Window - “Calculate the estimated time until the Pool is full based on new capacity usage in the previous X minutes.”

    • Default Value: 60 minutes

  • Physical Pool Emergency Mode Threshold - “Delete snapshots, starting from the oldest, when there is less than the following physical capacity left in the Pool”

    • Default Value: 50 GB

imageAFA

  • Allocated Capacity Alert Mode Threshold - “Alert me when it is estimated that the Pool’s address space will be at full capacity in X Minutes.”

    • Default Value: 360 minutes

  • Allocated Capacity Protection Mode Threshold - “Do not allow new Volumes, Shares, or Snapshots to be created when it is estimated that the Pool’s address space will be at full capacity in X Minutes.”

    • Default Value: 60 minutes

  • Allocated Capacity Calculation Window - “Calculate the estimated time until the Pool’s address space is full based on new capacity usage in the previous X minutes.”

    • Default Value: 60 minutes

  • Allocated Capacity Emergency Mode Threshold - “Delete snapshots, starting from the oldest, when there is less than the following free address space left in the Pool”

    • Default Value: 5 GB


Performance Alerts

The Performance Alerts tab lists the Pool's configurable performance alerts, which fire when performance deviates from user-specified thresholds. See Managing Pool Performance Alerts for more details.


Managing Pool Capacity Alerts

The VPSA’s efficient and sophisticated storage provisioning infrastructure maximizes storage utilization, while providing key enterprise-grade data management functions. As a result, you can quite easily over-provision a Pool with Volumes, Snapshots and Clones, hence requiring a Pool Protection Mechanism to alert and protect when free Pool space is low.

The VPSA Pool Protection Mechanism is either time-based or capacity consumption based. The goal is to provide you sufficient time to fix the low free space situation by either deleting unused Volumes/Snapshots/Clones or by expanding the Pool’s available capacity (a very simple and quick process due to the elasticity of the VPSA and the Zadara Storage Cloud).

The VPSA measures the rate at which the Pool’s free space is consumed and calculates the estimated time left before running out of free space.
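A hedged sketch of this estimate and the resulting state transitions. The threshold defaults mirror the documented values; the actual VPSA implementation is internal.

```python
# Hedged sketch of the time-based protection logic: estimate the minutes
# left until the Pool is full from the consumption rate measured over the
# calculation window, then map the estimate to a capacity state. Threshold
# defaults mirror the documented values; the real implementation is internal.

def minutes_to_full(free_gb, consumed_in_window_gb, window_minutes):
    rate = consumed_in_window_gb / window_minutes   # GB consumed per minute
    return float("inf") if rate <= 0 else free_gb / rate

def capacity_state(est_minutes, alert_min=600, protect_min=180):
    if est_minutes <= protect_min:
        return "Protected"
    if est_minutes <= alert_min:
        return "Alert"
    return "Normal"

est = minutes_to_full(free_gb=100, consumed_in_window_gb=12, window_minutes=60)
print(round(est), capacity_state(est))   # 500 Alert
```

A shorter calculation window makes the estimate react faster to bursts, which is the trade-off described for the Alert Interval parameter below.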

The following user-configurable parameters impact alerts and operations that are performed as part of the Pool Protection mechanism:

  • Physical Pool Capacity Alert Threshold – The estimated time (in minutes) before running out of free space or percentage used. When triggered an online support ticket is submitted and an email is sent to the VPSA user. When crossing this threshold the Free Capacity State changes to “Alert” and the available capacity will be shown in Yellow. A secondary “reminder” ticket and an email will be generated when only half of this threshold’s estimated time is left.

    • Default time: 600 minutes (10 hours)

    • Minimum: 1 minute (0 means disable this alert by time)

    or

    • Default Percentage: 90% full

    • Minimum: 1 % (0 means disable this alert by %)

  • Physical Pool Capacity Protection Threshold – The estimated time (in minutes) before running out of free space. When triggered the VPSA starts blocking the creation of new Volumes, Snapshots and Clones in that Pool. A support ticket and email are also generated. When crossing this threshold, the Free Capacity State changes to “Protect” and the available capacity will be shown in Red.

    • Default: 180 minutes (3 hours)

    • Minimum: 1 minute (0 means disable this alert by time)

    or

    • Default Percentage: 95% full

    • Minimum: 1 % (0 means disable this alert by %)

  • Physical Pool Capacity Emergency Threshold – When the Pool’s free capacity drops below this fixed threshold (in GB) or below the specified % threshold, the VPSA starts freeing Pool capacity by deleting older snapshots. The VPSA will delete one snapshot at a time, starting with the oldest snapshot, until it exceeds the Emergency threshold (i.e. when free capacity is greater than the threshold). A support ticket and email are also generated. When this threshold is crossed the Free Capacity State changes to “Emergency” and the available capacity will be shown in Red.

    • Default: 50 GB

    • Minimum: 1 GB

    or

    • Default Percentage: 99% full

    • Minimum: 1 % (0 means disable this alert by %)

  • Physical Pool Capacity Alert Interval - The size of the window (in minutes) that is used to calculate the rate at which free space is consumed. The smaller the window is the more this rate is impacted by intermediate changes in capacity allocations, which can result from changes in workload characteristics and/or the creation/deletion of new Snapshots and Clones.

    • Default: 60 minutes (1 hour)

    • Minimum: 1 minute

image20e
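The Emergency behaviour described above can be sketched as follows (illustrative data structures, not VPSA APIs): snapshots are deleted oldest first, one at a time, until free capacity exceeds the Emergency threshold again.

```python
# Sketch of the Emergency behaviour described above: delete snapshots,
# oldest first, one at a time, until free capacity exceeds the Emergency
# threshold again. Data structures are illustrative, not VPSA APIs.

def emergency_reclaim(free_gb, snapshots, threshold_gb=50):
    """snapshots: (created_ts, reclaimable_gb) pairs, in any order."""
    deleted = []
    pending = sorted(snapshots)          # oldest first
    while free_gb <= threshold_gb and pending:
        created_ts, reclaim_gb = pending.pop(0)
        free_gb += reclaim_gb
        deleted.append(created_ts)
    return free_gb, deleted

print(emergency_reclaim(40, [(3, 5), (1, 8), (2, 4)]))   # (52, [1, 2])
```

Note that deletion stops as soon as free capacity rises above the threshold, so only as many snapshots are removed as strictly needed.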


imageAFA In addition to the physical capacity alerts, the Flash Array VPSA provides alerts in case the Pool allocation (virtual address space) is near capacity.

Free Address Space = Allocation Limit – Allocated Capacity

The following user-configurable parameters impact alerts and operations that are performed as part of the Pool Protection mechanism:

  • Allocated Capacity Alert Threshold – The estimated time (in minutes) before running out of free address space. When triggered, an online support ticket is submitted and an email is sent to the VPSA user. When crossing this threshold, the Allocated Capacity Alert Mode changes to “Alert” and the available address space will be shown in Yellow. A secondary “reminder” ticket and email are generated when only half of this threshold's estimated time is left.

    • Default: 360 minutes (6 hours)

    • Minimum: 1 minute (0 means disable this alert)

  • Allocated Capacity Protection Threshold – The estimated time (in minutes) before running out of free address space. When triggered the VPSA starts blocking the creation of new Volumes, Snapshots and Clones in that Pool. A support ticket and email are also generated. When crossing this threshold, the Allocated Capacity Alert Mode changes to “Protect” and the available address space will be shown in Red.

    • Default: 60 minutes (1 hour)

    • Minimum: 1 minute (0 means disable this alert)

  • Allocated Capacity Alert Interval - The size of the window (in minutes) that is used to calculate the rate at which free address space is consumed. The smaller the window is the more this rate is impacted by intermediate changes in capacity allocations, which can result from changes in workload characteristics and/or the creation/deletion of new Snapshots and Clones.

    • Default: 60 minutes (1 hour)

    • Minimum: 1 minute

  • Allocated Capacity Emergency Threshold – When the Pool’s free address space drops below this fixed threshold (in GB), the VPSA starts freeing Pool capacity by deleting older snapshots. The VPSA will delete one snapshot at a time, starting with the oldest snapshot, until it exceeds the Emergency threshold (i.e. when free address space is greater than the threshold). A support ticket and email are also generated. When this threshold is crossed the Free Capacity State changes to “Emergency” and the available address space will be shown in Red.

    • Default: 5 GB

    • Minimum: 1 GB


Managing Pool Performance Alerts

A VPSA administrator has the option to set Pool Performance Alerts in addition to the default Pool Capacity Alerts. Performance Alerts are available for:

Read IOPS Limit – Creates an alert when the average read IOPS, during the past minute, for a Pool exceeds a user-specified threshold.

Read Throughput Limit - Creates an alert when, during the past minute, the average read MB/s for a Pool exceeds a user-specified threshold.

Read Latency Limit – Creates an alert when, during the past minute, the average read latency for a Pool exceeds a user-specified threshold.

Write IOPS Limit – Creates an alert when, during the past minute, the average write IOPS for a Pool exceeds a user-specified threshold.

Write Throughput Limit - Creates an alert when, during the past minute, the average write MB/s for a Pool exceeds a user-specified threshold.

Write Latency Limit – Creates an alert when, during the past minute, the average write latency for a Pool exceeds a user-specified threshold.