Large Object Support


VPSA Object Storage has a 5GB limit on the size of a single uploaded object. However, objects of virtually unlimited size can be stored and downloaded using segmentation: the segments of the larger object are uploaded, and a special manifest file is created that, when downloaded, sends all the segments concatenated as a single object. Segmentation also allows much faster uploads, since the segments can be uploaded in parallel.

Dynamic Large Objects

VPSA Object Storage provides Dynamic Large Object (DLO) support via a dedicated middleware.

It is possible to upload a file of any size, as long as it is split into segments smaller than 5GB.

It is the responsibility of the client tool performing the object operation to break a file into segments; different tools may use different segment sizes.
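As an illustration of the idea only (not how any particular client is implemented), the following Python sketch performs client-side segmentation: it breaks an object into fixed-size segments and then concatenates them back in order, which is exactly what a manifest-driven download does on the server side. The 1 MiB segment size is purely for demonstration; real tools use segments of up to 5GB.

```python
SEGMENT_SIZE = 1024 * 1024  # demo value; real clients use segments of up to 5 GB


def split_into_segments(data: bytes, segment_size: int = SEGMENT_SIZE):
    """Break an object into fixed-size segments, as a client tool would."""
    return [data[i:i + segment_size] for i in range(0, len(data), segment_size)]


def concatenate(segments):
    """Reassemble the original object, as a manifest-driven download does."""
    return b"".join(segments)


if __name__ == "__main__":
    original = bytes(range(256)) * 20000  # ~5 MB of sample data
    segments = split_into_segments(original)
    assert all(len(s) <= SEGMENT_SIZE for s in segments)
    assert concatenate(segments) == original
    print(f"uploaded as {len(segments)} segments")
```

Because each segment is independent, a client can upload several of them concurrently, which is where the upload speedup comes from.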

Failed upload handling

If a multipart upload does not complete, VPSA Object Storage will not assemble the object parts and no object is created. The uploaded parts remain stored in the Object Storage for a period of 15 days, until the Object Storage segment tracker automatically cleans up the orphaned parts. Until then, the incomplete parts of aborted or failed uploads count toward the account's used capacity.

S3 Interface

Most S3 client tools support large object handling, and the operation is transparent to the user.

Swift Interface

Using the swift command-line tool included with the python-swiftclient library, you can use the -S option to specify the segment size to use when splitting a large file. For example:

swift upload test_container -S 1073741824 large_file

This would split large_file into 1G (1073741824-byte) segments and begin uploading those segments in parallel. Once all the segments have been uploaded, swift will then create the manifest file so the segments can be downloaded as one object.
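To make the segment arithmetic concrete, a quick back-of-the-envelope calculation in Python (the file size below is illustrative):

```python
import math

SEGMENT_SIZE = 1073741824  # 1 GiB, matching the -S value in the example above


def segment_count(file_size: int, segment_size: int = SEGMENT_SIZE) -> int:
    """Number of segments the client would upload for a file of this size."""
    return math.ceil(file_size / segment_size)


# e.g. a 10 GB (10 * 10**9 byte) file is split into 10 segments:
print(segment_count(10 * 10**9))  # → 10
```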

The following swift command would then download the entire large object:

swift download test_container large_file

The swift command uses a strict naming convention for its segmented object support. In the example above, it uploads all the segments into a second container named test_container_segments.
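The exact segment names are an implementation detail of the client and have varied between versions (the real swift client also embeds timestamp and size fields in the prefix). The sketch below only illustrates the general shape of the convention, a dedicated segments container plus a per-object prefix with a zero-padded running index, and should not be relied on literally:

```python
def segment_names(container: str, obj: str, n_segments: int):
    """Illustrative only: segments live in <container>_segments, grouped
    under a prefix named after the object, with a zero-padded index so
    that lexicographic order matches segment order."""
    segments_container = f"{container}_segments"
    return [f"{segments_container}/{obj}/{i:08d}" for i in range(n_segments)]


print(segment_names("test_container", "large_file", 2))
# → ['test_container_segments/large_file/00000000',
#    'test_container_segments/large_file/00000001']
```

Keeping the segments in a separate container means a plain listing of test_container shows only the manifest object, not the individual segments.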