Saving arrays¶
Beyond the basic data types of dictionaries, lists, strings and numbers, the most important thing ASDF can save is arrays. It's as simple as putting a numpy array somewhere in the tree. Here, we save an 8x8 array of random floating-point numbers (using numpy.random.rand). Note that the resulting YAML output contains information about the structure (size and data type) of the array, but the actual array content is in a binary block.
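A minimal sketch of this pattern (the output file name and the 'my_array' key are illustrative):

from asdf import AsdfFile
import numpy as np

# Place an 8x8 array of random floats in the tree and write it out;
# the array contents end up in a binary block, not in the YAML text.
tree = {'my_array': np.random.rand(8, 8)}
ff = AsdfFile(tree)
ff.write_to("example.asdf")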
Note
In the file examples below, the first YAML part is shown as it appears in the file. The BLOCK sections are stored as binary data in the file, but are presented in human-readable form on this page.
Saving inline arrays¶
For small arrays, you may not care about the efficiency of a binary representation and may just want to save the array contents directly in the YAML tree. The set_array_storage method can be used to set the storage type of the associated data. The allowed values are internal, external, and inline:
internal: The default. The array data will be stored in a binary block in the same ASDF file.
external: Store the data in a binary block in a separate ASDF file (also known as "exploded" format, which is discussed below in Saving external arrays).
inline: Store the data as YAML inline in the tree.
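For example, a sketch of marking a single array for inline storage; the 'small' key is illustrative, and the sketch assumes set_array_storage accepts the array object followed by the storage type:

import asdf
import numpy as np

# A tiny array for which inline YAML storage is reasonable
tree = {'small': np.array([1, 2, 3])}
ff = asdf.AsdfFile(tree)
# Mark this particular array to be written inline in the YAML tree
ff.set_array_storage(tree['small'], 'inline')
ff.write_to('inline_example.asdf')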
Alternatively, it is possible to use the all_array_storage parameter of AsdfFile.write_to and AsdfFile.update to control the storage format of all arrays in the file.
# This controls the output format of all arrays in the file
ff.write_to("test.asdf", all_array_storage='inline')
For automatic management of the array storage type based on the number of elements, see array_inline_threshold; a sketch of this configuration follows.
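A sketch, assuming the global configuration object exposed by asdf.get_config() in recent asdf versions; the threshold value shown is illustrative:

import asdf

# Arrays with fewer elements than the threshold are stored inline;
# larger arrays go into binary blocks. (100 is an arbitrary example value.)
asdf.get_config().array_inline_threshold = 100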
Saving external arrays¶
ASDF files may also be saved in “exploded form”, which creates multiple files corresponding to the following data items:
One ASDF file containing only the header and tree.
n ASDF files, each containing a single array data block.
Exploded form is useful in the following scenarios:
Over a network protocol, such as HTTP, a client may only need to access some of the blocks. While reading a subset of the file can be done using HTTP Range headers, it still requires one (small) request per block to "jump" through the file to determine the start location of each block. This can become time-consuming over a high-latency network if there are many blocks. Exploded form allows each block to be requested directly by a specific URI.
An ASDF writer may stream a table to disk when the size of the table is not known at the outset. Using exploded form simplifies this, since a standalone file containing a single table can be iteratively appended to without worrying about any blocks that may follow it.
To save a block in an external file, set its block type to 'external'.
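A sketch of both approaches, assuming a tree containing a single array under the illustrative key 'data':

import asdf
import numpy as np

tree = {'data': np.random.rand(512)}
ff = asdf.AsdfFile(tree)

# Mark an individual array for external ("exploded") storage...
ff.set_array_storage(tree['data'], 'external')
ff.write_to('exploded.asdf')

# ...or explode every array in the file at write time
ff.write_to('exploded_all.asdf', all_array_storage='external')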
Streaming array data¶
In certain scenarios, you may want to stream data to disk, rather than writing an entire array of data at once. For example, it may not be possible to fit the entire array in memory, or you may want to save data from a device as it comes in to prevent data loss. The ASDF Standard allows exactly one streaming block per file where the size of the block isn’t included in the block header, but instead is implicitly determined to include all of the remaining contents of the file. By definition, it must be the last block in the file.
To use streaming, rather than including a numpy array object in the tree, you include an asdf.Stream object which sets up the structure of the streamed data, but will not write out the actual content. The file handle's write method is then used to manually write out the binary data.
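A minimal sketch of this pattern, writing a few rows of a stream whose rows each hold 128 float64 values (the file name and row count are illustrative):

import numpy as np
from asdf import AsdfFile, Stream

tree = {
    # Each row of the streamed block will contain 128 float64 values
    'my_stream': Stream([128], np.float64)
}

ff = AsdfFile(tree)
with open('stream_example.asdf', 'wb') as fd:
    # Writes the YAML tree and leaves the file positioned at the
    # start of the streaming block
    ff.write_to(fd)
    # Append rows one at a time as raw bytes
    for i in range(8):
        fd.write(np.full(128, i, dtype=np.float64).tobytes())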
A case where streaming may be useful is when converting large data sets from a different format into ASDF. In these cases it would be impractical to hold all of the data in memory as an intermediate step. Consider the following example that streams a large CSV file containing rows of integer data and converts it to numpy arrays stored in ASDF:
import csv
import numpy as np
from asdf import AsdfFile, Stream

tree = {
    # We happen to know in advance that each row in the CSV has 100 ints
    'data': Stream([100], np.int64)
}

ff = AsdfFile(tree)
# open the output file handle
with open('new_file.asdf', 'wb') as fd:
    ff.write_to(fd)
    # open the CSV file to be converted
    with open('large_file.csv', 'r') as cfd:
        # read each line of the CSV file
        reader = csv.reader(cfd)
        for row in reader:
            # convert each row to a numpy array
            array = np.array([int(x) for x in row], np.int64)
            # write the array to the output file handle
            fd.write(array.tobytes())
Compression¶
Individual blocks in an ASDF file may be compressed.
You can easily compress all blocks with zlib or bzip2 by specifying a compression code when writing the file.
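A sketch using the all_array_compression parameter; 'zlib' and 'bzp2' are the compression codes assumed here for zlib and bzip2 respectively:

import asdf
import numpy as np

tree = {'data': np.random.rand(256, 256)}
af = asdf.AsdfFile(tree)

# Compress every binary block with zlib...
af.write_to('zlib_compressed.asdf', all_array_compression='zlib')
# ...or with bzip2
af.write_to('bzip2_compressed.asdf', all_array_compression='bzp2')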
The lz4 compression algorithm is also supported, but requires the optional lz4 package in order to work.
When reading a file with compressed blocks, the blocks will be automatically decompressed when accessed. If a file with compressed blocks is read and then written out again, by default the new file will use the same compression as the original file. This behavior can be overridden by explicitly providing a different compression algorithm when writing the file out again.
import asdf
# Open a file with some compression
af = asdf.open('compressed.asdf')
# Use the same compression when writing out a new file
af.write_to('same.asdf')
# Or specify the (possibly different) algorithm to use when writing out
af.write_to('different.asdf', all_array_compression='lz4')
Memory mapping¶
By default, all internal array data is memory mapped using numpy.memmap. This allows for the efficient use of memory even when reading files with very large arrays. The use of memory mapping means that the following usage pattern is not permitted:
import asdf

with asdf.open('my_data.asdf') as af:
    ...

af.tree
Specifically, if an ASDF file has been opened using a with context, it is not possible to access the file contents outside of the scope of that context, because any memory mapped arrays will no longer be available.
It may sometimes be useful to copy array data into memory instead of using memory maps. This can be controlled by passing the copy_arrays parameter to either the AsdfFile constructor or asdf.open. By default, copy_arrays=False.
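A sketch of disabling memory mapping so array data can be used after the file is closed; the file name and the 'data' key are illustrative:

import asdf

# Copy array data into memory rather than memory mapping it
with asdf.open('my_data.asdf', copy_arrays=True) as af:
    data = af.tree['data'][:]  # slicing forces the load into a plain ndarray

# data is an in-memory copy, so it remains valid outside the context
print(data.mean())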