Class FitsSchemaInputMapper

Class Documentation

class FitsSchemaInputMapper

A class that describes a mapping from a FITS binary table to an afw::table Schema.

A FitsSchemaInputMapper is created every time a FITS binary table is read into an afw::table catalog, allowing limited customization of the mapping between on-disk FITS table columns and in-memory fields by subclasses of BaseTable.

The object is constructed from a daf::base::PropertyList that represents the FITS header, which is used to populate a custom container of FitsSchemaItems. These can then be retrieved by name or column number via the find() methods, allowing the user to create custom readers for columns or groups of columns via customize(). They can also be removed from the “regular” fields via the erase() method. Those regular fields are filled in by the finalize() method, which automatically generates mappings for any FitsSchemaItems that have not been removed by calls to erase(). Once finalize() has been called, readRecord() may be called repeatedly to read FITS rows into record objects according to the mapping that has been defined.
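A minimal sketch of that lifecycle, assuming the header has already been read into a PropertyList; the wrapper function, the "flux" column, and the externally supplied row count are illustrative, and the include paths and lsst::afw::table::io namespace follow the afw source layout:

    #include <memory>

    #include "lsst/afw/fits.h"
    #include "lsst/afw/table/BaseTable.h"
    #include "lsst/afw/table/io/FitsSchemaInputMapper.h"
    #include "lsst/daf/base/PropertyList.h"

    namespace afwTable = lsst::afw::table;

    void readAllRows(lsst::afw::fits::Fits &fits,
                     lsst::daf::base::PropertyList &metadata,
                     std::size_t nRows) {
        // Build the mapper from the already-read header, stripping
        // recognized keys from the metadata.
        afwTable::io::FitsSchemaInputMapper mapper(metadata, true);

        // Optionally remove a column we intend to handle ourselves.
        if (auto const *item = mapper.find("flux")) {
            mapper.erase(item);
        }

        // Generate regular fields for everything still in the mapping.
        afwTable::Schema schema = mapper.finalize();

        // Read each FITS row into a freshly created record.
        auto table = afwTable::BaseTable::make(schema);
        for (std::size_t row = 0; row < nRows; ++row) {
            auto record = table->makeRecord();
            mapper.readRecord(*record, fits, row);
        }
    }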

Public Types

typedef FitsSchemaItem Item

Public Functions

FitsSchemaInputMapper(daf::base::PropertyList &metadata, bool stripMetadata)

Construct a mapper from a PropertyList of FITS header values, stripping recognized keys if desired.

FitsSchemaInputMapper(FitsSchemaInputMapper const&)
FitsSchemaInputMapper(FitsSchemaInputMapper&&)
FitsSchemaInputMapper &operator=(FitsSchemaInputMapper const&)
FitsSchemaInputMapper &operator=(FitsSchemaInputMapper&&)
~FitsSchemaInputMapper()
void setArchive(std::shared_ptr<InputArchive> archive)

Set the Archive to an externally-provided one, overriding any that may have been read.

bool readArchive(afw::fits::Fits &fits)

Set the Archive by reading from the HDU specified by the AR_HDU header entry.

Returns true on success, false if there is no AR_HDU entry.
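For example, a caller might prefer an archive stored in the file itself and fall back to one supplied externally (externalArchive below is a hypothetical std::shared_ptr<InputArchive>):

    // Use the file's own archive if it has one; otherwise install
    // the caller-provided archive.
    if (!mapper.readArchive(fits)) {
        mapper.setArchive(externalArchive);
    }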

bool hasArchive() const

Return true if the mapper has an InputArchive.

Item const *find(std::string const &ttype) const

Find an item with the given column name (ttype), returning nullptr if no such column exists.

The returned pointer is owned by the mapper object, and should not be deleted. It is invalidated by calls to either erase() or finalize().

Item const *find(int column) const

Find an item with the given column number, returning nullptr if no such column exists.

The returned pointer is owned by the mapper object, and should not be deleted. It is invalidated by calls to either erase() or finalize().
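Continuing with a mapper as in the sketch above, both lookups might be used like this (the name and index are illustrative):

    // Lookup by column name (ttype) or by column number; both return
    // nullptr when no such column exists.
    if (auto const *item = mapper.find("id")) {
        // Inspect the item here; the pointer is invalidated by any
        // subsequent call to erase() or finalize().
    }
    if (auto const *item = mapper.find(0)) {
        // The same ownership rules apply to lookups by column number.
    }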

void erase(Item const *item)

Remove the given item (which should have been retrieved via find()) from the mapping, preventing it from being included in the regular fields added by finalize().

void erase(std::string const &ttype)

Remove the item with the given column name (ttype) from the mapping, preventing it from being included in the regular fields added by finalize().

void erase(int column)

Remove the item at the given column position from the mapping, preventing it from being included in the regular fields added by finalize().
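The latter two overloads remove an item without an explicit find(); for example (name and index illustrative):

    mapper.erase("flux");  // by column name (ttype)
    mapper.erase(3);       // by column position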

void customize(std::unique_ptr<FitsColumnReader> reader)

Customize a mapping by providing a FitsColumnReader instance that will be invoked by readRecord().
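A sketch of installing a custom reader; MyFluxReader stands in for a user-defined FitsColumnReader subclass, whose interface is not documented here:

    // Transfer ownership of a hypothetical custom reader to the mapper.
    mapper.customize(std::make_unique<MyFluxReader>());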

Schema finalize()

Map any remaining items into regular Schema items, and return the final Schema.

This method must be called before any calls to readRecord().

void readRecord(BaseRecord &record, afw::fits::Fits &fits, std::size_t row)

Fill a record from a FITS binary table row.

Public Static Attributes

std::size_t PREPPED_ROWS_FACTOR

When processing each column, divide this number by the record size (in bytes) and ask CFITSIO to read that many values from that column in a single call.

Both FITS binary tables and afw.table are stored row-major, so reading multiple rows from a single column at a time leads to nonsequential reads. But given the way the I/O code is structured, we tend to get nonsequential reads anyway, and it seems the per-call overhead of CFITSIO is high enough that batching reads this way is still a net win for all but the largest record sizes.
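For illustration only (neither number is the actual constant): if PREPPED_ROWS_FACTOR were 32768 and each record occupied 128 bytes, the mapper would ask CFITSIO for 32768 / 128 = 256 values from the column per call.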