Tables¶
BinTableHDU¶
- class astropy.io.fits.BinTableHDU(data=None, header=None, name=None, uint=False, ver=None, character_as_bytes=False)[source]¶
Bases: _TableBaseHDU
Binary table HDU class.
- Parameters:
- data : array, FITS_rec, or Table
Data to be used.
- header : Header
Header to be used.
- name : str
Name to be populated in the EXTNAME keyword.
- uint : bool, optional
Set to True if the table contains unsigned integer columns.
- ver : int > 0 or None, optional
The ver of the HDU; this will be the value of the keyword EXTVER. If not given or None, it defaults to the value of the EXTVER card of the header, or 1. (default: None)
- character_as_bytes : bool
Whether to return bytes for string columns. By default this is False and (unicode) strings are returned, but this does not respect memory mapping and loads the whole column into memory when accessed.
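A minimal sketch of building a binary table HDU from two columns and reading a field back; the column names and values are illustrative, and the name keyword is forwarded to the constructor as described under from_columns below.
>>> from astropy.io import fits
>>> import numpy as np
>>> c1 = fits.Column(name='TARGET', format='20A',
...                  array=np.array(['NGC1001', 'NGC1002']))
>>> c2 = fits.Column(name='FLUX', format='E',
...                  array=np.array([11.1, 12.3], dtype=np.float32))
>>> hdu = fits.BinTableHDU.from_columns([c1, c2], name='OBS')
>>> hdu.header['EXTNAME']
'OBS'
>>> print(hdu.data['TARGET'][0])
NGC1001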
- add_checksum(when=None, override_datasum=False, checksum_keyword='CHECKSUM', datasum_keyword='DATASUM')¶
Add the CHECKSUM and DATASUM cards to this HDU with the values set to the checksum calculated for the HDU and the data respectively. The addition of the DATASUM card may be overridden.
- Parameters:
- when : str, optional
Comment string for the cards; by default the comments will represent the time when the checksum was calculated.
- override_datasum : bool, optional
Add the CHECKSUM card only.
- checksum_keyword : str, optional
The name of the header keyword in which to store the checksum value; this is typically 'CHECKSUM' per convention, but there exist use cases in which a different keyword should be used.
- datasum_keyword : str, optional
See checksum_keyword.
Notes
For testing purposes, first call add_datasum with a when argument, then call add_checksum with a when argument and override_datasum set to True. This will provide consistent comments for both cards and enable the generation of a CHECKSUM card with a consistent value.
- add_datasum(when=None, datasum_keyword='DATASUM')¶
Add the DATASUM card to this HDU with the value set to the checksum calculated for the data.
- Parameters:
- when : str, optional
Comment string for the card; by default it represents the time when the checksum was calculated.
- datasum_keyword : str, optional
The name of the header keyword in which to store the datasum value; this is typically 'DATASUM' per convention, but there exist use cases in which a different keyword should be used.
- Returns:
- checksum : int
The calculated datasum.
Notes
For testing purposes, provide a when argument to enable the comment value in the card to remain consistent. This will enable the generation of a CHECKSUM card with a consistent value.
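A hedged sketch of the workflow described for add_datasum and add_checksum above: add_datasum returns the calculated datasum, and override_datasum=True keeps the DATASUM card already written.
>>> from astropy.io import fits
>>> import numpy as np
>>> col = fits.Column(name='X', format='J', array=np.arange(5))
>>> hdu = fits.BinTableHDU.from_columns([col])
>>> datasum = hdu.add_datasum()                  # adds the DATASUM card
>>> hdu.add_checksum(override_datasum=True)      # adds CHECKSUM, keeps DATASUM
>>> 'CHECKSUM' in hdu.header and 'DATASUM' in hdu.header
True
The verify_checksum and verify_datasum methods documented further down return 1 when the stored cards match a freshly calculated value.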
- copy()¶
Make a copy of the table HDU; both header and data are copied.
- dump(datafile=None, cdfile=None, hfile=None, overwrite=False)[source]¶
Dump the table HDU to a file in ASCII format. The table may be dumped to three separate files, one containing column definitions, one containing header parameters, and one for table data.
- Parameters:
- datafile : path-like or file-like object, optional
Output data file. The default is the root name of the FITS file associated with this HDU appended with the extension .txt.
- cdfile : path-like or file-like object, optional
Output column definitions file. The default is None, in which case no column definitions output is produced.
- hfile : path-like or file-like object, optional
Output header parameters file. The default is None, in which case no header parameters output is produced.
- overwrite : bool, optional
If True, overwrite the output file if it exists. Raises an OSError if False and the output file exists. Default is False.
Notes
The primary use for the dump method is to allow viewing and editing the table data and parameters in a standard text editor. The load method can be used to create a new table from the three plain text (ASCII) files.
datafile: Each line of the data file represents one row of table data. The data is output one column at a time in column order. If a column contains an array, each element of the column array in the current row is output before moving on to the next column. Each row ends with a new line.
Integer data is output right-justified in a 21-character field followed by a blank. Floating point data is output right-justified using 'g' format in a 21-character field with 15 digits of precision, followed by a blank. String data that does not contain whitespace is output left-justified in a field whose width matches the width specified in the TFORM header parameter for the column, followed by a blank. When the string data contains whitespace characters, the string is enclosed in quotation marks (""). For the last data element in a row, the trailing blank in the field is replaced by a new line character.
For column data containing variable length arrays ('P' format), the array data is preceded by the string 'VLA_Length= ' and the integer length of the array for that row, left-justified in a 21-character field, followed by a blank.
Note
This format does not support variable length arrays using the 'Q' format, due to difficult-to-overcome ambiguities. What this means is that this file format cannot support VLA columns in tables stored in files that are over 2 GB in size.
For column data representing a bit field ('X' format), each bit value in the field is output right-justified in a 21-character field as 1 (for true) or 0 (for false).
cdfile: Each line of the column definitions file provides the definitions for one column in the table. The line is broken up into eight 16-character fields. The first field provides the column name (TTYPEn). The second field provides the column format (TFORMn). The third field provides the display format (TDISPn). The fourth field provides the physical units (TUNITn). The fifth field provides the dimensions for a multidimensional array (TDIMn). The sixth field provides the value that signifies an undefined value (TNULLn). The seventh field provides the scale factor (TSCALn). The eighth field provides the offset value (TZEROn). A field value of "" is used to represent the case where no value is provided.
hfile: Each line of the header parameters file provides the definition of a single HDU header card as represented by the card image.
- filebytes()¶
Calculates and returns the number of bytes that this HDU will write to a file.
- fileinfo()¶
Returns a dictionary detailing information about the locations of this HDU within any associated file. The values are only valid after a read or write of the associated file with no intervening changes to the HDUList.
- Returns:
- dict or None
The dictionary details information about the locations of this HDU within an associated file. Returns None when the HDU is not associated with a file.
Dictionary contents:
Key        Value
file       File object associated with the HDU
filemode   Mode in which the file was opened (readonly, copyonwrite, update, append, ostream)
hdrLoc     Starting byte location of header in file
datLoc     Starting byte location of data block in file
datSpan    Data size including padding
- classmethod from_columns(columns, header=None, nrows=0, fill=False, character_as_bytes=False, **kwargs)¶
Given either a ColDefs object, a sequence of Column objects, or another table HDU or table data (a FITS_rec or multi-field numpy.ndarray or numpy.recarray object), return a new table HDU of the class this method was called on, using the column definitions from the input.
See also FITS_rec.from_columns.
- Parameters:
- columns : sequence of Column, ColDefs, or column-like
The columns from which to create the table data, or an object with a column-like structure from which a ColDefs can be instantiated. This includes an existing BinTableHDU or TableHDU, or a numpy.recarray, to give some examples.
If these columns have data arrays attached, that data may be used in initializing the new table. Otherwise the input columns will be used as a template for a new table with the requested number of rows.
- header : Header
An optional Header object to instantiate the new HDU with. Header keywords specifically related to defining the table structure (such as the "TXXXn" keywords like TTYPEn) will be overridden by the supplied column definitions, but all other informational and data model-specific keywords are kept.
- nrows : int
Number of rows in the new table. If the input columns have data associated with them, the size of the largest input column is used. Otherwise the default is 0.
- fill : bool
If True, will fill all cells with zeros or blanks. If False, copy the data from input; undefined cells will still be filled with zeros/blanks.
- character_as_bytes : bool
Whether to return bytes for string columns when accessed from the HDU. By default this is False and (unicode) strings are returned, but for large tables this may use up a lot of memory.
Notes
Any additional keyword arguments accepted by the HDU class's __init__ may also be passed in as keyword arguments.
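A sketch of the template behaviour described above: existing columns supply the definitions, while nrows and fill control the size and contents of the new table. Names and values are illustrative.
>>> from astropy.io import fits
>>> import numpy as np
>>> c1 = fits.Column(name='TIME', format='D', array=np.array([1.0, 2.0]))
>>> c2 = fits.Column(name='COUNTS', format='J', array=np.array([10, 20]))
>>> hdu = fits.BinTableHDU.from_columns([c1, c2], nrows=5, fill=True)
>>> len(hdu.data)
5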
- classmethod fromstring(data, checksum=False, ignore_missing_end=False, **kwargs)¶
Creates a new HDU object of the appropriate type from a string containing the HDU's entire header and, optionally, its data.
Note: When creating a new HDU from a string without a backing file object, the data of that HDU may be read-only. It depends on whether the underlying string was an immutable Python str/bytes object or some kind of read-write memory buffer such as a memoryview.
- Parameters:
- data : str, bytearray, memoryview, ndarray
A byte string containing the HDU's header and data.
- checksum : bool, optional
Check the HDU's checksum and/or datasum.
- ignore_missing_end : bool, optional
Ignore a missing end card in the header data. Note that without the end card the end of the header may be ambiguous and result in a corrupt HDU. In this case the assumption is that the first 2880-byte block that does not begin with valid FITS header data is the beginning of the data.
- **kwargs : optional
May consist of additional keyword arguments specific to an HDU type; these correspond to keywords recognized by the constructors of different HDU classes such as PrimaryHDU, ImageHDU, or BinTableHDU. Any unrecognized keyword arguments are simply ignored.
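A sketch of rebuilding an HDU from raw bytes with fromstring(). It assumes the default PrimaryHDU prepended by writeto() occupies exactly one 2880-byte block (true for an empty primary header, but stated here as an assumption rather than something guaranteed by the API above).
>>> import io
>>> from astropy.io import fits
>>> import numpy as np
>>> col = fits.Column(name='VAL', format='J', array=np.arange(3))
>>> orig = fits.BinTableHDU.from_columns([col])
>>> buf = io.BytesIO()
>>> orig.writeto(buf)                 # a PrimaryHDU followed by this table HDU
>>> raw = buf.getvalue()
>>> rebuilt = fits.BinTableHDU.fromstring(raw[2880:])   # skip the primary block
>>> print(rebuilt.data['VAL'].tolist())
[0, 1, 2]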
- classmethod load(datafile, cdfile=None, hfile=None, replace=False, header=None)[source]¶
Create a table from the input ASCII files. The input is from up to three separate files, one containing column definitions, one containing header parameters, and one containing column data.
The column definition and header parameters files are not required. When absent, the column definitions and/or header parameters are taken from the header object given in the header argument; otherwise sensible defaults are inferred (though this mode is not recommended).
- Parameters:
- datafile : path-like or file-like object
Input data file containing the table data in ASCII format.
- cdfile : path-like or file-like object, optional
Input column definition file containing the names, formats, display formats, physical units, multidimensional array dimensions, undefined values, scale factors, and offsets associated with the columns in the table. If None, the column definitions are taken from the current values in this object.
- hfile : path-like or file-like object, optional
Input parameter definition file containing the header parameter definitions to be associated with the table. If None, the header parameter definitions are taken from the current values in this object's header.
- replace : bool, optional
When True, indicates that the entire header should be replaced with the contents of the ASCII file instead of just updating the current header.
- header : Header, optional
When the cdfile and hfile are missing, use this Header object in the creation of the new table and HDU. Otherwise this Header supersedes the keywords from hfile, which is only used to update values not present in this Header, unless replace=True, in which case this Header's values are completely replaced with the values from hfile.
Notes
The primary use for the load method is to allow the input of table data and parameters from ASCII files that were edited in a standard text editor. The dump method can be used to create the initial ASCII files.
datafile: Each line of the data file represents one row of table data. The data is output one column at a time in column order. If a column contains an array, each element of the column array in the current row is output before moving on to the next column. Each row ends with a new line.
Integer data is output right-justified in a 21-character field followed by a blank. Floating point data is output right-justified using 'g' format in a 21-character field with 15 digits of precision, followed by a blank. String data that does not contain whitespace is output left-justified in a field whose width matches the width specified in the TFORM header parameter for the column, followed by a blank. When the string data contains whitespace characters, the string is enclosed in quotation marks (""). For the last data element in a row, the trailing blank in the field is replaced by a new line character.
For column data containing variable length arrays ('P' format), the array data is preceded by the string 'VLA_Length= ' and the integer length of the array for that row, left-justified in a 21-character field, followed by a blank.
Note
This format does not support variable length arrays using the 'Q' format, due to difficult-to-overcome ambiguities. What this means is that this file format cannot support VLA columns in tables stored in files that are over 2 GB in size.
For column data representing a bit field ('X' format), each bit value in the field is output right-justified in a 21-character field as 1 (for true) or 0 (for false).
cdfile: Each line of the column definitions file provides the definitions for one column in the table. The line is broken up into eight 16-character fields. The first field provides the column name (TTYPEn). The second field provides the column format (TFORMn). The third field provides the display format (TDISPn). The fourth field provides the physical units (TUNITn). The fifth field provides the dimensions for a multidimensional array (TDIMn). The sixth field provides the value that signifies an undefined value (TNULLn). The seventh field provides the scale factor (TSCALn). The eighth field provides the offset value (TZEROn). A field value of "" is used to represent the case where no value is provided.
hfile: Each line of the header parameters file provides the definition of a single HDU header card as represented by the card image.
- classmethod match_header(header)[source]¶
This is an abstract type that implements the shared functionality of the ASCII and Binary Table HDU types, which should be used instead of this.
- classmethod readfrom(fileobj, checksum=False, ignore_missing_end=False, **kwargs)¶
Read the HDU from a file. Normally an HDU should be opened with open(), which reads the entire HDU list in a FITS file. But this method is still provided for symmetry with writeto().
- Parameters:
- fileobj : file-like object
Input FITS file. The file's seek pointer is assumed to be at the beginning of the HDU.
- checksum : bool
If True, verifies that both DATASUM and CHECKSUM card values (when present in the HDU header) match the header and data of all HDUs in the file.
- ignore_missing_end : bool
Do not issue an exception when opening a file that is missing an END card in the last header.
- req_cards(keyword, pos, test, fix_value, option, errlist)¶
Check the existence, location, and value of a required Card.
- Parameters:
- keyword : str
The keyword to validate.
- pos : int, callable
If an int, this specifies the exact location this card should have in the header. Remember that Python is zero-indexed, so this means pos=0 requires the card to be the first card in the header. If given a callable, it should take one argument (the actual position of the keyword) and return True or False. This can be used for custom evaluation. For example, pos=lambda idx: idx > 10 will check that the keyword's index is greater than 10.
- test : callable
This should be a callable (generally a function) that is passed the value of the given keyword and returns True or False. This can be used to validate the value associated with the given keyword.
- fix_value : str, int, float, complex, bool, None
A valid value for a FITS keyword to use if the given test fails, to replace an invalid value. In other words, this provides a default value to use as a replacement if the keyword's current value is invalid. If None, there is no replacement value and the keyword is unfixable.
- option : str
Output verification option. Must be one of "fix", "silentfix", "ignore", "warn", or "exception". May also be any combination of "fix" or "silentfix" with "+ignore", "+warn", or "+exception" (e.g. "fix+warn"). See Verification Options for more info.
- errlist : list
A list of validation errors already found in the FITS file; this is used primarily for the validation system to collect errors across multiple HDUs and multiple calls to req_cards.
Notes
If pos=None, the card can be anywhere in the header. If the card does not exist, the new card will have the fix_value as its value when created. Also check the card's value by using the test argument.
- run_option(option='warn', err_text='', fix_text='Fixed.', fix=None, fixable=True)¶
Execute the verification with selected option.
- property size¶
Size (in bytes) of the data portion of the HDU.
- update()¶
Update header keywords to reflect recent changes to the columns.
- verify(option='warn')¶
Verify all values in the instance.
- Parameters:
- option : str
Output verification option. Must be one of "fix", "silentfix", "ignore", "warn", or "exception". May also be any combination of "fix" or "silentfix" with "+ignore", "+warn", or "+exception" (e.g. "fix+warn"). See Verification Options for more info.
- verify_checksum()¶
Verify that the value in the CHECKSUM keyword matches the value calculated for the current HDU CHECKSUM.
- Returns:
- valid : int
0 - failure
1 - success
2 - no CHECKSUM keyword present
- verify_datasum()¶
Verify that the value in the DATASUM keyword matches the value calculated for the DATASUM of the current HDU data.
- Returns:
- valid : int
0 - failure
1 - success
2 - no DATASUM keyword present
- writeto(name, output_verify='exception', overwrite=False, checksum=False)¶
Works similarly to the normal writeto(), but prepends a default PrimaryHDU, as required by extension HDUs (which cannot stand on their own).
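Because writeto() prepends a default PrimaryHDU, the resulting file holds two HDUs; the file name below is illustrative.
>>> from astropy.io import fits
>>> import numpy as np
>>> col = fits.Column(name='VAL', format='J', array=np.arange(3))
>>> hdu = fits.BinTableHDU.from_columns([col])
>>> hdu.writeto('single_table.fits', overwrite=True, checksum=True)
>>> with fits.open('single_table.fits') as hdul:
...     print(len(hdul))              # PrimaryHDU + BinTableHDU
2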
TableHDU¶
- class astropy.io.fits.TableHDU(data=None, header=None, name=None, ver=None, character_as_bytes=False)[source]¶
Bases: _TableBaseHDU
FITS ASCII table extension HDU class.
- Parameters:
- data : array or FITS_rec
Data to be used.
- header : Header
Header to be used.
- name : str
Name to be populated in the EXTNAME keyword.
- ver : int > 0 or None, optional
The ver of the HDU; this will be the value of the keyword EXTVER. If not given or None, it defaults to the value of the EXTVER card of the header, or 1. (default: None)
- character_as_bytes : bool
Whether to return bytes for string columns. By default this is False and (unicode) strings are returned, but this does not respect memory mapping and loads the whole column into memory when accessed.
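An illustrative ASCII-table construction: ASCII tables use the 'A'/'I'/'F'/'E'/'D' TFORM codes, and ascii=True disambiguates formats shared with binary tables (see the Column documentation below). Names and values are made up.
>>> from astropy.io import fits
>>> import numpy as np
>>> c1 = fits.Column(name='NAME', format='A10', ascii=True,
...                  array=np.array(['alpha', 'beta']))
>>> c2 = fits.Column(name='MAG', format='F8.3', ascii=True,
...                  array=np.array([4.5, 5.25]))
>>> hdu = fits.TableHDU.from_columns([c1, c2])
>>> print(hdu.header['XTENSION'])
TABLE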
- add_checksum(when=None, override_datasum=False, checksum_keyword='CHECKSUM', datasum_keyword='DATASUM')¶
Add the CHECKSUM and DATASUM cards to this HDU with the values set to the checksum calculated for the HDU and the data respectively. The addition of the DATASUM card may be overridden.
- Parameters:
- when : str, optional
Comment string for the cards; by default the comments will represent the time when the checksum was calculated.
- override_datasum : bool, optional
Add the CHECKSUM card only.
- checksum_keyword : str, optional
The name of the header keyword in which to store the checksum value; this is typically 'CHECKSUM' per convention, but there exist use cases in which a different keyword should be used.
- datasum_keyword : str, optional
See checksum_keyword.
Notes
For testing purposes, first call add_datasum with a when argument, then call add_checksum with a when argument and override_datasum set to True. This will provide consistent comments for both cards and enable the generation of a CHECKSUM card with a consistent value.
- add_datasum(when=None, datasum_keyword='DATASUM')¶
Add the DATASUM card to this HDU with the value set to the checksum calculated for the data.
- Parameters:
- when : str, optional
Comment string for the card; by default it represents the time when the checksum was calculated.
- datasum_keyword : str, optional
The name of the header keyword in which to store the datasum value; this is typically 'DATASUM' per convention, but there exist use cases in which a different keyword should be used.
- Returns:
- checksum : int
The calculated datasum.
Notes
For testing purposes, provide a when argument to enable the comment value in the card to remain consistent. This will enable the generation of a CHECKSUM card with a consistent value.
- copy()¶
Make a copy of the table HDU; both header and data are copied.
- filebytes()¶
Calculates and returns the number of bytes that this HDU will write to a file.
- fileinfo()¶
Returns a dictionary detailing information about the locations of this HDU within any associated file. The values are only valid after a read or write of the associated file with no intervening changes to the HDUList.
- Returns:
- dict or None
The dictionary details information about the locations of this HDU within an associated file. Returns None when the HDU is not associated with a file.
Dictionary contents:
Key        Value
file       File object associated with the HDU
filemode   Mode in which the file was opened (readonly, copyonwrite, update, append, ostream)
hdrLoc     Starting byte location of header in file
datLoc     Starting byte location of data block in file
datSpan    Data size including padding
- classmethod from_columns(columns, header=None, nrows=0, fill=False, character_as_bytes=False, **kwargs)¶
Given either a ColDefs object, a sequence of Column objects, or another table HDU or table data (a FITS_rec or multi-field numpy.ndarray or numpy.recarray object), return a new table HDU of the class this method was called on, using the column definitions from the input.
See also FITS_rec.from_columns.
- Parameters:
- columns : sequence of Column, ColDefs, or column-like
The columns from which to create the table data, or an object with a column-like structure from which a ColDefs can be instantiated. This includes an existing BinTableHDU or TableHDU, or a numpy.recarray, to give some examples.
If these columns have data arrays attached, that data may be used in initializing the new table. Otherwise the input columns will be used as a template for a new table with the requested number of rows.
- header : Header
An optional Header object to instantiate the new HDU with. Header keywords specifically related to defining the table structure (such as the "TXXXn" keywords like TTYPEn) will be overridden by the supplied column definitions, but all other informational and data model-specific keywords are kept.
- nrows : int
Number of rows in the new table. If the input columns have data associated with them, the size of the largest input column is used. Otherwise the default is 0.
- fill : bool
If True, will fill all cells with zeros or blanks. If False, copy the data from input; undefined cells will still be filled with zeros/blanks.
- character_as_bytes : bool
Whether to return bytes for string columns when accessed from the HDU. By default this is False and (unicode) strings are returned, but for large tables this may use up a lot of memory.
Notes
Any additional keyword arguments accepted by the HDU class's __init__ may also be passed in as keyword arguments.
- classmethod fromstring(data, checksum=False, ignore_missing_end=False, **kwargs)¶
Creates a new HDU object of the appropriate type from a string containing the HDU's entire header and, optionally, its data.
Note: When creating a new HDU from a string without a backing file object, the data of that HDU may be read-only. It depends on whether the underlying string was an immutable Python str/bytes object or some kind of read-write memory buffer such as a memoryview.
- Parameters:
- data : str, bytearray, memoryview, ndarray
A byte string containing the HDU's header and data.
- checksum : bool, optional
Check the HDU's checksum and/or datasum.
- ignore_missing_end : bool, optional
Ignore a missing end card in the header data. Note that without the end card the end of the header may be ambiguous and result in a corrupt HDU. In this case the assumption is that the first 2880-byte block that does not begin with valid FITS header data is the beginning of the data.
- **kwargs : optional
May consist of additional keyword arguments specific to an HDU type; these correspond to keywords recognized by the constructors of different HDU classes such as PrimaryHDU, ImageHDU, or BinTableHDU. Any unrecognized keyword arguments are simply ignored.
- classmethod match_header(header)[source]¶
This is an abstract type that implements the shared functionality of the ASCII and Binary Table HDU types, which should be used instead of this.
- classmethod readfrom(fileobj, checksum=False, ignore_missing_end=False, **kwargs)¶
Read the HDU from a file. Normally an HDU should be opened with open(), which reads the entire HDU list in a FITS file. But this method is still provided for symmetry with writeto().
- Parameters:
- fileobj : file-like object
Input FITS file. The file's seek pointer is assumed to be at the beginning of the HDU.
- checksum : bool
If True, verifies that both DATASUM and CHECKSUM card values (when present in the HDU header) match the header and data of all HDUs in the file.
- ignore_missing_end : bool
Do not issue an exception when opening a file that is missing an END card in the last header.
- req_cards(keyword, pos, test, fix_value, option, errlist)¶
Check the existence, location, and value of a required Card.
- Parameters:
- keyword : str
The keyword to validate.
- pos : int, callable
If an int, this specifies the exact location this card should have in the header. Remember that Python is zero-indexed, so this means pos=0 requires the card to be the first card in the header. If given a callable, it should take one argument (the actual position of the keyword) and return True or False. This can be used for custom evaluation. For example, pos=lambda idx: idx > 10 will check that the keyword's index is greater than 10.
- test : callable
This should be a callable (generally a function) that is passed the value of the given keyword and returns True or False. This can be used to validate the value associated with the given keyword.
- fix_value : str, int, float, complex, bool, None
A valid value for a FITS keyword to use if the given test fails, to replace an invalid value. In other words, this provides a default value to use as a replacement if the keyword's current value is invalid. If None, there is no replacement value and the keyword is unfixable.
- option : str
Output verification option. Must be one of "fix", "silentfix", "ignore", "warn", or "exception". May also be any combination of "fix" or "silentfix" with "+ignore", "+warn", or "+exception" (e.g. "fix+warn"). See Verification Options for more info.
- errlist : list
A list of validation errors already found in the FITS file; this is used primarily for the validation system to collect errors across multiple HDUs and multiple calls to req_cards.
Notes
If pos=None, the card can be anywhere in the header. If the card does not exist, the new card will have the fix_value as its value when created. Also check the card's value by using the test argument.
- run_option(option='warn', err_text='', fix_text='Fixed.', fix=None, fixable=True)¶
Execute the verification with selected option.
- property size¶
Size (in bytes) of the data portion of the HDU.
- update()¶
Update header keywords to reflect recent changes to the columns.
- verify(option='warn')¶
Verify all values in the instance.
- Parameters:
- option : str
Output verification option. Must be one of "fix", "silentfix", "ignore", "warn", or "exception". May also be any combination of "fix" or "silentfix" with "+ignore", "+warn", or "+exception" (e.g. "fix+warn"). See Verification Options for more info.
- verify_checksum()¶
Verify that the value in the CHECKSUM keyword matches the value calculated for the current HDU CHECKSUM.
- Returns:
- valid : int
0 - failure
1 - success
2 - no CHECKSUM keyword present
- verify_datasum()¶
Verify that the value in the DATASUM keyword matches the value calculated for the DATASUM of the current HDU data.
- Returns:
- valid : int
0 - failure
1 - success
2 - no DATASUM keyword present
- writeto(name, output_verify='exception', overwrite=False, checksum=False)¶
Works similarly to the normal writeto(), but prepends a default PrimaryHDU, as required by extension HDUs (which cannot stand on their own).
Column¶
- class astropy.io.fits.Column(name=None, format=None, unit=None, null=None, bscale=None, bzero=None, disp=None, start=None, dim=None, array=None, ascii=None, coord_type=None, coord_unit=None, coord_ref_point=None, coord_ref_value=None, coord_inc=None, time_ref_pos=None)[source]¶
Bases: NotifierMixin
Class which contains the definition of one column, e.g. ttype, tform, etc., and the array containing values for the column.
Construct a Column by specifying attributes. All attributes except format can be optional; see Column Creation and Creating an ASCII Table for more information regarding the TFORM keyword.
- Parameters:
- name : str, optional
Column name, corresponding to the TTYPE keyword.
- format : str
Column format, corresponding to the TFORM keyword.
- unit : str, optional
Column unit, corresponding to the TUNIT keyword.
- null : str, optional
Null value, corresponding to the TNULL keyword.
- bscale : int-like, optional
Bscale value, corresponding to the TSCAL keyword.
- bzero : int-like, optional
Bzero value, corresponding to the TZERO keyword.
- disp : str, optional
Display format, corresponding to the TDISP keyword.
- start : int, optional
Column starting position (ASCII table only), corresponding to the TBCOL keyword.
- dim : str, optional
Column dimension, corresponding to the TDIM keyword.
- array : iterable, optional
A list, numpy.ndarray (or other iterable that can be used to initialize an ndarray) providing initial data for this column. The array will be automatically converted, if possible, to the data format of the column. In the case where non-trivial bscale and/or bzero arguments are given, the values in the array must be the physical values, that is, the values of the column as if the scaling has already been applied (the array stored on the column object will then be converted back to its storage values).
- ascii : bool, optional
Set True if this describes a column for an ASCII table; this may be required to disambiguate the column format.
- coord_type : str, optional
Coordinate/axis type, corresponding to the TCTYP keyword.
- coord_unit : str, optional
Coordinate/axis unit, corresponding to the TCUNI keyword.
- coord_ref_point : int-like, optional
Pixel coordinate of the reference point, corresponding to the TCRPX keyword.
- coord_ref_value : int-like, optional
Coordinate value at reference point, corresponding to the TCRVL keyword.
- coord_inc : int-like, optional
Coordinate increment at reference point, corresponding to the TCDLT keyword.
- time_ref_pos : str, optional
Reference position for a time coordinate column, corresponding to the TRPOS keyword.
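A few common column definitions as a sketch (names, formats, and numbers are illustrative), showing string, integer, multidimensional, and unit-carrying columns.
>>> from astropy.io import fits
>>> import numpy as np
>>> c_id = fits.Column(name='ID', format='J', array=np.array([1, 2, 3]))
>>> c_name = fits.Column(name='NAME', format='10A', array=np.array(['a', 'b', 'c']))
>>> c_pos = fits.Column(name='POS', format='2D', dim='(2)', array=np.zeros((3, 2)))
>>> c_temp = fits.Column(name='TEMP', format='E', unit='K',
...                      array=np.array([273.15, 300.0, 310.5], dtype=np.float32))
>>> hdu = fits.BinTableHDU.from_columns([c_id, c_name, c_pos, c_temp])
>>> print(hdu.columns.names)
['ID', 'NAME', 'POS', 'TEMP']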
- property array¶
The Numpy ndarray associated with this Column.
If the column was instantiated with an array passed to the array argument, this will return that array. However, if the column is later added to a table, such as via BinTableHDU.from_columns as is typically the case, this attribute will be updated to reference the associated field in the table, which may no longer be the same array.
ColDefs¶
- class astropy.io.fits.ColDefs(input, ascii=False)[source]¶
Bases: NotifierMixin
Column definitions class.
It has attributes corresponding to the Column attributes (e.g. ColDefs has the attribute names while Column has name). Each attribute in ColDefs is a list of corresponding attribute values from all Column objects.
- Parameters:
- change_attrib(col_name, attrib, new_value)[source]¶
Change an attribute (in the KEYWORD_ATTRIBUTES list) of a Column.
- Parameters:
- col_name : str or int
The column name or index to change.
- attrib : str
The attribute name.
- new_value : object
The new value for the attribute.
- change_name(col_name, new_name)[source]¶
Change a Column's name.
- Parameters:
- col_name : str
The current name of the column.
- new_name : str
The new name of the column.
- change_unit(col_name, new_unit)[source]¶
Change a Column's unit.
- Parameters:
- col_name : str or int
The column name or index.
- new_unit : str
The new unit for the column.
- del_col(col_name)[source]¶
Delete (the definition of) one Column.
- Parameters:
- col_name : str or int
The column's name or index.
- info(attrib='all', output=None)[source]¶
Get attribute(s) information of the column definition.
- Parameters:
- attrib : str
Can be one or more of the attributes listed in astropy.io.fits.column.KEYWORD_ATTRIBUTES. The default is "all", which will print out all attributes. It forgives plurals and blanks. If there are two or more attribute names, they must be separated by comma(s).
- output : file-like object, optional
File-like object to output to. Outputs to stdout by default. If False, returns the attributes as a dict instead.
Notes
This function doesn't return anything by default; it just prints to stdout.
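A sketch of inspecting and editing column definitions through the ColDefs attached to a table HDU; the names, unit, and values are illustrative, and the closing hdu.update() call (documented under the table HDU classes above) refreshes the header keywords after the changes.
>>> from astropy.io import fits
>>> import numpy as np
>>> col = fits.Column(name='flux', format='E', unit='Jy',
...                   array=np.array([1.0, 2.0], dtype=np.float32))
>>> hdu = fits.BinTableHDU.from_columns([col])
>>> cols = hdu.columns                # a ColDefs instance
>>> print(cols.names, cols.units)
['flux'] ['Jy']
>>> cols.change_name('flux', 'FLUX')
>>> cols.change_unit('FLUX', 'mJy')
>>> print(cols.names, cols.units)
['FLUX'] ['mJy']
>>> hdu.update()                      # push the changes into the header keywords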
FITS_rec¶
- class astropy.io.fits.FITS_rec(input)[source]¶
Bases: recarray
FITS record array class.
FITS_rec is the data part of a table HDU. It is a layer over the recarray, so we can deal with scaled columns.
It inherits all of the standard methods from numpy.ndarray.
Construct a FITS record array from a recarray.
- property columns¶
A user-visible accessor for the coldefs.
- copy(order='C')[source]¶
The Numpy documentation lies; numpy.ndarray.copy is not equivalent to numpy.copy. Differences include that it re-views the copied array as self's ndarray subclass, as though it were taking a slice; this means __array_finalize__ is called and the copy shares all the array attributes (including ._converted!). So we need to make a deep copy of all those attributes so that the two arrays truly do not share any data.
- property formats¶
List of column FITS formats.
- classmethod from_columns(columns, nrows=0, fill=False, character_as_bytes=False)[source]¶
Given a ColDefs object of unknown origin, initialize a new FITS_rec object.
Note
This was originally part of the new_table function in the table module but was moved into a class method since most of its functionality always had more to do with initializing a FITS_rec object than anything else, and much of it also overlapped with FITS_rec._scale_back.
- Parameters:
- columns : sequence of Column or a ColDefs
The columns from which to create the table data. If these columns have data arrays attached, that data may be used in initializing the new table. Otherwise the input columns will be used as a template for a new table with the requested number of rows.
- nrows : int
Number of rows in the new table. If the input columns have data associated with them, the size of the largest input column is used. Otherwise the default is 0.
- fill : bool
If True, will fill all cells with zeros or blanks. If False, copy the data from input; undefined cells will still be filled with zeros/blanks.
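A minimal sketch of building the record array directly, without wrapping it in an HDU; column names and values are illustrative.
>>> from astropy.io import fits
>>> import numpy as np
>>> c1 = fits.Column(name='X', format='D', array=np.array([0.5, 1.5]))
>>> c2 = fits.Column(name='Y', format='D', array=np.array([2.5, 3.5]))
>>> rec = fits.FITS_rec.from_columns([c1, c2])
>>> print(list(rec.names))
['X', 'Y']
>>> print(rec['X'].tolist())
[0.5, 1.5]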
- property names¶
List of column names.
- tolist()[source]¶
Return the array as an a.ndim-levels deep nested list of Python scalars.
Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the item function.
If a.ndim is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar.
- Parameters:
- none
- Returns:
- y : object, or list of object, or list of list of object, or ...
The possibly nested list of array elements.
Notes
The array may be recreated via a = np.array(a.tolist()), although this may sometimes lose precision.
Examples
For a 1D array, a.tolist() is almost the same as list(a), except that tolist changes numpy scalars to Python scalars:
>>> a = np.uint32([1, 2])
>>> a_list = list(a)
>>> a_list
[1, 2]
>>> type(a_list[0])
<class 'numpy.uint32'>
>>> a_tolist = a.tolist()
>>> a_tolist
[1, 2]
>>> type(a_tolist[0])
<class 'int'>
Additionally, for a 2D array, tolist applies recursively:
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
The base case for this recursion is a 0D array:
>>> a = np.array(1)
>>> list(a)
Traceback (most recent call last):
...
TypeError: iteration over a 0-d array
>>> a.tolist()
1
FITS_record¶
- class astropy.io.fits.FITS_record(input, row=0, start=None, end=None, step=None, base=None, **kwargs)[source]¶
Bases: object
FITS record class.
FITS_record is used to access records of the FITS_rec object. This will allow us to deal with scaled columns. It also handles conversion/scaling of columns in ASCII tables. The FITS_record class expects a FITS_rec object as input.
- Parameters:
- input : array
The array to wrap.
- row : int, optional
The starting logical row of the array.
- start : int, optional
The starting column in the row associated with this object. Used for subsetting the columns of the FITS_rec object.
- end : int, optional
The ending column in the row associated with this object. Used for subsetting the columns of the FITS_rec object.
Table Functions¶
tabledump()¶
- astropy.io.fits.tabledump(filename, datafile=None, cdfile=None, hfile=None, ext=1, overwrite=False)[source]¶
Dump a table HDU to a file in ASCII format. The table may be dumped to three separate files, one containing column definitions, one containing header parameters, and one for table data.
- Parameters:
- filename : path-like or file-like object
Input FITS file.
- datafile : path-like or file-like object, optional
Output data file. The default is the root name of the input FITS file appended with an underscore, followed by the extension number (ext), followed by the extension .txt.
- cdfile : path-like or file-like object, optional
Output column definitions file. The default is None, in which case no column definitions output is produced.
- hfile : path-like or file-like object, optional
Output header parameters file. The default is None, in which case no header parameters output is produced.
- ext : int
The number of the extension containing the table HDU to be dumped.
- overwrite : bool, optional
If True, overwrite the output file if it exists. Raises an OSError if False and the output file exists. Default is False.
Notes
The primary use for the tabledump function is to allow editing of the table data and parameters in a standard text editor. The tableload function can be used to reassemble the table from the three ASCII files.
datafile: Each line of the data file represents one row of table data. The data is output one column at a time in column order. If a column contains an array, each element of the column array in the current row is output before moving on to the next column. Each row ends with a new line.
Integer data is output right-justified in a 21-character field followed by a blank. Floating point data is output right-justified using 'g' format in a 21-character field with 15 digits of precision, followed by a blank. String data that does not contain whitespace is output left-justified in a field whose width matches the width specified in the TFORM header parameter for the column, followed by a blank. When the string data contains whitespace characters, the string is enclosed in quotation marks (""). For the last data element in a row, the trailing blank in the field is replaced by a new line character.
For column data containing variable length arrays ('P' format), the array data is preceded by the string 'VLA_Length= ' and the integer length of the array for that row, left-justified in a 21-character field, followed by a blank.
Note
This format does not support variable length arrays using the 'Q' format, due to difficult-to-overcome ambiguities. What this means is that this file format cannot support VLA columns in tables stored in files that are over 2 GB in size.
For column data representing a bit field ('X' format), each bit value in the field is output right-justified in a 21-character field as 1 (for true) or 0 (for false).
cdfile: Each line of the column definitions file provides the definitions for one column in the table. The line is broken up into eight 16-character fields. The first field provides the column name (TTYPEn). The second field provides the column format (TFORMn). The third field provides the display format (TDISPn). The fourth field provides the physical units (TUNITn). The fifth field provides the dimensions for a multidimensional array (TDIMn). The sixth field provides the value that signifies an undefined value (TNULLn). The seventh field provides the scale factor (TSCALn). The eighth field provides the offset value (TZEROn). A field value of "" is used to represent the case where no value is provided.
hfile: Each line of the header parameters file provides the definition of a single HDU header card as represented by the card image.
tableload()¶
- astropy.io.fits.tableload(datafile, cdfile, hfile=None)[source]¶
Create a table from the input ASCII files. The input is from up to three separate files, one containing column definitions, one containing header parameters, and one containing column data. The header parameters file is not required. When the header parameters file is absent a minimal header is constructed.
- Parameters:
- datafile : path-like or file-like object
Input data file containing the table data in ASCII format.
- cdfile : path-like or file-like object
Input column definition file containing the names, formats, display formats, physical units, multidimensional array dimensions, undefined values, scale factors, and offsets associated with the columns in the table.
- hfile : path-like or file-like object, optional
Input parameter definition file containing the header parameter definitions to be associated with the table. If None, a minimal header is constructed.
Notes
The primary use for the tableload function is to allow the input of table data and parameters from ASCII files that were edited in a standard text editor. The tabledump function can be used to create the initial ASCII files.
datafile: Each line of the data file represents one row of table data. The data is output one column at a time in column order. If a column contains an array, each element of the column array in the current row is output before moving on to the next column. Each row ends with a new line.
Integer data is output right-justified in a 21-character field followed by a blank. Floating point data is output right-justified using 'g' format in a 21-character field with 15 digits of precision, followed by a blank. String data that does not contain whitespace is output left-justified in a field whose width matches the width specified in the TFORM header parameter for the column, followed by a blank. When the string data contains whitespace characters, the string is enclosed in quotation marks (""). For the last data element in a row, the trailing blank in the field is replaced by a new line character.
For column data containing variable length arrays ('P' format), the array data is preceded by the string 'VLA_Length= ' and the integer length of the array for that row, left-justified in a 21-character field, followed by a blank.
Note
This format does not support variable length arrays using the 'Q' format, due to difficult-to-overcome ambiguities. What this means is that this file format cannot support VLA columns in tables stored in files that are over 2 GB in size.
For column data representing a bit field ('X' format), each bit value in the field is output right-justified in a 21-character field as 1 (for true) or 0 (for false).
cdfile: Each line of the column definitions file provides the definitions for one column in the table. The line is broken up into eight 16-character fields. The first field provides the column name (TTYPEn). The second field provides the column format (TFORMn). The third field provides the display format (TDISPn). The fourth field provides the physical units (TUNITn). The fifth field provides the dimensions for a multidimensional array (TDIMn). The sixth field provides the value that signifies an undefined value (TNULLn). The seventh field provides the scale factor (TSCALn). The eighth field provides the offset value (TZEROn). A field value of "" is used to represent the case where no value is provided.
hfile: Each line of the header parameters file provides the definition of a single HDU header card as represented by the card image.
table_to_hdu()¶
- astropy.io.fits.table_to_hdu(table, character_as_bytes=False)[source]¶
Convert an astropy.table.Table object to a FITS BinTableHDU.
- Parameters:
- table : astropy.table.Table
The table to convert.
- character_as_bytes : bool
Whether to return bytes for string columns when accessed from the HDU. By default this is False and (unicode) strings are returned, but for large tables this may use up a lot of memory.
- Returns:
- table_hdu : BinTableHDU
The FITS binary table HDU.
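A sketch of converting an astropy Table; the column names, values, and unit are illustrative, and indexing the resulting ColDefs by column name is assumed to work as for any table HDU.
>>> from astropy.io import fits
>>> from astropy.table import Table
>>> t = Table({'wavelength': [400.0, 500.0, 600.0], 'flux': [1.2, 3.4, 5.6]})
>>> t['wavelength'].unit = 'nm'
>>> hdu = fits.table_to_hdu(t)
>>> print(list(hdu.columns.names))
['wavelength', 'flux']
>>> print(hdu.columns['wavelength'].unit)
nm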