Table will read/write, select, modify, or show the information of the rows and columns in recognized Table formats (including FITS binary, FITS ASCII, and plain text table files, see Tables). Output columns can also be determined by number or regular expression matching of column names, units, or comments. The executable name is asttable with the following general template
$ asttable [OPTION...] InputFile
One line examples:
## Get the table column information (name, units, or data type), and
## the number of rows:
$ asttable table.fits --information

## Print columns named RA and DEC, followed by all the columns where
## the name starts with "MAG_":
$ asttable table.fits --column=RA --column=DEC --column=/^MAG_/

## Similar to the above, but with one call to `--column' (or `-c'),
## also sort the rows by the input's photometric redshift (`Z_PHOT')
## column. To confirm the sort, you can add `Z_PHOT' to the columns
## to print.
$ asttable table.fits -cRA,DEC,/^MAG_/ --sort=Z_PHOT

## Similar to the above, but only print rows that have a photometric
## redshift between 2 and 3.
$ asttable table.fits -cRA,DEC,/^MAG_/ --range=Z_PHOT,2:3

## Only print rows with a value in the 10th column above 100000:
$ asttable table.txt --range=10,10e5,inf

## Only print the 2nd column, and the third column multiplied by 5,
## save the resulting two columns in `table.txt':
$ asttable table.fits -c2,'arith $2 5 x' -otable.txt

## Sort the output rows by the third column, save output:
$ asttable table.fits --sort=3 -ooutput.txt

## Subtract the first column from the second in `cat.txt' (can also
## be a FITS table) and keep the third and fourth columns.
$ asttable cat.txt -c'arith $2 $1 -',3,4 -ocat.fits

## Convert sexagesimal coordinates to degrees (same can be done in a
## large table given as argument).
$ echo "7h34m35.5498 31d53m14.352s" | asttable

## Convert RA and Dec in degrees to sexagesimal (same can be done in a
## large table given as argument).
$ echo "113.64812416667 31.88732" \
       | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'

## Extract columns 1 and 2, as well as all those between 12 to 58:
$ asttable table.fits -c1,2,$(seq -s',' 12 58)
Table’s input dataset can be given either as a file or from Standard input (piped from another program, see Standard input). In the absence of selected columns, all the input’s columns and rows will be written to the output. The full set of operations Table can do are described in detail below, but for a more high-level introduction to the various operations, and their precedence, see Operation precedence in Table.
If any output file is explicitly requested (with --output) the output table will be written in it. When no output file is explicitly requested, the output table will be written to the standard output. If the specified output is a FITS file, the type of FITS table (binary or ASCII) will be determined from the --tabletype option. If the output is not a FITS file, it will be printed as a plain text table (with space characters between the columns). When the output is not binary (for example standard output or a plain-text file), the --txtf32* or --txtf64* options can be used for the formatting of floating point columns (see Printing floating point numbers). When the columns are accompanied by meta-data (like column name, units, or comments), this information will also be printed in the plain text file before the table, as described in Gnuastro text table format.
For the full list of options common to all Gnuastro programs please see Common options.
Options can also be stored in directory, user, or system-wide configuration files to avoid repeating them on the command-line, see Configuration files.
Table does not follow Automatic output that is common in most Gnuastro programs, see Automatic output.
Thus, in the absence of an output file, the selected columns will be printed on the command-line with no column information, ready for redirecting to other tools like AWK.
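For example, the hypothetical pipeline below (assuming table.fits has columns named RA and MAGNITUDE; both names are only for illustration) selects two columns and passes them straight to AWK for further filtering:

## Sketch: print the RA of rows brighter than 20th magnitude.
$ asttable table.fits -cRA,MAGNITUDE | awk '$2 < 20 {print $1}'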
Sexagesimal coordinates as floats in plain-text tables:
When a column is determined to be a floating point type (32-bit or 64-bit) in a plain-text table, it can contain sexagesimal values in the format of ‘_h_m_s’ (for RA) and ‘_d_m_s’ (for Dec). In this case, the value is immediately converted to a single floating point number (in units of degrees) when the table is read. For example:

$ echo "7h34m35.5498 31d53m14.352s" | asttable

The inverse can also be done with the more general column arithmetic operators:

$ echo "113.64812416667 31.88732" \
       | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'

If you want to preserve the sexagesimal contents of a column, you should store that column as a string, see Gnuastro text table format.
Only print the column information in the specified table on the command-line and exit.
Each column’s information (number, name, units, data type, and comments) will be printed as a row on the command-line.
If the column is a multi-valued (vector) column, a [N] is printed after the type, where N is the number of elements within that vector.
Note that the FITS standard only requires the data type (see Numeric data types), and in plain text tables, no meta-data/information is mandatory. Gnuastro has its own convention in the comments of a plain text table to store and transfer this information as described in Gnuastro text table format.
This option will take precedence over all other operations in Table, so when it is called along with other operations, they will be ignored, see Operation precedence in Table. This can be useful if you forget the identifier of a column after you have already typed some on the command-line. You can simply add a -i to your already-written command (without changing anything) and run Table, to see the whole list of column names and information. Then you can use the shell history (with the up arrow key on the keyboard) to retrieve the last command with all the previously typed columns present, delete -i and add the identifier you had forgotten.
Similar to --information, but only the number of the input table’s columns will be printed as a single integer (useful in scripts for example).
Similar to --information, but only the number of the input table’s rows will be printed as a single integer (useful in scripts for example).
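For example, in a shell script you could store these numbers in variables; a minimal sketch, assuming the two options described above are called --info-num-cols and --info-num-rows, and that table.fits exists:

## Sketch: store the column and row counts in shell variables.
$ ncols=$(asttable table.fits --info-num-cols)
$ nrows=$(asttable table.fits --info-num-rows)
$ echo "table.fits has $ncols columns and $nrows rows"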
Set the output columns either by specifying the column number, or name.
For more on selecting columns, see Selecting table columns.
If a value of this option starts with ‘arith ’, column arithmetic will be activated, allowing you to edit/manipulate column contents.
For more on column arithmetic see Column arithmetic.
To ask for multiple columns this option can be used in two ways: 1) multiple calls to this option, 2) using a comma between each column specifier in one call to this option. These different solutions may be mixed in one call to Table: for example, ‘-cRA,DEC,MAG’, or ‘-cRA,DEC -cMAG’ are both equivalent to ‘-cRA -cDEC -cMAG’. The order of the output columns will be the same order given to the option or in the configuration files (see Configuration file precedence).
This option is not mandatory; if no specific columns are requested, all the input table's columns will be in the output. When this option is called multiple times, it is possible to output one column more than once.
Sequence of columns: when dealing with a large number of columns (hundreds, for example!), it will be frustrating, annoying and buggy to insert the columns manually. If you want to read all the input columns, you can use the special _all value with the --column option. A more generic solution (for example if you want every second one, or all the columns within a special range) is to build the comma-separated list of column numbers with the seq command and the shell's command substitution, for example:

$ asttable table.fits -c1,2,$(seq -s',' 12 58)
FITS file that contains the WCS to be used in the wcs-to-img and img-to-wcs operators of Column arithmetic.
The extension name/number within the FITS file can be specified with --wcshdu.
If the value to this option is ‘none’, no WCS will be written in the output.
FITS extension/HDU in the FITS file given to --wcsfile (see the description of --wcsfile for more).
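As a minimal sketch of using these two options together (assuming cat.fits has RA and DEC columns and that image.fits contains the relevant WCS in its first extension), the command below converts the WCS coordinates into pixel coordinates with the wcs-to-img operator:

$ asttable cat.fits -c'arith RA DEC wcs-to-img' \
           --wcsfile=image.fits --wcshdu=1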
Concatenate (or add, or append) the columns of this option’s value (a filename) to the output columns. This option may be called multiple times (to add columns from more than one file into the final output), the columns from each file will be added in the same order that this option is called. The number of rows in the file(s) given to this option has to be the same as the input table (before any type of row-selection), see Operation precedence in Table.
By default all the columns of the given file will be appended, if you only want certain columns to be appended, use the --catcolumns option to specify their name or number (see Selecting table columns). Note that the columns given to --catcolumns must be present in all the given files (if this option is called more than once with more than one file).
If the file given to this option is a FITS file, it is necessary to also define the corresponding HDU/extension with --catcolumnhdu. Also note that no operation (such as row selection and arithmetic) is applied to the table given to this option.
If the appended columns have a name, and their name is already present in the table before adding those columns, the column names of each file will be appended with a -N, where N is a counter starting from 1 for each appended table.
Just note that in the FITS standard (and thus in Gnuastro), column names are not case-sensitive.
This is done because when concatenating columns from multiple tables (more than two) into one, they may have the same name, and it is not good practice to have multiple columns with the same name. You can disable this feature with --catcolumnrawname. Generally, you can use the --colmetadata option to update column metadata in the same command, after all the columns have been concatenated.
For example, let’s assume you have two catalogs of the same objects (same number of rows) in different filters.
Say that f160w-cat.fits has a MAGNITUDE column containing the magnitude of each object in the F160W filter, and that f105w-cat.fits also has a MAGNITUDE column, but for the F105W filter.
You can use column concatenation like below to import the MAGNITUDE column from the F105W catalog into the F160W catalog, while giving each magnitude column a different name:
$ asttable f160w-cat.fits --output=both.fits \
           --catcolumnfile=f105w-cat.fits --catcolumns=MAGNITUDE \
           --colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W" \
           --colmetadata=MAGNITUDE-1,MAG-F105W,log,"Magnitude in F105W"
For a more complete example, see Working with catalogs (estimating colors).
Loading external columns with Arithmetic: an alternative way to load external columns into your output is to use column arithmetic (Column arithmetic), in particular the load-col- operator described in Loading external columns. But this operator will load only one column per file/HDU every time it is called. So if you have many columns to insert, it is much faster to use --catcolumnfile, because --catcolumnfile will load all the columns in one opening of the file, and possibly even read them all into memory in parallel!
The HDU/extension of the FITS file(s) that should be concatenated, or appended, by column with --catcolumnfile. If --catcolumnfile is called more than once with more than one FITS file, it is necessary to call this option more than once. The HDUs will be loaded in the same order as the FITS files given to --catcolumnfile.
The column(s) in the file(s) given to --catcolumnfile to append. When this option is not given, all the columns will be concatenated. See --catcolumnfile for more.
Do not modify the names of the concatenated (appended) columns, see description in --catcolumnfile.
Transpose (as in a matrix) the given vector column(s) individually. When this operation is done (see Operation precedence in Table), only vector columns of the same data type and with the same number of elements should exist in the table. A usage of this operator is presented in the IFU spectroscopy tutorial in Extracting a single spectrum and plotting it.
As a generic example, see the commands below.
The in.txt table below has two vector columns (each with three elements) in two rows.
After running asttable with --transpose, you can see how the vector columns have two elements per row (u8(3) has been replaced by u8(2)), and that the table now has three rows.
$ cat in.txt
# Column 1: abc [nounits,u8(3),] First vector column.
# Column 2: def [nounits,u8(3),] Second vector column.
111 112 113 211 212 213
121 122 123 221 222 223

$ asttable in.txt --transpose -O
# Column 1: abc [nounits,u8(2),] First vector column.
# Column 2: def [nounits,u8(2),] Second vector column.
111 121 211 221
112 122 212 222
113 123 213 223
Extract the given tokens/elements from the given vector column into separate single-valued columns. The input vector column can be identified by its name or counter, see Selecting table columns. After the columns are extracted, the input vector is deleted by default. To preserve the input vector column, you can use --keepvectfin described below. For a complete usage scenario see Vector columns.
Move the given columns into a newly created vector column. The given columns can be identified by their name or counter, see Selecting table columns. After the columns are copied, they are deleted by default. To preserve the inputs, you can use --keepvectfin described below. For a complete usage scenario see Vector columns.
Do not delete the input column(s) when using --fromvector or --tovector.
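For example, the sketch below (the column names are hypothetical) merges three magnitude columns into a single vector column, while keeping the original single-valued columns in the output:

$ asttable cat.fits --tovector=MAG-F105W,MAG-F125W,MAG-F160W \
           --keepvectfin --output=vec.fits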
Add the rows of the given file to the output table. The selected columns in the table(s) given to this option should have the same number and data type as the output columns at this stage (after column selection and column concatenation); for more, see Operation precedence in Table.
For example, if a.fits, b.fits and c.fits have the columns RA, DEC and MAGNITUDE (possibly in different column-numbers in their respective table, along with many more columns), the command below will add their rows into the final output that will only have these three columns:
$ asttable a.fits --catrowfile=b.fits --catrowhdu=1 \
                  --catrowfile=c.fits --catrowhdu=1 \
                  -cRA,DEC,MAGNITUDE --output=allrows.fits
Provenance of each row: When merging rows from separate catalogs, it is important to keep track of the source catalog of each row (its provenance). To do this, you can use --catrowfile in combination with the constant operator of Column arithmetic, to add a column that identifies the source catalog of each row.
How to avoid repetition when adding rows: this option will simply add the rows of multiple tables into one; it does not check their contents! Therefore if you use this option on multiple catalogs that may have some shared physical objects in some of their rows, those rows/objects will be repeated in the final table. In such scenarios, to avoid potential repetition, it is better to use Match (with --notmatched and --outcols=AAA,BBB) instead of Table. For more on using Match for this scenario, see the description of --outcols in Invoking Match.
The HDU/extension of the FITS file(s) that should be concatenated, or appended, by rows with --catrowfile. If --catrowfile is called more than once with more than one FITS file, it is necessary to call this option more than once also (once for every FITS table given to --catrowfile). The HDUs will be loaded in the same order as the FITS files given to --catrowfile.
Add column metadata when the output is printed in the standard output. Usually the standard output is used for a fast visual check, or to pipe into other metadata-agnostic programs (like AWK) for further processing. So by default meta-data are not included. But when piping to other Gnuastro programs (where metadata can be interpreted and used) it is recommended to use this option and use column names in the next program.
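For example, in the sketch below (assuming table.fits has RA and DEC columns), the first call keeps the column metadata in its standard output, so the second call (reading the piped table from its standard input) can select rows by column name:

$ asttable table.fits -cRA,DEC -O | asttable --range=RA,53:54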
Only output rows that have a value within the given range in the STR column (which can be a name or counter). Note that the range is only inclusive in the lower limit. For example, with --range=sn,5:20 the output's columns will only contain rows that have a value in the sn column (not case-sensitive) that is greater or equal to 5, and less than 20. You can also use a comma to separate the two values, as in --range=sn,5,20.
For the precedence of this operation in relation to others, see Operation precedence in Table.
This option can be called multiple times (different ranges for different columns) in one run of the Table program. This is very useful for selecting the final rows from multiple criteria/columns.
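For example, the hypothetical command below (the column names are only for illustration) keeps the rows with a photometric redshift between 2 and 3 that also have a magnitude between 20 and 26, while only printing the positions:

$ asttable cat.fits -cRA,DEC --range=Z_PHOT,2:3 \
           --range=MAGNITUDE,20:26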
The chosen column does not have to be in the output columns. This is good when you just want to select using one column’s values, but do not need that column anymore afterwards.
For one example of using this option, see the example under --sigclip-median in Invoking Statistics.
Only return rows where the given coordinates are inside the polygon specified by the --polygon option.
The coordinate columns are the given STR1 and STR2 columns; they can be a column name or counter (see Selecting table columns).
For the precedence of this operation in relation to others, see Operation precedence in Table.
Note that the chosen columns do not have to be in the output columns (which are specified by the --column option).
For example, if we want to select rows in the polygon specified in Dataset inspection and cropping, this option can be used like this (you can remove the double quotations and write them all in one line if you remove the white-spaces around the colons separating the polygon vertices):
$ asttable table.fits --inpolygon=RA,DEC \
           --polygon="53.187414,-27.779152 \
                      : 53.159507,-27.759633 \
                      : 53.134517,-27.787144 \
                      : 53.161906,-27.807208"
Flat/Euclidean space: The --inpolygon option assumes a flat/Euclidean space so it is only correct for RA and Dec when the polygon size is very small like the example above. If your polygon is a degree or larger, it may not return correct results. Please get in touch if you need such a feature (see Suggest new feature).
Only return rows where the given coordinates are outside the polygon specified by the --polygon option. This option is very similar to the --inpolygon option, so see the description there for more.
The polygon to use for the --inpolygon and --outpolygon options.
This option is parsed in an identical way to the same option in the Crop program, so for more information on how to use it, see Crop options.
Only output rows that are equal to the given number(s) in the given column. The first argument is the column identifier (name or number, see Selecting table columns), after that you can specify any number of values. For the precedence of this operation in relation to others, see Operation precedence in Table.
For example, --equal=ID,5,6,8 will only print the rows that have a value of 5, 6, or 8 in the ID column.
This option can also be called multiple times, so --equal=ID,4,5 --equal=ID,6,7 has the same effect as --equal=ID,4,5,6,7.
Equality and floating point numbers: Floating point numbers are only approximate values (see Numeric data types). In this context, their equality depends on how the input table was originally stored (as a plain text table or as an ASCII/binary FITS table). If you want to select floating point numbers, it is strongly recommended to use the --range option and set a very small interval around your desired number, do not use --equal or --notequal.
The --equal and --notequal options also work when the given column has a string type. In this case the given value to the option will also be parsed as a string, not as a number. When dealing with string columns, be careful with trailing white space characters (the actual value may be adjusted to the right, left, or center of the column's width). If you need to account for such white spaces, you can use shell quoting, for example --equal=NAME," myname ".
Strings with a comma (,): When your desired column values contain a comma, you need to put a ‘\’ before the comma, otherwise it will be interpreted as a delimiter between separate values. For example:

$ asttable table.fits --equal=AB,cd\,ef
Only output rows that are not equal to the given number(s) in the given column.
The first argument is the column identifier (name or number, see Selecting table columns), after that you can specify any number of values.
For example, --notequal=ID,5,6,8 will only print the rows where the ID column does not have a value of 5, 6, or 8.
This option can also be called multiple times, so --notequal=ID,4,5 --notequal=ID,6,7 has the same effect as --notequal=ID,4,5,6,7.
Be very careful if you want to use the non-equality with floating point numbers, see the special note under --equal for more. This option also works when the given column has a string type, see the description under --equal (above) for more.
Only output rows that are not blank in the given column of the input table. Like above, the columns can be specified by their name or number (counting from 1). This option can be called multiple times, so --noblank=MAG --noblank=PHOTOZ is equivalent to --noblank=MAG,PHOTOZ. For the precedence of this operation in relation to others, see Operation precedence in Table.
For example, if table.fits has blank values (NaN in floating point types) in the magnitude and sn columns, with --noblank=magnitude,sn, the output will not contain any rows with blank values in these two columns. If you want all columns to be checked, simply set the value to _all (in other words: --noblank=_all).
This mode is useful when there are many columns in the table and you want a “clean” output table (with no blank values in any column): entering their name or number one-by-one can be buggy and frustrating.
In this mode, no other column name should be given.
For example, if you give --noblank=_all,magnitude, then Table will assume that your table actually has columns named _all and magnitude, and if it does not, it will abort with an error.
If you want to change column values using Column arithmetic (and set some to blank, to later remove), or you want to select rows based on columns that you have imported from other tables, you should use the --noblankend option described below. Also, see Operation precedence in Table.
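For example, the hypothetical command below only prints the positions of rows that have a measured (non-blank) value in an assumed MAGNITUDE column; note that the checked column does not have to be in the output:

$ asttable cat.fits --noblank=MAGNITUDE -cRA,DEC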
Sort the output rows based on the values in the STR column (which can be a column name or number). By default the sort is done in ascending/increasing order; to sort in descending order, use --descending.
For the precedence of this operation in relation to others, see Operation precedence in Table.
The chosen column does not have to be in the output columns. This is good when you just want to sort using one column’s values, but do not need that column anymore afterwards.
When called with --sort, rows will be sorted in descending order.
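For example, the sketch below (with an assumed MAGNITUDE column) prints the three rows with the smallest magnitude values; adding --descending would instead return the three largest (--head is described below):

$ asttable cat.fits -cRA,DEC,MAGNITUDE --sort=MAGNITUDE --head=3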
Only print the given number of rows from the top of the final table. Note that this option only affects the output table. For example, if you use --sort, or --range, the printed rows are the first ones after sorting, or after selecting a range of the full input. This option cannot be called with --tail, --rowrange or --rowrandom. For the precedence of this operation in relation to others, see Operation precedence in Table.
If the given value to --head is 0, the output columns will not have any rows and if it is larger than the number of rows in the input table, all the rows are printed (this option is effectively ignored).
This behavior is taken from the head program in GNU Coreutils.
Only print the given number of rows from the bottom of the final table. See --head for more. This option cannot be called with --head, --rowrange or --rowrandom.
Only return the rows within the requested positional range (inclusive on both sides).
Therefore, --rowrange=5,7 will return 3 of the input rows: rows 5, 6 and 7.
This option will abort if any of the given values is larger than the total number of rows in the table.
For the precedence of this operation in relation to others, see Operation precedence in Table.
With the --head or --tail options you can only see the top or bottom few rows. However, with this option, you can limit the returned rows to a contiguous set of rows in the middle of the table. Therefore this option cannot be called with --head, --tail, or --rowrandom.
Select INT rows from the input table at random (assuming a uniform distribution). This option is applied after the value-based selection options (such as --sort, --range, and --polygon). On the other hand, only the row counters are randomly selected; this option does not change the order. Therefore, if --rowrandom is called together with --sort, the returned rows are still sorted.
This option cannot be called with --head, --tail, or --rowrange.
For the precedence of this operation in relation to others, see Operation precedence in Table.
This option will only have an effect if INT is smaller than the number of rows at the time it is applied (after the value-based selection options have been applied). When there are fewer rows than INT, a warning is printed, saying that this option has no effect. The warning can be disabled with the --quiet option.
Due to its nature (to be random), the output of this option differs in each run.
Therefore 5 calls to Table with --rowrandom on the same input table will generate 5 different outputs.
If you want a reproducible random selection, set the GSL_RNG_SEED environment variable and also use the --envseed option, for more see Generating random numbers.
Read the random number generator seed from the GSL_RNG_SEED environment variable for --rowrandom (instead of generating a different seed internally on every run).
This is useful if you want a reproducible random selection of the input rows.
For more, see Generating random numbers.
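For example, the sketch below should return the same 100 randomly selected rows every time it is run (the seed value is arbitrary):

$ export GSL_RNG_SEED=1623342
$ asttable table.fits --rowrandom=100 --envseed --output=random.fits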
Remove all rows in the requested output columns that have a blank value. Like above, the columns can be specified by their name or number (counting from 1). This option can be called multiple times, so --noblankend=MAG --noblankend=PHOTOZ is equivalent to --noblankend=MAG,PHOTOZ. For the precedence of this operation in relation to others, see Operation precedence in Table.
For example, if your final output table (possibly after column arithmetic, or adding new columns) has blank values (NaN in floating point types) in the magnitude and sn columns, with --noblankend=magnitude,sn, the output will not contain any rows with blank values in these two columns.
If you want blank values to be removed from the main input table _before_ any further processing (like adding columns, sorting or column arithmetic), you should use the --noblank option. With the --noblank option, the column(s) that is(are) given does not necessarily have to be in the output (it is just temporarily used for reading the inputs and selecting rows, but does not necessarily need to be present in the output). However, the column(s) given to this option should exist in the output.
If you want all columns to be checked, simply set the value to _all (in other words: --noblankend=_all).
This mode is useful when there are many columns in the table and you want a “clean” output table (with no blank values in any column): entering their name or number one-by-one can be buggy and frustrating.
In this mode, no other column name should be given.
For example, if you give --noblankend=_all,magnitude, then Table will assume that your table actually has columns named _all and magnitude, and if it does not, it will abort with an error.
This option is applied just before writing the final table (after --colmetadata has finished). So in case you changed the column metadata, or added new columns, you can use the new names, or the newly defined column numbers. For the precedence of this operation in relation to others, see Operation precedence in Table.
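For example, in the hypothetical command below (the input column names are only for illustration), a color is computed with column arithmetic and given proper metadata, then any row where that new column is blank is removed from the final output:

$ asttable cat.fits -c'arith MAG_F105W MAG_F160W -' \
           --colmetadata=1,COLOR,mag,"F105W-F160W color" \
           --noblankend=COLOR --output=color.fits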
Update the specified column metadata in the output table. This option is applied after all other column-related operations are complete, for example, column arithmetic, or column concatenation. For the precedence of this operation in relation to others, see Operation precedence in Table.
The first value (before the first comma) given to this option is the column’s identifier. It can either be a counter (positive integer, counting from 1), or a name (the column’s name in the output if this option was not called).
After the to-be-updated column is identified, at least one other string should be given, with a maximum of three strings. The first string after the original name will be the selected column's new name. The next (optional) string will be the selected column's unit and the third (optional) will be its comments. If the two optional strings are not given, the original column's units or comments will remain unchanged.
If any of the values contains a comma, you should place a ‘\’ before the comma to avoid it getting confused with a delimiter.
For example, see the command below for a column description that contains a comma:
$ asttable table.fits \
           --colmetadata=NAME,UNIT,"Comments\, with a comma"
Generally, since the comma is commonly used as a delimiter in many scenarios, to avoid complicating your future analysis with the table, it is best to avoid using a comma in the column name and units.
Some examples of this option are available in the tutorials, in particular Working with catalogs (estimating colors). Here are some more specific examples:
For example, --colmetadata=MAGNITUDE,MAG_F160W will convert the name of the original MAGNITUDE column to MAG_F160W, leaving the unit and comments unchanged. With --colmetadata=3,MAG_F160W,mag, the name of the third column of the final output will be converted to MAG_F160W and its units to mag, while leaving the comments untouched. Finally, --colmetadata=MAGNITUDE,MAG_F160W,mag,"Magnitude in F160W filter" will convert the name of the original MAGNITUDE column to MAG_F160W, its units to mag, and its comments to Magnitude in F160W filter.
Note the double quotations around the comment string; they are necessary to preserve the white-space characters within the column comment from the command-line into the program (otherwise, upon reaching a white-space character, the shell will consider this option to be finished and cause unexpected behavior).
If your table is large and generated by a script, you can first do all your operations on your table’s data and write it into a temporary file (maybe called temp.fits).
Then, look into that file's metadata (with asttable temp.fits -i) to see the exact column positions and possible names, then add the necessary calls to this option to your previous call to asttable, so it writes proper metadata in the same run (for example, in a script or Makefile).
Recall that when a name is given, this option will update the metadata of the first column that matches, so if you have multiple columns with the same name, you can call this option multiple times with the same first argument to change them all to different names.
Finally, if you already have a FITS table by other means (for example, by downloading) and you merely want to update the column metadata and leave the data intact, it is much more efficient to directly modify the respective FITS header keywords with astfits, using the keyword manipulation features described in Keyword inspection and manipulation.
--colmetadata is mainly intended for scenarios where you want to edit the data (so it will always load the full/partial dataset into memory), then write out the resulting dataset with updated/corrected metadata.
The plain-text format of 32-bit floating point columns when output is not binary (this option is ignored for binary outputs like FITS tables, see Printing floating point numbers). The acceptable values are listed below. This is just the format of the plain-text outputs; see --txtf32precision for customizing their precision.
fixed
     Fixed-point notation (for example 123.4567).

exp
     Exponential notation (for example 1.234567e+02).
The default mode is exp since it is the most generic and will not cause any loss of data. Be very cautious if you set it to fixed. As a rule of thumb, the fixed-point notation is only good if the numbers are larger than 1.0, but not too large! Given that the total number of accurate decimal digits is fixed, the more digits you have on the left of the decimal point (the integer part), the less accurate the digits printed on the right of the decimal point will be.
Number of digits after (to the right side of) the decimal point (precision) for columns with a 32-bit floating point datatype (this option is ignored for binary outputs like FITS tables, see Printing floating point numbers). This can take any positive integer (including 0). When given a value of zero, the floating point number will be rounded to the nearest integer.
The default value to this option is 6. This is because according to IEEE 754, 32-bit floating point numbers can be accurately presented to 7.22 decimal digits (see Printing floating point numbers). Since we only have an integer number of digits in a number, we’ll round it to 7 decimal digits. Furthermore, the precision is only defined to the right side of the decimal point. In exponential notation (default of --txtf32format), one decimal digit will be printed on the left of the decimal point. So the default value to this option is \(7-1=6\).
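For example, the command below (assuming MAGNITUDE is a 32-bit floating point column) prints it in fixed-point notation with two digits after the decimal point:

$ asttable table.fits -cMAGNITUDE --txtf32format=fixed \
           --txtf32precision=2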
The plain-text format of 64-bit floating point columns when output is not binary (this option is ignored for binary outputs like FITS tables, see Printing floating point numbers). The acceptable values are listed below. This is just the format of the plain-text outputs; see --txtf64precision for customizing their precision.
fixed
     Fixed-point notation (for example 12345.6789012345).

exp
     Exponential notation (for example 1.23456789012345e4).
The default mode is exp since it is the most generic and will not cause any loss of data. Be very cautious if you set it to fixed. As a rule of thumb, the fixed-point notation is only good if the numbers are larger than 1.0, but not too large! Given that the total number of accurate decimal digits is fixed, the more digits you have on the left of the decimal point (the integer part), the less accurate the digits printed on the right of the decimal point will be.
Number of digits after the decimal point (precision) for columns with a 64-bit floating point datatype (this option is ignored for binary outputs like FITS tables, see Printing floating point numbers). This can take any positive integer (including 0). When given a value of zero, the floating point number will be rounded to the nearest integer.
The default value to this option is 15. This is because according to IEEE 754, 64-bit floating point numbers can be accurately presented to 15.95 decimal digits (see Printing floating point numbers). Since we only have an integer number of digits in a number, we’ll round it to 16 decimal digits. Furthermore, the precision is only defined to the right side of the decimal point. In exponential notation (default of --txtf64format), one decimal digit will be printed on the left of the decimal point. So the default value to this option is \(16-1=15\).
When output is a plain-text file or just gets printed on standard output (the terminal), all floating point columns are printed in fixed-point notation (as in 123.456) instead of the default exponential notation (as in 1.23456e+02). For 32-bit floating points, this option will use a precision of 3 digits (see --txtf32precision) and for 64-bit floating points a precision of 6 digits (see --txtf64precision). This can be useful for human readability, but be careful with some scenarios (for example 1.23e-120, which will show only as 0.0!).
When this option is called, any value given to the following options is ignored: --txtf32format, --txtf32precision, --txtf64format and --txtf64precision.
For example, below you can see the output of a table with and without this option:
$ asttable table.fits --head=5 -O
# Column 1: OBJNAME [name  ,str23, ] Name in HyperLeda.
# Column 2: RAJ2000 [deg   ,f64  , ] Right Ascension.
# Column 3: DEJ2000 [deg   ,f64  , ] Declination.
# Column 4: RADIUS  [arcmin,f32  , ] Major axis radius.
NGC0884  2.3736267000000e+00  5.7138753300000e+01  8.994357e+00
NGC1629  4.4935191000000e+00 -7.1838322400000e+01  5.000000e-01
NGC1673  4.7109672000000e+00 -6.9820892700000e+01  3.499210e-01
NGC1842  5.1216920000000e+00 -6.7273195300000e+01  3.999171e-01

$ asttable table.fits --head=5 -O -Y
# Column 1: OBJNAME [name  ,str23, ] Name in HyperLeda.
# Column 2: RAJ2000 [deg   ,f64  , ] Right Ascension.
# Column 3: DEJ2000 [deg   ,f64  , ] Declination.
# Column 4: RADIUS  [arcmin,f32  , ] Major axis radius.
NGC0884  2.373627   57.138753   8.994
NGC1629  4.493519  -71.838322   0.500
NGC1673  4.710967  -69.820893   0.350
NGC1842  5.121692  -67.273195   0.400
This is also useful when you want to make outputs of other programs more “easy” to read, for example:
$ echo 123.45678 | asttable
1.234567800000000e+02

$ echo 123.45678 | asttable -Y
123.456780
Can result in loss of information: be very careful with this option! It can lose precision, or even the full value, if the value is not within a "good" range, as in the example below. Such cases are the reason that this is not the default format of plain-text outputs.

$ echo 123.4e-9 | asttable -Y
0.000000