Query¶
- class lsst.daf.butler.registry.queries.Query(*, graph: DimensionGraph, whereRegion: Region | None, managers: RegistryManagers, doomed_by: Iterable[str] = ())¶
Bases: ABC
An abstract base class for queries that return some combination of DatasetRef and DataCoordinate objects.
- Parameters:
  - graph : DimensionGraph
    Object describing the dimensions included in the query.
  - whereRegion : lsst.sphgeom.Region, optional
    Region that all region columns in all returned rows must overlap.
  - managers : RegistryManagers
    A struct containing the registry manager instances used by the query system.
  - doomed_by : Iterable[str], optional
    A list of messages (appropriate for e.g. logging or exceptions) that explain why the query is known to return no results even before it is executed. Queries with a non-empty list will never be executed.
Notes
The Query hierarchy abstracts over the database/SQL representation of a particular set of data IDs or datasets. It is expected to be used as a backend for other objects that provide more natural interfaces for one or both of these, not as part of a public interface to query results.
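The sketch below illustrates this backend role under stated assumptions: query stands in for a concrete Query implementation and db for the registry's Database object; neither name is part of this API.

    # A minimal sketch of driving a Query as a backend, assuming `query` is a
    # concrete Query instance and `db` is the registry's Database (both
    # placeholders obtained elsewhere).
    def iterate_results(query, db):
        for row in query.rows(db):
            if query.datasetType is not None:
                # Dataset queries are turned into DatasetRef objects.
                yield query.extractDatasetRef(row)
            else:
                # Pure data ID queries are turned into DataCoordinate objects.
                yield query.extractDataId(row)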
Attributes Summary
- datasetType: The DatasetType of datasets returned by this query, or None if there are no dataset results (DatasetType or None).
- spatial: An iterator over the dimension element columns used in post-query filtering of spatial overlaps (Iterator[DimensionElement]).
- sql: A SQLAlchemy object representing the full query (sqlalchemy.sql.FromClause or None).
Methods Summary
- any(db, *[, region, execute, exact]): Test whether this query returns any results.
- count(db, *[, region, exact]): Count the number of rows this query would return.
- explain_no_results(db, *[, region, followup]): Return human-readable messages that may help explain why the query yields no results.
- extractDataId(row, *[, graph, records]): Extract a data ID from a result row.
- extractDatasetRef(row[, dataId, records]): Extract a DatasetRef from a result row.
- extractDimensionsTuple(row, dimensions): Extract a tuple of data ID values from a result row.
- getDatasetColumns(): Return the columns for the datasets returned by this query.
- getDimensionColumn(name): Return the query column that contains the primary key value for the dimension with the given name.
- getRegionColumn(name): Return a region column for one of the dimension elements iterated over by spatial.
- isUnique(): Return True if this query's rows are guaranteed to be unique, and False otherwise.
- makeBuilder([summary]): Return a QueryBuilder that can be used to construct a new Query that is joined to (and hence constrained by) this one.
- materialize(db): Execute this query and insert its results into a temporary table.
- rows(db, *[, region]): Execute the query and yield result rows, applying predicate.
- subset(*[, graph, datasets, unique]): Return a new Query whose columns and/or rows are (mostly) a subset of this one's.
Attributes Documentation
- datasetType¶
The DatasetType of datasets returned by this query, or None if there are no dataset results (DatasetType or None).
- spatial¶
An iterator over the dimension element columns used in post-query filtering of spatial overlaps (Iterator[DimensionElement]).
Notes
This property is intended primarily as a hook for subclasses to implement and the ABC to call in order to provide higher-level functionality; code that uses Query objects (but does not implement one) should usually not have to access this property.
- sql¶
A SQLAlchemy object representing the full query (sqlalchemy.sql.FromClause or None).
This is None in the special case where the query has no columns, and only one logical row.
Methods Documentation
- any(db: Database, *, region: Region | None = None, execute: bool = True, exact: bool = True) bool ¶
Test whether this query returns any results.
- Parameters:
  - db : Database
    Object managing the database connection.
  - region : sphgeom.Region, optional
    A region that any result-row regions must overlap in order to be yielded. If not provided, this will be self.whereRegion, if that exists.
  - execute : bool, optional
    If True, execute at least a LIMIT 1 query if it cannot be determined prior to execution that the query would return no rows.
  - exact : bool, optional
    If True, run the full query and perform post-query filtering if needed, until at least one result row is found. If False, the returned result does not account for post-query filtering, and hence may be True even when all result rows would be filtered out.
- Returns:
  - any : bool
    True if the query would (or might, depending on the arguments) yield any result rows; False if it definitely would not.
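A minimal usage sketch under stated assumptions: query and db are placeholder names for a concrete Query instance and the registry's Database, obtained elsewhere:

    # Cheap, conservative check: skips post-query spatial filtering, so it can
    # report True even when every row would later be filtered out.
    def might_have_results(query, db) -> bool:
        return query.any(db, exact=False)

    # Exact check: may run the full query and apply post-query filtering until
    # one surviving result row is found.
    def definitely_has_results(query, db) -> bool:
        return query.any(db, exact=True)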
- count(db: Database, *, region: Region | None = None, exact: bool = True) int ¶
Count the number of rows this query would return.
- Parameters:
  - db : Database
    Object managing the database connection.
  - region : sphgeom.Region, optional
    A region that any result-row regions must overlap in order to be yielded. If not provided, this will be self.whereRegion, if that exists.
  - exact : bool, optional
    If True, run the full query and perform post-query filtering if needed to account for that filtering in the count. If False, the result may be an upper bound.
- Returns:
  - count : int
    The number of rows the query would return, or an upper bound if exact=False.
Notes
This counts the number of rows returned, not the number of unique rows returned, so even with exact=True it may provide only an upper bound on the number of deduplicated result rows.
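A minimal usage sketch under stated assumptions (query and db are placeholders obtained elsewhere):

    # An inexact count skips post-query filtering and so may over-count; even
    # the exact count is of rows, not of deduplicated results.
    def report_counts(query, db) -> None:
        upper_bound = query.count(db, exact=False)
        filtered = query.count(db, exact=True)
        print(f"{filtered} rows after filtering (at most {upper_bound} before)")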
- explain_no_results(db: Database, *, region: Region | None = None, followup: bool = True) Iterator[str] ¶
Return human-readable messages that may help explain why the query yields no results.
- Parameters:
  - db : Database
    Object managing the database connection.
  - region : sphgeom.Region, optional
    A region that any result-row regions must overlap in order to be yielded. If not provided, this will be self.whereRegion, if that exists.
  - followup : bool, optional
    If True (default), perform inexpensive follow-up queries if no diagnostics are available from query generation alone.
- Returns:
  - messages : Iterator[str]
    String messages that describe reasons the query might not yield any results.
Notes
Messages related to post-query filtering are only available if rows, any, or count was already called with the same region (with exact=True for the latter two).
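A minimal diagnostic sketch under stated assumptions (query and db are placeholders obtained elsewhere):

    # Calling any (or count/rows) with the same region first also makes
    # diagnostics about post-query filtering available.
    def report_empty_query(query, db) -> None:
        if not query.any(db, exact=True):
            for message in query.explain_no_results(db):
                print(message)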
- extractDataId(row: sqlalchemy.engine.RowProxy | None, *, graph: DimensionGraph | None = None, records: Mapping[str, Mapping[tuple, DimensionRecord]] | None = None) DataCoordinate ¶
Extract a data ID from a result row.
- Parameters:
  - row : sqlalchemy.engine.RowProxy or None
    A result row from a SQLAlchemy SELECT query, or None to indicate the row from an EmptyQuery.
  - graph : DimensionGraph, optional
    The dimensions the returned data ID should identify. If not provided, this will be all dimensions in QuerySummary.requested.
  - records : Mapping[str, Mapping[tuple, DimensionRecord]], optional
    Nested mapping containing records to attach to the returned DataCoordinate, for which hasRecords will return True. If provided, outer keys must include all dimension element names in graph, and inner keys should be tuples of dimension primary key values in the same order as element.graph.required. If not provided, DataCoordinate.hasRecords will return False on the returned object.
- Returns:
  - dataId : DataCoordinate
    A data ID that identifies all required and implied dimensions. If records is not None, this will have hasRecords() return True.
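A minimal extraction sketch under stated assumptions: query and db are placeholders obtained elsewhere, and subgraph is a DimensionGraph describing a subset of query.graph:

    # Without a records mapping, the returned DataCoordinate has
    # hasRecords() == False.
    def data_ids_for(query, db, subgraph):
        for row in query.rows(db):
            yield query.extractDataId(row, graph=subgraph)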
- extractDatasetRef(row: sqlalchemy.engine.RowProxy, dataId: DataCoordinate | None = None, records: Mapping[str, Mapping[tuple, DimensionRecord]] | None = None) DatasetRef ¶
Extract a DatasetRef from a result row.
- Parameters:
  - row : sqlalchemy.engine.RowProxy
    A result row from a SQLAlchemy SELECT query.
  - dataId : DataCoordinate, optional
    Data ID to attach to the DatasetRef. A minimal (i.e. base class) DataCoordinate is constructed from row if None.
  - records : Mapping[str, Mapping[tuple, DimensionRecord]], optional
    Records to use to return an ExpandedDataCoordinate. If provided, outer keys must include all dimension element names in graph, and inner keys should be tuples of dimension primary key values in the same order as element.graph.required.
- Returns:
  - ref : DatasetRef
    Reference to the dataset; guaranteed to have DatasetRef.id not None.
- extractDimensionsTuple(row: sqlalchemy.engine.RowProxy | None, dimensions: Iterable[Dimension]) tuple ¶
Extract a tuple of data ID values from a result row.
- abstract getDatasetColumns() DatasetQueryColumns | None ¶
Return the columns for the datasets returned by this query.
- Returns:
  - columns : DatasetQueryColumns or None
    Struct containing SQLAlchemy representations of the result columns for a dataset.
Notes
This method is intended primarily as a hook for subclasses to implement and the ABC to call in order to provide higher-level functionality; code that uses Query objects (but does not implement one) should usually not have to call this method.
- abstract getDimensionColumn(name: str) ColumnElement ¶
Return the query column that contains the primary key value for the dimension with the given name.
- Parameters:
  - name : str
    Name of the dimension.
- Returns:
  - column : sqlalchemy.sql.ColumnElement
    SQLAlchemy object representing a column in the query.
Notes
This method is intended primarily as a hook for subclasses to implement and the ABC to call in order to provide higher-level functionality; code that uses Query objects (but does not implement one) should usually not have to call this method.
- abstract getRegionColumn(name: str) ColumnElement ¶
Return a region column for one of the dimension elements iterated over by spatial.
- Parameters:
  - name : str
    Name of the element.
- Returns:
  - column : sqlalchemy.sql.ColumnElement
    SQLAlchemy object representing a result column in the query.
Notes
This method is intended primarily as a hook for subclasses to implement and the ABC to call in order to provide higher-level functionality; code that uses Query objects (but does not implement one) should usually not have to call this method.
- abstract isUnique() bool ¶
Return True if this query's rows are guaranteed to be unique, and False otherwise.
If this query has dataset results (datasetType is not None), uniqueness applies to the DatasetRef instances returned by extractDatasetRef from the result of rows. If it does not have dataset results, uniqueness applies to the DataCoordinate instances returned by extractDataId.
- abstract makeBuilder(summary: QuerySummary | None = None) QueryBuilder ¶
Return a QueryBuilder that can be used to construct a new Query that is joined to (and hence constrained by) this one.
- Parameters:
  - summary : QuerySummary, optional
    A QuerySummary instance that specifies the dimensions and any additional constraints to include in the new query being constructed, or None to use the dimensions of self with no additional constraints.
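A minimal sketch under stated assumptions (query is a placeholder obtained elsewhere; the summary argument is optional):

    # The returned QueryBuilder is already joined to (and hence constrained
    # by) this query; additional tables can be joined to it before the new
    # Query is built from it.
    def followup_builder(query, summary=None):
        return query.makeBuilder(summary)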
- abstract materialize(db: Database) ContextManager[Query] ¶
Execute this query and insert its results into a temporary table.
- Parameters:
  - db : Database
    Database engine to execute the query against.
- Returns:
  - context : typing.ContextManager[MaterializedQuery]
    A context manager that ensures the temporary table is created and populated in __enter__ (returning a MaterializedQuery object backed by that table), and dropped in __exit__. If self is already a MaterializedQuery, __enter__ may just return self and __exit__ may do nothing (reflecting the fact that an outer context manager should already take care of everything else).
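A minimal sketch under stated assumptions (query and db are placeholders obtained elsewhere):

    # The temporary table is created and populated on __enter__ and dropped on
    # __exit__; the MaterializedQuery it yields is itself a Query.
    def count_materialized(query, db) -> int:
        with query.materialize(db) as materialized:
            return materialized.count(db, exact=True)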
- rows(db: Database, *, region: Region | None = None) Iterator[Row | None] ¶
Execute the query and yield result rows, applying predicate.
- Parameters:
  - db : Database
    Object managing the database connection.
  - region : sphgeom.Region, optional
    A region that any result-row regions must overlap in order to be yielded. If not provided, this will be self.whereRegion, if that exists.
- Yields:
  - row : Row or None
    Result rows from the query; None indicates the row from a query with no columns and only one logical row (as described for extractDataId).
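A minimal sketch under stated assumptions (query and db are placeholders obtained elsewhere):

    # A None row corresponds to the special no-column, single-row case and is
    # accepted directly by extractDataId.
    def first_data_id(query, db):
        for row in query.rows(db):
            return query.extractDataId(row)
        return None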
- abstract subset(*, graph: DimensionGraph | None = None, datasets: bool = True, unique: bool = False) Query ¶
Return a new Query whose columns and/or rows are (mostly) a subset of this one's.
- Parameters:
  - graph : DimensionGraph, optional
    Dimensions to include in the new Query being constructed. If None (default), self.graph is used.
  - datasets : bool, optional
    Whether the new Query should include dataset results. Defaults to True, but is ignored if self does not include dataset results.
  - unique : bool, optional
    Whether the new Query should guarantee unique results (this may come with a performance penalty).
- Returns:
  - query : Query
    A query object corresponding to the given inputs. May be self if no changes were requested.
Notes
The way spatial overlaps are handled at present makes it impossible to fully guarantee in general that the new query's rows are a subset of this one's while also returning unique rows. That's because the database is only capable of performing approximate, conservative overlaps via the common skypix system; we defer actual region overlap operations to per-result-row Python logic. But including the region columns necessary to do that postprocessing in the query makes it impossible to do a SELECT DISTINCT on the user-visible dimensions of the query. For example, consider starting with a query with dimensions (instrument, skymap, visit, tract). That involves a spatial join between visit and tract, and we include the region columns from both tables in the results in order to only actually yield result rows (see predicate and rows) where the regions in those two columns overlap. If the user then wants to subset to just (skymap, tract) with unique results, we have two unpalatable options:
- we can do a SELECT DISTINCT with just the skymap and tract columns in the SELECT clause, dropping all detailed overlap information and including some tracts that did not actually overlap any of the visits in the original query (but were regarded as possibly overlapping via the coarser, common-skypix relationships);
- we can include the tract and visit region columns in the query, and continue to filter out the non-overlapping pairs, but completely disregard the user's request for unique tracts.
This interface specifies that implementations must do the former, as that's what makes things efficient in our most important use case (QuantumGraph generation in pipe_base). We may be able to improve this situation in the future by putting exact overlap information in the database, either by using built-in (but engine-specific) spatial database functionality or (more likely) switching to a scheme in which pairwise dimension spatial relationships are explicitly precomputed (for e.g. combinations of instruments and skymaps).
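A minimal sketch of the tract-subsetting example above, under stated assumptions: query is a placeholder Query whose dimensions include skymap and tract, db is the registry's Database, and tract_graph is the corresponding DimensionGraph:

    # As described above, requesting unique rows keeps only the approximate
    # (common-skypix) overlap information, so some tracts that never truly
    # overlapped a visit in the original query may survive.
    def unique_tracts(query, db, tract_graph):
        subquery = query.subset(graph=tract_graph, datasets=False, unique=True)
        for row in subquery.rows(db):
            yield subquery.extractDataId(row)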