MockDatasetQuantum¶
- class lsst.pipe.base.tests.mocks.MockDatasetQuantum(*, task_label: str, data_id: dict[str, int | str | None], inputs: dict[str, list[lsst.pipe.base.tests.mocks._storage_class.MockDataset]])¶
Bases: _BaseModelCompat
Description of the quantum that produced a mock dataset.
This is also used to represent task-init operations for init-output mock datasets.
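For illustration, a minimal construction sketch (not taken from the package documentation), assuming only the keyword-only signature above; the task label and data ID values are hypothetical placeholders, and an empty inputs mapping stands in for a quantum with no mocked input datasets:

from lsst.pipe.base.tests.mocks import MockDatasetQuantum

# Hypothetical task label and data ID; inputs maps an input name to the
# list of MockDataset objects the quantum read (assumed empty here).
quantum = MockDatasetQuantum(
    task_label="isr",
    data_id={"instrument": "HSC", "visit": 12345, "detector": 42},
    inputs={},
)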
Attributes Summary
model_fields
Methods Summary
construct([_fields_set]) - Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
copy(*[, include, exclude, update, deep]) - Duplicate a model, optionally choose which fields to include, exclude and change.
dict(*[, include, exclude, by_alias, ...]) - Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
from_orm(obj)
json(*[, include, exclude, by_alias, ...]) - Generate a JSON representation of the model, include and exclude arguments as per dict().
model_construct([_fields_set])
model_copy(*[, update, deep])
model_dump(*[, mode, include, exclude, ...])
model_dump_json(*[, indent, include, ...])
model_rebuild(*[, force, raise_errors, ...])
model_validate(obj, *[, strict, ...])
model_validate_json(json_data, *[, strict, ...])
parse_file(path, *[, content_type, ...])
parse_obj(obj)
parse_raw(b, *[, content_type, encoding, ...])
schema([by_alias, ref_template])
schema_json(*[, by_alias, ref_template])
update_forward_refs(**localns) - Try to update ForwardRefs on fields based on this Model, globalns and localns.
validate(value)
Attributes Documentation
- model_fields = {'data_id': ModelField(name='data_id', type=Mapping[str, Union[int, str, NoneType]], required=True), 'inputs': ModelField(name='inputs', type=Mapping[str, list[lsst.pipe.base.tests.mocks._storage_class.MockDataset]], required=True), 'task_label': ModelField(name='task_label', type=str, required=True)}¶
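As a quick illustration (a sketch, not from the package documentation), the declared field names can be read back from this mapping:

# The keys of model_fields are the declared field names.
assert set(MockDatasetQuantum.model_fields) == {"task_label", "data_id", "inputs"}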
Methods Documentation
- classmethod construct(_fields_set: SetStr | None = None, **values: Any) Model ¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
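A hedged sketch: because construct performs no validation, it is only appropriate when the values are already known to be valid; the values below are hypothetical.

# Build an instance without validation; the caller must supply valid data.
quantum = MockDatasetQuantum.construct(
    task_label="isr",
    data_id={"instrument": "HSC", "visit": 12345},
    inputs={},
)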
- copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: DictStrAny | None = None, deep: bool = False) Model ¶
Duplicate a model, optionally choose which fields to include, exclude and change.
- Parameters:
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
- Returns:
new model instance
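A brief sketch, reusing the quantum instance built in the constructor example above; the replacement task label is hypothetical and, as noted above, is not re-validated:

# Deep-copy the model while overriding a single field.
modified = quantum.copy(update={"task_label": "characterizeImage"}, deep=True)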
- dict(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, by_alias: bool = False, skip_defaults: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny ¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
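For example (a sketch, assuming the quantum instance from the constructor example):

# Restrict the dictionary to a subset of fields.
summary = quantum.dict(include={"task_label", "data_id"})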
- json(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, by_alias: bool = False, skip_defaults: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode ¶
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
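A sketch of the keyword pass-through, again assuming the quantum instance from the constructor example:

# include/exclude behave as in dict(); indent is forwarded to json.dumps().
text = quantum.json(exclude={"inputs"}, indent=2)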
- model_dump(*, mode: Literal['json', 'python'] | str = 'python', include: set[int] | set[str] | dict[int, Any] | dict[str, Any] | None = None, exclude: set[int] | set[str] | dict[int, Any] | dict[str, Any] | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) dict[str, Any] ¶
- model_dump_json(*, indent: int | None = None, include: set[int] | set[str] | dict[int, Any] | dict[str, Any] | None = None, exclude: set[int] | set[str] | dict[int, Any] | dict[str, Any] | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) str ¶
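model_dump and model_dump_json are the pydantic v2-style counterparts of dict() and json(); a sketch assuming the quantum instance from the constructor example:

# Serialize to a Python dict and to a JSON string.
as_dict = quantum.model_dump()
as_json = quantum.model_dump_json(exclude_none=True)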
- classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: dict[str, Any] | None = None) bool | None ¶
- classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: dict[str, Any] | None = None) Self ¶
- classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: dict[str, Any] | None = None) Self ¶
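A round-trip sketch, assuming as_json holds the output of model_dump_json above:

# Validate serialized JSON back into a model instance.
restored = MockDatasetQuantum.model_validate_json(as_json)
assert restored.task_label == quantum.task_label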
- classmethod parse_file(path: str | Path, *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model ¶
- classmethod parse_raw(b: str | bytes, *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model ¶
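The pydantic v1-style parsers accept the same serialized payload; a sketch assuming as_json from above:

# parse_raw validates a raw str/bytes payload (JSON by default).
restored_v1 = MockDatasetQuantum.parse_raw(as_json)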
- classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny ¶
- classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode ¶
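A sketch for inspecting the generated JSON schema; keyword arguments such as indent are forwarded to json.dumps():

# Pretty-printed JSON schema describing task_label, data_id, and inputs.
print(MockDatasetQuantum.schema_json(indent=2))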