malariagen_data.af1.Af1.haplotypes

Af1.haplotypes(region: str | Region | Mapping | List[str | Region | Mapping] | Tuple[str | Region | Mapping, ...], analysis: str = 'default', sample_sets: Sequence[str] | str | None = None, sample_query: str | None = None, sample_query_options: dict | None = None, inline_array: bool = True, chunks: int | str | Tuple[int | str, ...] | Callable[[Tuple[int, ...]], int | str | Tuple[int | str, ...]] = 'native', cohort_size: int | None = None, min_cohort_size: int | None = None, max_cohort_size: int | None = None, random_seed: int = 42) → Dataset

Access haplotype data.
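For illustration, a minimal sketch of basic usage, assuming the malariagen_data Python package is installed and data are read via the default cloud configuration (the region string below is an arbitrary example):

    import malariagen_data

    # Connect to the Af1 data resource; data are read from cloud storage,
    # so an internet connection is needed.
    af1 = malariagen_data.Af1()

    # Request phased haplotypes for a genome region as an xarray Dataset.
    ds = af1.haplotypes(region="2RL:2,000,000-3,000,000")
    print(ds)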

Parameters

region : str or Region or Mapping or list of str or Region or Mapping or tuple of str or Region or Mapping

Region of the reference genome. Can be a contig name, region string (formatted like “{contig}:{start}-{end}”), or identifier of a genome feature such as a gene or transcript. Can also be a sequence (e.g., list) of regions.
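A sketch of the different forms a region value can take (the contig names and coordinates are illustrative, not verified identifiers):

    import malariagen_data

    af1 = malariagen_data.Af1()

    # A whole contig (the contig name is illustrative).
    ds_contig = af1.haplotypes(region="2RL")

    # A region string in "{contig}:{start}-{end}" format.
    ds_window = af1.haplotypes(region="2RL:2,000,000-3,000,000")

    # A sequence of regions, combined into a single dataset.
    ds_multi = af1.haplotypes(region=["2RL:2,000,000-3,000,000", "X:1-500,000"])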

analysis : str, optional, default: 'default'

Which haplotype phasing analysis to use. See the phasing_analysis_ids property for available values.
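A sketch of inspecting the available analyses and passing one explicitly (the contig name is illustrative):

    import malariagen_data

    af1 = malariagen_data.Af1()

    # List the available haplotype phasing analyses.
    print(af1.phasing_analysis_ids)

    # Request haplotypes from a specific analysis; 'default' is the default value.
    ds = af1.haplotypes(region="2RL", analysis="default")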

sample_sets : sequence of str or str or None, optional

List of sample sets and/or releases. Can also be a single sample set or release.
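A sketch of selecting data by sample set or by release; the sample set and release identifiers shown here are illustrative, not verified values:

    import malariagen_data

    af1 = malariagen_data.Af1()

    # A single sample set (identifier is illustrative).
    ds_one = af1.haplotypes(region="2RL", sample_sets="1229-VO-GH-DADZIE-VMF00095")

    # A whole release, or a mixture of sample sets and releases.
    ds_release = af1.haplotypes(region="2RL", sample_sets="1.0")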

sample_query : str or None, optional

A pandas query string to be evaluated against the sample metadata, to select samples to be included in the returned data.

sample_query_options : dict or None, optional

A dictionary of arguments that will be passed through to pandas query() or eval(), e.g. parser, engine, local_dict, global_dict, resolvers.
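A sketch of filtering samples with sample_query and sample_query_options, assuming the metadata columns country and year exist (inspect af1.sample_metadata() to see the actual columns):

    import malariagen_data

    af1 = malariagen_data.Af1()

    # Inspect the sample metadata to see which columns can be queried.
    df_samples = af1.sample_metadata()
    print(df_samples.columns)

    # Select samples with a pandas query string; the column names used here
    # (country, year) are assumptions, not verified column names.
    ds = af1.haplotypes(
        region="2RL:2,000,000-3,000,000",
        sample_query="country == 'Ghana' and year >= 2014",
        sample_query_options={"engine": "python"},  # passed through to pandas query()
    )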

inline_array : bool, optional, default: True

Passed through to dask from_array().

chunks : int or str or tuple of int or str or Callable[[typing.Tuple[int, ...]], int or str or tuple of int or str], optional, default: 'native'

Define how input data being read from zarr should be divided into chunks for a dask computation. If 'native', use the underlying zarr chunks. If a string specifying a target memory size, e.g., '300 MiB', resize chunks in arrays with more than one dimension to match this size. If 'auto', let dask decide the chunk size. If 'ndauto', let dask decide the chunk size but only for arrays with more than one dimension. If 'ndauto0', as 'ndauto' but only vary the first chunk dimension. If 'ndauto1', as 'ndauto' but only vary the second chunk dimension. If 'ndauto01', as 'ndauto' but only vary the first and second chunk dimensions. Can also be a tuple of integers, or a callable which accepts the native chunks as a single argument and returns a valid dask chunks value.
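A sketch of the main chunking options (the contig name is illustrative):

    import malariagen_data

    af1 = malariagen_data.Af1()

    # Use the underlying zarr chunks (the default).
    ds_native = af1.haplotypes(region="2RL", chunks="native")

    # Target roughly 300 MiB per chunk in multi-dimensional arrays.
    ds_sized = af1.haplotypes(region="2RL", chunks="300 MiB")

    # Let dask choose chunk sizes automatically.
    ds_auto = af1.haplotypes(region="2RL", chunks="auto")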

cohort_size : int or None, optional

Randomly down-sample to this value if the number of samples in the cohort is greater. Raise an error if the number of samples is less than this value.

min_cohort_size : int or None, optional

Minimum cohort size. Raise an error if the number of samples is less than this value.

max_cohort_size : int or None, optional

Randomly down-sample to this value if the number of samples in the cohort is greater.

random_seed : int, optional, default: 42

Random seed used for reproducible down-sampling.
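A sketch of reproducible down-sampling using the cohort size parameters (the region and sizes are arbitrary examples):

    import malariagen_data

    af1 = malariagen_data.Af1()

    # Randomly down-sample to at most 50 samples, reproducibly via the seed;
    # raise an error if fewer than 10 samples match the selection.
    ds = af1.haplotypes(
        region="2RL:2,000,000-3,000,000",
        min_cohort_size=10,
        max_cohort_size=50,
        random_seed=42,
    )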

Returns

Dataset

A dataset with 4 dimensions: variants (the number of sites in the selected region), alleles (the number of alleles, which is 2), samples (the number of samples), and ploidy (the ploidy, which is 2). There are 3 coordinates: variant_position, with variants values, contains the position of each site; variant_contig, with variants values, contains the contig of each site; and sample_id, with samples values, contains the identifier of each sample. The data variables are: variant_allele, with (variants, alleles) values, contains the reference allele followed by the alternate allele for each site; and call_genotype, with (variants, samples, ploidy) values, contains both calls for each site and each sample.
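A sketch of working with the returned dataset (the region string is illustrative); the arrays are dask-backed, so only the slices accessed here are actually computed:

    import malariagen_data

    af1 = malariagen_data.Af1()
    ds = af1.haplotypes(region="2RL:2,000,000-3,000,000")

    # Dimension sizes: variants, alleles, samples, ploidy.
    print(ds.sizes)

    # Site positions and sample identifiers.
    print(ds["variant_position"].values[:5])
    print(ds["sample_id"].values[:5])

    # Phased genotype calls; compute only a small slice.
    print(ds["call_genotype"][:5, :3].values)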