malariagen_data.af1.Af1.pca#

Af1.pca(region: str | Region | Mapping | List[str | Region | Mapping] | Tuple[str | Region | Mapping, ...], n_snps: int, n_components: int = 20, thin_offset: int = 0, sample_sets: Sequence[str] | str | None = None, sample_query: str | None = None, sample_query_options: dict | None = None, sample_indices: List[int] | None = None, site_mask: str | None = 'default', site_class: str | None = None, min_minor_ac: int | float | None = 2, max_missing_an: int | float | None = 0, cohort_size: int | None = None, min_cohort_size: int | None = None, max_cohort_size: int | None = None, exclude_samples: str | int | List[str | int] | Tuple[str | int, ...] | None = None, fit_exclude_samples: str | int | List[str | int] | Tuple[str | int, ...] | None = None, random_seed: int = 42, inline_array: bool = True, chunks: int | str | Tuple[int | str, ...] | Callable[[Tuple[int, ...]], int | str | Tuple[int | str, ...]] = 'native') → Tuple[DataFrame, ndarray]#

Run a principal components analysis (PCA) using biallelic SNPs from the selected genome region and samples.

Changed in version 8.0.0: SNP ascertainment has changed slightly.

This function uses biallelic SNPs as input to the PCA. The ascertainment of SNPs to include has changed slightly in version 8.0.0 and therefore the results of this function may also be slightly different. Previously, SNPs were required to be biallelic and one of the observed alleles was required to be the reference allele. Now SNPs just have to be biallelic. The following additional parameters were also added in version 8.0.0: site_class, cohort_size, min_cohort_size, max_cohort_size, random_seed.
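A minimal usage sketch follows, assuming access to the MalariaGEN Af1 data is configured; the contig, sample set release, site mask and cache directory are illustrative choices, not prescribed values.

import malariagen_data

# Instantiate the Af1 API client. Setting results_cache allows PCA results to
# be cached and re-used (see the Notes section below); the path is illustrative.
af1 = malariagen_data.Af1(results_cache="af1_results_cache")

# Run a PCA over chromosome arm 2RL, thinning to roughly 100,000 biallelic SNPs.
# The sample set release and site mask are illustrative.
df_pca, evr = af1.pca(
    region="2RL",
    n_snps=100_000,
    n_components=10,
    sample_sets="1.0",
    site_mask="funestus",
)

# df_pca is the sample metadata with "PC1", "PC2", ... columns appended;
# evr holds the explained variance ratio for each component.
df_pca[["sample_id", "taxon", "PC1", "PC2"]].head()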

Parameters#

region : str or Region or Mapping or list of str or Region or Mapping or tuple of str or Region or Mapping

Region of the reference genome. Can be a contig name, region string (formatted like “{contig}:{start}-{end}”), or identifier of a genome feature such as a gene or transcript. Can also be a sequence (e.g., list) of regions.

n_snps : int

The desired number of SNPs to use when running the analysis. SNPs will be evenly thinned to approximately this number.

n_components : int, optional, default: 20

Number of components to return.

thin_offset : int, optional, default: 0

Starting index for SNP thinning. Change this to repeat the analysis using a different set of SNPs.

sample_sets : sequence of str or str or None, optional

List of sample sets and/or releases. Can also be a single sample set or release.

sample_query : str or None, optional

A pandas query string to be evaluated against the sample metadata, to select samples to be included in the returned data.

sample_query_options : dict or None, optional

A dictionary of arguments that will be passed through to pandas query() or eval(), e.g. parser, engine, local_dict, global_dict, resolvers.

sample_indices : list of int or None, optional

Advanced usage parameter. A list of indices of samples to select, corresponding to the order in which the samples are found within the sample metadata. Either provide this parameter or sample_query, not both. Both selection mechanisms are sketched in an example after this parameter list.

site_mask : str or None, optional, default: ‘default’

Which site filters mask to apply. See the site_mask_ids property for available values.

site_class : str or None, optional

Select sites belonging to one of the following classes: CDS_DEG_4 (4-fold degenerate coding sites), CDS_DEG_2_SIMPLE (2-fold simple degenerate coding sites), CDS_DEG_0 (non-degenerate coding sites), INTRON_SHORT (introns shorter than 100 bp), INTRON_LONG (introns longer than 200 bp), INTRON_SPLICE_5PRIME (intron within 2 bp of 5’ splice site), INTRON_SPLICE_3PRIME (intron within 2 bp of 3’ splice site), UTR_5PRIME (5’ untranslated region), UTR_3PRIME (3’ untranslated region), INTERGENIC (intergenic, more than 10 kbp from a gene).

min_minor_ac : int or float or None, optional, default: 2

The minimum minor allele count. SNPs with a minor allele count below this value will be excluded. Can also be a float, which will be interpreted as a fraction.

max_missing_an : int or float or None, optional, default: 0

The maximum number of missing allele calls to accept. SNPs with more missing allele calls than this value will be excluded. Set to 0 to require no missing calls. Can also be a float, which will be interpreted as a fraction. Both forms are sketched in an example after this parameter list.

cohort_size : int or None, optional

Randomly down-sample to this value if the number of samples in the cohort is greater. Raise an error if the number of samples is less than this value.

min_cohort_size : int or None, optional

Minimum cohort size. Raise an error if the number of samples is less than this value.

max_cohort_size : int or None, optional

Randomly down-sample to this value if the number of samples in the cohort is greater.

exclude_samples : str or int or list of str or int or tuple of str or int or None, optional

Sample identifier or index within sample set, identifying a sample to exclude from the analysis. Multiple values can also be provided as a list or tuple.

fit_exclude_samples : str or int or list of str or int or tuple of str or int or None, optional

Sample identifier or index within sample set, identifying a sample to exclude when fitting the PCA. Multiple values can also be provided as a list or tuple.

random_seed : int, optional, default: 42

Random seed used for reproducible down-sampling.

inline_array : bool, optional, default: True

Passed through to dask from_array().

chunks : int or str or tuple of int or str or Callable[[typing.Tuple[int, …]], int or str or tuple of int or str], optional, default: ‘native’

Define how input data being read from zarr should be divided into chunks for a dask computation. If ‘native’, use underlying zarr chunks. If a string specifying a target memory size, e.g., ‘300 MiB’, resize chunks in arrays with more than one dimension to match this size. If ‘auto’, let dask decide chunk size. If ‘ndauto’, let dask decide chunk size but only for arrays with more than one dimension. If ‘ndauto0’, as ‘ndauto’ but only vary the first chunk dimension. If ‘ndauto1’, as ‘ndauto’ but only vary the second chunk dimension. If ‘ndauto01’, as ‘ndauto’ but only vary the first and second chunk dimensions. Also, can be a tuple of integers, or a callable which accepts the native chunks as a single argument and returns a valid dask chunks value.
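For illustration, a sketch of two of the accepted forms of chunks, continuing the usage sketch above; the target size and the callable are arbitrary choices.

# Resize chunks of multi-dimensional arrays read from zarr to roughly 300 MiB.
df_pca, evr = af1.pca(region="2RL", n_snps=100_000, chunks="300 MiB")

# Or pass a callable that receives the native zarr chunks and returns a valid
# dask chunks value; here, doubling the chunk size along the first dimension.
def double_first_dim(native_chunks):
    first, *rest = native_chunks
    return (first * 2, *rest)

df_pca, evr = af1.pca(region="2RL", n_snps=100_000, chunks=double_first_dim)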
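A sketch of the two sample selection mechanisms, sample_query and sample_indices, continuing the usage sketch above; the metadata column, value, query options and indices are illustrative.

# Select samples with a pandas query evaluated against the sample metadata,
# optionally passing extra arguments through to pandas query().
df_pca, evr = af1.pca(
    region="2RL",
    n_snps=100_000,
    sample_query="country == 'Ghana'",
    sample_query_options={"engine": "python"},
)

# Alternatively (advanced usage), select samples by their positional indices
# within the sample metadata. Provide either sample_indices or sample_query.
df_pca, evr = af1.pca(
    region="2RL",
    n_snps=100_000,
    sample_indices=[0, 1, 2, 10, 11, 12],
)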
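A sketch of the integer and fractional forms of min_minor_ac and max_missing_an, continuing the usage sketch above; the exact denominator used for the fractional form is assumed here to be the number of allele calls.

# Integer thresholds: minor allele count of at least 2, no missing allele calls.
df_pca, evr = af1.pca(region="2RL", n_snps=100_000, min_minor_ac=2, max_missing_an=0)

# Fractional thresholds: minor allele in at least 1% of allele calls, and at most
# 5% of allele calls missing per SNP (assumed interpretation of the fractions).
df_pca, evr = af1.pca(region="2RL", n_snps=100_000, min_minor_ac=0.01, max_missing_an=0.05)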

Returns#

df_pca : DataFrame

A dataframe of sample metadata, with columns “PC1”, “PC2”, “PC3”, etc., added.

evr : ndarray

An array of explained variance ratios, one per component.

Notes#

This computation may take some time to run, depending on your computing environment. Results of this computation will be cached and re-used if the results_cache parameter was set when instantiating the API client.
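A sketch of one way the returned values might be used, assuming matplotlib is available and that the sample metadata includes a taxon column; the plotting code is not part of this API.

import matplotlib.pyplot as plt

# Scatter the first two principal components, coloured by taxon.
fig, ax = plt.subplots()
for taxon, group in df_pca.groupby("taxon"):
    ax.scatter(group["PC1"], group["PC2"], s=10, label=taxon)
ax.set_xlabel(f"PC1 ({evr[0] * 100:.1f}% variance explained)")
ax.set_ylabel(f"PC2 ({evr[1] * 100:.1f}% variance explained)")
ax.legend(title="taxon")
plt.show()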