Adir1 cloud data access#
This notebook describes how to access data from the MalariaGEN Vector Observatory Asian Vector Genomic Surveillance Project for Anopheles dirus via Google Cloud. This includes sample metadata, raw sequence reads, sequence read alignments, and single nucleotide polymorphism (SNP) calls.
This notebook illustrates how to read data directly from the cloud, without having to first download any data locally. It can be run from any computer, but will work best when run from a compute node within Google Cloud, because that node will be physically closer to the data and so data transfer will be faster. For example, this notebook can be run via Google Colab, which is a free interactive computing service running in the cloud.
To launch this notebook in the cloud and run it for yourself, click the launch icon at the top of the page and select one of the available cloud computing services.
Data hosting#
All data required for this notebook are hosted on Google Cloud Storage (GCS). Data are hosted in the vo_adir_release_master_us_central1 bucket, which is a single-region bucket located in the United States. All data hosted in GCS are publicly accessible and do not require any authentication to access.
Setup#
Running this notebook requires some Python packages to be installed:
%pip install -q malariagen_data
To make accessing these data more convenient, we’ve created the malariagen_data Python package. This package is experimental, so please let us know if you find any bugs or have any suggestions. See the Adir1.0 API docs for documentation of all functions available from this package.
Import other packages we’ll need to use here.
import numpy as np
import dask
import dask.array as da
from dask.diagnostics.progress import ProgressBar
# silence some warnings
dask.config.set(**{'array.slicing.split_large_chunks': False})
import allel
import malariagen_data
Adir1 data access from Google Cloud is set up with the following code:
adir1 = malariagen_data.Adir1()
adir1
MalariaGEN Adir1 API client | |
---|---|
Please note that data are subject to terms of use; for more information see the MalariaGEN website or contact support@malariagen.net. See also the Adir1 API docs. | |
Storage URL | gs://vo_adir_production_us_central1/release/ |
Data releases available | 1.0 |
Results cache | None |
Cohorts analysis | 20250710 |
Site filters analysis | sc_20250610 |
Software version | malariagen_data 0.0.0 |
Client location | Queensland, Australia |
Sample sets#
Data are organised into different releases. As an example, data in Adir1.0 are organised into 4 sample sets. Each of these sample sets corresponds to a set of mosquito specimens contributed by a collaborating study. Depending on your objectives, you may want to access data from only specific sample sets, or all sample sets.
To see which sample sets are available, load the sample set manifest into a pandas dataframe:
df_sample_sets = adir1.sample_sets(release="1.0")
df_sample_sets
sample_set | sample_count | study_id | study_url | terms_of_use_expiry_date | terms_of_use_url | release | unrestricted_use | |
---|---|---|---|---|---|---|---|---|
0 | 1276-AD-BD-ALAM-VMF00156 | 47 | 1276-AD-BD-ALAM | https://www.malariagen.net/partner_study/1276-... | 2027-11-30 | https://www.malariagen.net/data/our-approach-s... | 1.0 | False |
1 | 1277-VO-KH-WITKOWSKI-VMF00151 | 26 | 1277-VO-KH-WITKOWSKI | https://www.malariagen.net/partner_study/1277-... | 2027-11-30 | https://www.malariagen.net/data/our-approach-s... | 1.0 | False |
2 | 1277-VO-KH-WITKOWSKI-VMF00183 | 248 | 1277-VO-KH-WITKOWSKI | https://www.malariagen.net/partner_study/1277-... | 2027-11-30 | https://www.malariagen.net/data/our-approach-s... | 1.0 | False |
3 | 1278-VO-TH-KOBYLINSKI-VMF00153 | 219 | 1278-VO-TH-KOBYLINSKI | https://www.malariagen.net/partner_study/1278-... | 2027-11-30 | https://www.malariagen.net/data/our-approach-s... | 1.0 | False |
For more information about these sample sets, you can read about each one via the URLs in the study_url column.
Sample metadata#
Data are available about the samples that were sequenced to generate this data resource, including the time and place of collection, the sex of the specimen, and our call regarding the species of the specimen. These data are organised by sample set.
E.g., load sample metadata for all samples in the Adir1.0 release into a pandas DataFrame:
df_samples = adir1.sample_metadata(sample_sets="1.0")
df_samples
sample_id | derived_sample_id | partner_sample_id | contributor | country | location | year | month | latitude | longitude | ... | admin1_name | admin1_iso | admin2_name | taxon | cohort_admin1_year | cohort_admin1_month | cohort_admin1_quarter | cohort_admin2_year | cohort_admin2_month | cohort_admin2_quarter | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | VBS46299-6321STDY9453299 | VBS46299-6321STDY9453299 | 158 | Shiaful Alam | Bangladesh | Bangladesh_2 | 2018 | 5 | 22.287 | 92.194 | ... | Chittagong Division | BD-B | Bandarban | baimaii | BD-B_baim_2018 | BD-B_baim_2018_05 | BD-B_baim_2018_Q2 | BD-B_Bandarban_baim_2018 | BD-B_Bandarban_baim_2018_05 | BD-B_Bandarban_baim_2018_Q2 |
1 | VBS46307-6321STDY9453307 | VBS46307-6321STDY9453307 | 2973 | Shiaful Alam | Bangladesh | Bangladesh_1 | 2018 | 6 | 22.254 | 92.203 | ... | Chittagong Division | BD-B | Bandarban | baimaii | BD-B_baim_2018 | BD-B_baim_2018_06 | BD-B_baim_2018_Q2 | BD-B_Bandarban_baim_2018 | BD-B_Bandarban_baim_2018_06 | BD-B_Bandarban_baim_2018_Q2 |
2 | VBS46315-6321STDY9453315 | VBS46315-6321STDY9453315 | 2340 | Shiaful Alam | Bangladesh | Bangladesh_1 | 2018 | 7 | 22.254 | 92.203 | ... | Chittagong Division | BD-B | Bandarban | baimaii | BD-B_baim_2018 | BD-B_baim_2018_07 | BD-B_baim_2018_Q3 | BD-B_Bandarban_baim_2018 | BD-B_Bandarban_baim_2018_07 | BD-B_Bandarban_baim_2018_Q3 |
3 | VBS46323-6321STDY9453323 | VBS46323-6321STDY9453323 | 2525 | Shiaful Alam | Bangladesh | Bangladesh_2 | 2018 | 7 | 22.287 | 92.194 | ... | Chittagong Division | BD-B | Bandarban | baimaii | BD-B_baim_2018 | BD-B_baim_2018_07 | BD-B_baim_2018_Q3 | BD-B_Bandarban_baim_2018 | BD-B_Bandarban_baim_2018_07 | BD-B_Bandarban_baim_2018_Q3 |
4 | VBS46331-6321STDY9453331 | VBS46331-6321STDY9453331 | 5249 | Shiaful Alam | Bangladesh | Bangladesh_1 | 2018 | 9 | 22.254 | 92.203 | ... | Chittagong Division | BD-B | Bandarban | baimaii | BD-B_baim_2018 | BD-B_baim_2018_09 | BD-B_baim_2018_Q3 | BD-B_Bandarban_baim_2018 | BD-B_Bandarban_baim_2018_09 | BD-B_Bandarban_baim_2018_Q3 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
535 | VBS46203-6296STDY10244759 | VBS46203-6296STDY10244759 | 5895 | Kevin Kobylinski | Thailand | Khirirat Nikhom, nine, Q | 2019 | 10 | 9.127 | 98.905 | ... | Surat Thani Province | TH-84 | Khiri Rat Nikhom | dirus | TH-84_diru_2019 | TH-84_diru_2019_10 | TH-84_diru_2019_Q4 | TH-84_Khiri-Rat-Nikhom_diru_2019 | TH-84_Khiri-Rat-Nikhom_diru_2019_10 | TH-84_Khiri-Rat-Nikhom_diru_2019_Q4 |
536 | VBS46204-6296STDY10244760 | VBS46204-6296STDY10244760 | 5972 | Kevin Kobylinski | Thailand | Khirirat Nikhom, eight, M | 2019 | 10 | 9.104 | 98.891 | ... | Surat Thani Province | TH-84 | Khiri Rat Nikhom | dirus | TH-84_diru_2019 | TH-84_diru_2019_10 | TH-84_diru_2019_Q4 | TH-84_Khiri-Rat-Nikhom_diru_2019 | TH-84_Khiri-Rat-Nikhom_diru_2019_10 | TH-84_Khiri-Rat-Nikhom_diru_2019_Q4 |
537 | VBS46205-6296STDY10244761 | VBS46205-6296STDY10244761 | 6024 | Kevin Kobylinski | Thailand | Khirirat Nikhom, eight, M | 2019 | 10 | 9.104 | 98.891 | ... | Surat Thani Province | TH-84 | Khiri Rat Nikhom | dirus | TH-84_diru_2019 | TH-84_diru_2019_10 | TH-84_diru_2019_Q4 | TH-84_Khiri-Rat-Nikhom_diru_2019 | TH-84_Khiri-Rat-Nikhom_diru_2019_10 | TH-84_Khiri-Rat-Nikhom_diru_2019_Q4 |
538 | VBS46206-6296STDY10244762 | VBS46206-6296STDY10244762 | 6036 | Kevin Kobylinski | Thailand | Khirirat Nikhom, eight, N | 2019 | 10 | 9.106 | 98.887 | ... | Surat Thani Province | TH-84 | Khiri Rat Nikhom | dirus | TH-84_diru_2019 | TH-84_diru_2019_10 | TH-84_diru_2019_Q4 | TH-84_Khiri-Rat-Nikhom_diru_2019 | TH-84_Khiri-Rat-Nikhom_diru_2019_10 | TH-84_Khiri-Rat-Nikhom_diru_2019_Q4 |
539 | VBS46207-6296STDY10244763 | VBS46207-6296STDY10244763 | 6037 | Kevin Kobylinski | Thailand | Khirirat Nikhom, eight, N | 2019 | 10 | 9.106 | 98.887 | ... | Surat Thani Province | TH-84 | Khiri Rat Nikhom | dirus | TH-84_diru_2019 | TH-84_diru_2019_10 | TH-84_diru_2019_Q4 | TH-84_Khiri-Rat-Nikhom_diru_2019 | TH-84_Khiri-Rat-Nikhom_diru_2019_10 | TH-84_Khiri-Rat-Nikhom_diru_2019_Q4 |
540 rows × 50 columns
The sample_id column gives the sample identifier used throughout all Adir1.0 analyses.
The country, location, latitude and longitude columns give the location where the specimen was collected.
The year and month columns give the approximate date when the specimen was collected.
The sex_call column gives the sex of the specimen as determined from the sequence data.
Pandas can be used to explore and query the sample metadata in various ways. E.g., here is a summary of the numbers of samples by species:
df_samples.groupby("taxon").size()
taxon
baimaii 47
dirus 493
dtype: int64
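Any other pandas operation can be applied in the same way. As an illustrative sketch, here is a cross-tabulation of samples by country and taxon, run on a small synthetic frame with the same column names as the real sample metadata (not real Adir1 data):

```python
import pandas as pd

# synthetic stand-in for df_samples, with a subset of the real columns
df = pd.DataFrame({
    "sample_id": ["s1", "s2", "s3", "s4"],
    "country": ["Bangladesh", "Thailand", "Thailand", "Cambodia"],
    "taxon": ["baimaii", "dirus", "dirus", "dirus"],
})

# count samples for each combination of country and taxon
counts = pd.crosstab(df["country"], df["taxon"])
print(counts)
```

The same call on the real df_samples loaded above would summarise all 540 samples.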
SNP calls#
Data on SNP calls, including the SNP positions, alleles, site filters, and genotypes, can be accessed as an xarray Dataset.
E.g., access SNP calls for contig KB672490 for all samples in Adir1.0.
ds_snps = adir1.snp_calls(region="KB672490", sample_sets="1.0")
ds_snps
<xarray.Dataset> Size: 202GB
Dimensions:                    (variants: 21967539, alleles: 4, samples: 540, ploidy: 2)
Coordinates:
    variant_position           (variants) int32 88MB dask.array<chunksize=(65536,), meta=np.ndarray>
    variant_contig             (variants) uint8 22MB dask.array<chunksize=(65536,), meta=np.ndarray>
    sample_id                  (samples) <U36 78kB dask.array<chunksize=(47,), meta=np.ndarray>
Dimensions without coordinates: variants, alleles, samples, ploidy
Data variables:
    variant_allele             (variants, alleles) object 703MB dask.array<chunksize=(65536, 4), meta=np.ndarray>
    variant_filter_pass_dirus  (variants) bool 22MB dask.array<chunksize=(300000,), meta=np.ndarray>
    call_genotype              (variants, samples, ploidy) int8 24GB dask.array<chunksize=(300000, 47, 2), meta=np.ndarray>
    call_GQ                    (variants, samples) int8 12GB dask.array<chunksize=(300000, 47), meta=np.ndarray>
    call_MQ                    (variants, samples) float32 47GB dask.array<chunksize=(300000, 47), meta=np.ndarray>
    call_AD                    (variants, samples, alleles) int16 95GB dask.array<chunksize=(300000, 47, 4), meta=np.ndarray>
    call_genotype_mask         (variants, samples, ploidy) bool 24GB dask.array<chunksize=(300000, 47, 2), meta=np.ndarray>
Attributes:
    contigs: ('KB672490', 'KB672868', 'KB672979', 'KB673090', 'KB673201', 'K...
The arrays within this dataset are backed by Dask arrays, and can be accessed as shown below.
SNP sites and alleles#
We have called SNP genotypes in all samples at all positions in the genome where the reference allele is not “N”. Data on this set of genomic positions and alleles for a given contig (e.g., KB672490) can be accessed as Dask arrays as follows.
pos = ds_snps["variant_position"].data
pos
alleles = ds_snps["variant_allele"].data
alleles
Data can be loaded into memory as a NumPy array as shown in the following examples.
# read first 10 SNP positions into a numpy array
p = pos[:10].compute()
p
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=int32)
# read first 10 SNP alleles into a numpy array
a = alleles[:10].compute()
a
array([['A', 'C', 'G', 'T'],
['A', 'C', 'G', 'T'],
['A', 'C', 'G', 'T'],
['T', 'A', 'C', 'G'],
['T', 'A', 'C', 'G'],
['C', 'A', 'G', 'T'],
['A', 'C', 'G', 'T'],
['A', 'C', 'G', 'T'],
['A', 'C', 'G', 'T'],
['A', 'C', 'G', 'T']], dtype=object)
Here the first column contains the reference alleles, and the remaining columns contain the alternate alleles.
Note that the alleles are stored as single-character strings in an object-dtype array. E.g., the value 'T' stands for the nucleotide thymine.
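Putting the two arrays together: a genotype call integer indexes into the corresponding row of the alleles array, with 0 selecting the reference allele and 1–3 selecting alternates. A minimal sketch with synthetic values (not real Adir1 data):

```python
import numpy as np

# one row of the alleles array: reference allele first, then alternates
alleles_row = np.array(["A", "C", "G", "T"], dtype=object)

# hypothetical diploid genotype call for one sample at this site
genotype = np.array([0, 2])

# translate integer calls into nucleotide letters
called = alleles_row[genotype]
print(called)  # ['A' 'G']
```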
Note that we have chosen to genotype all samples at all sites in the genome, assuming all possible SNP alleles. Not all of these alternate alleles will actually have been observed in the Adir1 samples. To determine which sites and alleles are segregating, an allele count can be performed over the samples you are interested in. See the example below.
Site filters#
SNP calling is not always reliable, so we have created site filters to allow excluding low-quality SNPs.
Each set of site filters provides a “filter_pass” Boolean mask for each contig, where True indicates that the site passed the filter and is accessible to high-quality SNP calling.
The site filters data can be accessed as dask arrays as shown in the examples below.
# access dirus site filters as a dask array
filter_pass = ds_snps['variant_filter_pass_dirus'].data
filter_pass
# read filter values for first 10 SNPs (True means the site passes filters)
f = filter_pass[:10].compute()
f
array([False, False, False, False, False, False, False, False, False,
False])
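The filter mask can be reduced with ordinary Dask array operations, e.g., to count or compute the fraction of sites passing the filters. A sketch on a small synthetic mask (the real filter_pass array from above works the same way, just over millions of sites):

```python
import numpy as np
import dask.array as da

# synthetic Boolean mask standing in for variant_filter_pass_dirus
mask = da.from_array(np.array([True, False, True, True, False]), chunks=2)

# count and fraction of sites passing the filter
n_pass = mask.sum().compute()
frac_pass = mask.mean().compute()
print(n_pass, frac_pass)  # 3 0.6
```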
SNP genotypes#
SNP genotypes for individual samples are available. Genotypes are stored as a three-dimensional array, where the first dimension corresponds to genomic positions, the second dimension is samples, and the third dimension is ploidy (2). Values are coded as integers, where -1 represents a missing value, 0 represents the reference allele, and 1, 2, and 3 represent alternate alleles.
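This integer coding can be illustrated on a tiny synthetic genotype array (not real Adir1 data): counting missing and non-reference allele calls follows directly from the coding.

```python
import numpy as np

# synthetic genotypes: 2 variants x 3 samples x ploidy 2
# -1 = missing, 0 = reference allele, 1-3 = alternate alleles
gt = np.array([
    [[-1, -1], [0, 0], [0, 1]],
    [[0, 0], [0, 2], [-1, -1]],
], dtype=np.int8)

# count missing allele calls and non-reference allele calls
n_missing = np.count_nonzero(gt < 0)
n_alt = np.count_nonzero(gt > 0)
print(n_missing, n_alt)  # 4 2
```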
SNP genotypes can be accessed as dask arrays as shown below.
gt = ds_snps["call_genotype"].data
gt
Note that the columns of this array (second dimension) match the rows in the sample metadata, if the same sample sets were loaded. I.e.:
df_samples = adir1.sample_metadata(sample_sets="1.0")
gt = ds_snps["call_genotype"].data
len(df_samples) == gt.shape[1]
True
You can use this correspondence to apply further subsetting operations to the genotypes by querying the sample metadata. E.g.:
loc_baimaii = df_samples.eval("taxon == 'baimaii'").values
print(f"found {np.count_nonzero(loc_baimaii)} baimaii samples")
found 47 baimaii samples
ds_snps_baimaii = ds_snps.isel(samples=loc_baimaii)
ds_snps_baimaii
<xarray.Dataset> Size: 18GB
Dimensions:                    (variants: 21967539, alleles: 4, samples: 47, ploidy: 2)
Coordinates:
    variant_position           (variants) int32 88MB dask.array<chunksize=(65536,), meta=np.ndarray>
    variant_contig             (variants) uint8 22MB dask.array<chunksize=(65536,), meta=np.ndarray>
    sample_id                  (samples) <U36 7kB dask.array<chunksize=(47,), meta=np.ndarray>
Dimensions without coordinates: variants, alleles, samples, ploidy
Data variables:
    variant_allele             (variants, alleles) object 703MB dask.array<chunksize=(65536, 4), meta=np.ndarray>
    variant_filter_pass_dirus  (variants) bool 22MB dask.array<chunksize=(300000,), meta=np.ndarray>
    call_genotype              (variants, samples, ploidy) int8 2GB dask.array<chunksize=(300000, 45, 2), meta=np.ndarray>
    call_GQ                    (variants, samples) int8 1GB dask.array<chunksize=(300000, 45), meta=np.ndarray>
    call_MQ                    (variants, samples) float32 4GB dask.array<chunksize=(300000, 45), meta=np.ndarray>
    call_AD                    (variants, samples, alleles) int16 8GB dask.array<chunksize=(300000, 45, 4), meta=np.ndarray>
    call_genotype_mask         (variants, samples, ploidy) bool 2GB dask.array<chunksize=(300000, 45, 2), meta=np.ndarray>
Attributes:
    contigs: ('KB672490', 'KB672868', 'KB672979', 'KB673090', 'KB673201', 'K...
Data can be read into memory as numpy arrays, e.g., read genotypes for the first 5 SNPs and the first 3 samples:
g = gt[:5, :3, :].compute()
g
array([[[-1, -1],
[-1, -1],
[-1, -1]],
[[-1, -1],
[-1, -1],
[-1, -1]],
[[-1, -1],
[-1, -1],
[-1, -1]],
[[-1, -1],
[-1, -1],
[-1, -1]],
[[-1, -1],
[-1, -1],
[-1, -1]]], dtype=int8)
If you want to work with the genotype calls, you may find it convenient to use scikit-allel. E.g., the code below sets up a genotype array.
# use the scikit-allel wrapper class for genotype calls
gt = allel.GenotypeDaskArray(ds_snps["call_genotype"].data)
gt
0 | 1 | 2 | 3 | 4 | ... | 535 | 536 | 537 | 538 | 539 | ||
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | ./. | ./. | ./. | ./. | ./. | ... | ./. | ./. | ./. | ./. | ./. | |
1 | ./. | ./. | ./. | ./. | ./. | ... | ./. | ./. | ./. | ./. | ./. | |
2 | ./. | ./. | ./. | ./. | ./. | ... | ./. | ./. | ./. | ./. | ./. | |
... | ... | |||||||||||
21967536 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | ... | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | |
21967537 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | ... | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | |
21967538 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | ... | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 |
Example computation#
Here’s an example computation to count the number of segregating SNPs on the longest contig (KB672490) that also pass site filters. This may take a minute or two, because it is scanning genotype calls at millions of SNPs in hundreds of samples.
# choose contig (longest contig)
region = "KB672490"
# choose sample sets
sample_sets = ["1278-VO-TH-KOBYLINSKI-VMF00153"]
# access SNP calls
ds_snps = adir1.snp_calls(region=region, sample_sets=sample_sets)
# locate pass sites
loc_pass = ds_snps["variant_filter_pass_dirus"].values
# perform an allele count over genotypes
gt = allel.GenotypeDaskArray(ds_snps["call_genotype"].data)
with ProgressBar():
ac = gt.count_alleles(max_allele=3).compute()
# locate segregating sites
loc_seg = ac.is_segregating()
# count segregating and pass sites
n_pass_seg = np.count_nonzero(loc_pass & loc_seg)
n_pass_seg
[########################################] | 100% Completed | 139.73 s
2692310
Running larger computations#
Please note that free cloud computing services such as Google Colab and MyBinder provide only limited computing resources. Although these services can efficiently read Adir1 data stored on Google Cloud, you may find that you run out of memory, or that computations take a long time on a single core. If you would like suggestions on how to set up more powerful compute resources in the cloud, please feel free to get in touch via the malariagen/vector-data GitHub discussion board.
Feedback and suggestions#
If there are particular analyses you would like to run, or if you have other suggestions for useful documentation we could add to this site, we would love to know. Please get in touch via the malariagen/vector-data GitHub discussion board.