Sample Quality Control¶
You can calculate quality control statistics on your variant data using Spark SQL functions, which can be expressed in Python, R, Scala, or SQL.
Each of these functions returns an array of structs containing metrics for one sample. If sample ids are included in the input DataFrame, they will be propagated to the output. The functions assume that the genotypes in each row of the input DataFrame contain the same samples in the same order.
Functions | Arguments | Return
---|---|---
`sample_call_summary_stats` | `genotypes` array of structs with a `calls` field, `referenceAllele` string, `alternateAlleles` array of strings | A struct containing the following summary stats: `callRate`, `nCalled`, `nUncalled`, `nHomRef`, `nHet`, `nHomVar`, `nSnp`, `nInsertion`, `nDeletion`, `nTransition`, `nTransversion`, `nSpanningDeletion`, `rTiTv`, `rInsertionDeletion`, `rHetHomVar`
`sample_dp_summary_stats` | `genotypes` array of structs with a `depth` field | A struct with `min`, `max`, `mean`, and `stdDev` of the genotype depths
`sample_gq_summary_stats` | `genotypes` array of structs with a `conditionalQuality` field | A struct with `min`, `max`, `mean`, and `stdDev` of the genotype qualities
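For instance, here is a minimal sketch of computing per-sample call summary stats from PySpark. It assumes a Glow-registered Spark session; the input path sample.vcf is hypothetical:

import glow
from pyspark.sql import SparkSession

# Register Glow's SQL functions on the active Spark session
spark = glow.register(SparkSession.builder.getOrCreate())
df = spark.read.format("vcf").load("sample.vcf")  # hypothetical input path

# The QC functions are aggregates: all rows collapse into a single array with
# one stats struct per sample, which we explode back to one row per sample.
per_sample_qc = df.selectExpr(
        "sample_call_summary_stats(genotypes, referenceAllele, alternateAlleles) as qc")\
    .selectExpr("explode(qc) as qc")\
    .selectExpr("qc.*")
per_sample_qc.show()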
Computing user-defined sample QC metrics¶
In addition to the built-in QC functions discussed above, Glow provides two ways to compute user-defined per-sample statistics.
aggregate_by_index¶
First, you can aggregate over each sample in a genotypes array using the aggregate_by_index function.
aggregate_by_index(array, initial_state, update_function, merge_function, eval_function)
Name | Type | Description
---|---|---
`array` | array | An array-typed column to aggregate over, such as the `genotypes` array
`initial_state` | any | The initial aggregation state for each sample
`update_function` | `(state, element) -> state` | A function that returns a new single sample aggregation state given the current aggregation state and a new data element
`merge_function` | `(state1, state2) -> state` | A function that combines two single sample aggregation states. This function is necessary since the aggregation is computed in a distributed manner across all nodes in the cluster.
`eval_function` | `state -> output` | A function that returns the output for a sample given that sample's aggregation state. This function is optional. If it is not specified, the aggregation state will be returned.
For example, this code snippet uses aggregate_by_index to compute the mean for each array position:
aggregate_by_index(
  array_col,
  (0d, 0l),
  (state, element) -> (state.col1 + element, state.col2 + 1),
  (state1, state2) -> (state1.col1 + state2.col1, state1.col2 + state2.col2),
  state -> state.col1 / state.col2)
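As a sketch of how this expression can be embedded in a PySpark query, the snippet below computes the mean depth per sample across all rows; it assumes each genotype struct in `df` has a `depth` field:

# A hedged sketch: reuse the expression above to compute, across all rows,
# the mean depth at each array position (i.e., per sample). Assumes the
# genotype structs have a `depth` field.
mean_depth_df = df.selectExpr("""
    aggregate_by_index(
      genotypes.depth,
      (0d, 0l),
      (state, element) -> (state.col1 + element, state.col2 + 1),
      (state1, state2) -> (state1.col1 + state2.col1, state1.col2 + state2.col2),
      state -> state.col1 / state.col2) as mean_depth_per_sample
""")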
Explode and aggregate¶
If your dataset is not in a normalized, pVCF-esque shape, or if you want the aggregation output in a table rather than a single array, you can explode the genotypes array and use any of the aggregation functions built into Spark. For example, this code snippet computes the number of sites with a non-reference allele for each sample:
import pyspark.sql.functions as fx

# One row per (site, sample) pair
exploded_df = df.withColumn("genotype", fx.explode("genotypes"))\
    .withColumn("hasNonRef", fx.expr("exists(genotype.calls, call -> call != -1 and call != 0)"))

# Count sites per sample, split by whether the sample carries a non-reference allele
agg = exploded_df.groupBy("genotype.sampleId", "hasNonRef")\
    .agg(fx.count(fx.lit(1)))\
    .orderBy("sampleId", "hasNonRef")
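As a hedged follow-on, you can keep only the per-sample non-reference counts; by default, Spark names the output column of `fx.count(fx.lit(1))` as `count(1)`:

# Filter to the non-reference rows and give the count column a clearer name
non_ref_counts = agg.where("hasNonRef")\
    .withColumnRenamed("count(1)", "nNonRefSites")
non_ref_counts.show()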