Search speed comparison: naive exact vs. Boyer-Moore vs. k-mer index

Recently, I’ve been working my way through Ben Langmead’s excellent “Algorithms for DNA sequencing” course on Coursera. The class is a fascinating and well-taught introduction to DNA short-read alignment and assembly methods.

As part of the course, we implement or modify Python code for several simple matching algorithms, including the “naive exact” matching (NEM) method, the Boyer-Moore (BM) method, and a k-mer index approach.
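
To make the comparison concrete, here is a minimal Python sketch of the naive exact matching idea (my own illustration, not the course’s exact code): slide the pattern P along the text T one position at a time and compare character by character.

def naive_exact(p, t):
    """Return all offsets where pattern p occurs exactly in text t."""
    occurrences = []
    for i in range(len(t) - len(p) + 1):   # try every alignment of p against t
        match = True
        for j in range(len(p)):            # compare character by character
            if t[i + j] != p[j]:
                match = False
                break
        if match:
            occurrences.append(i)
    return occurrences

print(naive_exact("AG", "AGCTTAGA"))  # [0, 5]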

I was curious about speed, so I made a figure comparing the computational time each approach takes. P and T refer to the pattern (the short read to be aligned) and the text (the genome to align against), respectively.

Figure 1. Comparing the speed of the NEM, BM, and K-mer search methods on long and short patterns (P) and texts (T). The y-axis is on a log-scale.

Note that the y-axis is a log scale in units of microseconds.  Right away, it is obvious that k-mer index methods are orders of magnitude faster than ‘online’ methods like NEM and BM.
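
The speed difference makes sense: the index over T is built once, and each query then only has to verify the handful of positions where the first k-mer of P occurs. Below is a minimal sketch of the idea, again my own simplification rather than the course’s code (it assumes k <= len(p)).

from collections import defaultdict

def build_kmer_index(t, k):
    """Map every k-mer in t to the list of offsets where it occurs."""
    index = defaultdict(list)
    for i in range(len(t) - k + 1):
        index[t[i:i + k]].append(i)
    return index

def query_index(p, t, index, k):
    """Look up the first k-mer of p, then verify the rest of p at each hit."""
    hits = []
    for i in index.get(p[:k], []):
        if t[i:i + len(p)] == p:   # verification step
            hits.append(i)
    return hits

t = "AGCTTAGAAGCT"
index = build_kmer_index(t, k=3)
print(query_index("AGCT", t, index, k=3))  # [0, 8]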

Also of interest: as the pattern gets shorter, the advantage of BM’s preprocessing of the pattern shrinks. You can see that going from a pattern length of 30 down to 11 negates any advantage of BM searching.
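
For anyone who wants to reproduce this kind of comparison, timings like those in the figure can be collected with Python’s timeit module. The snippet below is only a sketch of the approach (it reuses the naive_exact, build_kmer_index, and query_index functions defined above, plus a randomly generated stand-in “genome”), not the exact benchmark behind the figure.

import random
import timeit

random.seed(0)
t = "".join(random.choice("ACGT") for _ in range(1_000_000))   # stand-in genome (T)
p = t[500_000:500_030]                                          # a 30-mer "read" (P)

naive_time = timeit.timeit(lambda: naive_exact(p, t), number=1)
index = build_kmer_index(t, k=8)                                # index built once, reused per query
index_time = timeit.timeit(lambda: query_index(p, t, index, k=8), number=1)

print(f"naive exact: {naive_time:.4f} s per search")
print(f"k-mer index: {index_time:.4f} s per search")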

 

Beyond Benjamini-Hochberg: Independent Hypothesis Weighting (IHW) for multiple test correction

Multiple hypothesis testing is a critical part of modern bioinformatic analysis.  When testing for significant changes between conditions on many thousands of genes, for instance in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling the number of false discoveries.

Typically, this is done with the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are expected to be false positives (FDR = 0.05). The BH method is better powered and less stringent than family-wise error rate (FWER) control, and therefore more appropriate for modern genomics experiments that make thousands of simultaneous comparisons.  However, BH is still limited by the fact that it uses only the p-values to control the FDR, treating each test as equally powered.

A new method, Independent Hypothesis Weighting (IHW), aims to take advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or the prior probability that the null hypothesis is false.  For example, in an RNA-Seq experiment, highly expressed genes may have better signal-to-noise than genes with low expression.

The IHW method applies a weight to each test in a data-driven way (each weight is a non-negative number, and the weights average to one across all tests).  The input to the method is a vector of p-values (just like BH/FDR) and a vector of continuous or categorical covariates, i.e., any data about each test that is assumed to be independent of the test’s p-value under the null hypothesis.

From the paper linked above, Table 1 lists possible covariates:

Application                              Covariate
Differential expression analysis         Sum of read counts per gene across all samples [12]
Genome-wide association study (GWAS)     Minor allele frequency
Expression-QTL analysis                  Distance between the genetic variant and genomic location of the phenotype
ChIP-QTL analysis                        Co-membership in a topologically associated domain [16]
t-test                                   Overall variance [9]
Two-sided tests                          Sign of the effect
Various applications                     Signal quality, sample size

In simplified form, the IHW method groups the tests based on the supplied covariate.  It then calculates the number of discoveries (rejections of the null hypothesis) under a given set of weights, and iterates the weights until it converges on the per-group weights that maximize the overall number of discoveries.  Additional procedures prevent over-fitting of the data and allow the method to scale easily to millions of comparisons.
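
Under the hood, IHW builds on the weighted Benjamini-Hochberg procedure: each p-value is divided by its weight before the usual BH step, with the weights constrained to average one. Here is a minimal NumPy sketch of that rejection step, assuming the per-test weights have already been learned; it illustrates weighted BH only, not the IHW package itself (which also handles the weight optimization, cross-weighting, and regularization).

import numpy as np

def weighted_bh(pvalues, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: apply BH to p_i / w_i, with weights scaled to mean one."""
    p = np.asarray(pvalues, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                                 # weights must average to one
    m = len(p)
    q = np.full(m, np.inf)
    q[w > 0] = p[w > 0] / w[w > 0]                   # weighted p-values; weight 0 means never rejected
    order = np.argsort(q)
    passes = q[order] <= alpha * np.arange(1, m + 1) / m
    k = passes.nonzero()[0].max() + 1 if passes.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# toy example: two covariate groups, the first up-weighted
pvals = [0.001, 0.01, 0.02, 0.04, 0.2, 0.5]
weights = [1.5, 1.5, 1.5, 0.5, 0.5, 0.5]
print(weighted_bh(pvals, weights, alpha=0.05))       # [ True  True  True False False False]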

The authors of the method claim that IHW has more power than BH for making discoveries in genomic data.  It is available as the IHW package in Bioconductor.

 

Unix one-liner to convert VCF to Oncotator format

Here is a handy Unix one-liner to process MuTect2 output VCF files into the five-column, tab-separated format that Oncotator requires as input (Oncotator is a web-based application that annotates human genomic point mutations and indels with transcripts and functional consequences). The output of Oncotator is a MAF-formatted file that is compatible with MutSigCV.

#!/bin/bash
FILES='*.vcf.gz'
for file in $FILES
do
    # keep PASS calls on named chromosomes; emit chr, start, end, ref, alt (tab-separated)
    zcat "$file" | grep -v "GL000" | grep -v "FILTER" | grep "PASS" | cut -d$'\t' -f 1-5 | awk -v OFS='\t' '$3=$2' | awk -v OFS='\t' '$1="chr"$1' > "$file.tsv"
done

Breaking this down, we have:

zcat "$file" : stream each line of the gzipped VCF to stdout

grep -v "GL000" : exclude variants on unplaced GL000 contigs, keeping only the named chromosomes

grep -v "FILTER" : exclude header lines (the ##FILTER definitions and the #CHROM column header both contain "FILTER")

grep "PASS" : keep only the variants that pass the MuTect2 filters

cut -d$'\t' -f 1-5 : cut on tabs and keep fields one through five (CHROM, POS, ID, REF, ALT)

awk -v OFS='\t' '$3=$2' : set column 3 equal to column 2, so the start and end positions are equal (OFS keeps the output tab-separated)

awk -v OFS='\t' '$1="chr"$1' : prepend "chr" to column 1 (so 1 becomes chr1)
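
For example, a PASS record for a substitution at position 12345 of chromosome 1 with REF A and ALT T (hypothetical values) comes out as the tab-separated line:

chr1	12345	12345	A	T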