Beyond Benjamini-Hochberg: Independent Hypothesis Weighting (IHW) for multiple test correction

Multiple hypothesis testing is a critical part of modern bioinformatic analysis. When testing for significant changes between conditions across many thousands of genes, for instance in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling the fraction of false discoveries.

Typically, this is done with the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are expected to be false positives (FDR = 0.05). The BH method is less stringent, and therefore better powered, than family-wise error rate (FWER) control, which makes it more appropriate for modern genomics experiments that involve thousands of simultaneous comparisons. However, the BH method is still limited by the fact that it uses only the p-values to control the FDR, treating every test as though it were equally powered.
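
As a quick illustration, the BH adjustment is a one-liner in base R via p.adjust(); the discoveries are simply the tests whose adjusted p-values fall below the chosen FDR threshold (the p-values below are made up for the example):

# toy example of BH adjustment in base R
pvals <- c(0.0001, 0.004, 0.019, 0.025, 0.21, 0.47, 0.86)
padj  <- p.adjust(pvals, method = "BH")  # Benjamini-Hochberg adjusted p-values
which(padj < 0.05)                       # discoveries at FDR = 0.05 (here, the first four tests)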

A newer method, Independent Hypothesis Weighting (IHW), takes advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or prior probability that the null hypothesis is false. For example, in an RNA-Seq experiment, highly expressed genes may have a better signal-to-noise ratio than genes with low expression.

The IHW method applies a weight (a non-negative number, with the weights constrained to average one across all tests) to each test in a data-driven way. The input to the method is a vector of p-values (just as for BH) plus a vector of continuous or categorical covariates, i.e., any data about each test that is assumed to be independent of that test's p-value under the null hypothesis.

Table 1 of the IHW paper lists possible covariates for several common applications:

Application : Covariate

Differential expression analysis : Sum of read counts per gene across all samples [12]
Genome-wide association study (GWAS) : Minor allele frequency
Expression-QTL analysis : Distance between the genetic variant and the genomic location of the phenotype
ChIP-QTL analysis : Co-membership in a topologically associating domain [16]
t-test : Overall variance [9]
Two-sided tests : Sign of the effect
Various applications : Signal quality, sample size

In simplified form, the IHW method groups the tests into bins based on the supplied covariate. It then counts the number of discoveries (rejections of the null hypothesis) obtained with a given set of per-group weights, and the weights are iterated until the method converges on the weights for each covariate-based group that maximize the overall number of discoveries. Additional safeguards (regularization and a form of data splitting) are employed to prevent over-fitting of the data and to make the procedure scale to millions of comparisons.
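
To make the idea concrete, here is a minimal sketch in R of the weighted BH step that IHW builds on: each p-value is divided by its group's weight and ordinary BH is applied to the result. Everything below is simulated, and the weights are hand-picked for illustration only; the real IHW procedure learns the weights from the covariate (with regularization and data splitting) rather than taking them as given.

# sketch of weighted BH with hypothetical, hand-picked group weights (not the IHW algorithm itself)
set.seed(1)
m      <- 9000
covar  <- rnorm(m)                               # stand-in covariate (e.g., log mean expression)
pvals  <- runif(m)                               # mostly null p-values
signal <- covar > 1                              # pretend high-covariate tests are enriched for true effects
pvals[signal] <- rbeta(sum(signal), 0.25, 1)     # smaller p-values for those tests
grp    <- cut(covar, quantile(covar, 0:3/3), include.lowest = TRUE)  # three equal-sized covariate bins
w      <- c(0.4, 1.0, 1.6)[as.integer(grp)]      # hypothetical weights (low/mid/high bins), averaging one
padj_w <- p.adjust(pmin(pvals / w, 1), method = "BH")   # weighted BH: apply BH to p/w
padj   <- p.adjust(pvals, method = "BH")                # ordinary BH for comparison
c(weighted = sum(padj_w < 0.1), unweighted = sum(padj < 0.1))  # weighted BH typically finds more discoveries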

The authors of the method report that IHW is better powered than BH for making discoveries in genomic data while still controlling the FDR. It is available as the IHW package in Bioconductor.
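
Applied to a DESeq2 results table, the call looks roughly like the sketch below, based on my reading of the IHW vignette; the results object res and the choice of baseMean as the covariate are assumptions for illustration, so check the package documentation for the exact interface.

# BiocManager::install("IHW")   # install from Bioconductor if needed
library(IHW)

# 'res' is assumed to be a DESeq2 results table with 'pvalue' and 'baseMean' columns;
# baseMean (mean of normalized counts) serves as the independent covariate
ihw_res <- ihw(pvalue ~ baseMean, data = as.data.frame(res), alpha = 0.1)
rejections(ihw_res)           # number of discoveries at FDR = 0.1
head(adj_pvalues(ihw_res))    # IHW-adjusted p-values
head(weights(ihw_res))        # the data-driven, covariate-based weights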


Unix one-liner to convert VCF to Oncotator format

Here is a handy Unix one-liner to process mutect2 output VCF files into the five-column, tab-separated format that Oncotator requires as input (Oncotator is a web-based application that annotates human genomic point mutations and indels with transcripts and functional consequences). The output of Oncotator is a MAF-formatted file that is compatible with MutSigCV.

#!/bin/bash
# convert each gzipped mutect2 VCF into the 5-column, tab-separated Oncotator input format
FILES='*.vcf.gz'
for file in $FILES
do
  # keep PASS variants on named chromosomes, take the first five columns,
  # set the end position equal to the start, and prefix the chromosome with "chr"
  zcat "$file" | grep -v "GL000" | grep -v "FILTER" | grep "PASS" | cut -d$'\t' -f 1-5 | awk -F'\t' -v OFS='\t' '$3=$2' | awk -F'\t' -v OFS='\t' '$1="chr"$1' > "$file".tsv
done

Breaking this down we have:

zcat "$file" : stream each line of the gzipped VCF to stdout

grep -v "GL000" : exclude variants on the unplaced GL000 contigs, i.e., keep only variants that map to a named chromosome

grep -v "FILTER" : exclude header lines containing "FILTER"

grep "PASS" : keep only the lines that pass the mutect2 filters

cut -d$'\t' -f 1-5 : split on tabs and keep fields one through five (CHROM, POS, ID, REF, ALT)

awk -F'\t' -v OFS='\t' '$3=$2' : set column 3 equal to column 2, i.e., make the end position equal to the start position (setting FS and OFS to tabs keeps the output tab-separated)

awk -F'\t' -v OFS='\t' '$1="chr"$1' : prefix column one with "chr" (so 1 becomes chr1)


A Unix one-liner to scrape GI numbers from a SAM file

I recently had a situation where I needed to scrape out all of the GI numbers from a SAM alignment file.  My first instinct was to turn to python to accomplish this, but I forced myself to find a command line tool or set of tools to quickly do this task as a one-liner.  First, here is the format of the first two lines of the file:

…..

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|599154892|gb|EYE94125.1| 1066 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:88 NM:i:0 ZL:i:2534 ZR:i:217 ZE:f:3.9e-15 ZI:i:100 ZF:i:-2 ZS:i:125 MD:Z:41

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|115387347|ref|XP_001211179.1| 1065 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:77 NM:i:8 ZL:i:1670 ZR:i:188 ZE:f:8.9e-12 ZI:i:80 ZF:i:-2 ZS:i:125 MD:Z:1Y3V3V11F11SSY3A1

….

You can see that what I want is the number between the first two pipes of the reference field, "gi|#########|". Here is how I solved this with a bash script:


#!/bin/bash
# pull the GI number out of every alignment line of each SAM file in the directory
for file in *.sam
do
  echo "${file}"
  # egrep -o prints only the matched reference field (e.g. |599154892|gb|EYE94125.1|);
  # sed converts the pipes to commas, and cut keeps the second comma-delimited field, the GI number
  cat "${file}" | egrep -o "\|\S*\|(\S*)\|" | sed 's/|/,/g' | cut -f 2 -d ',' > "${file}".out
done

To unpack this briefly, the "cat" command writes each line of the file to stdout, which is piped to "egrep". With the -o flag, egrep prints only the part of each line matching the regular expression "\|\S*\|(\S*)\|". This expression matches a pipe, followed by any number of non-whitespace characters, another pipe, more non-whitespace characters, and a final pipe. The pipes are escaped with a backslash ("\") because an unescaped pipe means alternation in a regular expression.

The next step is to pipe to "sed", which replaces the pipes in the incoming stream with commas. This output is sent to "cut", which splits on the commas and keeps the second field (the text before the leading comma is empty, so field two is the GI number).

There are probably shorter ways to do this (cutting directly on the pipes, for example), but even so, working this out at the command line saved me a lot of time over coding it in python.

Sequencing depth for accurate SNP calling: bcbio case study

Intuitively, it is easy to grasp that the greater the sequencing depth (i.e., the greater the number of reads covering any given position in the genome), the more accurate the calling of SNPs and indels (insertions/deletions). But how much difference does this actually make in the real world? Is 20X coverage dramatically worse than 30X (generally considered a standard depth for whole genomes)?

To find out, I conducted an experiment with the bcbio pipeline, a bioinformatics pipeline solution built in python that allows for automated and reproducible analyses on high-performance computing clusters. One feature of bcbio is that it can perform validation surveys using high-confidence consensus calls from reference samples such as the NA12878 Coriell sample (from the Genome in a Bottle project).
For NA12878, researchers collated consensus SNP and indel calls from a large variety of sequencing technologies and calling methods to produce a very high-confidence callset for training other methods or validating a sequencing workflow. bcbio includes these variant calls and can easily be set up to validate them against a sequenced NA12878 genome.

The sequencing depth experiment

I started with an NA12878 genome sequenced to 30X depth. To compare shallower depths, I subsampled the data to generate 20X, 10X, and so on (please note: the data were not subsampled randomly; rather, "slices" were taken from the 30X dataset). For a 60X coverage data point, I combined data from two sequencing runs on both flow cells of a HiSeq4000 instrument.

The results after validation are shown in Figure 1 (depth of coverage is along the x-axis):

Fig 1. SNP discovery as a function of increasing coverage of the GIAB validation sample.


The figure shows that, as expected, as sequencing depth decreases the error rate increases and SNP discovery declines. It also supports the commonly held view that 30X is sufficient coverage for whole genomes, since going to 60X gives an almost unnoticeable improvement in the percentage of variants found along with a slight increase in error. Performance really degrades at 12X and below, with poor discovery rates and unacceptably high error rates.

I will be submitting a short manuscript to biorxiv.org soon describing this work in more detail.

Why you should think twice before rarefying your 16S data

According to a recent paper, the common practice of rarefying (randomly subsampling your 16S reads down to a common library size) is statistically incorrect and should be abandoned in favor of the more sophisticated 'mixture-model' approaches used by RNA-Seq analysis software.

The authors give a basic thought experiment to help clarify their reasoning; I have copied the figure below. It shows what happens when one rarefies library "B" from 1000 reads down to 100 reads in order to match library "A", a standard procedure even in the QIIME workflow. By dividing the library size by 10, the variance of the proportion estimates in B goes up roughly ten-fold, and the statistical power to distinguish the abundances of OTU1 and OTU2 between samples A and B is lost.

Fig 1. An example of the effect of rarefying on statistical power.

As the authors point out in the paper:

“Although rarefying does equalize variances, it does so only by inflating the variances in all samples to the largest (worst) value among them at the cost of discriminating power (increased uncertainty). “
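
A small R simulation (my own toy sketch, not from the paper) shows the same variance cost: estimating a 10% OTU proportion from 100 reads instead of 1000 reads inflates the variance of the estimate roughly ten-fold.

# toy simulation: proportion estimates for an OTU at 10% true abundance
set.seed(42)
full     <- rbinom(10000, size = 1000, prob = 0.1) / 1000  # estimates from 1000-read libraries
rarefied <- rbinom(10000, size = 100,  prob = 0.1) / 100   # estimates after rarefying to 100 reads
var(rarefied) / var(full)                                  # approximately 10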

The solution to the rarefying problem is to borrow methods from the RNA-Seq world to determine differential taxonomic abundances, just as differential transcript abundances are calculated by the same methods. The R package phyloseq provides a container for taxonomic datasets exported from QIIME (in the BIOM format), along with a convenient set of extensions that allow the data to be analyzed with RNA-Seq methods such as edgeR and DESeq2.
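
As a rough sketch of what that workflow looks like (assuming a phyloseq object ps built from a QIIME BIOM file and a sample metadata variable called Condition, both of which are placeholders for illustration):

library(phyloseq)
library(DESeq2)

# 'ps' is assumed to be a phyloseq object imported from QIIME output,
# e.g. with import_biom(), carrying sample metadata including 'Condition'
dds <- phyloseq_to_deseq2(ps, ~ Condition)   # convert OTU counts + metadata to a DESeqDataSet
dds <- DESeq(dds, sfType = "poscounts")      # "poscounts" size factors cope with the many zeros in OTU tables
res <- results(dds)                          # differential abundance of each OTU between conditions
head(res[order(res$padj), ])                 # most significant OTUs first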