I recently had a situation where I needed to scrape out all of the GI numbers from a SAM alignment file. My first instinct was to turn to python to accomplish this, but I forced myself to find a command line tool or set of tools to quickly do this task as a one-liner. First, here is the format of the first two lines of the file:
You can see that what I want is the information between the pipes in the field “gi|#########|” . Here is how I solved this with a bash script:
#!/bin/bash
for file in *.sam
do
    echo "${file}"
    cat "${file}" | egrep -o "\|\S*\|(\S*)\|" | sed 's/|/,/g' | cut -f 2 -d ',' > "${file}.out"
done
To unpack this briefly: the “cat” command writes each line of the file to stdout, which is piped to “egrep.” Egrep looks for the regular expression “\|\S*\|(\S*)\|”. This expression matches a pipe, followed by any number of non-whitespace characters, another pipe, more non-whitespace characters, and a final pipe. The pipes are escaped with a backslash (“\”) because an unescaped “|” means alternation in a regular expression.
The next step is to pipe to “sed”, which takes the incoming stream and replaces the pipes with commas. This output is sent to “cut”, which uses the commas as delimiters, and takes the second field.
There are probably shorter ways to do this (cutting directly on the pipes, for example), but even so, tackling this at the command line saved me a lot of time over coding it in python.
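To illustrate the shorter route, cutting directly on the pipes collapses the pipeline to a single cut call. Here is a minimal sketch on a made-up SAM-style line (the filename and reference string are hypothetical, not my actual data):

```shell
# A hypothetical SAM-style line with an embedded "gi|#########|ref|...|" field
printf 'read1\t0\tgi|556503834|ref|NC_000913.3|\t100\t60\n' > demo.sam

# Split the whole line on '|'; the GI number is then simply the second field
cut -d'|' -f2 demo.sam
```

Note that this assumes at most one “gi|…|” string per line, which holds for a typical SAM alignment line.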
In this second part of the “Hands On” series, I want to address how to create the input for the MatLab mutational signature framework from the output of my python code to prepare the SNP data for analysis.
First, you need to create a MatLab .mat file as input to the program. The code expects an input file that contains a set of mutational catalogues plus metadata about the cancer type and the mutational types and subtypes represented in the data.
Fig 1. The required data types within one .mat file to run the framework.
As you can see from Fig 1., you need to provide a 96 by X matrix, where X is the number of samples in your mutational catalogue. You also need an X by 1 cell array specifying sample names, a 96 by 1 cell array specifying the subtypes (ACA, ACC, ACG, etc…) and a 96 by 1 cell array specifying the types (C>A, C>A, C>A, etc…). These must correspond to the same order as specified in the “originalGenomes” matrix or the results won’t make sense.
My code outputs .csv files for all of these needed inputs. For example, when you run my python code on your SNP list, you will get a “subtypes.csv”, “types.csv”, “samples.csv”, and “samples_by_counts.csv” matrix (i.e., originalGenomes) corresponding to the above cell arrays in Fig 1.
Now, the challenge is to get those CSV files into MatLab. You should have downloaded and installed MatLab on your PC. Open MatLab and select “Import Data.”
Fig 2. Select the “Import Data” button.
Browse to one of the output CSV files and select it. It will open in a new window like in Fig 3 below:
Fig 3. The data import window from MatLab.
Be sure to select the correct data type in the “imported data” section. Also, select only row 1 for import (row 2 is empty). Once you’re finished, click Import Selection. It will create a 1×96 cell called “types.” It looks like this:
Fig 4. The new imported cell data “types.”
We’re almost done, but we have to switch the cell to be 96×1 rather than 1×96. To do this, just double-click it and select “transpose” in the variable editor. Now you should repeat this process for the other CSV input files, being sure to select “matrix” as the data type for the “samples_by_counts” file. Pay special attention to make sure the dimensions and data types are correct.
Once you have everything in place, you should be ready to run the mutational analysis framework from the paper. To do this, open the “example2.m” MatLab script included with the download. In the “Define parameters” section, change the file paths to the .mat file you just created:
Fig 5. Define your parameters for the signature analysis.
Here you can see in Fig 5 that I’ve specified 100 iterations per core, a range of possible signatures from 1 to 8, and the relative paths to the input and output files. The authors say that ~1000 iterations are necessary for accuracy, but I’ve found little difference in the predictions between 50 and 500 iterations. I would perform as many iterations as possible, given your constraints on time and computational power.
Note also that you may need to change the part of the code that deals with setting up a parallel computing pool. Since MatLab 2014, I believe, the “matlabpool” processing command has been deprecated. Substituting the “parpool” command seems to work fine for me (Mac OS X 10.10, 8 core Macbook Pro Retina) as follows:
if isempty(gcp('nocreate'))  % no parallel pool is currently open
    parpool('local');        % open the default local pool
end
This post is getting lengthy, so I will stop here and post one more part later about how to compare the signatures you calculate with the COSMIC database using the cosine similarity metric.
A recent collaboration with a clinician here at UI Hospital and Clinics introduced me to the idea of mutational signatures in cancer. Characterizing mutational signatures is made possible by the falling cost and increasing accuracy of whole-genome sequencing. Tumors are sequenced across the entire genome, and the catalog of somatic mutations (i.e., SNPs) is used to compute the mutational signatures of a tumor’s genome.
The idea is that the collection of somatic mutations found in a tumor is the result of a variety of defective DNA-repair or DNA-replication machinery, combined with the action of known or unknown mutagens and environmental exposures. These processes operate over time and leave a “footprint” in the tumor DNA that can be examined. The sum of all of the mutational processes operating within a tumor cell is a distinct mutational “signature” that differs by tumor type.
For example, in lung cancer, the bulk of somatic mutations are C>A transversions resulting from chronic exposure to tobacco smoke. In melanoma, the predominant mutation type is C>T and CC>TT at dipyrimidines, a mutation type associated with UV-light exposure. And in colorectal cancer, defective DNA mismatch repair contributes the majority of the mutations.
Mutational signature of a cancer by the operation of several mutational processes over time. From http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3990474/figure/fig0005/. Used under CC License BY 3.0.
A recent paper in Nature has formalized this notion of mutational signatures in tumors and provided a mathematical framework (written in MatLab) for assessing how many and which signatures are operational within an uncharacterized tumor type (generally between 2 and 6 processes).
In the paper, the authors analyzed almost 5 million somatic cancer SNPs and identified 21 unique signatures of mutational processes through a mathematical process of deconvolution, followed by experimental validation. A curated catalog of the most current signatures based on available sequence data can be found at the COSMIC database.
In part 2 of this post, I’ll go into more detail on the mutational signatures and link to some python code I’ve written to help get flat-file lists of SNPs into the correct form for easy input into the MatLab framework.
Single splice-altering variants can alter mRNA structure and cause disease
The splicing of introns and joining of exons to form mRNA is dependent on complex cellular machinery and conserved sequences within introns to be performed correctly. Single-nucleotide variants in splicing consensus regions, or “scSNVs” (defined as −3 to +8 at the 5’ splice site and −12 to +2 at the 3’ splice site) have the potential to alter the normal pattern of mRNA splicing in deleterious ways. Even those variants that are exonic and synonymous (i.e., they do not alter the amino acid incorporated into a polypeptide) can potentially affect splicing. Altered splicing can have important downstream effects in human disease such as cancer.
Using machine-learning to predict splice-altering variants
In the paper “In silico prediction of splice-altering single nucleotide variants in the human genome,” the researchers took on the problem of predicting which single-nucleotide variants (SNVs) have the potential to be splice-altering by computing “ensemble scores” for potential variants, combining the results of several popular splicing prediction software tools into one probability score.
They did this by applying “random forest” (rf) and “adaptive boosting” (adaboost) classifiers from machine learning, yielding ensemble predictions that are demonstrated to outperform any individual tool in both sensitivity and specificity.
As part of their supplementary material, the authors pre-computed rf and adaboost scores for every SNV in a library of ~16 million such sites collated from the human RefSeq and Ensembl databases. The scores are probabilities (0 to 1) of a particular SNV being splice-altering.
Exploratory analysis of the database
I performed an exploratory data analysis of chromosome 1 (chr1) SNVs from the database that was made available with the paper.
First, I just looked at where the SNVs on chrom 1 were located as classified by Ensembl region:
Fig 1. Number of variants in the scSNV database on chromosome 1 by Ensembl region. Not surprisingly, ‘intronic’, ‘exonic’, and ‘splicing’ are the most common regions for potential splice-altering SNPs.
As can be seen from Fig 1, most of the SNVs are located in introns, exons, and splicing consensus sites according to their Ensembl records.
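The region tally itself needs nothing more than core Unix tools. Here is a sketch, assuming a tab-delimited scSNV file with the Ensembl region annotation in the fourth column (the filename, contents, and column layout are illustrative assumptions, not the database’s actual format):

```shell
# Tiny stand-in for the chr1 scSNV table (columns: chrom, pos, allele, region)
printf 'chr1\t100\tA\tintronic\nchr1\t200\tC\texonic\nchr1\t300\tG\tintronic\n' > scsnv_chr1_demo.tsv

# Count records per Ensembl region, most common region first
cut -f4 scsnv_chr1_demo.tsv | sort | uniq -c | sort -rn
```

The same three-command pattern (cut, sort, uniq -c) produces the counts behind a plot like Fig 1 from the full database.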
Next, I created histograms for the chrom 1 SNVs by their Ensembl classification, looking at rf scores only (keep in mind that the scale on the y-axis for the plots in Fig 2 and 3 differs dramatically between regions). The x-axis is the probability of being splice-altering according to the pre-computed rf score.
Fig 2. Random-forest (rf) score by Ensembl region for the ~15 M scSNVs in the database.
I noticed that within ‘exonic’ regions on chrom 1, the rf scores take on a broad range of values from 0.0 to 1.0, while in other regions like ‘UTR3’, ‘UTR5’, ‘downstream’, etc… the distributions are narrowly skewed towards zero. For the ‘intronic’ region, the majority of sites have a low probability of being splice-altering, while at the ‘splicing’ consensus sites, the vast majority are predicted to be splice-altering. This makes intuitive sense.
I performed the same analysis for the adaboost scores, as shown in Fig 3 (below). You can see that the adaboost scores take on a more binary distribution than the rf scores, with any individual SNV likely to be classified as ~1 or 0 by the adaptive boosting method. Just like the rf scores, SNVs in ‘exonic’ regions are equally likely to be splice-altering as not, while those in ‘splicing’ regions are highly likely to be splice-altering. An SNV in an ‘intronic’ region is ~3X more likely to have no effect on splicing.
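That rough 3:1 split is easy to check with awk once the scores are in a flat file. A sketch under the assumption of a two-column table (region, ada score); the data here are made up purely to illustrate the calculation:

```shell
# Hypothetical (region, ada score) pairs; real scores come from the scSNV database
printf 'intronic\t0.10\nintronic\t0.20\nintronic\t0.95\nintronic\t0.05\n' > ada_demo.tsv

# Count 'intronic' SNVs with low (<0.5) vs high (>=0.5) ada scores
awk -F'\t' '$1=="intronic" { if ($2 < 0.5) low++; else high++ }
            END { print low "/" high }' ada_demo.tsv
```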
Fig 3. Ada score by Ensembl region for the scSNV database.
Finally, I looked at the relationship between the two scoring methods for the SNVs that fall within the Ensembl-characterized ‘splicing’ regions on chrom 1. That scatter plot is shown below in Fig 4.
I suppose I was expecting a tight linear correlation between the two approaches; however, the data show that the rf and adaboost methods differ substantially in their assessment of the collection of SNVs in these regions.
It is obvious from the plot below that there are many SNVs that the rf method considers to have low probability of being splice-altering that are found to have very high (>0.9) probability by the adaboost method.
Fig 4. Scatter plot of rf score vs. ada score for SNVs within ‘splicing’ regions on chrom 1.
This result would appear to suggest that if one is going to classify variants as “splice-altering” from this database, it would be best to consider both predictions or some combination of them rather than relying on either score alone if the goal is not to miss any potentially important sites. Conversely, if the goal is to only consider sites with very high likelihood of being splice-altering, a threshold could be set such that both scores need to be above 0.8, for example.
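A dual-threshold filter like that is a one-line awk condition. Here is a sketch assuming a tab-delimited file with the SNV identifier, rf score, and ada score in columns 1–3 (the filename, layout, and values are hypothetical):

```shell
# Hypothetical (id, rf score, ada score) table
printf 'snv1\t0.95\t0.91\nsnv2\t0.40\t0.97\nsnv3\t0.85\t0.60\n' > scores_demo.tsv

# Keep only SNVs where BOTH scores exceed the 0.8 threshold
awk -F'\t' '$2 > 0.8 && $3 > 0.8' scores_demo.tsv
```

Changing `&&` to `||` gives the permissive version instead: flag a site if either method considers it likely splice-altering.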
If you have raw Illumina HiSeq or MiSeq reads run across several lanes, you may need to concatenate them at the command line before trimming and merging. Raw data from Illumina sequencers generally follows a standard naming convention. The files might look like this:
There’s a lot going on here, so let’s unpack this briefly.
The backquotes (` ) mean that the output of the command is presented as command-line parameters to the enclosing command.
So in this case the ‘find’ command is listing all file names in the current directory at a depth of 1 (no recursion into lower directories).
Next, this is piped (with |) to egrep, which searches the list of filenames for a regular expression matching a string that starts with 16, followed by any number of any characters, then an ‘R1’. Since egrep prints every filename that contains a match, this will find the above files.
These filenames are then passed to ‘cat’ as command-line arguments. The concatenated output is redirected with the greater-than sign (‘>’) to the new fastq file. The ‘&’ tells the shell to run the command in the background.
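Putting those pieces together, the one-liner looks something like the following. This is a reconstruction from the description above, not the exact command from my notes, and the filenames are invented to match the Illumina convention:

```shell
# Two hypothetical lanes of R1 reads for sample 16-001
mkdir -p lanes && cd lanes
printf '@r1\nACGT\n+\nIIII\n' > 16-001_S1_L001_R1_001.fastq
printf '@r2\nTTTT\n+\nIIII\n' > 16-001_S1_L002_R1_001.fastq

# find lists filenames at depth 1 only (no recursion); egrep keeps names
# containing '16' followed by 'R1'; the backquotes hand those names to cat
# as arguments; '>' redirects the concatenation; '&' runs it in the background
cat `find . -maxdepth 1 -type f | egrep "16.*R1"` > combined_R1.fastq &
wait   # block until the background job finishes writing the file
```

One caveat: if the output filename itself matched the egrep pattern, a rerun would feed the combined file back into cat, so pick a name the pattern cannot match.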
I hope this is useful. I spent about an hour tracking this all down, so I wouldn’t have to process dozens of samples by hand.