Are deep neural nets “Software 2.0”?

Image from: https://cdn.edureka.co/blog/wp-content/uploads/2017/05/Deep-Neural-Network-What-is-Deep-Learning-Edureka.png

Recent blog posts by Andrej Karpathy at Medium.com and Pete Warden at PeteWarden.com have caused a paradigm shift in the way I think about neural nets. Rather than treating them as powerful machine learning tools, the authors suggest that we should think of neural nets, and in particular deep convolutional nets, as ‘self-writing programs.’ Hence the term, “Software 2.0.”

It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.   — Andrej Karpathy, Medium.com

I found this to be a dramatic reversal in my thinking about these techniques, and it opens up a deeper, more intuitive understanding. The fact is that combinations of artificial neurons can be used to model any logical operation. Therefore you can conceptualize training a neural net as searching program space for an optimal program that behaves in the way you specify: you provide the inputs and desired outputs, and the training procedure searches for the program (the weights) that maps one to the other.
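To make that idea concrete, here is a minimal sketch (my own illustration, not code from either post) of “specifying behavior” instead of writing instructions: we give a tiny two-layer network the input/output table for XOR, a logical operation no single neuron can express, and let gradient descent search weight space for a set of weights that implements it.

```python
# A minimal "Software 2.0" sketch (illustrative only): instead of writing rules
# for XOR, we specify inputs and desired outputs and let gradient descent search
# weight space for a small network that behaves as specified.
import numpy as np

rng = np.random.default_rng(0)

# The "specification": inputs and the desired outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A random starting point in "program space": a 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: run the current candidate "program" on the inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights toward a program whose outputs
    # match the specification (gradient of the squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]: XOR found by search
```

No human ever writes the “instructions” (the weights); they are found by optimization against the specification.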

This stands in contrast to the “Software 1.0” paradigm where the programmer uses her skill and experience to conceptualize the right combination of specific instructions to produce the desired behavior.   While it seems certain that Software 1.0 and 2.0 will co-exist for a long time, this new way of understanding deep learning is crucial and exciting, in my opinion.

Five (easy) ways to start learning about convolutional neural nets

A schematic of a Convolutional Neural Network (CNN).

Here are five different ways to gain an introduction to the topic of CNNs.  Each approach is geared toward a different style of learning:

1. Visualize them in real time with your own inputs (this is amazing!)
2. Watch a lecture by the “godfather” of neural nets, Geoff Hinton.
3. Take a top-ranked online course on Deep Learning.
4. Learn the math behind them.
5. Code one yourself in Python (see the minimal sketch below).
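If you take option 5, a good first step is the operation that gives these networks their name. Below is a minimal sketch (my own, not code from any of the resources above) of a single 2D convolution, really a cross-correlation as most deep learning libraries implement it, in plain numpy; a full CNN simply stacks many such learned filters with nonlinearities and pooling.

```python
# A minimal sketch of the core CNN operation: slide a small kernel over an
# image and sum the element-wise products at each position (stride 1, no padding).
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image": dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-written vertical-edge filter; in a CNN these weights would be learned.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

print(conv2d(image, edge_kernel))  # nonzero (here -3) only where the kernel straddles the edge
```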

My favorite talks from GLBIO2017 in Chicago


I just got back from the Great Lakes Bioinformatics 2017 conference (GLBIO2017) at the University of Illinois at Chicago (UIC) campus. It was a great meeting, and I really enjoyed the quality of the research presented as well as the atmosphere of the campus and neighborhood.

I was very surprised by just how nice the Chicago “West Loop” neighborhood near Randolph Street and down towards Greektown really is. I had some great meals, including a memorable Italian dinner at Formento’s.

But the purpose of this post is to briefly describe a few of my favorite talks from the meeting.  So here goes, in no particular order:

Kevin White, Tempus Labs:

I was really impressed with Kevin White’s GLBIO2017 talk and demo of his company’s technology (despite the ongoing technical A/V issues!). Tempus Labs is both a clinical sequencing company and an informatics company focused on cancer treatment: it seeks to pull together all of the disparate pieces of patient data that float around in EHR databases and are oftentimes not connected in meaningful ways.

The company sequences patient samples (whole exome and whole genome) and then also hoovers up reams of patient EHR data using Optical Character Recognition (OCR), Natural Language Processing (NLP), and human expert curation to turn the free-form flat text of medical records from different clinics and systems into a form of “tidy data” that can be accessed from an internal database.

Then, clinical and genomic data are combined for each patient in a deep-learning system that looks at treatments and outcomes for other similar patients and presents the clinician with charts that show how patients in similar circumstances fared with varying treatments, given certain facts of genotype and tumor progression, etc…  The system is pitched as “decision support” rather than artificial “decision making.”  That is, a human doctor is still the primary decider of treatment for each patient, but the Tempus deep learning system will provide expert support and suggest probabilities for success at each critical care decision point.

The system also learns and identifies ongoing clinical trials, and will present relevant trials to the clinician so that patients can be informed of possibly beneficial trials that they can join.

Murat Eren, merenlab.org

Murat Eren’s talk on tracking microbial colonization in fecal microbiome transplantation (i.e., “poop pills”) was excellent and very exciting. Although the “n” was small (just 4 donors and 2 recipients), he showed some very interesting results from transferring fecal microbiota (FM) from healthy individuals to those with an inflammatory bowel disease.

Among the interesting results is the fact that he was able to assemble 97 metagenomes from the 4 donor samples. Following the recipients at 4 and 8 weeks post-FM transplant showed that the microbial genomes could be classed into those that transfer and colonize permissively (in both recipients), those that colonize one recipient or the other, and those that fail to colonize either. Taxonomy alone did not explain why some microbes colonized easily while others failed to colonize.

He also showed that, 8 weeks post-FM transplant, the unhealthy recipients had improved symptoms. Moreover, in a PCA of gut community composition that included 151 Human Microbiome Project (HMP) samples from healthy individuals, the recipients moved from being extreme outliers on day 0 into the “healthy” HMP cluster.

He also investigated differential gene-function enrichment between the permissive colonizers and the microbes that never colonized the recipients’ guts, and found that sporulation genes may be a negative factor in the success or failure of transplantation. He proposed that the recent and notable failure of the Seres microbiome drug in clinical trials may be owing to the company killing off the live cultures in favor of more stable spore-forming strains when formulating the drug; his work would suggest that these strains are less successful at colonizing new hosts.

Bo Zhang, 3D Genome Browser

With the ever-increasing volume and complexity of genomic and regulatory data, there is a need for accessible interfaces to it. Bo Zhang’s group at Penn State has built a new type of genome browser that focuses on the 3D structure of the genome, pulling together disparate datatypes including chromatin interaction data, ChIP-Seq, RNA-Seq, etc… You can browse a complete view of the regulatory landscape and 3D architecture of any region of the genome, and you can check the expression of any queried gene across hundreds of tissue/cell types measured by the ENCODE consortium. On the virtual 4C page, they provide multiple methods to link distal cis-regulatory elements with their potential target genes, including virtual 4C, ChIA-PET, and cross-cell-type correlation of proximal and distal DHSs.

The 3D Genome Browser flow chart.

All in all, GLBIO2017 was a very enjoyable and informative meeting where I met a lot of great colleagues and learned much.  I am looking forward to next year!

Search speed comparison: naive exact vs. Boyer-Moore vs. k-mer index

Recently, I’ve been working my way through Ben Langmead’s excellent introduction to “Algorithms for DNA sequencing” on Coursera.com.    The class is a fascinating and well-taught intro to concepts about DNA short read alignment and assembly methods.

As part of the course, we implement or modify Python code for several simple matching algorithms, including the “naive exact” matching (NEM) method, the Boyer-Moore (BM) method, and a k-mer index approach.
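As a reminder of what the baseline does, naive exact matching simply tries the pattern at every offset of the text and compares character by character; the sketch below is my own rough version, not the course’s code.

```python
# Naive exact matching (rough sketch): try every alignment of pattern p against text t.
def naive_exact(p, t):
    occurrences = []
    for i in range(len(t) - len(p) + 1):  # every possible offset of p in t
        match = True
        for j in range(len(p)):
            if t[i + j] != p[j]:          # mismatch: abandon this alignment
                match = False
                break
        if match:
            occurrences.append(i)
    return occurrences

print(naive_exact("AG", "AGCTTAGA"))  # [0, 5]
```

In the worst case this does on the order of |P| × |T| character comparisons per query, which is why the preprocessing tricks of BM and the k-mer index pay off.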

I was curious about speed, so I made a figure showing the computational time that each approach takes.  P and T refer to the length of the short read to be aligned and the genome to align to, respectively.

Figure 1. Comparing the speed of the NEM, BM, and K-mer search methods on long and short patterns (P) and texts (T). The y-axis is on a log-scale.

Note that the y-axis is a log scale in units of microseconds.  Right away, it is obvious that k-mer index methods are orders of magnitude faster than ‘online’ methods like NEM and BM.
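The reason is that an index (“offline”) method does its expensive work, scanning all of T, once up front; each subsequent query is just a dictionary lookup plus verification of a handful of candidate offsets. Here is a rough sketch of the idea (mine, not the course’s implementation):

```python
# A toy k-mer index: preprocess the text once, then answer queries by looking up
# the pattern's first k-mer and verifying each candidate offset.
from collections import defaultdict

def build_kmer_index(t, k):
    index = defaultdict(list)
    for i in range(len(t) - k + 1):
        index[t[i:i + k]].append(i)   # every k-mer -> list of offsets in t
    return index

def query_index(p, t, index, k):
    hits = []
    for i in index.get(p[:k], []):    # only a few candidate offsets to check
        if t[i:i + len(p)] == p:      # verify the rest of the pattern
            hits.append(i)
    return hits

t = "AGCTTAGAAGCT"
index = build_kmer_index(t, k=2)
print(query_index("AGCT", t, index, k=2))  # [0, 8]
```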

Also of interest is the fact that as the pattern gets shorter, the advantage of BM’s preprocessing of the pattern gets smaller. You can see that shortening the pattern from length 30 to length 11 negates any advantage of BM searching.