10 Common Mistakes in Fragment Screening

There is an excellent review paper from Dan Erlanson and Ben Davis that came out last year detailing some of the more common mistakes and artifacts that can arise in fragment-based screening campaigns (so-called “unknown knowns”).  I encourage readers to go read the original paper.  I have summarized some of the key points below:

1) Not checking compound identity to make sure what you think you purchased is what you actually have.

2) Low-level impurities in compound stocks can cause problems at the high concentrations used in fragment screens.

3) DMSO, commonly used to store fragments in plates, can act as a mild oxidant and is also hygroscopic.

4) Pan-assay interference compounds (PAINS) are common in many libraries and frequently give false positives against many targets.

5) Reactive functional groups in fragment hits can cause covalent binding or aggregation of the target.

6) Many fragments can show apparent binding or inhibition while acting as aggregators rather than reversible binders.  Including a small percentage of detergent in the assay buffer can help prevent these fragments from giving false-positive signals.

7) STD-NMR is very sensitive to weak binders, but because it relies on a relatively fast dissociation rate for the ligand, tight binders (Kd < 1 µM) can be missed by this method.

8) X-ray crystallographic structures are often taken as the “truth” when they are in fact a model fit to an electron density map.  Fragments can often be modeled into the density in incorrect orientations or in place of solvent molecules.

9) SPR methods are very sensitive to fragment binding, but can be confounded by non-specific binding of fragments to the target or the chip surface, as well as compound-dependent aggregation.

10) Fragment hits should be validated by more than one method before embarking on optimization.  They should also be screened for being aggregators by DLS or other methods.

Using R to automate ROC analysis

ROC analysis is used in many types of research.  I use it to examine the ability of molecular docking to enrich a list of poses for experimental hits.  This is a pretty standard way to compare the effectiveness of docking methodologies and make adjustments in computational parameters.

An example ROC plot on a randomly generated dataset

Normally this kind of plot would take at least an hour to make by hand in Excel, so I wrote a function in R that generates a publication-quality ROC plot on the fly.  This is handy if you want to play around with the hit threshold of the data (i.e., the binding affinity) or experiment with different scoring functions.

According to Wikipedia:

a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot which illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of true positives out of the total actual positives (TPR = true positive rate) vs. the fraction of false positives out of the total actual negatives (FPR = false positive rate), at various threshold settings.
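To make those two quantities concrete, here is a minimal R sketch (the scores and labels are invented for illustration) that computes the TPR and FPR at a single threshold; sweeping the threshold across all scores is what traces out the ROC curve:

# Toy data: higher score = more likely to be a hit (values are made up)
score <- c(9.1, 8.4, 7.7, 6.9, 6.2, 5.5, 4.8, 4.1)
hit   <- c(1,   1,   0,   1,   0,   0,   1,   0)

threshold <- 6.0
predicted <- score >= threshold   # call everything at or above the threshold a "positive"

tpr <- sum(predicted & hit == 1) / sum(hit == 1)   # true positives / actual positives
fpr <- sum(predicted & hit == 0) / sum(hit == 0)   # false positives / actual negatives
cat(sprintf("TPR = %.2f, FPR = %.2f\n", tpr, fpr))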

There are already several ROC plot calculators on the web.  But I wanted to write my own using the R statistical language owing to its ability to produce very high-quality, clean graphics.  You can find the code here:

https://github.com/mchimenti/data-science-coursera/blob/master/roc_plot_gen.R

The function takes a simple two-column input in CSV format.  One column is “score,” the other is “hit” (1 or 0).  In the context of docking analysis, “score” is the docking score and “hit” indicates whether or not the molecule was an experimental binder.  The area under the curve is calculated using the “trapz” function from the “pracma” (practical mathematics) package.  A stripped-down sketch of the same idea is shown below.
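This is not the original script (see the GitHub link above for that), just a minimal sketch of the approach.  It assumes a CSV with “score” and “hit” columns and that a higher score means a better, more hit-like molecule; flip the sort order if the opposite convention applies.  The file name in the usage example is hypothetical.

library(pracma)   # provides trapz() for trapezoidal integration

# Minimal sketch: read a two-column CSV ("score", "hit"), build an ROC curve,
# and report the trapezoidal area under the curve.
roc_plot <- function(csv_file) {
  dat <- read.csv(csv_file)
  dat <- dat[order(dat$score, decreasing = TRUE), ]   # rank best-scoring molecules first

  tpr <- cumsum(dat$hit == 1) / sum(dat$hit == 1)     # fraction of actives recovered
  fpr <- cumsum(dat$hit == 0) / sum(dat$hit == 0)     # fraction of inactives recovered

  auc <- trapz(fpr, tpr)                              # area under the curve

  plot(fpr, tpr, type = "l", lwd = 2,
       xlab = "False positive rate", ylab = "True positive rate",
       main = sprintf("ROC curve (AUC = %.2f)", auc))
  abline(0, 1, lty = 2)                               # diagonal = random performance
  invisible(auc)
}

# Example usage (hypothetical file name):
# roc_plot("docking_results.csv")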