mixOmics: a “Swiss Army knife” for -omics integration

Introduction

Genomics is the study of an organism’s complete set of genetic material, including its DNA sequence, genes, and regulation of gene expression. Other “omics” techniques, such as proteomics and metabolomics, focus on the study of proteins and metabolites, respectively. By analyzing these different types of data together, researchers can generate new insights into the inner workings of an organism and how it responds to its environment.

For example, by combining genomics data with proteomics and metabolomics data, researchers can gain a more complete understanding of an organism’s gene expression, protein production, and metabolic processes, and how these processes work together to maintain health or, when they go awry, give rise to disease. This knowledge can provide valuable insights for a wide range of applications, including drug development, disease diagnosis, and environmental monitoring.

Finding correlations between related datasets means looking for features that vary together across the different data types. This can provide valuable insights into the underlying biological processes and functions of an organism. For example, if features in two datasets show a strong positive correlation, it suggests that they are related in some way and that changes in one dataset may be associated with changes in the other. By identifying these correlations, researchers can better understand the mechanisms behind biological processes and how they are regulated. This can be useful for a variety of applications, such as predicting the effects of potential drugs or identifying potential targets for medical intervention.

I have surveyed the literature for tools to integrate multiple -omics datasets. As is the case for any task in bioinformatics, there are dozens of options. However, after weighing criteria such as ease of installation, documentation quality, a robust user community, user support, and published analyses, I believe the mixOmics package (available for download and installation from Bioconductor) is one of the best tools out there for this type of integration analysis.

The mixOmics approach

The mixOmics package encompasses many different versions of multivariate algorithms for integrating multiple datasets. Multivariate analysis is well-suited to this problem space where there are far more features than samples. By reducing the dimension of the data, the analysis makes it easier for a human analyst to see patterns and interpret correlations. One of the most common types of algorithm in mixOmics for doing this is called “partial least squares.”

Fig 1. An overview of the mixOmics package. The methods can handle single ‘omics, multiple ‘omics on the same samples (N-integration), and the same ‘omics on multiple sets of samples (P-integration) to find correlations in the data. Some examples of the graphic outputs are shown in the bottom two panels of the figure.

The partial least squares (PLS) method is a mathematical technique used to analyze relationships between two or more datasets. It works by identifying the underlying patterns and correlations in the data, and then using this information to construct a set of “composite” variables that capture the most important features of the data (this is analogous to PCA, but differs by maximizing the covariance between the latent variables of the datasets rather than the variance within a single dataset).

These composite (latent) variables can then be used to make predictions or draw conclusions about the relationships between the datasets. For example, if two datasets are known to be related in some way, the PLS method can be used to identify the specific features of each dataset that are most strongly correlated with the other, and then construct composite variables based on these features. Unlike PCA, PLS is a supervised method: it tolerates highly correlated (collinear) features and can be used to predict the dependent variables from the independent variables.
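To make this concrete, here is a minimal sketch of a two-block PLS fit using the nutrimouse example data that ships with mixOmics (gene expression and lipid concentrations measured on the same mice); the number of components is an arbitrary choice for illustration:

```r
library(mixOmics)

# Example data bundled with mixOmics: expression of 120 genes and
# concentrations of 21 lipids measured on the same 40 mice
data(nutrimouse)
X <- nutrimouse$gene
Y <- nutrimouse$lipid

# PLS with two latent components: each component is a pair of composite
# variables (one per dataset) chosen to maximize their covariance
pls.res <- pls(X, Y, ncomp = 2, mode = "regression")

# Project the samples and variables onto the latent variables
plotIndiv(pls.res)
plotVar(pls.res)
```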

mixOmics takes the PLS method a step further with a feature-selection option called “sparse PLS” (sPLS), which uses lasso penalization to remove unnecessary features from the final model, aiding interpretation and reducing computational time. Lasso regression works by adding a regularization term to the ordinary least squares model that penalizes model complexity. This penalty, the “lasso,” forces the coefficients of the less important predictors to zero, effectively eliminating them from the model.

This results in a simpler and more interpretable model that is better able to make accurate predictions. Lasso regression is particularly useful for datasets with a large number of predictors, as it can help to identify the most important predictors and reduce the risk of overfitting the model.
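Continuing the same sketch, switching to sparse PLS just means telling mixOmics how many features to retain per component through keepX and keepY; the lasso penalty shrinks all other loadings to zero. The retained feature counts below are arbitrary, for illustration only:

```r
library(mixOmics)
data(nutrimouse)

# Sparse PLS: keep 25 genes and 10 lipids on each of the two components;
# all other loadings are penalized to exactly zero by the lasso
spls.res <- spls(nutrimouse$gene, nutrimouse$lipid,
                 ncomp = 2, keepX = c(25, 25), keepY = c(10, 10))

# Which features survived the selection on component 1?
selectVar(spls.res, comp = 1)$X$name
selectVar(spls.res, comp = 1)$Y$name
```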

Conclusion

In future posts, I will describe in more detail what can be done with mixOmics and show some results from our own studies that have produced stunningly detailed and intricate correlation networks. If you are interested in this kind of work, I would encourage you to check out mixOmics as a possible avenue for analysis. There are other packages, and many of them are excellent, but the learning curve of mixOmics is quite gentle and it is well supported by a dynamic and active user community. It is also flexible enough to accommodate different experimental scenarios, so you can analyze your data several different ways with the same package and R script.

Perturbation analysis of spatial single cell RNA-seq with ‘augur’

Spatial single cell RNA-seq data are essentially regular single-cell RNA-seq data that have spatial coordinates associated through localization on a special capture slide. I had previously used so-called “perturbation” analysis successfully with 10X single-cell data and I wanted to apply the technique to spatial single cell to understand how a treatment affects the spatially-resolved clusters.

Here, I want to briefly describe the steps I went through to perform Augur perturbation analysis of 10X Visium spatial single-cell RNA-seq data. Augur works as follows:

Augur is an R package to prioritize cell types involved in the response to an experimental perturbation within high-dimensional single-cell data. The intuition underlying Augur is that cells undergoing a profound response to a given experimental stimulus become more separable, in the space of molecular measurements, than cells that remain unaffected by the stimulus. Augur quantifies this separability by asking how readily the experimental sample labels associated with each cell (e.g., treatment vs. control) can be predicted from molecular measurements alone. This is achieved by training a machine-learning model specific to each cell type, to predict the experimental condition from which each individual cell originated. The accuracy of each cell type-specific classifier is evaluated in cross-validation, providing a quantitative basis for cell type prioritization.

I followed both the Seurat 10X Visium vignette and a dataset-integration protocol to combine a treatment sample (a gene knockout, in this case) with a control sample (S1 and S2). Normalization was performed with SCTransform, as recommended for spatial RNA-seq data, prior to integration. PCA, k-nearest neighbors, clustering, and UMAP were calculated as described in the Seurat vignette using default values. Cell types were assigned in collaboration with the experimentalists. A condensed sketch of this workflow is shown below.
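For reference, here is a minimal sketch of that workflow, assuming the two Visium samples have already been read into Seurat objects with Load10X_Spatial(); the object names, number of dimensions, and feature counts are placeholders rather than recommendations:

```r
library(Seurat)

# s1 (control) and s2 (knockout) are placeholder names for the two Visium samples
obj.list <- list(S1 = s1, S2 = s2)

# SCTransform normalization on the Spatial assay, per sample, prior to integration
obj.list <- lapply(obj.list, SCTransform, assay = "Spatial")

# Standard SCT-based integration
features <- SelectIntegrationFeatures(object.list = obj.list, nfeatures = 3000)
obj.list <- PrepSCTIntegration(object.list = obj.list, anchor.features = features)
anchors  <- FindIntegrationAnchors(object.list = obj.list,
                                   normalization.method = "SCT",
                                   anchor.features = features)
s1s2.int <- IntegrateData(anchorset = anchors, normalization.method = "SCT")

# Dimension reduction and clustering, essentially with default settings
s1s2.int <- RunPCA(s1s2.int, verbose = FALSE)
s1s2.int <- FindNeighbors(s1s2.int, dims = 1:30)
s1s2.int <- FindClusters(s1s2.int)
s1s2.int <- RunUMAP(s1s2.int, dims = 1:30)
```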

With the integrated, clustered, and assigned dataset in hand, I was ready to enter the Augur workflow as described in the paper, with some minor tweaks. First, because this is spatial and not regular scRNA-seq, there is no “RNA” default assay to set after integration. I chose to set “SCT” as the assay instead, because this represents the normalized and scaled dataset, which is what you want as input to an ML model.

```{r, celltype_priority}
library(Seurat)
library(Augur)

# Use the integrated object's SCT assay (normalized, scaled data) as input to Augur
DefaultAssay(s1s2.int) <- "SCT"

# Per-cell-type AUC of a random-forest classifier predicting sample of origin;
# fewer trees than the default gave better separation between cell types here
augur <- Augur::calculate_auc(s1s2.int,
                              label_col = "orig.ident",
                              cell_type_col = "cell_type",
                              n_threads = 6,
                              rf_params = list(trees = 15, mtry = 2,
                                               min_n = NULL, importance = "accuracy"),
                              n_subsamples = 25)
```

Above, you can see the actual call to Augur’s calculate_auc() function. I found that by specifying ‘rf_params’ and reducing the number of trees, I got better separation between cell types in the AUC readout. The calculation takes about 20 minutes to run on a 2018 13-inch MacBook Pro.

When the algorithm completes, you can visualize your results. Following the vignette for regular scRNA-seq, you can do this:

library(ggplot2)
library(patchwork)

# UMAP colored by Augur AUC per cell type, next to the cell-type UMAP
p1 <- plot_umap(augur, s1s2.int, mode = "default", palette = "Spectral")
p1 <- p1 + geom_point(size = 0.1) + ggtitle("Augur Perturbation by Type (Red = Most)")
p2 <- DimPlot(s1s2.int, reduction = "umap", group.by = "cell_type") + ggtitle("S1/S2 Integrated Cell Types")
p1 + p2

The resulting plot looks like this:

Augur perturbation analysis by AUC (red is more perturbed; left) and UMAP plot of cell types (right).

This is great and helpful, but it doesn’t take advantage of the spatially resolved nature of the data. To do that, you have to add the Augur results to the integrated Seurat object as metadata:

library(tidyverse)

### Make a dataframe of AUC results (assumes augur$AUC is sorted by decreasing AUC)
auc_tab <- augur$AUC
auc_tab$rank <- c(1:9)   # nine cell types in this dataset, ranked 1 (most perturbed) to 9

### Grab the cells by type and barcode
tib <- s1s2.int$cell_type %>% as_tibble(rownames = "Barcode") %>% rename(cell_type = value)

### Join the AUC information to the barcode on cell_type
tib <- tib %>% left_join(auc_tab, by = "cell_type")

### Sanity check: barcodes in the metadata table match the Seurat object
assertthat::are_equal(colnames(s1s2.int), tib$Barcode)

### Update the Seurat object with new Augur metadata
s1s2.int$AUC <- round(tib$auc, 3)
s1s2.int$RANK <- tib$rank

Here, I simply pull the AUC results out into a table by cell type. Then I get the cell-type information from the Seurat object and merge the AUC information into it on the cell type. Finally, I set new metadata on the Seurat object to transfer the AUC and rank for each barcode (i.e., cell). Along the way, I do a sanity check to make sure the barcodes match (they do, as expected).

Now you can plot the spatially resolved AUC information:

SpatialDimPlot(s1s2.int, group.by = "AUC", cols = rev(c("#D73027", "#F46D43", "#FDAE61", "#FEE090", "#FFFFBF", "#E0F3F8", "#ABD9E9", "#74ADD1", "#4575B4")))

This takes advantage of the “group.by” argument of the SpatialDimPlot command to color the spots by the AUC metadata. I’m also using a custom ColorBrewer color scheme that shades the cell types from low to high AUC along a rainbow for ease of viewing. The plot looks like this:

Spatially-resolved perturbation (AUC) of cell clusters in the WT (left) and knockout (right) samples.
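As an aside, that hard-coded color vector appears to be the 9-class “RdYlBu” palette from RColorBrewer, so the same plot can be produced a little more compactly:

```r
library(RColorBrewer)

# Reversing RdYlBu so the highest AUC values (most perturbed) are shown in red
SpatialDimPlot(s1s2.int, group.by = "AUC",
               cols = rev(brewer.pal(9, "RdYlBu")))
```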

Calculate % mitochondrial gene expression for mouse scRNA-seq

Seurat is a popular R package for working with single-cell RNA-seq data. As part of the very first steps of filtering and quality-controlling scRNA-seq data in Seurat, you calculate the percentage of mitochondrial gene expression in each cell and filter out cells above a threshold. The Seurat tutorial provides the following code for doing this in human cells:

 
# From the Seurat tutorial (Seurat v2-era syntax): % mitochondrial expression in human cells
mito.genes = grep(pattern = "^MT-", x = rownames(x = pbmc@data), value = TRUE)
percent.mito = Matrix::colSums(pbmc@raw.data[mito.genes, ])/Matrix::colSums(pbmc@raw.data)

pbmc = AddMetaData(object = pbmc, metadata = percent.mito, col.name = "percent.mito")
VlnPlot(object = pbmc, features.plot = c("nGene", "nUMI", "percent.mito"), nCol = 3)

Creating a catalog of mitochondrial genes by searching with ‘grep’ for any gene names that start with “MT-” works just fine for the human reference transcriptome. Unfortunately, it doesn’t work for mouse (at least for mm10, which is the reference assembly I’m working with). There are two workarounds for this, in my opinion.

The easiest is to change the regular expression in the “grep” command from “^MT-” to “^mt-”, since a search through the mm10 reference (version 3.0.0) in the Cell Ranger reference files reveals that the mitochondrial genes are labeled with a lowercase “mt-” prefix, following mouse gene nomenclature.
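In code, the first workaround is just a change to the regular expression in the tutorial snippet above (using ‘sample2’ here as a placeholder for a mouse Seurat object, with the same Seurat v2-era slots as the rest of this post):

```r
# Workaround 1: mm10 mitochondrial genes use a lowercase "mt-" prefix
mito.genes <- grep(pattern = "^mt-", x = rownames(x = sample2@raw.data), value = TRUE)
percent.mito <- Matrix::colSums(sample2@raw.data[mito.genes, ]) / Matrix::colSums(sample2@raw.data)
```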

A second, and perhaps more thorough, approach is to take advantage of the Broad Institute’s “Mouse MitoCarta 2.0” inventory of mitochondrial genes (note that you could do the same procedure for human mitochondrial genes, too).

By creating a list of the top 100-200 genes with the strongest evidence of mitochondrial localization, you are more likely to capture true mitochondrial gene expression. Below is some code that uses the MitoCarta 2.0 table (downloaded as a CSV file) for this procedure. You will need to load “tidyverse” to work with tibbles:

library(tidyverse)
library(Seurat)

# Read the MitoCarta table and keep the first 100 genes
# (assumes the CSV is sorted by MitoCarta score, strongest evidence first)
mouse_mito = as_tibble(read.csv("Mouse.MitoCarta2.0_page2.csv", header = TRUE))
mouse_mito = mouse_mito %>% select(c(Symbol, MCARTA2.0_score)) %>% slice(1:100)

# Keep only the MitoCarta symbols that are actually present in the Seurat object
mito.genes = as.character(mouse_mito$Symbol)
mito.genes = mito.genes[mito.genes %in% rownames(sample2@raw.data)]

percent.mito = Matrix::colSums(sample2@raw.data[mito.genes, ]) / Matrix::colSums(sample2@raw.data)
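From there, percent.mito can be attached to the object and visualized exactly as in the human example above:

```r
# Attach the metric as metadata and inspect it, mirroring the tutorial code above
sample2 <- AddMetaData(object = sample2, metadata = percent.mito, col.name = "percent.mito")
VlnPlot(object = sample2, features.plot = c("nGene", "nUMI", "percent.mito"), nCol = 3)
```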

Gene expression boxplots with ggplot2

The ubiquitous RNA-seq analysis package DESeq2 is a very useful and convenient way to conduct differential expression (DE) analyses. However, it lacks some useful plotting tools. For example, there is no convenience function in the library for making nice-looking boxplots from normalized gene expression data.

There are other packages one can rely on, for example ‘pcaExplorer’, but sometimes I like a simple approach to plot just a couple of genes. So below I show you how to quickly plot your favorite gene using only ggplot2 (there is no “one weird trick” though…):

library(DESeq2)
library(tidyverse)

# Normalized counts for the TRAF1 Ensembl ID from the DESeqDataSet 'ddsTxi'
traf1_counts <- counts(ddsTxi['ENSG00000056558',], normalized = TRUE)

# Combine the counts and the experimental group (from the sample table) into a tibble
m <- list(counts = as.numeric(traf1_counts), group = as.factor(samples$group))
m <- as_tibble(m)

# Boxplot with jittered points, one box per experimental group
q <- ggplot(m, aes(group, counts)) + geom_boxplot() + geom_jitter(width = 0.1)
q <- q + labs(x = "Experimental Group", y = "Normalized Counts", title = "Normalized Expression of TRAF1")
q <- q + scale_x_discrete(labels = c("00hn" = "PMN, 0hrs", "06hn" = "PMN, 6hrs",
                                     "06hy" = "PMN+Hp, 6hrs", "24hn" = "PMN, 24hrs", "24hy" = "PMN+Hp, 24hrs"))
q

As you can see above, first we grab the normalized counts at the row corresponding to the TRAF1 Ensembl ID using the ‘counts’ function, which operates on the ‘ddsTxi’ DESeqDataSet object.

In order to create a dataframe (well, a tibble to be specific) for plotting, we first create a list (‘m’) that combines the counts (as a numeric vector) and metadata group.  These two vectors will form the columns of the tibble for plotting, and we must give them names (i.e., “counts” and “group”) so the tibble conversion doesn’t complain.

The list, m, is then converted to a tibble with ‘as_tibble’ and plotted with ggplot2, using an ‘aes(group, counts)’ mapping plus boxplot and jitter geoms. The rest of the code just modifies the axis labels and tick marks. The final product looks like this:

Boxplot of normalized Traf1 expression in 5 different conditions (3 replicates each).