Stitching Thought Together
PSC Powers Harvard and Allen Institute's 3D Reconstruction of Excitatory Visual Neuron Wiring
March 28, 2016
Why It’s Important:
One of the mysteries of brain function is how we make sense of the jumble of images that confront our eyes. Neuroscientists have discovered that most individual nerve cells in the brain respond to specific elements in the visual environment. For example, one nerve cell may fire in response to vertical lines, another in response to horizontal or slanted lines. Researchers suspected that the mammalian cortex amplifies this signal by having nerve cells that respond to similar elements excite each other. This mutual excitation may help those elements stand out and prime the network for their further processing. But scientists had no anatomical evidence that this actually happens.
“The challenge here was to go from a block of brain tissue to a coherent, three-dimensional digital volume that someone could then visualize on a computer and trace individual nerve-cell axons and dendrites. This involves careful cutting of the tissue block into extremely thin slices, staining them, then transferring these slices into an electron microscope where they are imaged one small area at a time. In this process there are inevitable physical distortions, and, if one simply stacked the images together, nothing would line up properly. Supercomputing allows us to register these images to one another and reconstruct a well-aligned volume.”
—Greg Hood, Pittsburgh Supercomputing Center (PSC)
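As a rough illustration of the registration step Hood describes, the sketch below estimates the offset between two adjacent slice images by phase correlation. It is a minimal, translation-only stand-in for the elastic registration that AlignTK actually performs, and the synthetic slice arrays in the usage example are hypothetical.

```python
# Minimal sketch: estimate the (dy, dx) offset between two adjacent slice images
# by phase correlation. This is a translation-only stand-in for AlignTK's elastic
# registration; the synthetic "slices" below are hypothetical test data.
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Return the integer (dy, dx) shift that, applied to `moving`, best aligns it to `fixed`."""
    cross_power = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12         # keep phase, discard magnitude
    correlation = np.fft.ifft2(cross_power).real        # peak location encodes the shift
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    shape = np.array(fixed.shape)
    shifts = np.array(peak, dtype=float)
    shifts[shifts > shape / 2] -= shape[shifts > shape / 2]   # wrap to signed offsets
    return tuple(int(s) for s in shifts)

# Hypothetical usage: slice_b is slice_a shifted by (7, -12) pixels.
rng = np.random.default_rng(0)
slice_a = rng.random((512, 512))
slice_b = np.roll(slice_a, shift=(7, -12), axis=(0, 1))
print(phase_correlation_shift(slice_a, slice_b))   # (-7, 12): roll slice_b by this to match slice_a
```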
How PSC Helped:
A team led by Wei-Chung Allen Lee of Harvard University and R. Clay Reid of the Allen Institute for Brain Science in Seattle identified brain nerve cells that respond to particular visual elements in living mice. They then took millions of microscope images of ultra-thin (about 40-nanometer-thick) tissue slices surrounding these nerve cells. In a collaboration facilitated by PSC’s Art Wetzel, PSC computational scientist Greg Hood helped them reassemble these images into a three-dimensional volume using PSC’s AlignTK software. Because the slices are fragile, microscopic tears and other artifacts were unavoidable and required manual correction. The researchers therefore moved back and forth between computation and manual “repair” of the images until the aligned volume was of high enough quality to trace the connections between the nerve cells. The team reported in the journal Nature on March 28, 2016, that mammalian excitatory nerve cells that respond to a given visual feature do indeed make more and larger connections with other excitatory nerve cells tuned to similar visual elements.
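The back-and-forth between computation and manual repair can be pictured as a simple quality-control loop. The sketch below is not part of AlignTK; it uses hypothetical function names and an assumed threshold to show how low-quality slice pairs might be flagged for hand inspection after each alignment pass.

```python
# Conceptual sketch of the compute-then-inspect loop (not AlignTK itself): score the
# agreement between adjacent aligned slices and flag poor pairs for manual repair.
# Function names and the 0.6 threshold are hypothetical.
import numpy as np

def slice_agreement(a, b):
    """Normalized cross-correlation of two aligned slices; values near 1.0 mean good agreement."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def flag_bad_pairs(aligned_stack, threshold=0.6):
    """Return indices i where slices i and i+1 agree poorly and likely need hand repair."""
    return [i for i in range(len(aligned_stack) - 1)
            if slice_agreement(aligned_stack[i], aligned_stack[i + 1]) < threshold]
```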
Deeper Dive: A Computational Tour de Force
In their Nature paper, the Harvard and Allen Institute group, with coauthor Hood, present a reconstruction of an excitatory nerve-cell network in the brain’s cortex at subcellular resolution.
The work was a tour de force of two-photon fluorescence microscopy (to identify the nerve cells responding to a given orientation of visual lines), electron microscopy and computation. To connect the electron microscope images into a 3D reconstruction, the scientists first had to merge thousands of individual images for each slice into a single large image of the entire slice. They next aligned the adjacent slices. They could then combine these pairwise alignments over the entire stack to calculate exactly how to correct the distortion present in each slice and place its corrected image back into an aligned stack.
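To make that last step concrete, the sketch below chains pairwise shifts between adjacent slices into a global offset for each slice relative to the first one. It is a deliberately simplified, translation-only analogue of the actual correction, which is elastic rather than rigid; the stack and shift values in the usage comment are hypothetical.

```python
# Simplified, translation-only analogue of the stack-alignment step: chain pairwise
# shifts between adjacent slices into a global offset per slice, then apply them.
# The real correction is elastic (local warping), not a rigid shift per slice.
import numpy as np

def global_offsets(pairwise_shifts):
    """pairwise_shifts[i] aligns slice i+1 to slice i; return each slice's offset vs. slice 0."""
    offsets = [np.zeros(2)]
    for shift in pairwise_shifts:
        offsets.append(offsets[-1] + np.asarray(shift, dtype=float))
    return np.stack(offsets)

def apply_offsets(stack, offsets):
    """Shift every slice by its (rounded) global offset so the whole stack lines up."""
    return [np.roll(s, tuple(int(round(v)) for v in off), axis=(0, 1))
            for s, off in zip(stack, offsets)]

# Hypothetical usage, reusing phase_correlation_shift from the earlier sketch:
# shifts = [phase_correlation_shift(stack[i], stack[i + 1]) for i in range(len(stack) - 1)]
# aligned = apply_offsets(stack, global_offsets(shifts))
```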
The group reconstructed only 0.03 cubic millimeter of the mouse brain, a volume that would fit into a teaspoon about 167,000 times. Even so, imaging it produced about 10 million camera images, amounting to roughly 100 terabytes of raw data, about the computer storage required for nearly 30 million high-resolution, large-format photographs.
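As a quick sanity check on those figures (a back-of-the-envelope sketch; the roughly 5-milliliter teaspoon volume is an assumption):

```python
# Back-of-the-envelope check of the figures above; the ~5 mL teaspoon is an assumption.
teaspoon_mm3 = 5.0 * 1000                 # 5 mL = 5 cm^3 = 5,000 mm^3
reconstructed_mm3 = 0.03
print(teaspoon_mm3 / reconstructed_mm3)   # ~166,667, i.e. "about 167,000 times"

raw_bytes = 100e12                        # roughly 100 terabytes of raw data
n_images = 10_000_000                     # about 10 million camera images
print(raw_bytes / n_images / 1e6)         # ~10 MB of raw data per image, on average
```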
Greg Hood’s work on this project was funded by a National Institutes of Health grant to The National Center for Multiscale Modeling of Biological Systems (MMBioS).