Images derived from crystal structures help a neural network running on Bridges-2 predict whether a given crystal can be created in the real world
To create new electronic and other tools, materials scientists need new types of crystals with specific electrical and physical properties. But while they have been able to predict whether a given new material has the properties needed, they’ve been limited in their ability to predict whether it can be created in the real world. A team led by the University of Illinois Chicago has taken a completely new artificial intelligence (AI) tack, encoding crystal structures as abstract 3D images that powerful neural-network AI programs, honed for image recognition, can “understand” in a way far beyond human experts’ abilities. Their AI achieved high accuracy in predicting synthesizability for a group of test materials.
Above: Overall framework of the synthesizability-likelihood-prediction AI. a) The researchers obtained hypothetical, never-synthesized crystal structures using CSPD algorithms, alongside synthesized or naturally occurring structures from the Crystallography Open Database (COD). b) They converted the crystal structures (top) and properties data into digitized, abstract 3D images (bottom). c) The AI analyzed the 3D images without human supervision, first learning and then successfully predicting crystal synthesizability. From Davariashtiyani, A., Kadkhodaie, Z. & Kadkhodaei, S. Predicting synthesizability of crystalline materials via deep learning. Commun Mater 2, 115 (2021), reproduced under Creative Commons.
Why It’s Important
Figuring out how materials act the way they do—and predicting how new materials will act—may not exactly be glamorous. But materials science lies at the center of the miraculous developments of the Information Age. It also provides us with new structural materials, whether they’re making a military plane harder to see on radar, giving a high-stress component of a reactor greater strength or allowing an electrical circuit to operate with essentially zero resistance.
Predicting what properties new materials, particularly crystals, will have is critical for developing tomorrow’s smartphones, energy storage, aircraft and many other useful devices. But such predictions, even with advanced computing, have been limited. Even on the best supercomputers, predicting the properties of a crystal from first principles is prohibitively expensive. Instead, scientists have had to take theoretical shortcuts, which pose problems of their own. They can often predict how changing a given atom in a crystal would change its properties, but that limits them to small variations on already-known materials. Other approaches, using a type of AI called machine learning, can predict whether a given new crystal has the right properties and whether it is stable enough to exist. But often, actually creating that crystal in the real world proved virtually impossible.
“The reason we want to know if we can or cannot make a certain crystal form of a material is the properties of materials highly depend on their structure. So if you target a certain property—let’s say very high electric conductivity or superconductivity or a very hard material … [you] should be able to synthesize a certain crystal form [with that property]. It should be thermodynamically stable, but that’s not enough. It should also be accessible through [today’s] processing techniques.”—Sara Kadkhodaei, University of Illinois Chicago
That’s why Sara Kadkhodaei of the University of Illinois Chicago assigned her very first graduate student, Ali Davariashtiyani, to take a completely new machine-learning approach to the problem. Her hope was that Davariashtiyani could create a computing solution that drew on the strengths of AI to predict both the required properties and whether a material could be created in the lab. To do this, they turned to the National Science Foundation (NSF)-funded Bridges-2 advanced research computer at PSC.
University of Illinois Chicago
How PSC Helped
As a new junior faculty member, Kadkhodaei was not exactly rich in physical research resources. But she did have access to a kind of dream team. She herself was a materials scientist. Davariashtiyani had studied computer science as a master’s-degree candidate. And Kadkhodaei’s sister, Zahra Kadkhodaie of New York University (NYU), specialized in computational neuroscience—particularly in neural network modeling. A kind of machine-learning AI, neural networks have made huge leaps in image processing and recognition by copying theories of how nerve cells in the brain communicate and process information.
The teammates wondered whether they could reduce a crystal’s chemical structure, properties and above all ability to be synthesized to a kind of abstract, 3D image that encoded all that information. No human could ever create or understand such an image directly, without computers translating it. But once such an image existed, a neural-network AI might be able to “understand” it, much as neural networks can now successfully pick out images of cats from the Internet.
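The paper’s actual encoding has more to it than this article covers, but the core idea of turning a crystal into a machine-readable 3D image can be sketched simply: place each atom of the unit cell into a voxel grid and store an atomic property at its position. The function and example below are a hypothetical illustration of that idea, not the authors’ code; real encodings typically smear atoms with Gaussians and use multiple property channels.

```python
import numpy as np

def voxelize_cell(frac_coords, atomic_numbers, grid=32):
    """Map atoms (given in fractional unit-cell coordinates) onto a 3D
    voxel grid, storing each atom's atomic number at its nearest voxel.
    A simplified sketch of encoding a crystal structure as a 3D image."""
    image = np.zeros((grid, grid, grid))
    for (x, y, z), number in zip(frac_coords, atomic_numbers):
        # Wrap with % grid so coordinates on the cell boundary stay in range
        i, j, k = (int(c * grid) % grid for c in (x, y, z))
        image[i, j, k] = number
    return image

# Toy example (illustrative only): two atoms, Na (11) and Cl (17),
# at the corner and center of a unit cell
coords = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
numbers = [11, 17]
img = voxelize_cell(coords, numbers, grid=16)  # a 16x16x16 "image"
```

An image like this is meaningless to a human eye, but to a convolutional neural network it is exactly the kind of spatial data it was built to learn from.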
“At the very beginning, when my student started this project, he only had access to [Bridges-2]. And it was key to his success, to the progress of this project; otherwise, we couldn’t really do this. He did use Bridges-2 especially at the early stages of the project, to develop his code, run his code, train the machine learning, test the machine-learning models. All this has happened on Bridges-2.”—Sara Kadkhodaei, University of Illinois Chicago
The only problem was that their AI would be data- and computation-hungry, particularly in the graphics processing units (GPUs) that have powered the AI revolution since 2012. That’s where Bridges-2, which was designed to excel at tasks combining “Big Data” and AI, came in. By writing a relatively simple proposal for a startup allocation on Bridges-2, Davariashtiyani was able to get started, eventually generating enough results to write a full research allocation that helped give him the computing time he needed to complete the work.
Davariashtiyani started with a known group of crystals, each labeled as synthesizable or not, “training” the AI to recognize which were and weren’t buildable in the real world. Then he tested the AI on another known group of crystals, withholding the labels, to see how accurately it predicted synthesizability. The results were encouraging. Without human intervention, the AI was able to predict whether or not those crystals could be synthesized in the lab. His AI posted “area under the ROC curve” values (AUC, a measure of how much a prediction improves on random guessing) above 0.9, not far from a perfect score of 1.0, with accuracies (few false positives or missed “good” crystals) also above 90 percent. The scientists reported their results in the journal Communications Materials in November 2021.
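To make the AUC figure concrete: it equals the probability that a randomly chosen synthesizable crystal receives a higher model score than a randomly chosen non-synthesizable one, so 0.5 is random guessing and 1.0 is perfect ranking. The sketch below computes AUC from that definition on made-up scores (not the paper’s data).

```python
def auc(scores_pos, scores_neg):
    """AUC = probability a random positive outscores a random negative.
    Ties count as half a win. 0.5 = random guessing, 1.0 = perfect."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores: most "synthesizable" crystals rank above the rest
print(auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1]))  # → 0.9375
```

In practice one would use a library routine such as scikit-learn’s `roc_auc_score`, which computes the same quantity from the ROC curve.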
Today, Kadkhodaei has access to more computing resources. Her team is using an expanded supercomputer toolbox, including Stampede2 at the Texas Advanced Computing Center — another resource, along with Bridges-2, available through the XSEDE network of NSF supercomputing centers — for the next phase of their work. They’re also continuing to use Bridges-2. The work will include expanding into predictions of how pressure can influence the synthesizability of crystals, as well as developing AI models that can discover crystalline materials with extreme hardness.