Brainpower inspires energy-efficient AI

The key to energy-efficient AI could lie in the human brain. George Mason University researchers Maryam Parsa (PI) and Giorgio Ascoli (co-PI) are leading a Department of Energy–funded project to make artificial intelligence dramatically more efficient by mimicking the brain’s computing strategies. 

Maryam Parsa. Photo provided. 

“AI today is powerful but extremely energy-hungry,” said Parsa, an assistant professor in George Mason’s Department of Electrical and Computer Engineering who specializes in neuromorphic computing. “We want to understand why the human brain is so efficient and why AI is not—and then bridge that gap.” 

Unlike traditional AI models that rely on continuous signals and massive data centers, the brain uses sparse, event-driven signals called spikes.  

“Any one neuron only fires a spike here and there, but the timing of those spikes carries a lot of meaning,” explained Ascoli, a distinguished professor in George Mason’s Bioengineering Department specializing in neuroscience. “That’s why the brain can do so much with so little energy.” 
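To picture what “sparse, event-driven” means in code, here is a minimal sketch in Python of a leaky integrate-and-fire neuron, a standard textbook abstraction rather than the project’s actual model: the neuron integrates its input silently and emits a spike only when a threshold is crossed, so the output is a handful of timed events instead of a continuous activation value. The threshold and leak values are illustrative.

```python
# Minimal sketch of event-driven spiking with a leaky integrate-and-fire
# neuron -- a standard textbook abstraction, not the project's actual model.
# The neuron is silent until its membrane potential crosses a threshold,
# then it emits a single spike and resets.

def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    potential, spikes = 0.0, []
    for t, current in enumerate(input_current):
        potential = leak * potential + current  # integrate input with decay
        if potential >= threshold:              # threshold crossed: fire
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

# A weak, fluctuating input yields only occasional spikes; the information
# lives in *when* the spikes occur, not in a continuous output value.
print(lif_spikes([0.12, 0.3, 0.05, 0.4, 0.1, 0.5, 0.02, 0.45]))  # -> [5]
```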

To replicate this efficiency holistically, Parsa and Ascoli teamed up with Akhilesh Jaiswal at the University of Wisconsin–Madison and Pedram Khalili at Northwestern University. The team is building a full-stack neuromorphic computing system spanning devices, circuits, architectures, algorithms, and applications. Each principal investigator is contributing unique expertise to the system: 

  • Ascoli contributes neuroscience insights, using Izhikevich models (mathematical frameworks that capture the spiking dynamics of real neurons) to inform system design. 

  • Khalili develops spintronic devices, components that use magnetic materials to exploit an electron’s spin, in addition to its charge, for processing and storing data. Because these devices naturally exhibit neuron-like temporal behavior, they are expected to reduce energy demands. 

  • Jaiswal designs analog circuits to integrate these spintronic devices into functional architectures. 

  • Parsa leads the overall project and heads the work on neuromorphic learning algorithms and application-level integration, exploring diverse learning approaches to optimize speed, precision, resiliency, privacy, and energy savings. 

Giorgio Ascoli. Photo provided.

“Izhikevich models allow neuromorphic systems to mimic the timing, diversity, and variability of real neurons, enabling more brain-like learning and efficiency,” Ascoli said. His extensive experience with these models, including simulating hippocampal networks, provides the biological foundation for the project. 
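Izhikevich’s published model is compact enough to show directly. The sketch below, in Python, simulates one neuron with the standard two-variable equations (dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u), with a reset when v reaches 30 mV); the constant input current and time step are illustrative choices, not values from this project.

```python
# Minimal sketch of a single Izhikevich neuron (Izhikevich, 2003), integrated
# with a simple Euler step. The a, b, c, d values below give the classic
# "regular spiking" behavior; I (input current) and dt are illustrative.

def simulate_izhikevich(I=10.0, T=1000.0, dt=0.5,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return the times (ms) at which the neuron fires a spike."""
    v, u = c, b * c              # membrane potential (mV), recovery variable
    spike_times = []
    for step in range(int(T / dt)):
        # Membrane dynamics: dv/dt = 0.04*v^2 + 5*v + 140 - u + I
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        # Recovery dynamics: du/dt = a * (b*v - u)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record the event, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

# Changing a, b, c, d reproduces different firing patterns (bursting,
# chattering, fast spiking) -- the "diversity" of real neurons Ascoli
# describes.
print(simulate_izhikevich()[:5])  # first few spike times in ms
```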

“Instead of forcing existing devices to mimic brain behavior, we asked: what if we leverage new devices that naturally exhibit these dynamics?” Parsa said. This approach ensures innovations at the hardware level align with biologically inspired algorithms, creating a cohesive system rather than isolated solutions. 

By introducing greater biological complexity and leveraging novel spintronic devices, the team hopes to reduce AI’s energy footprint while improving performance metrics such as accuracy and privacy. Early results are promising; even simple changes to spike distributions improved privacy without sacrificing accuracy. 

The research also explores learning algorithms beyond traditional approaches. “Our goal is to see how different neuron models and learning rules interact and which combinations optimize for speed, precision, or energy savings,” Parsa said. 

Ultimately, the project could transform AI by making it greener and more brain-like. “If we succeed, we’ll have systems that learn faster, consume less power, and operate more like the brain,” said Ascoli. 

“This is not just about making AI more efficient,” said Parsa. “It’s about rethinking computing from the ground up.”