October 8, 2025

New design is a step toward non-volatile in-memory computing

Written By: Jason Daley
Departments: Electrical & Computer Engineering
Categories: Faculty | Research

A collaborative team of engineers at the University of Wisconsin-Madison has designed and simulated a new kind of memory that not only stores data but can also process it in the same place, an operation called in-memory computing. This "polymorphic" 2D memory could change the way computers store and process massive amounts of data, from AI and big data analytics to portable devices, making them faster, more efficient and more adaptable than ever.

The new memory design was developed by Assistant Professors Akhilesh Jaiswal and Ying Wang and their students in the Department of Electrical and Computer Engineering. The research was published in the journal npj 2D Materials and Applications on September 30, 2025.

To explain the concept, Jaiswal says to imagine a lecture where the instructor pauses after every question, leaves the room, retrieves the answer from a set of data in another room, then returns to continue the lesson. That inefficient process, he says, is essentially how conventional computers work, shuttling data between the memory and a processor every time they need to run a new command.

The UW-Madison team's new memory, however, exhibits polymorphic behavior: the same memory structure can perform multiple functional operations without shuffling the data back and forth between a processor and the memory itself. It can read data simultaneously from multiple memory locations, perform compute and CAM (content-addressable memory) operations, or search the entire memory to find a data pattern of interest. And it can do all that in one spot, without the time-wasting step of rushing between the lecture hall and the other room.
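The polymorphic idea can be illustrated with a small software sketch. This toy model is purely conceptual (the class and method names are hypothetical, not from the paper): one stored array serves ordinary reads, an in-place compute operation, and a CAM-style parallel search, so no data ever leaves the "memory" to be processed.

```python
# Toy model of a polymorphic memory: the same stored array supports
# reads, in-place logic, and CAM-style search. Conceptual only; real
# hardware performs these operations in the array's analog circuitry.

class PolymorphicMemory:
    def __init__(self, rows):
        # Each row is a tuple of bits (0/1) held in the array.
        self.rows = [tuple(r) for r in rows]

    def read(self, addr):
        # Conventional memory operation: return the word at an address.
        return self.rows[addr]

    def compute_and(self, addr_a, addr_b):
        # In-memory compute: bitwise AND of two rows, produced where
        # the data lives rather than in a separate processor.
        return tuple(a & b for a, b in zip(self.rows[addr_a], self.rows[addr_b]))

    def cam_search(self, pattern):
        # CAM operation: compare a pattern against every row "in
        # parallel" and return all matching addresses.
        return [i for i, row in enumerate(self.rows) if row == tuple(pattern)]
```

For example, `PolymorphicMemory([[1, 0, 1, 1], [0, 1, 1, 0], [1, 0, 1, 1]]).cam_search([1, 0, 1, 1])` finds the pattern at addresses 0 and 2 in a single search step, rather than reading each word out one by one.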
At the heart of this breakthrough is a tiny device called a ferroelectric tunnel junction (FTJ), which serves as an atomic-scale switch. A sandwich of materials just a few atoms thick, the FTJ is built from a special two-dimensional material known as molybdenum disulfide (MoS2). By changing the electric dipole in its ferroelectric layer, the device can "flip" its electrical state, just like toggling a light switch on or off. That flip corresponds to the digital 0s and 1s, the binary language computers use to store information. Best of all, unlike conventional memory, which loses data when the power goes out, this is non-volatile memory: the FTJ-based device retains information even after the electricity is turned off.

It's not just the memory element itself that is impressive, but also the circuit techniques surrounding it. Instead of redesigning or altering the memory array, the team engineered peripheral circuits that reconfigure how the memory is used, effectively transforming the same memory stack into a multi-functional compute engine.

With these reconfigurable circuits, the MoS2-based FTJ memory can do far more than traditional storage. It can carry out in-memory computing, performing Boolean logic operations directly where data is stored; it can read batches of data in a single shot, speeding up access and reducing delays; it can perform self-referencing read operations that automatically correct for manufacturing variations; and it can reconfigure itself as a content-addressable memory to search large datasets in parallel at high speed.

As a proof of concept, the team used fabricated MoS2-based FTJ devices and simulated them alongside commercial transistors in GlobalFoundries' advanced 22-nanometer technology. This flexible "many-in-one" memory could reduce the energy cost of data-heavy workloads like training AI models and real-time video analytics, and pave the way for smaller, more efficient electronics.
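Two of these modes, batch reads and in-memory Boolean logic, can be sketched in a toy model. Everything here is an illustrative assumption rather than the paper's actual circuit: the sketch mimics the common in-memory-computing trick of activating two wordlines at once, so that the combined bitline level per column distinguishes zero, one, or two conducting cells, from which a sense amplifier can extract AND and OR directly at the array.

```python
# Toy sketch of two reconfigurable-memory modes: a batch read of
# several rows in one access, and Boolean logic sensed from two rows
# activated together. Illustrative only; not the paper's circuit.

class FTJArray:
    def __init__(self, rows):
        # Non-volatile bit cells: each row is a list of 0/1 values.
        self.rows = [list(r) for r in rows]

    def batch_read(self, addrs):
        # Batch read: return several words in a single access window.
        return [tuple(self.rows[a]) for a in addrs]

    def inmem_logic(self, addr_a, addr_b):
        # Activate two wordlines together: the summed per-column level
        # is 0, 1, or 2 conducting cells, so thresholding the sum gives
        # AND (sum == 2) and OR (sum >= 1) without moving data out.
        sums = [a + b for a, b in zip(self.rows[addr_a], self.rows[addr_b])]
        and_out = tuple(1 if s == 2 else 0 for s in sums)
        or_out = tuple(1 if s >= 1 else 0 for s in sums)
        return and_out, or_out
```

The key design point the sketch captures is that the stored bits never leave the array: the "computation" falls out of how the peripheral circuits sense the combined response of multiple cells.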
“This is an exciting step toward computing systems where memory is not a passive component, but an active and adaptable part of the process,” says Jaiswal.

Other UW-Madison authors on this paper include Md Abdullah-Al Kaiser, Kayode Oluwaseyi Adebunmi, Haolin You, Chen Shao, and Yulu Mao.

This work was supported in part by the U.S. National Science Foundation under award numbers CCF-2319617 and ECCS-2339093.