This Perspective explores the potential of organic electrochemical neurons, which are based on organic electrochemical transistors, in the development of adaptable and biointegrable neuromorphic event-based sensing applications.
Human brains outperform computers in many forms of processing and are far more energy efficient. What if we could harness their power in a new form of biological computing?
In multicellular organisms, many biological pathways exhibit a curious structure, involving sets of protein variants that bind or interact with one another in a many-to-many fashion. What functions do these seemingly complicated architectures provide? And can similar architectures be useful in synthetic biology? Here, Dr. Elowitz discusses recent work in his lab showing how many-to-many circuits can function as versatile computational devices, explores the roles these computations play in natural biological contexts, and shows how many-to-many architectures can be used to design synthetic multicellular behaviors.
About Michael Elowitz. Michael Elowitz is a Howard Hughes Medical Institute Investigator and Roscoe Gilkey Dickinson Professor of Biology and Biological Engineering at Caltech. Dr. Elowitz’s laboratory has introduced synthetic biology approaches to build and understand genetic circuits in living cells and tissues. As a graduate student with Stanislas Leibler, Elowitz developed the Repressilator, an artificial genetic clock that generates gene expression oscillations in individual E. coli cells. Since then, his lab has continued to design and build synthetic genetic circuits, bringing a “build to understand” approach to bacteria, yeast, and mammalian cells. He and his group have shown that gene expression is intrinsically stochastic, or ‘noisy’, and revealed how noise functions to enable probabilistic differentiation, time-based regulation, and other functions. Currently, Elowitz’s lab is bringing synthetic approaches to understand and program multicellular functions, including multistability, cell-cell communication, epigenetic memory, and cell fate control, and to provide foundations for using biological circuits as therapeutic devices. His lab also co-develops systems such as “MEMOIR,” which allows cells to record their own lineage histories, as well as tools for RNA export and precise gene expression. Elowitz received his PhD in Physics from Princeton University and did postdoctoral research at Rockefeller University. His honors include the HFSP Nakasone Award, a MacArthur Fellowship, the Presidential Early Career Award, an Allen Distinguished Investigator Award, and election to the American Academy of Arts and Sciences and the National Academy of Sciences.
Fab 29.1 and Fab 29.2 will span roughly 81,000 square meters, with a combined length of 530 meters and a width of 153 meters. Including roof structures for air conditioning and heating, the buildings will reach a height of 36.7 meters, with several underground floors as well. The cross-section plans show multiple above-ground floors with heights ranging from 5.7 to 6.5 meters.
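As a quick consistency check on the quoted figures (simple arithmetic from the stated dimensions, not additional data from the plans):

$$530\ \mathrm{m} \times 153\ \mathrm{m} = 81{,}090\ \mathrm{m}^{2} \approx 81{,}000\ \mathrm{m}^{2}.$$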
Initially, construction of Intel’s Fab 29 was scheduled to begin in the first half of 2023, but delays in subsidy approvals pushed the start to the summer of 2024. It has since emerged that construction of Intel’s Fab 29 modules 1 and 2 near Magdeburg, Germany, has slipped further, to May 2025, due to the pending approval of EU subsidies and the requirement to relocate black soil for reuse at another site.
Intel’s Fab 29 modules 1 and 2 were initially scheduled to start operations in late 2027 and make chips on Intel’s 14A (1.4nm) and 10A (1nm) production nodes. Intel typically ramps up production of new client PC products in the first half of a year and launches them in the second half, and these fabs were intended to produce client PC products set for release in the second half of 2028. Production could therefore still begin if the fabs were ready by mid-2028, but the timeline would be tight. However, some of the latest reports indicate a different schedule, estimating four to five years for construction, with production now expected to start between 2029 and 2030.
Engineers at the University of California, Los Angeles (UCLA) have unveiled a major advancement in optical computing technology that promises to enhance data processing and encryption. The work is published in the journal Laser & Photonics Reviews.
This innovative work, led by Professor Aydogan Ozcan and his team, showcases a reconfigurable diffractive optical network capable of executing high-dimensional permutation operations, offering a significant leap forward in telecommunications and data security applications.
Permutation operations, essential for various applications, including telecommunications and encryption, have traditionally relied on electronic hardware. However, the UCLA team’s advancement uses all-optical diffractive computing to perform these operations in a multiplexed manner, significantly improving efficiency and scalability.
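In essence, a permutation operation simply reorders the elements of a data vector, which can equivalently be written as multiplication by a permutation matrix. The snippet below is a minimal numerical sketch of that operation in NumPy; it is purely illustrative and is not the UCLA team’s optical implementation, which realizes the same mapping passively with structured diffractive surfaces.

```python
import numpy as np

# Example permutation of 4 elements: output position i receives input element perm[i].
perm = np.array([2, 0, 3, 1])

# Equivalent permutation matrix: exactly one 1 in each row and column.
P = np.eye(4)[perm]

x = np.array([10.0, 20.0, 30.0, 40.0])

# Applying the permutation two equivalent ways gives the same reordered vector.
assert np.allclose(P @ x, x[perm])
print(P @ x)  # [30. 10. 40. 20.]
```

Implementing such mappings optically, and in a multiplexed, reconfigurable fashion, is what lets the diffractive network avoid routing the data through electronic hardware.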
Mechanical systems are highly suitable for realizing applications such as quantum information processing, quantum sensing and bosonic quantum simulation. The effective use of these systems for these applications, however, relies on the ability to manipulate them in unique ways, specifically by ‘squeezing’ their states and introducing nonlinear effects in the quantum regime.
A research team at ETH Zurich led by Dr. Matteo Fadel recently introduced a new approach to realize quantum squeezing in a nonlinear mechanical oscillator. This approach, outlined in a paper published in Nature Physics, could have interesting implications for the development of quantum metrology and sensing technologies.
“Initially, our goal was to prepare a mechanical squeezed state, namely a quantum state of motion with reduced quantum fluctuations along one phase-space direction,” Fadel told Phys.org. “Such states are important for quantum sensing and quantum simulation applications. They are one of the gates in the universal gate set for quantum computing with continuous-variable systems—meaning mechanical degrees of freedom, electromagnetic fields, etc., as opposed to qubits that are discrete-variable systems.”
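As a textbook reference point, not a result from the Nature Physics paper: for a harmonic oscillator with quadratures X and P (taking ħ = 1, so the vacuum variance is 1/2 in each quadrature), a pure squeezed state with squeezing parameter r redistributes the zero-point fluctuations as

$$\Delta X^{2} = \tfrac{1}{2}\,e^{-2r}, \qquad \Delta P^{2} = \tfrac{1}{2}\,e^{+2r}, \qquad \Delta X\,\Delta P = \tfrac{1}{2},$$

so fluctuations along X drop below the vacuum level while the uncertainty product still saturates the Heisenberg bound.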
Researchers from Germany, Italy, and the UK have achieved a major advance in the development of materials suitable for on-chip energy harvesting. By combining silicon, germanium, and tin into an alloy, they created a thermoelectric material that promises to convert the waste heat of computer processors back into electricity.
Because all of its elements come from the fourth main group of the periodic table, the new semiconductor alloy can be easily integrated into the CMOS process of chip production. The research findings are published in ACS Applied Energy Materials.
The increasing use of electronic devices in all aspects of our lives is driving up energy consumption. Most of this energy is dissipated into the environment in the form of heat.
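For context, the usefulness of a thermoelectric material for recovering such waste heat is conventionally summarized by the dimensionless figure of merit (the standard textbook definition, not a value reported in this study):

$$ZT = \frac{S^{2}\,\sigma}{\kappa}\,T,$$

where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature; a practical on-chip harvester needs a large S and σ combined with a low κ.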
Researchers have developed a breakthrough method for quantum information transmission using light particles called qudits, which exploit both the spatial mode and the polarization of light to enable faster, more secure data transfer with increased resistance to errors.
This technology could greatly enhance the capabilities of a quantum internet, providing long-distance, secure communication, and leading to the development of powerful quantum computers and unbreakable encryption.
Scientists have made a significant breakthrough in creating a new method for transmitting quantum information using particles of light called qudits. These qudits promise a future quantum internet that is both secure and powerful.
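As a general dimensional argument (illustrative numbers, not figures reported for this work), a qudit with d distinguishable levels carries log₂ d bits per photon, so combining the two polarization states with m spatial modes gives

$$d = 2m, \qquad \text{bits per photon} = \log_{2}(2m).$$

For example, m = 8 spatial modes would yield d = 16 and 4 bits per photon, compared with 1 bit for a plain polarization qubit.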