Archive for the ‘supercomputing’ category: Page 45

Jan 24, 2022

Facebook is building ‘the most powerful AI computer in the world’

Posted in categories: robotics/AI, supercomputing

Meta says it wants to build the most powerful artificial intelligence supercomputer in the world.

The Facebook owner has already designed and built what it calls the AI Research SuperCluster, or RSC, which it says is among the fastest AI supercomputers in the world.

It hopes to top that ranking by mid-2022, it said, in what would be a major step towards increasing its artificial intelligence capabilities.

Jan 24, 2022

Meta says its new AI supercomputer will be the world’s fastest

Posted in categories: augmented reality, robotics/AI, supercomputing

Meta has completed the first phase of a new AI supercomputer. Once the AI Research SuperCluster (RSC) is fully built out later this year, the company believes it will be the fastest AI supercomputer on the planet, capable of “performing at nearly 5 exaflops of mixed precision compute.”

The company says RSC will help researchers develop better AI models that can learn from trillions of examples. Among other things, the models will be able to build better augmented reality tools and “seamlessly analyze text, images and video together,” according to Meta. Much of this work is in service of its vision for the metaverse, in which it says AI-powered apps and products will have a key role.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together,” technical program manager Kevin Lee and software engineer Shubho Sengupta wrote.

Jan 20, 2022

The Human Brain-Scale AI Supercomputer Is Coming

Posted in categories: government, robotics/AI, supercomputing

What’s next? Human brain-scale AI.

Funded by the Slovakian government using funds allocated by the EU, the I4DI consortium is behind an initiative to build a 64 AI exaflop machine (that’s 64 billion, billion AI operations per second) on our platform by the end of 2022. This would make Slovakia and the EU the first in the history of humanity to deliver a human brain-scale AI supercomputer. Meanwhile, almost a dozen other countries are watching the project closely, interested in replicating the supercomputer in their own countries.
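As a quick sanity check of the quoted figure, here is the unit conversion spelled out in plain Python (nothing in it is specific to the I4DI machine):

```python
EXA = 10**18       # 1 exaflop/s = 10^18 operations per second
BILLION = 10**9

machine_ops_per_s = 64 * EXA
assert machine_ops_per_s == 64 * BILLION * BILLION   # "64 billion, billion"
print(f"{machine_ops_per_s:.2e} AI operations per second")   # prints 6.40e+19
```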

There are multiple approaches to achieving human brain-like AI, including machine learning, spiking neural networks such as SpiNNaker, neuromorphic computing, bio AI, explainable AI and general AI. Supporting several of these approaches at once requires universal supercomputers with universal processors if humanity is to deliver human brain-scale AI.

Jan 20, 2022

Quantum Computer With More Than 5,000 Qubits Launched

Posted in categories: quantum physics, supercomputing

Official launch marks a milestone in the development of quantum computing in Europe.

A quantum annealer with more than 5,000 qubits has been put into operation at Forschungszentrum Jülich. The Jülich Supercomputing Centre (JSC) and D-Wave Systems, a leading provider of quantum computing systems, today launched the company’s first cloud-based quantum service outside North America. The new system is located at Jülich and will work closely with the supercomputers at JSC in the future. The annealing quantum computer is part of the Jülich UNified Infrastructure for Quantum computing (JUNIQ), which was established in autumn 2019 to provide researchers in Germany and Europe with access to various quantum systems.

Jan 19, 2022

Light-matter interactions simulated on the world’s fastest supercomputer

Posted in categories: physics, supercomputing

Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, conventional computational approaches for modeling such interactions have limited usefulness and capability. Now, researchers from Japan have developed a technique that overcomes these limitations.

In a study published this month in The International Journal of High Performance Computing Applications, a research team led by the University of Tsukuba describes a highly efficient method for simulating light-matter interactions at the atomic scale.

What makes these interactions so difficult to simulate? One reason is that phenomena associated with the interactions encompass many areas of physics, involving both the propagation of light waves and the dynamics of electrons and ions in matter. Another reason is that such phenomena can cover a wide range of length and time scales.

Jan 17, 2022

New Silicon Carbide Qubits Bring Us One Step Closer to Quantum Networks

Posted in categories: quantum physics, supercomputing

Chromium defects in silicon carbide may provide a new platform for quantum information.

Quantum computers may be able to solve science problems that are impossible for today’s fastest conventional supercomputers. Quantum sensors may be able to measure signals that cannot be measured by today’s most sensitive sensors. Quantum bits (qubits) are the building blocks for these devices. Scientists are investigating several quantum systems for quantum computing and sensing applications. One system, spin qubits, is based on the control of the orientation of an electron’s spin at the sites of defects in the semiconductor materials that make up qubits. Defects can include small amounts of materials that are different from the main material a semiconductor is made of. Researchers recently demonstrated how to make high quality spin qubits based on chromium defects in silicon carbide.

Jan 11, 2022

Supercomputing! The Purest Indicator of Structural Technological and Economic Progress (1H 2022)

Posted in categories: economics, supercomputing

How to check the trends of supercomputing progress, and why this is as close to a pure indicator of technological progress rates as one can find. The recent flattening of this trend reveals a flattening in technological and economic progress as a whole, relative to long-term trendlines.

Top500.org chart: https://top500.org/statistics/perfdevel/
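One way to run this check yourself is to export the aggregate performance series behind the chart above, fit the long-term log-linear trend, and compare recent lists against the extrapolation. A rough sketch follows; the file name top500_perfdevel.csv, its columns (year, sum_rmax_tflops) and the 2013 cutoff for the “long-term” fit are all assumptions of this sketch, not anything Top500.org provides directly:

```python
import numpy as np
import pandas as pd

# Hand-exported series from the Top500 performance-development chart:
# one row per list, with columns "year" and "sum_rmax_tflops" (assumed names).
df = pd.read_csv("top500_perfdevel.csv")
log_perf = np.log10(df["sum_rmax_tflops"])

# Fit the long-term trend on the earlier part of the series only.
early = df["year"] <= 2013
slope, intercept = np.polyfit(df.loc[early, "year"], log_perf[early], 1)

# Extrapolate the trend and see how far recent lists fall below it.
df["trend_tflops"] = 10 ** (slope * df["year"] + intercept)
df["shortfall_x"] = df["trend_tflops"] / df["sum_rmax_tflops"]

print(f"Historical doubling time: {np.log10(2) / slope:.2f} years")
print(df.tail()[["year", "sum_rmax_tflops", "trend_tflops", "shortfall_x"]])
```

If the recent shortfall factors drift well above 1, the flattening described above shows up in the numbers and not just in the chart.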

Continue reading “Supercomputing! The Purest Indicator of Structural Technological and Economic Progress (1H 2022)” »

Jan 11, 2022

Nanowire transistor with integrated memory to enable future supercomputers

Posted in categories: nanotechnology, robotics/AI, supercomputing

For many years, a bottleneck in technological development has been how to get processors and memories to work faster together. Now, researchers at Lund University in Sweden have presented a new solution integrating a memory cell with a processor, which enables much faster calculations, as they happen in the memory circuit itself.

In an article in Nature Electronics, the researchers present a new configuration in which a memory cell is integrated with a vertical transistor selector, all at the nanoscale. This brings improvements in scalability, speed and energy efficiency compared with current mass storage solutions.

The fundamental issue is that anything requiring large amounts of data to be processed, such as AI, demands both speed and greater memory capacity. For this to be successful, the memory and processor need to be as close to each other as possible. In addition, it must be possible to run the calculations in an energy-efficient manner, not least because current technology generates a great deal of heat under heavy loads.

Jan 10, 2022

Newcomer Conduit Leverages Frontera to Understand SARS-CoV-2 ‘Budding’

Posted in categories: biotech/medical, genetics, supercomputing

I am happy to say that my recently published computational COVID-19 research has been featured in a major news article by HPCwire! I led this research as CTO of Conduit. My team utilized one of the world’s top supercomputers (Frontera) to study the mechanisms by which the coronavirus’s M proteins and E proteins facilitate budding, an understudied part of the SARS-CoV-2 life cycle. Our results may provide the foundation for new ways of designing antiviral treatments which interfere with budding. Thank you to Ryan Robinson (Conduit’s CEO) and my computational team: Ankush Singhal, Shafat M., David Hill, Jr., Tamer Elkholy, Kayode Ezike, and Ricky Williams.


Conduit, created by MIT graduate (and current CEO) Ryan Robinson, was founded in 2017. But it may not have been until a few years later, when the pandemic started, that Conduit found its true calling. While Conduit’s commercial division is busy developing a Covid-19 test called nanoSPLASH, its nonprofit arm was granted access to one of the most powerful supercomputers in the world, Frontera, at the Texas Advanced Computing Center (TACC), to model the “budding” process of SARS-CoV-2.

Budding, the researchers explained, is how the virus’s genetic material is encapsulated in a spherical envelope, and the process is key to the virus’s ability to infect. Despite that, they say, it has hitherto been poorly understood:

Continue reading “Newcomer Conduit Leverages Frontera to Understand SARS-CoV-2 ‘Budding’” »

Jan 5, 2022

Bug in backup software results in loss of 77 terabytes of research data at Kyoto University

Posted in categories: cybercrime/malcode, supercomputing

Computer maintenance workers at Kyoto University have announced that, due to an apparent bug in software used to back up research data, researchers using the university’s Hewlett-Packard Cray computing system and its Lustre file system have lost approximately 77 terabytes of data. The team at the university’s Institute for Information Management and Communication posted a Failure Information page detailing what is known so far about the data loss.

The team, part of the university’s Information Department Information Infrastructure Division (Supercomputing), reported that files in the /LARGE0 directory (on the DataDirect ExaScaler storage system) were lost during a system backup procedure. Some in the press have suggested that the problem arose from a faulty script that was supposed to delete only old, unneeded log files. The team noted that it was originally thought that approximately 100TB of files had been lost, but that number has since been pared down to 77TB. They also note that the failure occurred on December 16 between 5:50 pm and 7 pm, and that affected users were immediately notified via email. Approximately 34 million files were lost, belonging to 14 known research groups. The team did not release the names of the research groups or the kind of research they were conducting, but did note that data from another four groups appears to be restorable.
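The reporting does not include the script itself, but the failure mode it describes, a cleanup job meant to remove only old log files deleting research data instead, is a well-known class of bug: if the variable holding the target directory is empty or mis-expanded while the job runs, an unguarded delete silently widens to everything underneath it. Below is a minimal, hypothetical sketch in Python of the guard rails such a script needs; the paths and names are illustrative, not Kyoto’s actual setup:

```python
import time
from pathlib import Path

def delete_old_logs(log_dir: str, max_age_days: int = 30, dry_run: bool = True) -> list[str]:
    """Remove *.log files older than max_age_days under log_dir.

    Guard rails: refuse to run if log_dir is empty or resolves to the
    filesystem root, and only match files that look like logs, so a bad
    variable expansion cannot widen the scope to research data.
    """
    if not log_dir:
        raise ValueError("log_dir is empty; refusing to delete anything")
    root = Path(log_dir).resolve()
    if root.parent == root:  # e.g. "/": targeting a filesystem root is never intended
        raise ValueError(f"{root} looks like a filesystem root; refusing to run")

    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in root.rglob("*.log"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            removed.append(str(path))
            if not dry_run:
                path.unlink()
    return removed

if __name__ == "__main__":
    # Dry run first: list what would be deleted before allowing real deletion.
    for victim in delete_old_logs("/var/log/myjob", max_age_days=30, dry_run=True):
        print("would delete:", victim)
```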
