Archive for the ‘supercomputing’ category: Page 10

Apr 2, 2024

World first supercomputer capable of brain-scale simulation being built at Western Sydney University

Posted by in categories: biological, neuroscience, supercomputing

Note: this announcement dates from 2023.


The world’s first supercomputer capable of simulating networks at the scale of the human brain has been announced by researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University.

DeepSouth uses a neuromorphic system that mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second, rivalling the estimated rate of operations in the human brain.
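For readers curious what "emulating spiking neurons" involves, here is a minimal leaky integrate-and-fire sketch of the kind of update a neuromorphic system performs in hardware rather than software. The network size, parameters, and the crude event count below are arbitrary assumptions for illustration, not DeepSouth's actual configuration.

```python
# Minimal leaky integrate-and-fire (LIF) network sketch -- illustrative only.
# All parameters and sizes are arbitrary; real neuromorphic hardware performs
# the equivalent synaptic updates directly in silicon, event by event.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 1000
dt = 1e-3            # 1 ms time step
tau = 20e-3          # membrane time constant (s)
v_thresh = 1.0       # spike threshold
v_reset = 0.0        # reset potential

# Random sparse synaptic weight matrix (about 5% connectivity).
weights = rng.normal(0.0, 0.1, (n_neurons, n_neurons))
weights *= rng.random((n_neurons, n_neurons)) < 0.05

v = np.zeros(n_neurons)                 # membrane potentials
spikes = np.zeros(n_neurons, bool)      # spikes from the previous step
synaptic_events = 0                     # crude tally of synaptic updates

for step in range(1000):                # simulate 1 second of model time
    external = rng.normal(0.05, 0.02, n_neurons)        # background drive
    synaptic_input = weights @ spikes.astype(float)     # propagate last step's spikes
    synaptic_events += int(spikes.sum()) * n_neurons    # spiking neurons x fan-out (upper bound)
    v += (-v * dt / tau) + external + synaptic_input    # leak plus input
    spikes = v >= v_thresh                              # threshold crossing
    v[spikes] = v_reset                                 # reset spiking neurons

print(f"roughly {synaptic_events:,} synaptic events in 1 s of model time")
```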


Apr 1, 2024

Supercomputer simulations decode the mass puzzle of the first stars

Posted by in category: supercomputing

Ching-Yao Tang and Ke-Jung Chen used the powerful supercomputer at Berkeley National Lab to create the world’s first high-resolution 3D hydrodynamics simulations of turbulent star-forming clouds for the first stars. Their results indicate that supersonic turbulence effectively fragments the star-forming clouds into several clumps, each with dense cores ranging from 22 to 175 solar masses, destined to form first stars with masses of about 8 to 58 solar masses, in good agreement with observations.

Furthermore, when the turbulence is weak or unresolved, the simulations reproduce results similar to those of previous work. This result highlights the importance of turbulence in first-star formation and offers a promising pathway to lower the theoretical mass scale of the first stars. It successfully reconciles the mass discrepancy between simulations and observations, providing a strong theoretical foundation for first-star formation.
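As a rough cross-check of the quoted ranges, the ratio of the resulting stellar masses to their host core masses suggests that about a third of each dense core ends up in the star. The roughly one-third efficiency below is inferred here for illustration only; it is not a figure reported by the authors.

```python
# Rough check of the core-to-star mass conversion implied by the quoted ranges.
# The ~1/3 "efficiency" is an inference for illustration, not a reported result.
core_masses = (22.0, 175.0)   # dense-core mass range, solar masses
star_masses = (8.0, 58.0)     # resulting first-star mass range, solar masses

for core, star in zip(core_masses, star_masses):
    print(f"core {core:6.1f} M_sun -> star {star:5.1f} M_sun "
          f"(fraction {star / core:.2f})")
# Prints fractions of about 0.36 and 0.33, i.e. roughly one third of the
# core mass ends up in the star at both ends of the range.
```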

Apr 1, 2024

Pushing material boundaries for better electronics

Posted by in categories: nanotechnology, robotics/AI, supercomputing

A recently tenured faculty member in MIT’s departments of Mechanical Engineering and Materials Science and Engineering, Kim has made numerous discoveries about the nanostructure of materials and is funneling them directly into the advancement of next-generation electronics.

His research aims to push electronics past the inherent limits of silicon — a material that has reliably powered transistors and most other electronic elements but is reaching a performance limit as more computing power is packed into ever smaller devices.

Today, Kim and his students at MIT are exploring materials, devices, and systems that could take over where silicon leaves off. Kim is applying his insights to design next-generation devices, including low-power, high-performance transistors and memory devices, artificial intelligence chips, ultra-high-definition micro-LED displays, and flexible electronic “skin.” Ultimately, he envisions that such beyond-silicon devices could be built into supercomputers small enough to fit in your pocket.

Apr 1, 2024

Scientists Ignited a Thermonuclear Explosion Inside a Supercomputer

Posted by in categories: space, supercomputing

Computer simulations are giving us new insight into the riotous behavior of cannibal neutron stars.

When a neutron star slurps up material from a close binary companion, the unstable thermonuclear burning of that accumulated material can produce a wild explosion that sends X-radiation bursting across the Universe.

How exactly these powerful eruptions evolve and spread across the surface of a neutron star is something of a mystery. But by trying to replicate the observed X-ray flares using simulations, scientists are learning more about their ins and outs – as well as the ultra-dense neutron stars that produce them.

Mar 31, 2024

Frontiers: The Internet comprises a decentralized global system that serves humanity’s collective effort to generate

Posted by in categories: biotech/medical, education, internet, nanotechnology, Ray Kurzweil, robotics/AI, supercomputing

The Internet comprises a decentralized global system that serves humanity’s collective effort to generate, process, and store data, most of which is handled by the rapidly expanding cloud. A stable, secure, real-time system may allow for interfacing the cloud with the human brain. One promising strategy for enabling such a system, denoted here as a “human brain/cloud interface” (“B/CI”), would be based on technologies referred to here as “neuralnanorobotics.” Future neuralnanorobotics technologies are anticipated to facilitate accurate diagnoses and eventual cures for the ~400 conditions that affect the human brain. Neuralnanorobotics may also enable a B/CI with controlled connectivity between neural activity and external data storage and processing, via the direct monitoring of the brain’s ~86 × 10^9 neurons and ~2 × 10^14 synapses. Subsequent to navigating the human vasculature, three species of neuralnanorobots (endoneurobots, gliabots, and synaptobots) could traverse the blood–brain barrier (BBB), enter the brain parenchyma, ingress into individual human brain cells, and autoposition themselves at the axon initial segments of neurons (endoneurobots), within glial cells (gliabots), and in intimate proximity to synapses (synaptobots). They would then wirelessly transmit up to ~6 × 10^16 bits per second of synaptically processed and encoded human-brain electrical information via auxiliary nanorobotic fiber optics (30 cm^3) with the capacity to handle up to 10^18 bits/sec and provide rapid data transfer to a cloud-based supercomputer for real-time brain-state monitoring and data extraction. A neuralnanorobotically enabled human B/CI might serve as a personalized conduit, allowing persons to obtain direct, instantaneous access to virtually any facet of cumulative human knowledge. Other anticipated applications include myriad opportunities to improve education, intelligence, entertainment, traveling, and other interactive experiences. A specialized application might be the capacity to engage in fully immersive experiential/sensory experiences, including what is referred to here as “transparent shadowing” (TS). Through TS, individuals might experience episodic segments of the lives of other willing participants (locally or remotely) to, hopefully, encourage and inspire improved understanding and tolerance among all members of the human family.
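As a quick sanity check on the scale of those figures, dividing the abstract’s claimed aggregate rate by its quoted synapse count gives the implied per-synapse bandwidth, and comparing the total with the stated fiber-optic capacity shows the claimed headroom. This is simple arithmetic on the numbers quoted above; no additional data is introduced.

```python
# Arithmetic on the figures quoted in the abstract above (no new data).
neurons = 86e9            # ~8.6 x 10^10 neurons
synapses = 2e14           # ~2 x 10^14 synapses
brain_rate = 6e16         # claimed synaptically encoded output, bits/s
fiber_capacity = 1e18     # claimed nanorobotic fiber-optic capacity, bits/s

per_synapse = brain_rate / synapses      # ~300 bits/s per synapse
per_neuron = brain_rate / neurons        # ~7 x 10^5 bits/s per neuron
headroom = fiber_capacity / brain_rate   # ~17x spare capacity

print(f"per-synapse rate : {per_synapse:.0f} bits/s")
print(f"per-neuron rate  : {per_neuron:.2e} bits/s")
print(f"fiber headroom   : {headroom:.1f}x the claimed brain output")
```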

“We’ll have nanobots that… connect our neocortex to a synthetic neocortex in the cloud… Our thinking will be a biological and non-biological hybrid.”

— Ray Kurzweil, TED 2014

Mar 31, 2024

Microsoft and OpenAI Reportedly Building $100 Billion Secret Supercomputer to Train Advanced AI

Posted by in categories: robotics/AI, supercomputing

“Microsoft has demonstrated its ability to build pioneering AI infrastructure used to train and deploy the world’s leading AI models,” a Microsoft spokesperson told the site. “We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability.”

Needless to say, that’s a mammoth investment. As such, it shines an even brighter spotlight on a looming question for the still-nascent AI industry: how’s the whole thing going to pay for itself?

So far, most companies in the space, Microsoft and OpenAI included, have offered significant AI services for free, sometimes with a more advanced paid tier, such as OpenAI’s ChatGPT Plus, as an upsell.

Mar 30, 2024

Microsoft Reportedly Building ‘Stargate’ to Transport OpenAI Into the Future

Posted by in categories: robotics/AI, supercomputing

Microsoft and OpenAI might be concocting a $100 billion supercomputer to accelerate their artificial intelligence models.

Mar 29, 2024

Microsoft and OpenAI reportedly plan to build a $100 billion AI supercomputer called “Stargate”

Posted by in categories: robotics/AI, supercomputing

According to insiders, Microsoft and OpenAI are planning to build a $100 billion supercomputer called “Stargate” to massively accelerate the development of OpenAI’s AI models, The Information reports.

Microsoft and OpenAI executives are forging plans for a data center with a supercomputer made up of millions of specialized server processors to accelerate OpenAI’s AI development, according to three people who took part in confidential talks.

The project, code-named “Stargate,” could cost as much as $100 billion, according to one person who has spoken with OpenAI CEO Sam Altman about it and another who has seen some of Microsoft’s initial cost estimates.

Mar 28, 2024

Cerebras Update: The Wafer Scale Engine 3 Is A Door Opener

Posted by in categories: robotics/AI, supercomputing

Cerebras held an AI Day, and in spite of the concurrently running GTC, there wasn’t an empty seat in the house.

As we have noted, Cerebras Systems is one of the very few startups actually getting serious traction in AI training, at least with a handful of clients. The company just introduced the third generation of its Wafer-Scale Engine, a monster of a chip that can outperform racks of GPUs, along with a partnership with Qualcomm for custom training and go-to-market collaboration with the edge-AI leader. Here are a few takeaways from the AI Day event, with plenty of images from Cerebras that tell the story quite well. We will cover the challenges this bold startup still faces in the conclusions at the end.

As the third generation of wafer-scale engines, the new WSE-3 and the system it runs in, the CS-3, are an engineering marvel. While Cerebras likes to compare it to a single GPU chip, that’s really not the point, which is to simplify scaling. Why cut up a wafer of chips, package each with HBM, put the packages on boards, connect them to CPUs with a fabric, and then tie them all back together with networking chips and cables? That’s a lot of complexity, and it leads to a lot of programming to distribute the workload via various forms of parallelism and then tie everything back together into a supercomputer. Cerebras thinks it has a better idea.
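To make the “lot of programming” point concrete, here is a toy sketch of the bookkeeping that multi-device training imposes: a layer’s weights are sharded by hand across simulated devices and the partial results gathered back, whereas a part large enough to hold the whole layer skips the split-and-gather step entirely. The shapes and device count are arbitrary illustrations, not Cerebras or GPU specifics.

```python
# Toy illustration of tensor-parallel bookkeeping vs. a single large device.
# Shapes and device counts are arbitrary; this is conceptual, not a Cerebras
# or GPU programming model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4096))        # activations: batch x features
w = rng.standard_normal((4096, 8192))      # one large layer's weights

# Multi-device path: shard the weight columns across 4 "devices",
# run each shard separately, then concatenate the partial outputs.
n_devices = 4
shards = np.split(w, n_devices, axis=1)            # manual partitioning
partials = [x @ shard for shard in shards]         # one matmul per device
y_distributed = np.concatenate(partials, axis=1)   # gather results back

# Single-large-device path: the whole layer fits, so there is nothing
# to split, schedule, or gather.
y_single = x @ w

assert np.allclose(y_distributed, y_single)
print("same result; the distributed path just added partition/gather steps")
```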

Mar 27, 2024

Nvidia and Cerebras highlight the crazy acceleration in processing power

Posted by in categories: robotics/AI, space, supercomputing

Coming hot on the heels of two massive announcements last year, Nvidia and Cerebras showed again last week that the pace of computing is still accelerating.

The first CS-2-based Condor Galaxy AI supercomputers went online in late 2023, and already Cerebras is unveiling their successor, the CS-3, based on the newly launched Wafer Scale Engine 3, an update to the WSE-2 built on 5 nm fabrication and boasting a staggering 900,000 AI-optimized cores with sparse compute support. Cerebras is also pairing the CS-3 with Qualcomm AI 100 Ultra processors to speed up inference.
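A brief note on the “sparse compute support” mentioned above: when hardware can skip zero-valued weights, the arithmetic work scales with the number of nonzeros rather than the full matrix size. The sketch below illustrates the idea with arbitrary sizes and sparsity; it is not a model of the WSE-3’s actual mechanism.

```python
# Toy illustration of why hardware support for sparsity matters: with sparse
# weights, a core that can skip zeros does proportionally less work.
# Numbers below are arbitrary, not WSE-3 specifics.
import numpy as np

rng = np.random.default_rng(0)
dense_w = rng.standard_normal((512, 512))
mask = rng.random((512, 512)) < 0.3        # keep ~30% of the weights
sparse_w = dense_w * mask

x = rng.standard_normal(512)

dense_macs = dense_w.size                        # every weight multiplied
sparse_macs = int(np.count_nonzero(sparse_w))    # only nonzeros, if hardware can skip them

y = sparse_w @ x   # the result is identical whether or not the zeros are skipped

print(f"dense multiply-accumulates : {dense_macs}")
print(f"sparse multiply-accumulates: {sparse_macs} "
      f"({sparse_macs / dense_macs:.0%} of the dense work)")
```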

