Archive for the ‘information science’ category: Page 4

Dec 11, 2024

Quantum computing’s next step: New algorithm boosts multitasking

Posted by in categories: computing, information science, quantum physics

Quantum computers differ fundamentally from classical ones. Instead of using bits (0s and 1s), they employ “qubits,” which can exist in multiple states simultaneously due to quantum phenomena like superposition and entanglement.

For a quantum computer to simulate dynamic processes or process data, among other essential tasks, it must translate complex input data into “quantum data” that it can understand. This process is known as quantum compilation.

Essentially, quantum compilation “programs” the quantum computer by converting a particular goal into an executable sequence. Just as a GPS app converts your desired destination into a sequence of actionable steps you can follow, quantum compilation translates a high-level goal into a precise sequence of quantum operations that the quantum computer can execute.
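The idea can be made concrete with a toy example. The sketch below "compiles" one named goal (preparing a Bell state) into an executable gate sequence and runs it on a small state-vector simulation; the `compile_goal` function and its rule table are illustrative inventions, not any real compiler's API.

```python
import numpy as np

# Minimal sketch of quantum compilation: a high-level goal ("prepare a
# Bell state") is translated into a concrete, executable gate sequence.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def compile_goal(goal):
    """'Compile' a named goal into a sequence of 2-qubit unitaries."""
    if goal == "bell_state":
        return [np.kron(H, I2), CNOT]   # H on qubit 0, then CNOT
    raise ValueError(f"no compilation rule for goal {goal!r}")

state = np.zeros(4)
state[0] = 1.0                          # start in |00>
for gate in compile_goal("bell_state"):
    state = gate @ state                # execute the compiled sequence

print(np.round(state, 3))               # amplitudes of (|00> + |11>)/sqrt(2)
```

The point of the sketch is the separation of concerns: the caller states *what* it wants, and the compiler decides *which* operations achieve it.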

Dec 11, 2024

Forget Black Holes — White Holes Would Break Your Puny Brain

Posted by in categories: cosmology, evolution, information science, neuroscience, singularity

Black holes have long fascinated scientists, known for their ability to trap anything that crosses their event horizon. But what if there were a counterpart to black holes? Enter the white hole—a theoretical singularity where nothing can enter, but energy and matter are expelled with immense force.

First proposed in the 1970s, white holes are essentially black holes in reverse. They rely on the same equations of general relativity but with time flowing in the opposite direction. While a black hole pulls matter in and lets nothing escape, a white hole would repel matter, releasing high-energy radiation and light.

Despite their intriguing properties, white holes face significant scientific challenges. The laws of thermodynamics, particularly entropy, make it improbable for matter to move backward in time, as white holes would require. Additionally, introducing a singularity into the Universe without a preceding collapse defies current understanding of cosmic evolution.

Dec 11, 2024

Leaner Large Language Models could enable Efficient Local Use on Phones and Laptops

Posted by in categories: computing, engineering, information science, mobile phones

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server—a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs. Their findings are published on the arXiv preprint server.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of the information stored in an LLM’s layers. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.
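One ingredient of this kind of compression, reduced numeric precision, is easy to illustrate. The sketch below applies naive symmetric 8-bit quantization to a random matrix standing in for one layer's weights; it is a generic illustration of precision reduction, not the Princeton–Stanford algorithm itself.

```python
import numpy as np

# Hedged sketch of precision reduction: store a float32 weight matrix
# as int8 plus one scale factor, cutting storage by 4x.
rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512)).astype(np.float32)  # stand-in layer

scale = np.abs(weights).max() / 127.0           # map value range onto int8
q = np.round(weights / scale).astype(np.int8)   # quantized weights
dequant = q.astype(np.float32) * scale          # approximate reconstruction

err = np.abs(weights - dequant).max()           # bounded by scale / 2
print(f"stored bytes: {q.nbytes} vs {weights.nbytes}, max error {err:.4f}")
```

Real LLM compression schemes are more sophisticated (per-channel scales, redundancy pruning, outlier handling), but the storage-versus-fidelity trade-off is the same.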

Dec 11, 2024

Google DeepMind’s Breakthrough “AlphaQubit” Closing in on the Holy Grail of Quantum Computing

Posted by in categories: information science, quantum physics, robotics/AI

The dream of building a practical, fault-tolerant quantum computer has taken a significant step forward.

In a breakthrough study recently published in Nature, researchers from Google DeepMind and Google Quantum AI said they have developed an AI-based decoder, AlphaQubit, which drastically improves the accuracy of quantum error correction—a critical challenge in quantum computing.

“Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers,” researchers wrote.
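For context on what a decoder does, the sketch below shows the simplest hand-designed baseline: majority-vote decoding of a classical repetition code under random bit flips. Learned decoders like AlphaQubit aim to beat fixed rules of this kind by fitting the actual noise of real hardware from data; the code is a didactic stand-in, not AlphaQubit.

```python
import random

# Baseline "decoder": majority vote on a 5-bit repetition code.
def encode(bit, n=5):
    return [bit] * n

def noisy(codeword, p, rng):
    # Flip each bit independently with probability p.
    return [b ^ (rng.random() < p) for b in codeword]

def majority_decode(codeword):
    return int(sum(codeword) > len(codeword) / 2)

rng = random.Random(42)
trials = 10_000
errors = sum(majority_decode(noisy(encode(1), 0.2, rng)) != 1
             for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}  (physical rate was 0.2)")
```

Even this crude code suppresses the error rate well below the physical rate; the decoding problem for surface codes on real devices is far harder, which is where machine learning enters.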

Dec 9, 2024

A new way to create realistic 3D shapes using generative AI

Posted by in categories: information science, media & arts, robotics/AI, virtual reality

Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.

While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.


Dec 9, 2024

Next-Generation Size Selection for Optimized Long-Read Sequencing Workflow

Posted by in categories: biotech/medical, health, information science

All DNA is prone to fragmentation, whether it is derived from a biological matrix or created during gene synthesis; thus, any DNA sample will contain a range of fragment sizes. To fully exploit the benefits of long-read sequencing, it is necessary to remove these shorter fragments, which might otherwise be sequenced preferentially.

DNA size selection can exclude short fragments, maximizing data yields by ensuring that those fragments with the most informational content are not blocked from accessing detection centers (for example, ZMWs) by shorter DNA fragments.

Next-generation size-selection solutions

Starting with clean, appropriate-length fragments for HiFi reads can accelerate research by reducing the computation and data processing time needed post-sequencing. Ranger Technology from Yourgene Health is a patent-protected process for automating electrophoresis-based DNA analysis and size selection. Its fluorescence machine vision system and image analysis algorithms provide real-time interpretation of the DNA separation process.
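The logic of size selection is simply a length filter. The toy sketch below keeps only fragments above a cutoff so short molecules cannot crowd out long ones; the fragment lengths and the 15 kb threshold are made-up examples, not Ranger's actual settings.

```python
# Illustrative sketch of size selection as a length filter (base pairs).
fragments_bp = [2_400, 18_500, 950, 22_000, 7_300, 31_000, 12_800]
CUTOFF_BP = 15_000   # hypothetical cutoff for long-read workflows

selected = [f for f in fragments_bp if f >= CUTOFF_BP]
yield_before = sum(fragments_bp)
yield_after = sum(selected)

print(f"kept {len(selected)}/{len(fragments_bp)} fragments; "
      f"{yield_after}/{yield_before} bp retained")
```

Note that although fewer molecules survive, most of the informational content (total base pairs in long fragments) is retained, which is exactly the trade-off the excerpt describes.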

Dec 9, 2024

AI Supercharging Crop Breeding to Protect Farmers from Climate

Posted by in categories: climatology, genetics, information science, robotics/AI

Avalo, a crop development company based in North Carolina, is using machine learning models to accelerate the creation of new and resilient crop varieties.

The traditional way to select for favorable traits in crops is to identify individual plants that exhibit the trait – such as drought resistance – and use those plants to pollinate others, before planting those seeds in fields to see how they perform. But that process requires growing a plant through its entire life cycle to see the result, which can take many years.

Avalo uses an algorithm to identify the genetic basis of complex traits like drought tolerance or pest resistance in hundreds of crop varieties. Plants are cross-pollinated in the conventional way, but the algorithm can predict the performance of a seed without needing to grow it—speeding up the process by as much as 70%, according to Avalo chief technology officer Mariano Alvarez.
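The underlying idea, genomic prediction, can be sketched in a few lines: learn a mapping from genotype to trait on plants that were actually grown and measured, then score a never-grown seed from its genotype alone. Everything below (marker counts, effect sizes, the linear model) is synthetic and illustrative; Avalo's actual models are not public in this excerpt.

```python
import numpy as np

# Toy genomic prediction: linear marker effects fit by least squares.
rng = np.random.default_rng(1)
n_plants, n_markers = 200, 50
genotypes = rng.integers(0, 2, size=(n_plants, n_markers)).astype(float)
true_effects = rng.normal(size=n_markers)           # hidden ground truth
trait = genotypes @ true_effects + rng.normal(scale=0.1, size=n_plants)

# Fit marker effects on the "field-measured" plants.
est_effects, *_ = np.linalg.lstsq(genotypes, trait, rcond=None)

# Predict the trait of a new, never-grown seed from its genotype.
new_seed = rng.integers(0, 2, size=n_markers).astype(float)
print(f"predicted trait value: {new_seed @ est_effects:.2f}")
```

The speed-up comes from replacing a full growth cycle with this kind of in-silico scoring step.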

Dec 8, 2024

Engineers develop device that merges sensing and computing functions for reconfigurable computing platform

Posted by in categories: information science, robotics/AI

In recent years, engineers have been trying to create hardware systems that better support the high computational demands of machine learning algorithms. These include systems that can perform multiple functions, acting as sensors, memories and computer processors all at once.

Researchers at Peking University recently developed a new reconfigurable neuromorphic computing platform that integrates sensing and computing functions in a single device. This system, outlined in a paper published in Nature Electronics, comprises an array of multiple phototransistors with one memristor (MP1R).

“The inspiration for this research stemmed from the limitations of traditional vision computing systems based on the CMOS von Neumann architecture,” Yuchao Yang, senior author of the paper, told Tech Xplore.

Dec 7, 2024

A single algorithm can help robots make good decisions in real time

Posted by in categories: entertainment, information science, robotics/AI

In 2018, Google DeepMind’s AlphaZero program taught itself the games of chess, shogi, and Go using machine learning and a special algorithm to determine the best moves to win a game within a defined grid. Now, a team of Caltech researchers has developed an analogous algorithm for autonomous robots—a planning and decision-making control system that helps freely moving robots determine the best movements to make as they navigate the real world.

“Our algorithm actually strategizes and then explores all the possible and important motions and chooses the best one through dynamic simulation, like playing many simulated games involving moving robots,” says Soon-Jo Chung, Caltech’s Bren Professor of Control and Dynamical Systems and a senior research scientist at JPL, which Caltech manages for NASA. “The breakthrough innovation here is that we have derived a very efficient way of finding that optimal safe motion that typical optimization-based methods would never find.”
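The "explore many candidate motions in simulation, score each, pick the best" loop can be sketched generically. The code below moves a point robot toward a goal while avoiding one circular obstacle by sampling headings and simulating each one step ahead; it is a bare-bones sampling planner for illustration, not Caltech's algorithm, and the scene geometry is invented.

```python
import math
import random

GOAL = (5.0, 5.0)
OBSTACLE, RADIUS = (2.5, 2.5), 1.0   # one circular obstacle on the path

def simulate(pos, heading, step=1.0):
    # Dynamic simulation of one candidate motion (here, a straight step).
    return (pos[0] + step * math.cos(heading),
            pos[1] + step * math.sin(heading))

def score(pos):
    if math.dist(pos, OBSTACLE) < RADIUS:   # collision: reject outright
        return float("-inf")
    return -math.dist(pos, GOAL)            # otherwise, closer is better

rng = random.Random(0)
pos = (0.0, 0.0)
for _ in range(8):                          # plan 8 steps greedily
    candidates = [rng.uniform(0, 2 * math.pi) for _ in range(64)]
    best = max(candidates, key=lambda h: score(simulate(pos, h)))
    pos = simulate(pos, best)
print(f"final distance to goal: {math.dist(pos, GOAL):.2f}")
```

Real planners search over richer motion primitives, full robot dynamics, and longer horizons, but the structure (simulate candidates, score, commit to the best) is the same.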


Dec 7, 2024

AWS, NVIDIA Offer Deep Dive Into Their Partnership to Develop Hybrid Quantum Computing

Posted by in categories: computing, information science, quantum physics

AWS and NVIDIA are teaming up to address one of the biggest challenges in quantum computing: integrating classical computing into the quantum stack, according to an AWS Quantum Technologies blog post. This partnership brings NVIDIA’s open-source CUDA-Q quantum development platform to Amazon Braket, enabling researchers to design, simulate and execute hybrid quantum-classical algorithms more efficiently.

Hybrid computing — where classical and quantum systems work together — is actually a facet of all quantum computing applications. Classical computers handle tasks like algorithm testing and error correction, while quantum computers tackle problems beyond classical reach. As quantum processors improve, the demand for classical computing power grows exponentially, especially for tasks like error mitigation and pre-processing.

The collaboration between AWS and NVIDIA is designed to ease this transition by providing researchers with seamless access to NVIDIA’s CUDA-Q platform directly within Amazon Braket. This integration allows users to test their programs using powerful GPUs, then execute the same programs on quantum hardware without extensive modifications.
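The classical-quantum division of labor described here follows a common pattern: a classical optimizer proposes circuit parameters, the quantum side evaluates a cost, and the loop iterates. The sketch below shows that pattern with a purely classical stand-in for the quantum evaluation (the expectation value of a one-qubit RY rotation is just cos θ); `fake_quantum_expectation` is not the CUDA-Q or Braket API, merely a placeholder for a real backend call.

```python
import math

def fake_quantum_expectation(theta):
    # Stand-in for running a parameterized circuit and measuring <Z>:
    # one qubit rotated by RY(theta) gives <Z> = cos(theta).
    return math.cos(theta)

def classical_optimizer(theta, lr=0.4, steps=50):
    for _ in range(steps):
        # Finite-difference gradient from two "quantum" evaluations --
        # the classical/quantum back-and-forth of a hybrid workflow.
        eps = 1e-3
        grad = (fake_quantum_expectation(theta + eps)
                - fake_quantum_expectation(theta - eps)) / (2 * eps)
        theta -= lr * grad       # gradient descent on the expectation
    return theta

theta = classical_optimizer(0.3)
print(f"theta = {theta:.3f}, cost = {fake_quantum_expectation(theta):.3f}")
```

In a real hybrid stack, the expectation call would dispatch a circuit to a simulator GPU or a quantum processor, which is exactly the hand-off the CUDA-Q/Braket integration streamlines.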

Page 4 of 329