Archive for the ‘information science’ category: Page 11

Oct 5, 2024

Numerical simulation of deformable droplets in three-dimensional, complex-shaped microchannels

Posted by in categories: computing, information science, physics

The physics of drop motion in microchannels is fundamental to the design of drop-based microfluidic applications. In this paper, we develop a boundary-integral method to simulate the motion of drops in microchannels with flat walls and a fixed, finite depth but otherwise arbitrary geometry. To reduce computational time, we use a moving frame that follows the droplet throughout its motion. We provide a full description of the method, including our channel-meshing algorithm, which combines Monte Carlo techniques with Delaunay triangulation, and compare our results to infinite-depth simulations. For regular geometries of uniform cross section, the infinite-depth limit is approached slowly with increasing depth, though convergence is much faster when velocities are scaled by their maximum rather than average values. For non-regular channel geometries, features such as different branch heights can affect drop partitioning, breaking the symmetric behavior usually observed in regular geometries; non-regular geometries also complicate comparisons between deep and infinite-depth channels. To probe inertial effects on drop motion, the full Navier–Stokes equations are first solved for the entire channel, and the tabulated solution is then used as a boundary condition at the moving-frame surface for the Stokes flow inside the moving frame. For moderate Reynolds numbers up to Re = 5, inertial effects on the undisturbed flow are small even for the more complex geometries, suggesting that inertial contributions are negligible in this range. This work provides an important tool for the design and analysis of three-dimensional droplet-based microfluidic devices.
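The channel-meshing step described in the abstract, a combination of Monte Carlo point placement and Delaunay triangulation, can be sketched in a few lines of Python. The T-junction footprint below is a hypothetical stand-in, not the authors' geometry or algorithm:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def in_channel(x, y):
    # Hypothetical T-junction footprint: a horizontal main channel
    # plus a vertical side branch (illustrative dimensions only).
    main = (0.0 <= x) & (x <= 3.0) & (0.4 <= y) & (y <= 0.6)
    branch = (1.4 <= x) & (x <= 1.6) & (0.6 <= y) & (y <= 1.0)
    return main | branch

# Monte Carlo sampling: draw candidate points in the bounding box and
# keep only those that fall inside the channel footprint.
cand = rng.uniform([0.0, 0.0], [3.0, 1.0], size=(20000, 2))
pts = cand[in_channel(cand[:, 0], cand[:, 1])]

# Delaunay-triangulate the accepted points; because the region is
# non-convex, discard triangles whose centroid lies outside it.
tri = Delaunay(pts)
cent = pts[tri.simplices].mean(axis=1)
keep = tri.simplices[in_channel(cent[:, 0], cent[:, 1])]
```

A production mesher would also enforce boundary conformity and element-quality bounds; the rejection step here only removes the spurious triangles that Delaunay creates across the concave corner.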

Oct 4, 2024

AI can reduce a 100,000-equation quantum problem to just 4 equations

Posted by in categories: information science, quantum physics, robotics/AI

The Hubbard model is a widely studied model in condensed matter theory and a formidable quantum problem. A team of physicists used deep learning to condense this problem, which previously required 100,000 equations, into just four equations without sacrificing accuracy. The study, titled “Deep Learning the Functional Renormalization Group,” was published on September 21 in Physical Review Letters.

Dominique Di Sante is the lead author of the study. Since 2021, he has held the position of Assistant Professor (tenure track) in the Department of Physics and Astronomy at the University of Bologna. He is also a Visiting Professor at the Center for Computational Quantum Physics (CCQ) at the Flatiron Institute in New York, under a Marie Skłodowska-Curie Actions (MSCA) grant that encourages, among other things, the mobility of researchers.

He and colleagues at the Flatiron Institute and other international researchers conducted the study, which has the potential to revolutionize the way scientists study systems containing many interacting electrons. In addition, if they can adapt the method to other problems, the approach could help design materials with desirable properties, such as superconductivity, or contribute to clean energy production.

Oct 3, 2024

How Big Data is Saving Earth from Asteroids: A Cosmic Shield

Posted by in categories: information science, robotics/AI, space

As technology advances, Big Data will play an increasingly important role in protecting Earth from asteroids. By harnessing data analytics, AI, and machine learning, scientists can monitor and predict asteroid movements with greater accuracy than ever before, enabling early warning systems and, potentially, the deflection of asteroids before they can cause harm.

Oct 3, 2024

AI Innovations in Diagnosing Myopic Maculopathy

Posted by in categories: biotech/medical, information science, robotics/AI

What methods can be developed to help identify symptoms of myopia and its more severe form, myopic maculopathy? This is the question a recent study published in JAMA Ophthalmology hopes to address: an international team of researchers investigated how artificial intelligence (AI) algorithms can be used to identify early signs of myopic maculopathy, which, left untreated, can cause irreversible damage to a person’s eyes. The study could help researchers develop more effective options for identifying this worldwide disease, as an estimated 50 percent of the global population will suffer from myopia by 2050.

“AI is ushering in a revolution that leverages global knowledge to improve diagnostic accuracy, especially in the earliest stages of disease,” said Dr. Yalin Wang, a professor in the School of Computing and Augmented Intelligence at Arizona State University and a co-author on the study. “These advancements will reduce medical costs and improve the quality of life for entire societies.”

For the study, the researchers used a novel AI algorithm known as NN-MobileNet to scan retinal images and classify the severity of myopic maculopathy, which is currently graded on a five-level scale. The team then used deep neural networks to estimate the spherical equivalent, the measure eye doctors use to prescribe glasses and contact lenses. Combining the two methods enabled the researchers to create a new AI algorithm capable of identifying early signs of myopic maculopathy.

Oct 3, 2024

Tracking neurons across days with high-density probes

Posted by in categories: information science, neuroscience

https://rdcu.be/dVhCN

Imagine trying to understand the brain’s activity over time—an incredibly complex and dynamic process that happens at different speeds.


To solve this problem, we developed a pipeline called UnitMatch, which operates after spike sorting. Before applying UnitMatch, the user spike-sorts each recording independently using their preferred algorithm. UnitMatch then applies a naive Bayes classifier to the units’ average waveforms in each recording and tracks units across recordings, assigning a probability to each match.
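The matching idea can be caricatured in a few lines. This is a toy illustration with made-up waveforms, not the actual UnitMatch classifier: compare each unit's average waveform across two recordings and squash the waveform distance into a match probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 4 units, 82-sample average waveforms, recorded on
# two days; day 2 is day 1 plus noise, i.e. the same underlying units.
n_units, n_samples = 4, 82
day1 = rng.normal(size=(n_units, n_samples))
day2 = day1 + 0.1 * rng.normal(size=(n_units, n_samples))

def match_probability(w1, w2, scale=5.0):
    # Normalized Euclidean distance between average waveforms, mapped
    # to (0, 1) with a logistic; `scale` is an arbitrary toy constant.
    d = np.linalg.norm(w1 - w2) / np.linalg.norm(w1)
    return 1.0 / (1.0 + np.exp(scale * (d - 0.5)))

probs = np.array([[match_probability(a, b) for b in day2] for a in day1])
best = probs.argmax(axis=1)  # most probable partner for each day-1 unit
```

Because day 2 here is a noisy copy of day 1, each unit's most probable partner is itself; real waveforms drift across days, which is why UnitMatch learns its probabilities from data rather than using a fixed squashing.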


Oct 1, 2024

New insights into exotic nuclei creation using Langevin equation model

Posted by in categories: biotech/medical, information science

The improved accuracy of MNT reaction predictions provided by this model could facilitate the production of isotopes that are difficult to generate by other methods. These isotopes are valuable for scientific research and for applications such as diagnostics and treatments. According to Prof. Zhang, the goal is to keep the model comprehensive yet practical for experimental use.

This development represents a step forward, contributing to the understanding of exotic nuclei production through MNT reactions. Further refinement of the model may enhance its utility in guiding future research and improving rare-isotope production processes.

This research was conducted in collaboration with Beijing Normal University, Beijing Academy of Science and Technology, and the National Laboratory of Heavy Ion Accelerator of Lanzhou.
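Langevin-equation models of the kind named in the title describe dynamics with a deterministic force, a friction term, and a stochastic force, typically integrated with the Euler–Maruyama scheme. A generic one-dimensional sketch is below; the harmonic potential and all constants are illustrative placeholders, not the parameters of the authors' MNT model:

```python
import numpy as np

# Generic 1-D Langevin dynamics in a harmonic potential V(x) = k*x^2/2:
#   dx = v dt
#   dv = (-dV/dx - gamma*v) dt + sqrt(2*gamma*T) dW
# integrated with Euler–Maruyama. Unit mass; all constants illustrative.
rng = np.random.default_rng(2)
k, gamma, T, dt, n_steps = 1.0, 0.5, 0.1, 1e-3, 200_000

x, v = 1.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    v += (-k * x - gamma * v) * dt + np.sqrt(2.0 * gamma * T * dt) * rng.normal()
    x += v * dt
    xs[i] = x

# After equilibration, equipartition gives <x^2> ~ T/k.
var_late = xs[n_steps // 2:].var()
```

The equilibrium variance check (here T/k = 0.1) is a standard sanity test for a Langevin integrator; the fluctuation strength sqrt(2*gamma*T) is tied to the friction by the fluctuation–dissipation relation.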

Sep 29, 2024

CRISPR CREME: An AI Treat to Enable Virtual Genomic Experiments

Posted by in categories: biotech/medical, genetics, information science, robotics/AI

Koo and his team tested CREME on another AI-powered DNN genome analysis tool called Enformer. They wanted to know how Enformer’s algorithm makes predictions about the genome. Koo says questions like that are central to his work.

“We have these big, powerful models,” Koo said. “They’re quite compelling at taking DNA sequences and predicting gene expression. But we don’t really have any good ways of trying to understand what these models are learning. Presumably, they’re making accurate predictions because they’ve learned a lot of the rules about gene regulation, but we don’t actually know what their predictions are based off of.”

With CREME, Koo’s team uncovered a series of genetic rules that Enformer learned while analyzing the genome. That insight may one day prove invaluable for drug discovery. The investigators stated, “CREME provides a powerful toolkit for translating the predictions of genomic DNNs into mechanistic insights of gene regulation … Applying CREME to Enformer, a state-of-the-art DNN, we identify cis-regulatory elements that enhance or silence gene expression and characterize their complex interactions.” Koo added, “Understanding the rules of gene regulation gives you more options for tuning gene expression levels in precise and predictable ways.”
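The kind of in-silico experiment CREME performs can be caricatured with a toy stand-in for the trained network: occlude or shuffle a candidate regulatory window, rerun the model, and read off the change in predicted expression. The `predict` function below is a hypothetical motif counter, not Enformer:

```python
import random

rng = random.Random(3)
MOTIF = "TATA"

def predict(seq):
    # Toy stand-in for a genomic DNN: "expression" is just a motif count.
    return sum(seq[i:i + 4] == MOTIF for i in range(len(seq) - 3))

def shuffle_window(seq, start, end):
    # In-silico perturbation: destroy local sequence structure while
    # preserving nucleotide content.
    window = list(seq[start:end])
    rng.shuffle(window)
    return seq[:start] + "".join(window) + seq[end:]

seq = "ACGT" * 20 + "TATA" * 5 + "ACGT" * 20   # motif cluster at 80-99
baseline = predict(seq)

# Occlusion-style scan: shuffle each 20-nt window, rerun the "model",
# and record the drop in output; large drops flag windows the model
# relies on, i.e. candidate cis-regulatory elements.
effects = [baseline - predict(shuffle_window(seq, s, s + 20))
           for s in range(0, len(seq) - 19, 20)]
important = max(range(len(effects)), key=effects.__getitem__) * 20
```

The scan correctly localizes the motif cluster because only the window covering it loses "expression" when shuffled; CREME applies the same perturb-and-repredict logic to a real DNN at genomic scale.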

Sep 29, 2024

Can AI feel distress? Inside a new framework to assess sentience

Posted by in categories: information science, robotics/AI

From artificial-intelligence algorithms to zebrafish, this book takes a precautionary approach to assessing how sentient such entities are.

Sep 27, 2024

Mathematicians Surprised By Hidden Fibonacci Numbers

Posted by in categories: information science, mathematics, physics

I believe symmetry underlies everything, even mathematics; what explains it is the Fibonacci equation, because it seems to show the grand design of everything, much as physics does.


Recent explorations of unique geometric worlds reveal perplexing patterns, including the Fibonacci sequence and the golden ratio.
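The link between the Fibonacci sequence and the golden ratio mentioned above is easy to verify numerically: ratios of consecutive Fibonacci numbers converge to φ = (1 + √5)/2 ≈ 1.618.

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio.
phi = (1 + 5 ** 0.5) / 2

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

ratios = [fib(n + 1) / fib(n) for n in range(2, 20)]
print(ratios[-1], phi)  # the ratio rapidly approaches phi
```

The convergence is geometric: the error shrinks roughly by a factor of φ² per step, which is why even F₂₀/F₁₉ agrees with φ to about eight decimal places.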

Sep 26, 2024

Shrinking augmented reality displays into eyeglasses to expand their use

Posted by in categories: augmented reality, biotech/medical, information science, robotics/AI

Augmented reality (AR) takes digital images and superimposes them onto real-world views. But AR is more than a new way to play video games; it could transform surgery and self-driving cars. To make the technology easier to integrate into common personal devices, researchers report in ACS Photonics how to combine two optical technologies into a single, high-resolution AR display. In an eyeglasses prototype, the researchers enhanced image quality with a computer algorithm that removed distortions.
