Archive for the ‘information science’ category: Page 125

Aug 14, 2022

Novel AI algorithm may be the key for a breakthrough epilepsy treatment

Posted by in categories: biotech/medical, information science, robotics/AI

A group of scientists from University College London has developed an artificial intelligence (AI) algorithm that can detect focal cortical dysplasia (FCD), a subtle brain anomaly that causes drug-resistant epileptic seizures. This is a promising step toward detecting and curing epilepsy in its early stages.

To develop the algorithm, the Multicentre Epilepsy Lesion Detection (MELD) project gathered MRI scans from more than 1,000 patients at 22 international epilepsy centers, together with reports of where the anomalies lie in each case of FCD, a major cause of drug-resistant epilepsy.
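At its core this is supervised classification: MRI-derived features for each brain region, paired with expert labels marking lesional tissue. The sketch below shows that general pattern with synthetic data and illustrative feature names; it is not the MELD project's actual pipeline.

```python
# Illustrative supervised lesion classifier: per-region MRI-derived features
# (synthetic here) paired with expert labels marking lesional tissue.
# This is a toy stand-in, not the MELD project's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = rng.normal(size=(5000, 12))   # e.g. cortical thickness, intensity, curvature
labels = (features[:, 0] + features[:, 3] > 1.5).astype(int)   # 1 = lesional region

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```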

Aug 14, 2022

Amplitudes and the Riemann Zeta Function

Posted by in categories: computing, information science, mathematics, quantum physics

Circa 2021. This gets very close to a master algorithm for mathematics, and it helps with quantum computing too.



Aug 14, 2022

Self-Taught AI Shows Similarities to How the Brain Works

Posted by in categories: biotech/medical, information science, internet, robotics/AI

Around the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For instance, in 2017, Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T, then overlaid a leopard skin pattern across the photo, generating a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but considered the modified image a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training algorithm will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability — all without external labels or supervision.
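A minimal sketch of that idea, using a toy corpus and a simple bigram counter instead of a deep network: the training labels are just the next words in the text itself, so no human annotation is involved.

```python
# Minimal self-supervised "next word" setup: the labels are taken from the
# text itself, so no human annotation is needed (illustrative toy example).
from collections import Counter, defaultdict

corpus = (
    "the model learns structure from raw text "
    "the model predicts the next word from context"
).split()

# Build (context, next-word) training pairs directly from the data.
pairs = [(corpus[i], corpus[i + 1]) for i in range(len(corpus) - 1)]

# A tiny bigram "model": count how often each word follows a given word.
counts = defaultdict(Counter)
for context, nxt in pairs:
    counts[context][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during training."""
    return counts[word].most_common(1)[0][0] if counts[word] else "<unk>"

print(predict_next("the"))   # e.g. "model"
```

Large language models replace the counts with a neural network trained on vastly more text, but the supervision signal is the same: predict the held-out part of the input.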

Aug 14, 2022

Researchers create algorithm to help predict cancer risk associated with tumor variants

Posted by in categories: biotech/medical, chemistry, information science, robotics/AI

Vanderbilt researchers have developed an active machine learning approach to predict the effects of tumor variants of unknown significance, or VUS, on sensitivity to chemotherapy. VUS, mutated bits of DNA with unknown impacts on cancer risk, are constantly being identified. The growing number of rare VUS makes it imperative for scientists to analyze them and determine the kind of cancer risk they impart.

Traditional prediction methods display limited power and accuracy for rare VUS. Even machine learning, an artificial intelligence tool that leverages data to “learn” and boost performance, falls short when classifying some VUS. Recent work by the lab of Walter Chazin, Chancellor’s Chair in Medicine and professor of biochemistry and chemistry, led by co-first authors and postdoctoral fellows Alexandra Blee and Bian Li, featured an active machine learning technique.

Active machine learning relies on training an algorithm with existing data, as with machine learning, and feeding it new information between rounds of training. Chazin and his lab identified VUS for which predictions were least certain, performed biochemical experiments on those VUS and incorporated the resulting data into subsequent rounds of algorithm training. This allowed the model to continuously improve its VUS classification.
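A minimal sketch of that loop, with synthetic data standing in for VUS features and revealed held-out labels standing in for the biochemical experiments; none of the names or numbers here come from the Vanderbilt study.

```python
# Toy active-learning loop: train, find the least-certain unlabeled points,
# "run experiments" on them (here: reveal held-out labels), then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # stand-in for VUS features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in for sensitivity labels

labeled = list(range(20))                        # small initial training set
unlabeled = list(range(20, 500))

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query points whose predicted probability is closest to 0.5.
    probs = model.predict_proba(X[unlabeled])[:, 1]
    uncertainty = np.abs(probs - 0.5)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[:10]]
    # In the real workflow, this is where biochemical experiments would be run.
    labeled += query
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: labeled={len(labeled)} accuracy={model.score(X, y):.3f}")
```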

Aug 14, 2022

A step towards quantum gravity

Posted by in categories: information science, particle physics, quantum physics

In Einstein’s theory of general relativity, gravity arises when a massive object distorts the fabric of spacetime the way a ball sinks into a piece of stretched cloth. Solving Einstein’s equations by using quantities that apply across all space and time coordinates could enable physicists to eventually find their “white whale”: a quantum theory of gravity.

In a new article in The European Physical Journal H, Donald Salisbury from Austin College in Sherman, Texas, explains how Peter Bergmann and Arthur Komar first proposed a way to get one step closer to this goal by using Hamilton-Jacobi techniques. These arose in the study of particle motion as a way of obtaining the complete set of solutions from a single function of particle position and constants of the motion.

Three of the four fundamental forces (strong, weak, and electromagnetic) hold in both the ordinary world of our everyday experience, modeled by classical physics, and the spooky world of quantum physics. Problems arise, though, when trying to apply the fourth force, gravity, to the quantum world. In the 1960s and 1970s, Peter Bergmann of Syracuse University, New York, and his associates recognized that in order to someday reconcile Einstein's theory of general relativity with the quantum world, they needed to find quantities for determining events in space and time that applied across all frames of reference. They succeeded in doing this by using Hamilton-Jacobi techniques.
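For orientation, the classical Hamilton-Jacobi equation that these techniques build on packs an entire family of particle trajectories into a single function S of the coordinates and time; this is the textbook form, not the specific general-relativistic construction in Salisbury's article.

```latex
% Classical Hamilton-Jacobi equation: Hamilton's principal function S(q, t)
% generates the full set of solutions, with the constants of the motion
% entering as constants of integration of this single first-order PDE.
\[
  H\!\left(q_1,\dots,q_n,\;
           \frac{\partial S}{\partial q_1},\dots,\frac{\partial S}{\partial q_n},\;
           t\right)
  + \frac{\partial S}{\partial t} = 0
\]
```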

Aug 13, 2022

Quantum computer made of 6 super-sized atoms could imitate the brain

Posted by in categories: information science, particle physics, quantum physics, robotics/AI

Simulations of a quantum computer made of six rubidium atoms suggest it could run a simple brain-inspired algorithm that can learn to remember and make simple decisions.

Aug 9, 2022

1.1 quintillion operations per second: US has world’s fastest supercomputer

Posted by in categories: information science, supercomputing

The US has retaken the top spot in the world supercomputer rankings with the exascale Frontier system at Oak Ridge National Laboratory (ORNL) in Tennessee.

The Frontier system’s score of 1.102 exaflop/s makes it “the most powerful supercomputer to ever exist” and “the first true exascale machine,” the Top 500 project said Monday in the announcement of its latest rankings. Exaflop/s (or exaflops) is short for 1 quintillion floating-point operations per second.

Frontier was more than twice as fast as a Japanese system that placed second in the rankings, which are based on the LINPACK benchmark that measures the “performance of a dedicated system for solving a dense system of linear equations.”
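As a rough sense of scale, the LU-based solve that LINPACK measures costs about 2n^3/3 + 2n^2 floating-point operations for an n-by-n system. The snippet below is illustrative arithmetic only; the matrix size is hypothetical, not Frontier's actual benchmark run.

```python
# Back-of-the-envelope time for a dense solve at Frontier's reported rate.
# Standard LINPACK operation count: about (2/3) n^3 + 2 n^2 flops.
n = 10_000_000                  # hypothetical matrix dimension, for illustration
flops = (2 / 3) * n**3 + 2 * n**2
rate = 1.102e18                 # 1.102 exaflop/s = 1.102 * 10^18 flop/s
print(f"{flops:.2e} flops -> about {flops / rate:.0f} seconds at 1.102 exaflop/s")
```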

Aug 8, 2022

Automated techniques could make it easier to develop AI

Posted by in categories: information science, robotics/AI

Machine-learning researchers make many decisions when designing new models. They decide how many layers to include in neural networks and what weights to give inputs at each node. The result of all this human decision-making is that complex models end up being “designed by intuition” rather than systematically, says Frank Hutter, head of the machine-learning lab at the University of Freiburg in Germany.

A growing field called automated machine learning, or autoML, aims to eliminate the guesswork. The idea is to have algorithms take over the decisions that researchers currently have to make when designing models. Ultimately, these techniques could make machine learning more accessible.
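A minimal flavor of the idea, assuming a scikit-learn setup: rather than a researcher hand-picking how many hidden layers a network gets, a search routine tries candidate architectures and keeps the one that validates best. Real autoML systems are far more sophisticated; this is only a toy illustration.

```python
# Toy autoML flavor: let a random search, not human intuition, choose how many
# hidden layers an MLP gets and how wide they are (illustrative example only).
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

search_space = {
    "hidden_layer_sizes": [(32,), (64,), (32, 32), (64, 32), (64, 64, 32)],
    "alpha": [1e-5, 1e-4, 1e-3, 1e-2],           # L2 regularization strength
    "learning_rate_init": [1e-3, 3e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    search_space,
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best architecture:", search.best_params_["hidden_layer_sizes"])
```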

Aug 7, 2022

A New Method for Making Graphene has an Awesome Application: A Space Elevator!

Posted by in categories: information science, space travel

The material of the future could make an imaginative concept of the past real.


Brief history of the space elevator


Aug 5, 2022

Saving the world one algorithm at a time | The Age of A.I.

Posted by in categories: education, existential risks, food, information science, robotics/AI

Many say that human beings have ravaged our planet, and in response some are endeavoring to save it with the help of artificial intelligence. Famine, animal extinction, and war may all be preventable one day with the help of technology.

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways artificial intelligence, machine learning, and neural networks will change the world.
