Archive for the ‘information science’ category: Page 114

Oct 12, 2022

Team uses digital cameras, machine learning to predict neurological disease

Posted by in categories: biotech/medical, health, information science, robotics/AI

In an effort to streamline the process of diagnosing patients with multiple sclerosis and Parkinson’s disease, researchers used digital cameras to capture changes in gait—a symptom of these diseases—and developed a machine-learning algorithm that can differentiate those with MS and PD from people without those neurological conditions.

Their findings are reported in the IEEE Journal of Biomedical and Health Informatics.

The goal of the research was to make the process of diagnosing these diseases more accessible, said Manuel Hernandez, a University of Illinois Urbana-Champaign professor of kinesiology who led the work with graduate student Rachneet Kaur and Richard Sowers, a professor of industrial and enterprise systems engineering and of mathematics.
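
In broad strokes, the classification step of such a system looks like the sketch below, which is purely illustrative: it assumes per-subject gait features (e.g. stride time, cadence) already extracted from camera footage, and uses synthetic data and an arbitrary classifier rather than the authors' published pipeline.

```python
# Illustrative sketch only (synthetic data): classify per-subject gait feature
# vectors into control / MS / PD. The features, labels and model choice are
# assumptions for demonstration, not the pipeline from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 150, 12               # e.g. 12 summary gait features per subject
X = rng.normal(size=(n_subjects, n_features))  # placeholder for camera-derived gait features
y = rng.integers(0, 3, size=n_subjects)        # 0 = control, 1 = MS, 2 = PD (synthetic labels)

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # chance-level here, since the data are random
print(f"cross-validated accuracy: {scores.mean():.2f}")
```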

Oct 11, 2022

The 5 Biggest Artificial Intelligence (AI) Trends In 2023

Posted by in categories: business, information science, robotics/AI, transportation

Over the last decade, artificial intelligence (AI) has become embedded in every aspect of our society and lives. From chatbots and virtual assistants like Siri and Alexa to automated industrial machinery and self-driving cars, it’s hard to ignore its impact.

Today, the technology most commonly used to achieve AI is machine learning — advanced software algorithms designed to carry out one specific task, such as answering questions, translating languages or navigating a journey — that become increasingly good at it as they are exposed to more and more data.
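
That working definition can be made concrete with a minimal sketch (illustrative only, not drawn from the article): a single-task classifier trained on progressively more examples usually gets better on held-out data.

```python
# Minimal, illustrative sketch: a single-task model that improves as it is
# exposed to more data. The dataset and model are arbitrary stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (50, 500, 4000):  # progressively larger training sets
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.3f}")
```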

Worldwide, spending by governments and businesses on AI technology will top $500 billion in 2023, according to IDC research.

Continue reading “The 5 Biggest Artificial Intelligence (AI) Trends In 2023” »

Oct 11, 2022

OpenAI Chief Scientist: Should We Make Godlike AI That Loves Us, or Obeys Us?

Posted by in categories: information science, robotics/AI

A leading artificial intelligence expert is once again shooting from the hip in a cryptic Twitter poll.

In the poll, OpenAI chief scientist Ilya Sutskever asked his followers whether advanced super-AIs should be made “deeply obedient” to their human creators, or if these godlike algorithms should “truly deeply [love] humanity.”

In other words, he seems to be pondering whether we should treat superintelligences like pets — or the other way around. And that’s interesting, coming from the head researcher at the firm behind GPT-3 and DALL-E, two of the most impressive machine learning systems available today.

Oct 11, 2022

AGI-22 | Joscha Bach — It from no Bit: Basic Cosmology from an AI Perspective

Posted by in categories: blockchains, cosmology, information science, robotics/AI, singularity

Joscha Bach is a cognitive scientist focused on cognitive architectures, mental representation, emotion, social modeling, and learning.

He is currently the Principal AI Engineer, Cognitive Computing at Intel Labs, and the author of the book “Principles of Synthetic Intelligence”; his focus is how to build machines that can perceive, think and learn.

Continue reading “AGI-22 | Joscha Bach — It from no Bit: Basic Cosmology from an AI Perspective” »

Oct 10, 2022

DeepMind Introduces ‘AlphaTensor,’ An Artificial Intelligence (AI) System For Discovering Novel, Efficient And Exact Algorithms For Matrix Multiplication

Posted by in categories: information science, mathematics, mobile phones, robotics/AI

Improving the efficiency of algorithms for fundamental computations is a crucial task, since it influences the overall pace of an enormous number of downstream computations. One such fundamental task is matrix multiplication, which appears in systems ranging from neural networks to scientific computing routines. Machine learning has the potential to go beyond human intuition and beat the best human-designed algorithms currently available. However, the vast number of possible algorithms makes automated algorithm discovery difficult. DeepMind recently made a breakthrough by developing AlphaTensor, the first artificial intelligence (AI) system for discovering new, efficient, and provably correct algorithms for essential operations like matrix multiplication. Their approach addresses a mathematical question that has been open for over 50 years: how to multiply two matrices as quickly as possible.

AlphaTensor is built on AlphaZero, an agent that showed superhuman performance in board games such as chess, Go, and shogi. The system extends AlphaZero’s progression from playing traditional games to tackling complex mathematical problems for the first time. The team believes this study represents an important milestone in DeepMind’s goal of advancing science and using AI to solve the most fundamental problems. The research has been published in Nature.

Matrix multiplication has numerous real-world applications despite being one of the simplest operations taught in high school. It is used to process images on smartphones, recognize spoken commands, render graphics for video games, and much more. Enormous resources go into developing computing hardware that multiplies matrices efficiently, so even small gains in matrix multiplication efficiency can have a significant impact. The study investigates how the automated discovery of new matrix multiplication algorithms can be advanced using contemporary AI approaches. AlphaTensor goes beyond human intuition to find algorithms that are more efficient than the state of the art for many matrix sizes. Its AI-designed algorithms outperform those created by humans, which represents a significant step forward in algorithmic discovery.
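
For context, the kind of saving AlphaTensor searches for is exemplified by the classic, human-designed Strassen scheme, which multiplies two 2×2 matrices with 7 scalar multiplications instead of 8. The sketch below implements that textbook construction; it is not one of DeepMind’s newly discovered algorithms.

```python
# Strassen's 2x2 scheme: 7 scalar multiplications instead of the naive 8.
# AlphaTensor searches for decompositions of this kind automatically,
# including for larger matrix sizes.
import numpy as np

def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)  # 7 multiplications, same result
```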

Oct 9, 2022

From Analog to Digital Computing: Is Homo sapiens’ Brain on Its Way to Become a Turing Machine?

Posted by in categories: information science, robotics/AI

The abstract basis of modern computation is the formal description of a finite state machine, the Universal Turing Machine, based on manipulation of integers and logic symbols. In this contribution to the discourse on the computer-brain analogy, we discuss the extent to which analog computing, as performed by the mammalian brain, is like and unlike the digital computing of Universal Turing Machines. We begin with ordinary reality being a permanent dialog between continuous and discontinuous worlds. So it is with computing, which can be analog or digital, and is often mixed. The theory behind computers is essentially digital, but efficient simulations of phenomena can be performed by analog devices; indeed, any physical calculation requires implementation in the physical world and is therefore analog to some extent, despite being based on abstract logic and arithmetic. The mammalian brain, comprised of neuronal networks, functions as an analog device and has given rise to artificial neural networks that are implemented as digital algorithms but function as analog models would. Analog constructs compute with the implementation of a variety of feedback and feedforward loops. In contrast, digital algorithms allow the implementation of recursive processes that enable them to generate unparalleled emergent properties. We briefly illustrate how the cortical organization of neurons can integrate signals and make predictions analogically. While we conclude that brains are not digital computers, we speculate on the recent implementation of human writing in the brain as a possible digital path that slowly evolves the brain into a genuine (slow) Turing machine.

The present essay explores key similarities and differences in the process of computation by the brains of animals and by digital computing, by anchoring the exploration on the essential properties of a Universal Turing Machine, the abstract foundation of modern digital computing. In this context, we try to explicitly distance 18th-century mechanical automata from modern machines, understanding that when computation allows recursion, it changes the consequences of determinism. A mechanical device is usually both deterministic and predictable, while computation involving recursion is deterministic but not necessarily predictable. For example, while it is possible to design an algorithm that computes the decimal digits of π, the value of any finite sequence following the nth digit cannot (yet) be computed, hence predicted, without running the algorithm, for sufficiently large n.
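
A small, illustrative computation (not taken from the paper) makes the point concrete: Machin's formula yields a fully deterministic algorithm for the decimal digits of π, yet there is no known shortcut that delivers decimal digit n without doing work comparable to computing all the digits before it.

```python
# Illustrative sketch: a deterministic algorithm for the decimal digits of pi
# (Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239), in integer arithmetic).
# Every digit is fully determined, yet to learn digit n you still have to run the
# computation up to that point; no known formula "predicts" distant decimal digits.
def arctan_recip(x: int, unity: int) -> int:
    """arctan(1/x) scaled by `unity`, via the alternating Taylor series."""
    total = term = unity // x
    x2, n, sign = x * x, 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digits(d: int) -> str:
    """The digit 3 followed by the first d decimal digits of pi."""
    unity = 10 ** (d + 10)                  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, unity) - arctan_recip(239, unity))
    s = str(pi // 10 ** 10)                 # drop the guard digits
    return s[0] + "." + s[1:]

print(pi_digits(50))   # 3.14159265358979323846...
```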

Oct 8, 2022

Scientists Claim To Have Discovered What Existed BEFORE The Beginning Of The Universe!

Posted by in categories: information science, mathematics, quantum physics

Non-scientific versions of the answer have invoked many gods and have been the basis of all religions and most philosophy since the beginning of recorded time.

Now a team of mathematicians from Canada and Egypt has used cutting-edge scientific theory and a mind-boggling set of equations to work out what preceded the universe in which we live.

In (very) simple terms, they applied the theories of the very small – the world of quantum mechanics – to the whole universe, as described by the general theory of relativity, and discovered that the universe basically goes through four different phases.

Oct 8, 2022

A smartphone’s camera and flash could help people measure blood oxygen levels at home

Posted by in categories: biotech/medical, information science, mobile phones, robotics/AI

The technique has participants place a finger over a smartphone’s camera and flash; a deep-learning algorithm then deciphers blood oxygen levels from the blood-flow patterns in the resulting video.

Conditions like asthma or COVID-19 make it harder for the body to absorb oxygen from the lungs, which can cause oxygen saturation to drop to 90% or below, a sign that medical attention is needed.

In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.
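
The front end of such a pipeline can be sketched roughly as follows; the sketch only extracts the per-frame colour signal a deep model would learn from, and the file name, library choice and downstream regressor are assumptions rather than the study's published method.

```python
# Illustrative sketch (assumptions, not the published pipeline): read a video of a
# fingertip pressed over the camera with the flash on, and reduce each frame to its
# mean B, G, R intensity. Blood-volume pulsations modulate this signal; a deep
# regression model trained against reference pulse-oximeter readings would map
# windows of this time series to estimated oxygen saturation (SpO2).
import cv2          # pip install opencv-python
import numpy as np

def channel_means(video_path: str) -> np.ndarray:
    """Return an (n_frames, 3) array of per-frame mean B, G, R intensities."""
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        means.append(frame.reshape(-1, 3).mean(axis=0))   # average over all pixels
    cap.release()
    return np.asarray(means)

signal = channel_means("finger_video.mp4")     # hypothetical input file
print(signal.shape)                            # (n_frames, 3): raw input to the regressor
```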

Continue reading “A smartphone’s camera and flash could help people measure blood oxygen levels at home” »

Oct 7, 2022

Before the Big Bang 6: Can the Universe Create Itself?

Posted by in categories: cosmology, information science, media & arts, neuroscience, particle physics, quantum physics, time travel

Richard Gott, co-author with Neil deGrasse Tyson of “Welcome to the Universe,” argues that the key to understanding the origin of the universe may be the concept of closed timelike curves. These are solutions to Einstein’s theory that may allow time travel into the past. In this film, Richard Gott of Princeton University explains the model he developed with Lixin Li. Gott explores the possibility of a closed timelike curve forming in the early universe and how this might lead to the remarkable property of the universe being able to create itself. Gott is one of the leading experts on time travel solutions to Einstein’s equations and is the author of the book “Time Travel in Einstein’s Universe”.
This film is part of a series exploring competing models of the early universe with the creators of those models. We have interviewed Stephen Hawking, Roger Penrose, Alan Guth and many other leaders of the field. To see other episodes, click on the link below:
https://www.youtube.com/playlist?list=PLJ4zAUPI-qqqj2D8eSk7yoa4hnojoCR4m.

We would like to thank the following, who helped us make this movie:
Animations:
Morn 1415
David Yates.
NASA
ESA
M Buser, E Kajari, and WP Schleich.
Storyblocks.
Nina McCurdy, Anthony Aguirre, Joel Primack, Nancy Abrams.
Pixabay.
Ziri Younsi.

Continue reading “Before the Big Bang 6: Can the Universe Create Itself?” »

Oct 7, 2022

Discovering faster matrix multiplication algorithms with reinforcement learning

Posted by in category: information science

A reinforcement learning approach based on AlphaZero is used to discover efficient and provably correct algorithms for matrix multiplication, finding faster algorithms for a variety of matrix sizes.