
Archive for the ‘information science’ category: Page 138

Apr 25, 2022

Elon Musk acquires Twitter for roughly $44 billion

Posted by in categories: cybercrime/malcode, economics, Elon Musk, information science, robotics/AI

The company’s board and the Tesla CEO hammered out the final details of his $54.20 a share bid.

The agreement marks the close of a dramatic courtship and a sharp change of heart at the social-media network.

Elon Musk acquired Twitter for $44 billion on Monday, the company announced, giving the world’s richest person command of one of its most influential social media sites — which serves as a platform for political leaders, a sounding board for experts across industries and an information hub for millions of everyday users.

Continue reading “Elon Musk acquires Twitter for roughly $44 billion” »

Apr 25, 2022

Why it’s so damn hard to make AI fair and unbiased

Posted by in categories: information science, robotics/AI

There are competing notions of fairness — and sometimes they’re incompatible, as facial recognition and lending algorithms show.
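To make the tension concrete, here is a small illustrative sketch (not from the article): it evaluates two common fairness criteria, demographic parity and equal opportunity, on toy predictions for two groups with different base rates. A predictor that satisfies one criterion will generally violate the other.

```python
# Illustrative sketch, not from the article: two common fairness criteria,
# demographic parity and equal opportunity, computed on toy predictions.
# When groups have different base rates, a predictor generally cannot
# satisfy both criteria at once.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
# Unequal base rates: group 1 genuinely qualifies (y = 1) more often than group 0.
y_true = rng.random(10_000) < np.where(group == 1, 0.6, 0.3)
# A perfectly accurate predictor equalizes opportunity but not demographic parity.
y_pred = y_true.astype(float)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))          # ~0.3
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))   # 0.0
```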

Apr 25, 2022

Quantifying Human Consciousness With the Help of AI

Posted by in categories: information science, robotics/AI

A new deep learning algorithm is able to quantify arousal and awareness in humans at the same time.

Continue reading “Quantifying Human Consciousness With the Help of AI” »

Apr 23, 2022

Growing Anomalies at the Large Hadron Collider Raise Hopes

Posted by in categories: information science, particle physics

Amid the chaotic chains of events that ensue when protons smash together at the Large Hadron Collider in Europe, one particle has popped up that appears to go to pieces in a peculiar way.

All eyes are on the B meson, a yoked pair of quark particles. Having caught whiffs of unexpected B meson behavior before, researchers with the Large Hadron Collider beauty experiment (LHCb) have spent years documenting rare collision events featuring the particles, in hopes of conclusively proving that some novel fundamental particle or effect is meddling with them.

In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the Standard Model of particle physics — the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the Standard Model to a more complete theory.
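As a purely illustrative aside on why individually mild deviations can matter collectively: the sketch below combines a handful of hypothetical, independent z-scores with Stouffer's method. The numbers are invented, not LHCb results, and real analyses must account for correlations and look-elsewhere effects, but it shows how unremarkable deviations can add up.

```python
# Purely illustrative: how several mild, independent deviations can combine
# into a larger one. These z-scores are invented, not LHCb measurements, and
# real analyses must handle correlations and look-elsewhere effects.
import math

z_scores = [2.1, 1.8, 2.5, 1.6]  # hypothetical per-measurement deviations, in sigma

# Stouffer's method for combining independent z-scores.
combined_z = sum(z_scores) / math.sqrt(len(z_scores))
p_value = 0.5 * math.erfc(combined_z / math.sqrt(2))  # one-sided Gaussian tail

print(f"combined significance ~ {combined_z:.1f} sigma (p ~ {p_value:.1e})")
```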

Apr 22, 2022

How to generate smart games using machine learning?

Posted by in categories: information science, robotics/AI

Machine learning algorithms are finding new applications in game development. NPCs driven by machine-learning models now make it possible to field a convincing virtual player.
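As a minimal illustration of the idea (not taken from the article, and with an invented toy environment), the sketch below trains a tabular Q-learning NPC to chase a player on a one-dimensional track; production game AI would use richer state representations or learned policies, but the training loop has the same shape.

```python
# Minimal sketch under stated assumptions (the track, rewards and hyperparameters
# are invented for illustration): a tabular Q-learning NPC that learns to chase
# the player on a 1-D track.
import random

TRACK = 10           # positions 0..9
ACTIONS = [-1, +1]   # step left / step right
q = {}               # Q-table: (npc_pos, player_pos, action) -> estimated value

def choose(npc, player, eps=0.1):
    """Epsilon-greedy action selection from the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((npc, player, a), 0.0))

random.seed(0)
for episode in range(2000):
    npc, player = random.randrange(TRACK), random.randrange(TRACK)
    for _ in range(20):
        a = choose(npc, player)
        nxt = min(max(npc + a, 0), TRACK - 1)
        reward = 1.0 if nxt == player else -0.01   # reward for reaching the player
        best_next = max(q.get((nxt, player, b), 0.0) for b in ACTIONS)
        key = (npc, player, a)
        q[key] = q.get(key, 0.0) + 0.1 * (reward + 0.9 * best_next - q.get(key, 0.0))
        npc = nxt
        if npc == player:
            break

print(choose(8, 2, eps=0.0))  # after training, should step toward the player: -1
```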


Study reveals the different ways the brain parses information through interactions of waves of neural activity.

Apr 22, 2022

Quasiparticles used to generate millions of truly random numbers a second

Posted by in categories: cybercrime/malcode, information science, quantum physics

This could lead to truly random number generators that make computing much more secure.


Random numbers are crucial for computing, but our current algorithms aren’t truly random. Researchers at Brown University have now found a way to tap into the fluctuations of quasiparticles to generate millions of truly random numbers per second.

Random number generators are key parts of computer software, but technically they don’t quite live up to their name. The algorithms that generate these numbers are still deterministic, meaning that anyone with enough information about how they work could potentially find patterns and predict the numbers they produce. These pseudo-random numbers suffice for low-stakes uses like gaming, but for scientific simulations or cybersecurity, truly random numbers are important.
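The determinism the article describes is easy to demonstrate. This is a generic illustration, not the Brown device: a seeded pseudo-random generator replays exactly the same stream, whereas an entropy-backed source such as os.urandom takes no seed at all.

```python
# Generic illustration of the article's point, not the Brown device: a seeded
# pseudo-random generator replays exactly the same stream, while an
# entropy-backed source such as os.urandom takes no seed at all.
import os
import random

a = random.Random(42)
b = random.Random(42)
seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]
print(seq_a == seq_b)        # True: same seed, same "random" stream
print(os.urandom(4).hex())   # drawn from the OS entropy pool; differs every run
```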

Continue reading “Quasiparticles used to generate millions of truly random numbers a second” »

Apr 22, 2022

Scientists create algorithm to assign a label to every pixel in the world, without human supervision

Posted by in categories: information science, robotics/AI, transportation

Labeling data can be a chore. It’s the main source of sustenance for computer-vision models; without it, they’d have a lot of difficulty identifying objects, people, and other important image characteristics. Yet producing just an hour of tagged and labeled data can take a whopping 800 hours of human time. Our high-fidelity understanding of the world develops as machines can better perceive and interact with our surroundings. But they need more help.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University have attempted to solve this problem plaguing vision models by creating “STEGO,” an algorithm that can jointly discover and segment objects without any human labels at all, down to the pixel.
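The sketch below is not STEGO itself, which distills correlations between self-supervised (DINO) features; it only illustrates the underlying idea of assigning a label to every pixel with no human annotation by clustering per-pixel features, with raw colour used as a stand-in feature.

```python
# Not STEGO itself: STEGO distills correlations between self-supervised (DINO)
# features. This sketch only illustrates the underlying idea of labelling every
# pixel with no human annotation by clustering per-pixel features, using raw
# colour as a stand-in feature.
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_segment(image, n_segments=4, seed=0):
    """image: (H, W, C) float array -> (H, W) integer label map."""
    h, w, c = image.shape
    features = image.reshape(-1, c)                # one feature vector per pixel
    labels = KMeans(n_clusters=n_segments, n_init=10,
                    random_state=seed).fit_predict(features)
    return labels.reshape(h, w)

# Toy image: two flat regions plus noise; the label map separates them.
img = np.zeros((64, 64, 3))
img[:, 32:] = 1.0
img += np.random.default_rng(0).normal(0, 0.05, img.shape)
print(np.unique(unsupervised_segment(img, n_segments=2)))  # [0 1]
```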

Continue reading “Scientists create algorithm to assign a label to every pixel in the world, without human supervision” »

Apr 21, 2022

Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations

Posted by in categories: information science, mathematics, robotics/AI

For centuries, mathematicians have tried to prove that Euler’s fluid equations can produce nonsensical answers. A new approach to machine learning has researchers betting that “blowup” is near.
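For readers unfamiliar with the general technique, here is a hedged, toy-scale sketch of the kind of physics-informed loss such work relies on: a network is trained so that an equation residual vanishes at sampled points. The toy problem below is u'(x) = u(x) with u(0) = 1, not the Euler equations, and the researchers' actual self-similar blowup setup is far more involved.

```python
# Hedged, toy-scale sketch of a physics-informed loss (an assumption about the
# general technique, not the researchers' Euler-equation setup): train a network
# so that an equation residual vanishes at sampled points. The toy problem is
# u'(x) = u(x) with u(0) = 1, whose solution is e^x.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)                # collocation points in [0, 1]
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (du - u).pow(2).mean()                         # enforce u' - u = 0
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # enforce u(0) = 1
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach e ~ 2.718
```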

Apr 20, 2022

Toward Self-Improving Neural Networks: Schmidhuber Team’s Scalable Self-Referential Weight Matrix Learns to Modify Itself

Posted by in categories: information science, robotics/AI

Back in 1993, AI pioneer Jürgen Schmidhuber published the paper A Self-Referential Weight Matrix, which he described as a “thought experiment… intended to make a step towards self-referential machine learning by showing the theoretical possibility of self-referential neural networks whose weight matrices (WMs) can learn to implement and improve their own weight change algorithm.” A lack of subsequent practical studies in this area had, however, left this potentially impactful meta-learning ability unrealized — until now.

In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab, IDSIA, University of Lugano (USI) & SUPSI, and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential WM (SRWM) that leverages outer products and the delta update rule to update and improve itself, achieving both practical applicability and impressive performance in game environments.

The proposed model is built upon fast weight programmers (FWPs), a scalable and effective method dating back to the ‘90s that can learn to memorize past data and compute fast weight changes via programming instructions that are additive outer products of self-invented activation patterns, aka keys and values for self-attention. In light of their connection to linear variants of today’s popular transformer architectures, FWPs are now witnessing a revival. Recent studies have advanced conventional FWPs with improved elementary programming instructions or update rules invoked by their slow neural net to reprogram the fast neural net, an approach that has been dubbed the “delta update rule.”
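As a rough NumPy-level sketch of the delta update rule mentioned above (the dimensions and the toy write/read loop are assumptions for illustration; the actual SRWM generates its own keys, values and learning rates and is trained end-to-end):

```python
# Rough NumPy sketch of the delta update rule described above; the dimensions
# and the toy write/read loop are assumptions for illustration. The actual SRWM
# generates its own keys, values and learning rates and is trained end-to-end.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 8
W_fast = np.zeros((d_out, d_in))   # fast weight matrix, rewritten at every step

def delta_update(W, k, v, beta):
    """Delta rule: replace what W currently retrieves for key k with value v."""
    v_hat = W @ k                                # current retrieval for this key
    return W + beta * np.outer(v - v_hat, k)     # additive outer-product update

# Write a few key/value associations, then read the first one back.
pairs = [(rng.standard_normal(d_in), rng.standard_normal(d_out)) for _ in range(3)]
pairs = [(k / np.linalg.norm(k), v) for k, v in pairs]   # normalised keys
for k, v in pairs:
    W_fast = delta_update(W_fast, k, v, beta=1.0)

k0, v0 = pairs[0]
err = np.linalg.norm(W_fast @ k0 - v0)
print(f"retrieval error for the first stored value: {err:.3f}")  # small when keys are near-orthogonal
```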

Apr 15, 2022

AI and jobs: Where humans are better than algorithms, and vice versa

Posted by in categories: employment, information science, robotics/AI

It’s easy to get caught up in the doom-and-gloom predictions about artificial intelligence wiping out millions of jobs. Here’s a reality check.