Archive for the ‘information science’ category: Page 127

Jul 29, 2022

Inca Knots Inspire Quantum Computer

Posted by in categories: computing, information science, quantum physics

We think of data storage as a modern problem, but even ancient civilizations kept records. While much of the world used stone tablets or other media that didn’t survive the centuries, the Incas used something called quipu which encoded numeric data in strings using knots. Now the ancient system of recording numbers has inspired a new way to encode qubits in a quantum computer.

With quipu, knots in a string represent a number. By analogy, a conventional qubit would be as if you used a string to form a 0 or 1 shape on a tabletop. A breeze or other “noise” would easily disturb your encoding. But knots stay tied even if you pick the strings up and move them around. The new qubits are the same, encoding data in the topology of the material.

In practice, Quantinuum’s H1 processor uses 10 ytterbium ions trapped by lasers pulsing in a Fibonacci sequence. If you consider a conventional qubit to be a one-dimensional affair — the qubit’s state — this new system acts like a two-dimensional system, where the second dimension is time. This is easier to construct than conventional 2D quantum structures but offers at least some of the same inherent error resilience.
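The quasiperiodic pulse pattern mentioned above is built from the Fibonacci word, which grows by the substitution rule A → AB, B → A. A minimal sketch of generating that sequence (how the letters map onto actual laser pulses is the experiment's detail and not shown here):

```python
def fibonacci_word(n_iterations: int) -> str:
    """Build the Fibonacci word by repeated substitution A -> AB, B -> A."""
    word = "A"
    for _ in range(n_iterations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

seq = fibonacci_word(5)
print(seq)        # ABAABABAABAAB
print(len(seq))   # 13, a Fibonacci number
```

Each iteration's length is a Fibonacci number, and the sequence never settles into a repeating period, which is what makes the drive quasiperiodic rather than simply periodic.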

Jul 29, 2022

Elon Musk — People Will Understand — Finally It’s Happening!

Posted by in categories: Elon Musk, existential risks, information science

Elon Musk explains why we may meet aliens soon, and he is on to something. He disagrees with research arguing that aliens do not exist, and explains why the Drake equation is important and why he thinks the Fermi paradox is wrong.


Continue reading “Elon Musk — People Will Understand — Finally It’s Happening!” »

Jul 28, 2022

#58 Dr. Ben Goertzel — Artificial General Intelligence

Posted by in categories: biological, blockchains, information science, neuroscience, physics, robotics/AI, singularity

Patreon: https://www.patreon.com/mlst.
Discord: https://discord.gg/ESrGqhf5CB

The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing “thinking machines” — that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look but also act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots.

Continue reading “#58 Dr. Ben Goertzel — Artificial General Intelligence” »

Jul 27, 2022

DayDreamer: An algorithm to quickly teach robots new behaviors in the real world

Posted by in categories: information science, robotics/AI

Training robots to complete tasks in the real world can be a very time-consuming process: it involves building a fast and efficient simulator, performing numerous trials in it, and then transferring the behaviors learned during those trials to the real world. In many cases, however, the performance achieved in simulation does not match that attained in the real world, due to unpredictable changes in the environment or task.

Researchers at the University of California, Berkeley (UC Berkeley) have recently developed DayDreamer, a tool that could be used to train robots to complete tasks more effectively. Their approach, introduced in a paper pre-published on arXiv, is based on learning models of the world that allow robots to predict the outcomes of their movements and actions, reducing the need for extensive trial-and-error training in the real world.
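The core idea of a learned world model can be sketched in miniature: record real transitions once, then evaluate candidate action plans "in imagination" against the learned model instead of on the robot. This toy 1-D world is purely illustrative and far simpler than DayDreamer's neural latent dynamics model:

```python
# Toy 1-D world: state is an integer position in [0, 9]; the goal is state 5.
def real_step(state, action):
    """The 'robot's' true environment (unknown to the planner)."""
    return max(0, min(9, state + action))

# 1) Learn a transition model from real interactions. In this tiny world we
#    simply record every observed (state, action) -> next_state pair.
model = {}
for s in range(10):
    for a in (-1, 1):
        model[(s, a)] = real_step(s, a)

# 2) Evaluate candidate action sequences entirely "in imagination",
#    i.e. by rolling out the learned model instead of moving the real robot.
def imagined_rollout(start, actions):
    s = start
    for a in actions:
        s = model.get((s, a), s)  # stay put for transitions never observed
    return s

candidates = [[1] * 5, [-1] * 5, [1, 1, -1, 1, 1]]
best = min(candidates, key=lambda acts: abs(imagined_rollout(0, acts) - 5))
print(best, imagined_rollout(0, best))  # [1, 1, 1, 1, 1] 5
```

The payoff is that once the model is learned, arbitrarily many plans can be scored without any further wear, risk, or time on the physical robot.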

Continue reading “DayDreamer: An algorithm to quickly teach robots new behaviors in the real world” »

Jul 27, 2022

Team scripts breakthrough quantum algorithm

Posted by in categories: computing, information science, particle physics, quantum physics

City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. This was done by developing a quantum algorithm that they ran on an IBM quantum computer. “To the best of our knowledge, such particular quantum algorithm which can simulate how interacting quantum particles evolve over time has not been implemented before,” said Ghaemi, associate professor in CCNY’s Division of Science.

Entitled “Probing geometric excitations of fractional quantum Hall states on quantum computers,” the study appears in the journal Physical Review Letters.

“Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons,” said Ghaemi. “But unfortunately there is no easy way to use equations of quantum mechanics when we want to study the properties of a large number of electrons that are also exerting force on each other due to their electric charge.”
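The article doesn't spell out the team's algorithm, but the standard workhorse for simulating time evolution of interacting particles is Trotterization: approximating exp(-iHt) for H = A + B by alternating short evolutions under A and B. A small classical sanity check of that idea (an illustration of the general technique, not the CCNY group's actual method):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

# Two-qubit Hamiltonian H = A + B with non-commuting parts.
A = np.kron(X, X)   # interaction term
B = np.kron(Z, I)   # single-qubit field term
H = A + B

t, n = 1.0, 100     # total evolution time, number of Trotter steps
dt = t / n

exact = expm(-1j * H * t)
step = expm(-1j * A * dt) @ expm(-1j * B * dt)   # one Trotter step
trotter = np.linalg.matrix_power(step, n)

error = np.linalg.norm(exact - trotter)
print(f"Trotter error with {n} steps: {error:.2e}")  # shrinks as O(dt)
```

On a quantum computer each `expm` factor becomes a short gate sequence, which is what makes the decomposition implementable on hardware like IBM's.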

Jul 27, 2022

Watch: 🤖 🤖 Will AI become an “existential threat?”

Posted by in categories: employment, existential risks, information science, robotics/AI

https://www.youtube.com/watch?v=z71PECJte44

What does the future of AI look like? Let’s try out some AI software that’s readily available for consumers and see how it holds up against the human brain.

🦾 AI can outperform humans. But at what cost? 👉 👉 https://cybernews.com/editorial/ai-can-outperform-humans-but-at-what-cost/

Continue reading “Watch: 🤖 🤖 Will AI become an ‘existential threat?’” »

Jul 26, 2022

Machine Learning Paves Way for Smarter Particle Accelerators

Posted by in categories: information science, particle physics, robotics/AI

Staff Scientist Daniele Filippetto working on the High Repetition-Rate Electron Scattering Apparatus. (Credit: Thor Swift/Berkeley Lab)

– By Will Ferguson

Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.

Jul 26, 2022

Roboticists discover alternative physics

Posted by in categories: information science, physics, robotics/AI

Energy, mass, velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could have discovered relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.

This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
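A far simpler cousin of "how many variables are there?" is estimating the intrinsic dimension of observations with PCA. The sketch below is only loosely related to the Columbia method (which used neural networks on raw video), but it shows the flavor: high-dimensional measurements that secretly depend on just two hidden variables reveal that fact in their variance spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate observations that secretly depend on only 2 underlying variables
# (think: a pendulum's angle and angular velocity) embedded in 10 dimensions.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
observations = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

# PCA via SVD: count how many directions carry essentially all the variance.
centered = observations - observations.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)
n_vars = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
print("estimated number of underlying variables:", n_vars)
```

Real dynamics are nonlinear, which is why the published work needed a neural network rather than a linear projection, but the counting question is the same.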

Continue reading “Roboticists discover alternative physics” »

Jul 25, 2022

Kinetic energy: Newton vs. Einstein | Who’s right?

Posted by in categories: energy, information science, physics

Using Newtonian physics, physicists have found an expression for the value of kinetic energy, specifically KE = ½ m v^2. Einstein came up with a very different expression, specifically KE = (gamma – 1) m c^2. In this video, Fermilab’s Dr. Don Lincoln shows how these two equations are the same at low energy and how you get from one to the other.
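The agreement at low speed is easy to check numerically: the relativistic expression (gamma − 1)mc² reduces to ½mv² when v ≪ c, and the two diverge as v approaches c. A quick sketch:

```python
import math

def ke_newton(m, v):
    """Newtonian kinetic energy, KE = 1/2 m v^2 (SI units)."""
    return 0.5 * m * v**2

def ke_einstein(m, v, c=299_792_458.0):
    """Relativistic kinetic energy, KE = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

m = 1.0  # kg
for v in (1e5, 1e7, 1e8):  # m/s: slow, fast, genuinely relativistic
    n, e = ke_newton(m, v), ke_einstein(m, v)
    print(f"v = {v:.0e} m/s  Newton: {n:.4e} J  Einstein: {e:.4e} J  ratio: {e/n:.6f}")
```

At 100 km/s the ratio is 1 to six decimal places; by a third of light speed the Newtonian formula undercounts by roughly 9%, which is exactly the Taylor-expansion story the video walks through.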

Relativity playlist:

Continue reading “Kinetic energy: Newton vs. Einstein | Who’s right?” »

Jul 24, 2022

Protein sequence design by deep learning

Posted by in categories: information science, robotics/AI, space

The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.

ABACUS-R is trained on the task of predicting the amino acid (AA) at a given residue, using information about that residue’s backbone structure and the backbone and AAs of neighboring residues in space. To do this, ABACUS-R uses the Transformer neural network architecture [6], which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to a previous network [2], ABACUS-R adds auxiliary training tasks, such as predicting secondary structure, solvent exposure and sidechain torsion angles. These outputs aren’t needed during design but help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative ‘denoising’ process (Fig.
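The iterative denoising loop can be sketched in outline: start from a random sequence and repeatedly re-predict every residue from its neighbors until the sequence stops changing. The predictor below is a trivial deterministic stand-in for the Transformer (an assumption for illustration only; neighbors here are flanking residues rather than spatial ones):

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def toy_predictor(sequence, i):
    """Stand-in for ABACUS-R's Transformer: predict residue i from its
    neighbors (here just the flanking residues, for illustration)."""
    left = sequence[i - 1] if i > 0 else "A"
    right = sequence[i + 1] if i < len(sequence) - 1 else "A"
    # Toy deterministic rule: pick the AA "between" the neighbors alphabetically.
    idx = (AMINO_ACIDS.index(left) + AMINO_ACIDS.index(right)) // 2
    return AMINO_ACIDS[idx]

def denoise(length=12, max_rounds=50, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]  # random start
    for _ in range(max_rounds):
        new = [toy_predictor(seq, i) for i in range(length)]  # parallel update
        if new == seq:   # converged: every residue agrees with its context
            break
        seq = new
    return "".join(seq)

print(denoise())
```

In the real method each round queries the trained network with the fixed backbone plus the current sequence, so successive rounds make the sequence increasingly self-consistent with the target structure.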