
Some big M&A is afoot in Israel in the world of smart transportation. According to multiple reports and sources that have contacted TechCrunch, chip giant Intel is in the final stages of a deal to acquire Moovit, a startup that applies AI and big data analytics to track traffic and provide transit recommendations to some 800 million people globally. The deal is expected to close in the coming days at a price believed to be in the region of $1 billion.

We have contacted Nir Erez, the founder and CEO of Moovit, as well as Intel spokespeople for comment on the reports and will update this story as we learn more. For now, Moovit’s spokesperson has denied neither the reports nor what we have been told directly.

“At this time we have no comment, but if anything changes I’ll definitely let you know,” said Moovit’s spokesperson.

The triumph of Google’s AlphaGo in 2016 against Go world champion Lee Sedol by 4:1 caused quite a stir that reached far beyond the Go community, with over a hundred million people watching while the match was taking place. It was a milestone in the development of AI: Go had long withstood the attempts of computer scientists to build algorithms that could play at a human level. And now an artificial mind had been built that dominated, with relative ease, someone who had dedicated thousands of hours of practice to honing his craft.

This was already quite the achievement, but then AlphaGo Zero came along and gave AlphaGo a taste of its own medicine: it won against AlphaGo by a margin of 100:0 only a year after Lee Sedol’s defeat. This was even more spectacular, and for more than the obvious reasons. AlphaGo Zero was not merely an improved version of AlphaGo. Where AlphaGo had trained with the help of expert games played by the best human Go players, AlphaGo Zero had started literally from zero, working out the intricacies of the game without any supervision.

Given nothing more than the rules of the game and the condition for winning, it had locked itself in its virtual room and played against itself for a mere 34 hours. It did not combine humanity’s historically accumulated understanding of the principles and aesthetics of the game with the unquestionably superior numerical power of computers; it emerged, entirely by itself, as the dominant Go force of the known universe.

Education Saturday with Space Time.


It’s not surprising that the profound weirdness of the quantum world has inspired some outlandish explanations – nor that these have strayed into the realm of what we might call mysticism. One particularly pervasive notion is the idea that consciousness can directly influence quantum systems – and so influence reality. Today we’re going to see where this idea comes from, and whether quantum theory really supports it.

The behavior of the quantum world is beyond weird. Objects being in multiple places at once, communicating faster than light, or simultaneously experiencing multiple entire timelines … that then talk to each other. The rules governing the tiny quantum world of atoms and photons seem alien. And yet we have a set of rules that give us incredible power in predicting the behavior of a quantum system – rules encapsulated in the mathematics of quantum mechanics. Despite its stunning success, we’re now nearly a century past the foundation of quantum mechanics and physicists are still debating how to interpret its equations and the weirdness they represent.

Researchers at the University of Massachusetts and the Air Force Research Laboratory Information Directorate have recently created a 3D computing circuit that could be used to map and implement complex machine learning algorithms, such as convolutional neural networks (CNNs). This 3D circuit, presented in a paper published in Nature Electronics, comprises eight layers of memristors: electrical components that regulate the electrical current flowing in a circuit and directly implement neural network weights in hardware.

“Previously, we developed a very reliable memristive device that meets most requirements of in-memory computing for artificial neural networks, integrated the devices into large 2-D arrays and demonstrated a wide variety of machine intelligence applications,” Prof. Qiangfei Xia, one of the researchers who carried out the study, told TechXplore. “In our recent study, we decided to extend it to the third dimension, exploring the benefit of a rich connectivity in a 3D neural network.”

Essentially, Prof. Xia and his team were able to experimentally demonstrate a 3D computing circuit with eight memristor layers, all of which can be engaged in computing processes. Their circuit differs greatly from other previously developed 3D systems, such as 3D NAND flash, as those systems usually consist of layers with different functions (e.g. a sensor layer, a computing layer, a control layer, etc.) stacked or bonded together.
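The idea of implementing neural network weights directly in hardware can be illustrated with a toy model. In a memristor crossbar, weights are stored as conductances, and Ohm’s and Kirchhoff’s laws carry out a matrix-vector multiplication “in memory.” A minimal sketch (the conductance and voltage values below are purely illustrative, not from the paper):

```python
import numpy as np

# Toy memristor crossbar: weights stored as conductances G (siemens).
# Applying voltages V to the rows produces column currents I = G.T @ V,
# so the crossbar physically computes a weighted sum in one step.
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 4.0e-6]])   # 2x2 conductance matrix (illustrative)
V = np.array([0.1, 0.2])           # input voltages, in volts (illustrative)

I = G.T @ V                        # Kirchhoff's current law sums each column
print(I)                           # column currents encode the weighted sums
```

This is the analog equivalent of one dense layer’s multiply-accumulate; stacking eight such layers in the third dimension is, in caricature, what the reported circuit does.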

The news: In a fresh spin on manufactured pop, OpenAI has released a neural network called Jukebox that can generate catchy songs in a variety of different styles, from teenybop and country to hip-hop and heavy metal. It even sings—sort of.

How it works: Give it a genre, an artist, and lyrics, and Jukebox will produce a passable pastiche in the style of well-known performers, such as Katy Perry, Elvis Presley or Nas. You can also give it the first few seconds of a song and it will autocomplete the rest.

Rice University researchers have discovered a hidden symmetry in the chemical kinetic equations scientists have long used to model and study many of the chemical processes essential for life.

The find has implications for drug design, genetics and biomedical research and is described in a study published this month in the Proceedings of the National Academy of Sciences. To illustrate the biological ramifications, study co-authors Oleg Igoshin, Anatoly Kolomeisky and Joel Mallory of Rice’s Center for Theoretical Biological Physics (CTBP) used three wide-ranging examples: protein folding, enzyme catalysis and motor protein efficiency.

In each case, the researchers demonstrated that a simple mathematical ratio shows that the likelihood of errors is controlled by kinetics rather than thermodynamics.

The Newtonian laws of physics explain the behavior of objects in the everyday physical world, such as an apple falling from a tree. For hundreds of years, Newton provided a complete answer, until the work of Einstein introduced the concept of relativity. The discovery of relativity did not suddenly prove Newton wrong; relativistic corrections are only required at speeds above about 67 million mph. Instead, improving technology allowed both more detailed observations and techniques for analysis that then required explanation. While most of the consequences of a Newtonian model are intuitive, much of relativity is not and is only approachable through complex equations, modeling, and highly simplified examples.
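To put the 67 million mph figure in perspective: that is roughly 10% of the speed of light, and the Lorentz factor there still differs from 1 by only about half a percent. A quick sketch (the threshold speed is from the text above; the code is illustrative):

```python
import math

C_MPH = 670_616_629                  # speed of light in miles per hour

def lorentz_factor(v_mph):
    """Relativistic correction factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    beta = v_mph / C_MPH
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At everyday speeds the correction is immeasurably small...
print(lorentz_factor(70))            # highway speed: gamma ~ 1.0
# ...and even at ~67 million mph it is only about 0.5%.
print(lorentz_factor(67_000_000))    # gamma ~ 1.005
```

The point of the analogy survives the arithmetic: Newtonian mechanics is not wrong so much as an excellent approximation until speeds become a sizable fraction of c.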

In this issue, Korman et al.1 provide data from a model of the second gas effect on arterial partial pressures of volatile anesthetic agents. Most readers might wonder what this information adds, some will struggle to remember what the second gas effect is, and others will query the value of modeling rather than “real data.” This editorial attempts to address these questions.

The second gas effect2 is a consequence of the concentration effect3 where a “first gas” that is soluble in plasma, such as nitrous oxide, moves rapidly from the lungs to plasma. This increases the alveolar concentration and hence rate of uptake into plasma of the “second gas.” The second gas is typically a volatile anesthetic, but oxygen also behaves as a second gas.4 Although we frequently talk of inhalational kinetics as a single process, there are multiple steps between dialing up a concentration and the consequent change in effect. The key steps are transfer from the breathing circuit to alveolar gas, from the alveoli to plasma, and then from plasma to the “effect-site.” Separating the two steps between breathing circuit and plasma helps us understand both the second gas effect and the message underlying the paper by Korman et al.1
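The stepwise picture above (breathing circuit to alveolar gas, alveoli to plasma) can be caricatured as a chain of first-order transfers. The sketch below is a toy model only; the rate constants are invented for illustration and carry no physiological meaning. Its one honest lesson is that speeding up the circuit-to-alveoli step speeds up plasma uptake of the second gas, which is the second gas effect in miniature:

```python
# Toy two-step kinetic chain: circuit -> alveolar gas -> plasma.
# k1 and k2 are illustrative first-order rate constants (per minute),
# NOT physiological values. Rapid uptake of a soluble "first gas"
# effectively enhances alveolar delivery of the second gas, modeled
# here as a larger k1.
def simulate(k1, k2, minutes=30, dt=0.01):
    circuit, alveoli, plasma = 1.0, 0.0, 0.0   # dialed concentration held at 1
    for _ in range(int(minutes / dt)):
        d_alv = k1 * (circuit - alveoli) - k2 * alveoli
        d_pla = k2 * alveoli
        alveoli += d_alv * dt
        plasma += d_pla * dt
    return plasma

slow = simulate(k1=0.2, k2=0.1)
fast = simulate(k1=0.4, k2=0.1)    # faster circuit-to-alveoli transfer
print(fast > slow)                 # enhanced alveolar delivery speeds uptake
```

Separating the two steps in code, as in the argument above, makes it clear why the effect lives in the circuit-to-plasma transfer rather than at the effect-site.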

An exact solution of the Einstein-Maxwell equations yields a general relativistic picture of the tachyonic phenomenon, suggesting a hypothesis on tachyon creation. The hypothesis says that the tachyon is produced when a neutral and very heavy (over 75 GeV/c^2) subatomic particle is placed in electric and magnetic fields that are perpendicular, very strong (over 6.9 × 10^17 esu/cm^2 or oersted), and the squared ratio of their strengths lies in the interval (1,5]. Such conditions can occur when nonpositive subatomic particles of high energy strike atomic nuclei other than the proton. The kinematical relations for the produced tachyon are given. Previous searches for tachyons in air showers and some possible causes of their negative results are discussed.

Can we study AI the same way we study lab rats? Researchers at DeepMind and Harvard University seem to think so. They built an AI-powered virtual rat that can carry out multiple complex tasks. Then, they used neuroscience techniques to understand how its artificial “brain” controls its movements.

Today’s most advanced AI is powered by artificial neural networks: machine learning algorithms made up of layers of interconnected components called “neurons” that are loosely inspired by the structure of the brain. While the two operate in very different ways, a growing number of researchers believe drawing parallels between them could both improve our understanding of neuroscience and lead to smarter AI.

Now the authors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat’s movements.