Circa 2022 😀
New mathematical formulation means huge paradigm shift in physics would not be necessary.
Juncal Arbelaiz Mugica is a native of Spain, where octopus is a common menu item. However, Arbelaiz appreciates octopus and similar creatures in a different way, with her research into soft-robotics theory.
More than half of an octopus's nerves are distributed through its eight arms, each of which has some degree of autonomy. This distributed sensing and information processing system intrigued Arbelaiz, who is researching how to design decentralized intelligence for human-made systems with embedded sensing and computation. At MIT, Arbelaiz is an applied math student working on the fundamentals of optimal distributed control and estimation in the final weeks before completing her PhD this fall.
She finds inspiration in the biological intelligence of invertebrates such as octopus and jellyfish, with the ultimate goal of designing novel control strategies for flexible “soft” robots that could be used in tight or delicate surroundings, such as a surgical tool or for search-and-rescue missions.
Training the large neural networks behind many modern AI tools requires real computational might: for example, OpenAI's most advanced language model, GPT-3, required an astounding million billion billion operations to train and cost about US $5 million in compute time. Engineers think they have figured out a way to ease the burden by using a different way of representing numbers.
Back in 2017, John Gustafson, then jointly appointed at A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.
Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and shown that, bit for bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week's IEEE Symposium on Computer Arithmetic.
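Posits themselves aren't available in standard libraries, but the general point, that the number format you compute in can change accuracy by orders of magnitude at the same bit width, can be illustrated by contrasting 32-bit and 64-bit floating point. A minimal Python sketch (using `struct` round-tripping to emulate binary32; this is an analogy for the effect the study measured, not the posit format itself):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a million times; the true sum is exactly 100000.
exact = 100000.0
acc64 = 0.0
acc32 = 0.0
for _ in range(1_000_000):
    acc64 += 0.1
    acc32 = to_f32(acc32 + to_f32(0.1))

err64 = abs(acc64 - exact)
err32 = abs(acc32 - exact)
# The 32-bit accumulation drifts by orders of magnitude more than the
# 64-bit one, because each addition rounds to a coarser grid.
print(err64, err32)
```

Posits aim to squeeze more accuracy out of the same number of bits by spending them more efficiently near 1.0, where most computation happens.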
Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.
Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.
For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.
Welcome to another episode of Conversations with Coleman.
My guest today is David Chalmers. David is a professor of philosophy and neuroscience at NYU and co-director of the NYU Center for Mind, Brain, and Consciousness.
David just released a new book called “Reality+: Virtual Worlds and the Problems of Philosophy”, which we discuss in this episode. We also discuss whether we’re living in a simulation, the progress that’s been made in virtual reality, whether virtual worlds count as real, whether people would and should choose to live in a virtual world, and many other classic questions in the philosophy of mind and more.
#Ad.
The best way to learn anything is by doing it yourself. Learn interactively with Brilliant’s fun hands-on lessons in math, science, and computer science. Brilliant has lots of great courses for all ability and knowledge levels, so you’ll find something that interests you. Master all sorts of technical subjects, with topics ranging from Geometry to Classical Mechanics to Programming with Python to Cryptocurrency and much more.
Instead of just memorizing, Brilliant teaches you how to think about STEM by guiding you through fun problems. You’ll get practice with real problem solving, which helps you train your critical thinking and creative problem-solving skills. You’ll come to understand how STEM actually works, and how it’s relevant to your everyday life.
Head over to https://brilliant.org/CWC to get started with a free week of unlimited access to Brilliant’s interactive lessons. The first 200 listeners will also get 20% off an annual membership.
FOLLOW COLEMAN
YouTube — http://bit.ly/38kzium.
Infinity is back. Or rather, it never (ever, ever…) went away. While mathematicians have a good sense of the infinite as a concept, cosmologists and physicists are finding it much more difficult to make sense of the infinite in nature, writes Peter Cameron.
Each of us has to face a moment, often fairly early in our life, when we realize that a loved one, formerly a fixture in our life, was not infinite, but has left us, and that someday we too will have to leave this place.
This experience, probably as much as the experience of looking at the stars and wondering how far they go on, shapes our views of infinity. And we urgently want answers to our questions. This has been so since the time, two and a half millennia ago, when Malunkyaputta put his doubts to the Buddha and demanded answers: among them he wanted to know if the world is finite or infinite, and if it is eternal or not.
An MIT professor who studies quantum computing is sharing a $3 million Breakthrough Prize.
MIT math professor Peter Shor shared in the Breakthrough Prize in Fundamental Physics with three other researchers, David Deutsch at the University of Oxford, Charles Bennett at IBM Research, and Gilles Brassard at the University of Montreal. All of them are “pioneers in the field of quantum information,” the prize foundation said in a statement.
The 2023 Breakthrough Prizes are intended to honor fundamental discoveries in life sciences, physics, and math that are changing the world.
What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch all?
14:29 Is it possible to develop AGI that doesn’t understand? Is generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — concepts which, when strong AI comes about, will dissolve into illusions and reveal how things actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intel — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in super-symmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing Test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers' zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral 'Turing' tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?
Filmed in the Dandenong Ranges in Victoria, Australia.
Many thanks for watching!
China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #technology.
“Techno Jungles”
In 2019, Google announced that its 53-qubit Sycamore processor had finished a task in 3.3 minutes that would have taken a conventional supercomputer at least 2.5 days to accomplish. According to reports, China’s 66-Qubit Zuchongzhi 2 Quantum Processor was able to complete the same task 1 million times faster in October of last year. Together with the Shanghai Institute of Technical Physics and the Shanghai Institute of Microsystem and Information Technology, a group of researchers from the Chinese Academy of Sciences Center for Excellence in Quantum Information and Quantum Physics were responsible for the development of that processor.
According to NDTV, the Chinese government under Xi Jinping has spent $10 billion on the country’s National Laboratory for Quantum Information Sciences. This demonstrates China’s significant commitment to the field of quantum computing. According to Live Science, the nation is also a world leader in the field of quantum networking, which involves the transmission of data that has been encoded through the use of quantum mechanics over great distances.
Classical computers cannot compete with quantum computers on certain tasks because of the peculiar mathematics that governs the quantum world. Quantum computers perform calculations using qubits, which can exist in many states simultaneously, in contrast to classical computers, whose bits can only take one of two states (typically represented by a 1 or a 0). Because of this, quantum computers can solve certain problems significantly faster than traditional computers. But despite decades-old theories predicting that quantum computing will outperform classical computing, building practical quantum computers has proven a great deal more difficult.
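To make the bit/qubit contrast concrete, a single qubit can be modeled as a pair of amplitudes over the basis states |0⟩ and |1⟩. A minimal sketch in Python (a pedagogical toy, not tied to any particular quantum hardware or library):

```python
import math

# A classical bit is either 0 or 1. A qubit's state is a pair of
# amplitudes (a, b) over |0> and |1>, with |a|^2 + |b|^2 = 1.

def hadamard(state):
    # The Hadamard gate sends a basis state into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probs(state):
    # Born rule: probabilities of reading 0 or 1 on measurement.
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1.0, 0.0)        # start in |0>, like a bit set to 0
qubit = hadamard(qubit)   # now in superposition: "0 and 1 at once"
p0, p1 = measure_probs(qubit)
print(p0, p1)             # roughly 0.5 and 0.5
```

Note that simulating n qubits this way requires tracking 2^n amplitudes, which is exactly why classical machines struggle to keep up with even modest quantum processors.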
If you enjoyed this video, please consider liking and subscribing; it helps grow our channel.
Nine Inch Nails “Me I’m Not” remixed with US military, math, science, and computer footage from the Prelinger Archives.