“Tech billionaires are buying up luxurious bunkers to survive a societal collapse they helped create,” Rushkoff says.
The world is going to hell in a handbasket. And no, we're not the ones saying that; the science is. It seems that billionaires cannot ignore all the signals pointing to a doomsday scenario as they try to find their way out of this world, or a way to stay in it.
NASA on Monday will attempt a feat humanity has never before accomplished: deliberately smacking a spacecraft into an asteroid to slightly deflect its orbit, in a key test of our ability to stop cosmic objects from devastating life on Earth.
The Double Asteroid Redirection Test (DART) spacecraft launched from California last November and is fast approaching its target, which it will strike at roughly 14,000 miles per hour (23,000 kph).
Neutronium was the material used in the hull of the doomsday machine in Star Trek.
Now, I'm not terribly sure what the mechanical properties of neutronium would be. It is certainly very dense (about a billion tons per cm³, a cm³ being roughly the volume of the end of your little finger), but it interacts with ordinary matter only weakly. I would expect it to be pretty inefficient at stopping both electromagnetic radiation (neutrons carry no charge and have only a magnetic moment) and matter.
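As a rough sanity check on that density figure, here is a quick back-of-the-envelope calculation in Python, assuming a typical neutron star of about 1.4 solar masses and an 11 km radius; these are illustrative textbook values, not figures from the discussion above.

```python
# Back-of-the-envelope check of the "about a billion tons per cm^3" claim,
# using assumed typical neutron-star parameters (not from the text above).
import math

M_SUN = 1.989e30          # kg, solar mass
mass = 1.4 * M_SUN        # kg, typical neutron star mass (assumption)
radius = 11e3             # m, typical neutron star radius (assumption)

volume = (4.0 / 3.0) * math.pi * radius**3      # m^3, treating it as a sphere
density_kg_m3 = mass / volume                   # kg/m^3
density_tonnes_cm3 = density_kg_m3 * 1e-6 / 1e3 # 1 m^3 = 1e6 cm^3, 1 t = 1e3 kg

print(f"mean density: {density_kg_m3:.2e} kg/m^3")
print(f"            = {density_tonnes_cm3:.2e} tonnes/cm^3")
# -> roughly 5e8 tonnes/cm^3, i.e. hundreds of millions of tons per cubic
#    centimetre, which is the same order of magnitude as the "about a
#    billion tons per cm^3" figure quoted above.
```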
Started out as an interview, ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI.
00:44 Re-framing perspectives on AI: the Chinese Room argument, and how consciousness or understanding can arise from billions of seemingly discrete neurons firing. (Should there be a binding problem of understanding, similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (And, by extension, between AGI and artificial understanding?)
05:08 "Aha!" moments, where the penny drops: what's going on when this happens?
07:48 Is there an ideal form of understanding? Coherence and debugging; aha moments.
10:18 Webs of knowledge: contextual understanding.
12:16 Early childhood development: concept formation and navigation.
13:11 The intuitive ability for concept navigation isn't complete. Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn't understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important? Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive form of understanding be?
22:11 If (strong) AI is important, and understanding is required to build it, what sorts of things should we be doing to make sense of understanding? Two approaches: engineering, and copying the brain.
24:34 Is common sense the same thing as understanding? How do they differ?
26:24 Which concepts that we take for granted around the world will, when strong AI arrives, dissolve into illusions and then reveal how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems, and justified true belief. Is knowledge different from understanding, and if so, how?
31:07 A hierarchy: data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding; is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding the mapping of knowledge into a useful framework? Or is it making accurate/novel predictions?
36:00 Is understanding a high-resolution, carbon-copy-like model that accurately reflects true nature, or a mechanical process?
37:04 Does understanding come in gradients or topologies? Are there degrees, or is it just on or off?
38:37 What comes first: understanding or generality?
40:47 Minsky's "Society of Mind".
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism: James Gates and error correction in supersymmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation and the concept of the Artilect: does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny (the transcension hypothesis), therefore civs go tiny: an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide, and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding (without the luxury of peering under the hood)?
01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines: David Chalmers' zombie argument.
01:07:26 Complex enough algorithms: is there a critical point of complexity beyond which general intelligence, or understanding, is likely to emerge?
01:08:11 Revisiting behavioral "Turing" tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? The need for adaptivity: understanding concept boundaries, predicting where they occur, and being able to mine outwards from those boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary is worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full automation while preserving human dignity?
Filmed in the Dandenong Ranges in Victoria, Australia.
In a first-of-its-kind test for planetary defense, NASA’s DART spacecraft is scheduled next week to crash into an asteroid and alter the celestial body’s course.
If all goes according to plan, on September 26th at 7:14 pm Eastern Daylight Time, NASA’s DART spacecraft will meet a fiery end. DART, whose name stands for Double Asteroid Redirection Test, is poised to intentionally crash into an asteroid that, at the time of impact, will be 11 million km from Earth. The goal of the mission is to alter the speed and trajectory of the impacted space boulder. The technology developed for the mission could one day aid in shifting the orbit of an asteroid that—unlike this one—is on a collision course with Earth.
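To put rough numbers on that goal, here is a minimal momentum-conservation sketch in Python. The spacecraft mass, impact speed, and target mass are approximate public figures rather than values from this article, and the momentum-enhancement factor beta (the extra recoil from crater ejecta) is treated as a free parameter.

```python
# Minimal sketch of the kinetic-impactor physics behind a DART-style mission,
# using approximate public figures (assumptions, not mission data):
#   spacecraft mass ~570 kg, impact speed ~6.1 km/s,
#   target (Dimorphos) mass ~4.3e9 kg.
# beta is the momentum-enhancement factor: 1 for a perfectly inelastic
# impact, >1 when ejecta thrown back off the crater adds extra recoil.

def delta_v(m_sc_kg: float, v_impact_ms: float,
            m_target_kg: float, beta: float = 1.0) -> float:
    """Change in the target's speed from conservation of momentum:
    dv = beta * m_spacecraft * v_impact / m_target."""
    return beta * m_sc_kg * v_impact_ms / m_target_kg

for beta in (1.0, 2.0, 4.0):
    dv = delta_v(570.0, 6.1e3, 4.3e9, beta)
    print(f"beta = {beta:.0f}: dv ~ {dv * 1000:.1f} mm/s")
# -> on the order of a millimetre per second. Tiny, but applied years
#    before a predicted impact, such a nudge compounds into a miss.
```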
“Our DART spacecraft is going to impact an asteroid in humanity’s first attempt to change the motion of a natural celestial body,” said Tom Statler, a scientist in NASA’s planetary defense team, in a recent press conference about the mission. “It will be a truly historic moment for the entire world.”