What started out as an interview ended up as a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI.
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Aha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — aha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copying the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 Which concepts that we take for granted around the world will, when strong AI comes about, dissolve into illusions and then show us how they actually work under the hood?
27:40 Compression and understanding (see the short sketch just after the timestamps).
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intelligence — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think of pulp rehashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy-like model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in supersymmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation (written out just after the timestamps), and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers’ zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?
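On the compression-and-understanding thread (27:40), here is a minimal sketch of the familiar compression-as-understanding framing; this is an illustration, not something worked through in the discussion itself. A compressor that finds structure in its input can encode it in fewer bits, so compression ratio serves as a crude proxy for how much regularity has been ‘understood’.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; lower means more structure was found."""
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"go stone go stone " * 100   # highly regular input
noise = os.urandom(len(structured))        # incompressible input of the same length

# The compressor "understands" the repeating pattern, but finds nothing in the noise.
print(f"structured: {compression_ratio(structured):.3f}")  # well below 1.0
print(f"noise:      {compression_ratio(noise):.3f}")       # at or slightly above 1.0
```

The inputs here are arbitrary; the point is only that finding regularity and compressing well are two faces of the same coin, which is the intuition behind compression-based tests of machine understanding such as the Hutter Prize.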
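And for 52:35: the discussion names the Drake equation but doesn’t write it out, so for reference, its standard form estimates N, the number of detectable civilisations in our galaxy:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

where R_* is the average rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially life-supporting planets per star with planets, f_l the fraction of those on which life appears, f_i the fraction of those that develop intelligence, f_c the fraction of those that release detectable signals, and L the length of time such signals are released.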
Filmed in the Dandenong Ranges in Victoria, Australia.
Many thanks for watching!