Archive for the ‘existential risks’ category: Page 25

Jun 5, 2023

The rise of AI: ‘AI doomsday’ or the best thing since sliced bread?

Posted in categories: existential risks, robotics/AI, security

A raft of industry experts have given their views on the likely impact of artificial intelligence on humanity. The responses are, unsurprisingly, mixed.

The Guardian has published an interesting article on the potential socioeconomic and political impact of the ever-increasing rollout of artificial intelligence (AI) on society. When various experts in the field were asked, the responses were, not surprisingly, a mixed bag of doom, gloom, and hope.

Continue reading “The rise of AI: ‘AI doomsday’ or the best thing since sliced bread?” »

Jun 1, 2023

Terrifying New Use Of AI Brings Humanity One Step Closer To Extinction

Posted in categories: existential risks, robotics/AI


May 31, 2023

Geneticists discover hidden ‘whole genome duplication’ that may explain why some species survived mass extinctions

Posted in categories: biotech/medical, evolution, existential risks, genetics

Geneticists have unearthed a major event in the ancient history of sturgeons and paddlefish that has significant implications for the way we understand evolution. They have pinpointed a previously hidden “whole genome duplication” (WGD) in the common ancestor of these species, which seemingly opened the door to genetic variations that may have conferred an advantage around the time of a major mass extinction some 200 million years ago.

The big-picture finding suggests that many more shared WGDs in other species may have been overlooked, occurring before periods of extreme environmental upheaval throughout Earth’s tumultuous history.

The research, led by Professor Aoife McLysaght and Dr. Anthony Redmond from Trinity College Dublin’s School of Genetics and Microbiology, has just been published in Nature Communications.

May 30, 2023

AI could cause human extinction, say tech leaders

Posted in categories: existential risks, robotics/AI

As apocalyptic warnings go, today is right up there. Some of the world’s most influential tech geniuses and entrepreneurs say AI risks the extinction of humanity.

The ball has been lobbed firmly into the court of global leaders and lawmakers, and the question is: will they have any idea what to do about it?

May 30, 2023

Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement

Posted in categories: biotech/medical, existential risks, robotics/AI

It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

A group of top AI researchers, engineers, and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.

The 22-word statement, kept deliberately short to make it as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Continue reading “Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement” »

May 30, 2023

We Are (Probably) Safe From Asteroids For 1,000 Years, Say Scientists

Posted in categories: asteroid/comet impacts, existential risks

When will an asteroid hit Earth and wipe us out? Not for at least 1,000 years, according to a team of astronomers. Probably.

Either way, you should get to know an asteroid called 7482 (1994 PC1), the only one known whose orbital path will consistently cross Earth’s over the next millennium—and which thus has the largest probability of a “deep close encounter” with us, specifically in 502 years. Possibly.

Published on a preprint archive and accepted for publication in The Astronomical Journal, the paper states that astronomers have found almost all of the kilometer-sized asteroids. There are a little under 1,000 of them.

May 24, 2023

Whole Brain Emulation

Posted in categories: existential risks, mapping, neuroscience, robotics/AI

I had an amazing experience at the Foresight Institute’s Whole-Brain Emulation (WBE) Workshop at a venue near Oxford! For more information and a list of participants, see: https://foresight.org/whole-brain-emulation-workshop-2023/. I had the opportunity to work within a group of some of the most brilliant, ambitious, and visionary people I’ve ever encountered on the quest for recreating the human brain in a computer. We also discussed in depth the existential risks of upcoming artificial superintelligence and how to mitigate these risks, perhaps with the aid of WBE.

My subgroup focused on exploring the challenge of human connectomics (mapping all of the neurons and synapses in the brain).
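To give a sense of why connectomics is such a challenge, here is a toy back-of-envelope calculation in Python. The neuron and synapse counts are commonly cited round numbers, and the bytes-per-synapse figure is my own illustrative assumption; none of these values come from the workshop itself.

```python
# Rough scale of a complete human connectome map.
# ~8.6e10 neurons and ~7,000 synapses per neuron are commonly cited
# round numbers; 8 bytes per synapse (source id, target id, weight,
# packed) is an assumed, illustrative storage cost.

NEURONS = 8.6e10
SYNAPSES_PER_NEURON = 7_000
BYTES_PER_SYNAPSE = 8  # assumption for this sketch

synapses = NEURONS * SYNAPSES_PER_NEURON
storage_pb = synapses * BYTES_PER_SYNAPSE / 1e15  # convert bytes to petabytes

print(f"~{synapses:.1e} synapses, ~{storage_pb:.1f} PB just to list them")
```

Even under these generous simplifications, merely enumerating the synapses lands in the petabyte range, before any imaging, reconstruction, or simulation data is considered.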


WBE is a potential technology for generating software intelligence that is human-aligned simply by being based directly on human brains. Past discussions have generally assumed a fairly long timeline to WBE, while past AGI timelines carried broad uncertainty. There were also concerns that the neuroscience behind WBE might boost AGI capability development without helping safety, although no consensus developed. Recently, many people have updated their AGI timelines toward earlier development, raising safety concerns. That has led some to consider whether WBE development could be significantly sped up, re-ordering the arrival of these technologies so that aligned software intelligence exists before unaligned AGI, thereby lessening the risk.

May 23, 2023

Artificial Intelligence Explosion: How AI May Cause The End of The World??

Posted in categories: existential risks, robotics/AI

Artificial intelligence is a superior lifeform that humans are creating, and many AI researchers have outlined various scenarios in which this technology can pose an existential risk to humanity that could result in the literal end of the world.

Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
AI Marketplace: https://taimine.com/

Continue reading “Artificial Intelligence Explosion: How AI May Cause The End of The World??” »

May 13, 2023

Advanced Life Should Have Already Peaked Billions of Years Ago

Posted in categories: alien life, existential risks, information science

Did humanity miss the party? Are SETI, the Drake Equation, and the Fermi Paradox all just artifacts of our ignorance about Advanced Life in the Universe? And if we are wrong, how would we know?

A new study focusing on black holes and their powerful effect on star formation suggests that we, as advanced life, might be relics from a bygone age in the Universe.

Universe Today readers are familiar with SETI, the Drake Equation, and the Fermi Paradox. All three are different ways that humanity grapples with its situation. They’re all related to the Great Question: Are We Alone? We ask these questions as if humanity woke up on this planet, looked around the neighbourhood, and wondered where everyone else was. Which is kind of what has happened.
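For readers less familiar with it, the Drake Equation mentioned above is a simple product of factors, N = R* · fp · ne · fl · fi · fc · L. The Python sketch below encodes that formula; the example parameter values are illustrative placeholders only, not estimates endorsed by the article or the study it covers.

```python
# Toy implementation of the Drake Equation:
#   N = R* * fp * ne * fl * fi * fc * L
# where N is the number of detectable communicating civilizations
# in the galaxy. All inputs below are illustrative, not real estimates.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Multiply the seven Drake Equation factors to estimate N."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Example: 1 star formed per year, half with planets, 2 habitable
# planets each, life on 10%, intelligence on 1%, detectable
# communication on 10%, each civilization lasting 10,000 years.
n = drake(1.0, 0.5, 2.0, 0.1, 0.01, 0.1, 10_000)
print(n)  # roughly 1 civilization under these toy inputs
```

The point of the equation is less the final number than how sensitive N is to the factors we know least about, which is exactly the uncertainty the article is probing.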

Apr 30, 2023

The ‘Don’t Look Up’ Thinking That Could Doom Us With AI

Posted in categories: asteroid/comet impacts, existential risks, robotics/AI

Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least a 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.

Sadly, I now feel that we’re living the movie Don’t Look Up for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we care about mammoths. A recent survey showed that half of AI researchers give AI at least a 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

Continue reading “The ‘Don’t Look Up’ Thinking That Could Doom Us With AI” »

Page 25 of 150