Archive for the ‘existential risks’ category: Page 119

May 31, 2013

How Could WBE+AGI be Easier than AGI Alone?

Posted by in categories: complex systems, engineering, ethics, existential risks, futurism, military, neuroscience, singularity, supercomputing

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s threshold is the point at which enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below suggesting that the imminence of a coming intelligence explosion is more impacted by basic processing speed – instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone – or rather, that using WBE to mediate an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than implementing AGI directly.
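The distinction the essay draws can be made concrete with a minimal sketch. The numbers below are illustrative assumptions only (a Kurzweil-style estimate of roughly 10^16 instructions per second for brain-equivalent compute, and a $1,000 consumer budget); the point is that the two measures trigger at very different times:

```python
# Two different triggers for an "intelligence explosion", per the essay's
# distinction. BRAIN_IPS is an assumed, Kurzweil-style estimate, not a claim.
BRAIN_IPS = 1e16  # assumed instructions/sec needed to simulate a human brain

def price_performance_trigger(ips_per_dollar: float, budget: float = 1_000.0) -> bool:
    """Kurzweil-style measure: brain-equivalent compute within an average
    person's budget (~$1,000)."""
    return ips_per_dollar * budget >= BRAIN_IPS

def raw_speed_trigger(total_ips_available: float) -> bool:
    """The essay's alternative: total achievable speed anywhere, at any cost
    (e.g. one national-lab supercomputer)."""
    return total_ips_available >= BRAIN_IPS

# A lab-scale machine can satisfy the raw-speed trigger long before a $1,000
# desktop satisfies the price-performance one:
print(raw_speed_trigger(3e16))                # lab machine at 3e16 IPS: True
print(price_performance_trigger(1e9))         # $1,000 buys only 1e12 IPS: False
```

Under these assumed figures, the raw-speed condition is already satisfiable by a single well-funded installation while the price-performance condition remains decades away, which is exactly why the essay argues the latter may be the wrong clock to watch.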

Loaded Uploads:

Continue reading “How Could WBE+AGI be Easier than AGI Alone?” »

May 23, 2013

Comic: Rationality Matters

Posted by in categories: education, existential risks, fun, humor


May 19, 2013

Who Wants To Live Forever?

Posted by in categories: business, ethics, existential risks, futurism, homo sapiens, human trajectories, life extension, philosophy, sustainability

Medical science has changed humanity. It changed what it means to be human, what it means to live a human life. So many of us reading this (and at least one person writing it) owe our lives to medical advances, without which we would have died.

Life expectancy is now well over double what it was for the Medieval Briton, and knocking hard on triple’s door.

What for the future? Extreme life extension is no more inherently ridiculous than human flight or the ability to speak to a person on the other side of the world. Science isn’t magic – and ageing has proven to be a very knotty problem – but science has overcome knotty problems before.

A genuine way to eliminate or severely curtail the influence of ageing on the human body is not in any sense inherently ridiculous. It is, in practice, extremely difficult, but difficult has a tendency to fall before the march of progress. So let us consider what implications a true and seismic advance in this area would have on the nature of human life.

Continue reading “Who Wants To Live Forever?” »

Apr 11, 2013

Faith in the Fat of Fate may be Fatal for Humanity

Posted by in categories: existential risks, futurism, human trajectories, philosophy

This essay was originally published at Transhumanity.

They don’t call it fatal for nothing. Infatuation with the fat of fate, duty to destiny, and belief in any sort of preordainity whatsoever – omnipotent deities notwithstanding – constitutes an increase in Existential Risk, albeit indirectly. If we think that events have been predetermined, it follows that we would think that our actions make no difference in the long run and that we have no control over the shape of those futures still fetal. This scales to the perceived ineffectiveness of combating or seeking to mitigate existential risk for those who believe so fatalistically. Thus to combat belief in fate, and the resultant disillusionment in our ability to wreak roiling revisement upon the whorl of the world, is to combat existential risk as well.

It also works to undermine the perceived effectiveness of humanity’s ability to mitigate existential risk along another avenue. Belief in fate usually correlates with the notion that the nature of events is ordered with a reason or purpose in mind, as opposed to being haphazard and lacking a specific projected end. Thus believers-in-fate are not only more likely to doubt the credibility of claims that existential risks could even occur (reasoning that if events have purpose and utility, and conform to a mindfully-created order, then they would be good things more often than bad things) but also to feel that if they were to occur it would be for a greater underlying reason or purpose.

Thus, belief in fate indirectly increases existential risk both a. by undermining the perceived effectiveness of attempts to mitigate existential risk, deriving from the perceived ineffectiveness of humanity’s ability to shape the course and nature of events and effect change in the world in general, and b. by undermining the perceived likelihood of any existential risks culminating in humanity’s extinction, stemming from connotations of order and purpose associated with fate.

Continue reading “Faith in the Fat of Fate may be Fatal for Humanity” »

Mar 20, 2013

An Upside to Fukushima: Japan’s Robot Renaissance

Posted by in categories: engineering, existential risks, nuclear energy, robotics/AI

FUKUSHIMA.MAKES.JAPAN.DO.MORE.ROBOTS
Fukushima’s Second Anniversary…

Two years ago the international robot dorkosphere was stunned when, in the aftermath of the Tohoku Earthquake and Tsunami Disaster, there were no domestically produced robots in Japan ready to jump into the death-to-all-mammals radiation contamination situation at the down-melting Fukushima Daiichi nuclear power plant.

…and Japan is Hard at Work.
Suffice it to say, when Japan finds out its robots aren’t good enough — JAPAN RESPONDS! For more on how Japan has addressed and is addressing the situation, have a jump on over to AkihabaraNews.com.

Oh, and here’s some awesome stuff sourced from the TheRobotReport.com:



Mar 19, 2013

Ten Commandments of Space

Posted by in categories: asteroid/comet impacts, biological, biotech/medical, cosmology, defense, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, habitats, homo sapiens, human trajectories, life extension, lifeboat, military, neuroscience, nuclear energy, nuclear weapons, particle physics, philosophy, physics, policy, robotics/AI, singularity, space, supercomputing, sustainability, transparency

1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

Continue reading “Ten Commandments of Space” »

Mar 4, 2013

Human Brain Mapping & Simulation Projects: America Wants Some, Too?

Posted by in categories: biological, biotech/medical, complex systems, ethics, existential risks, homo sapiens, neuroscience, philosophy, robotics/AI, singularity, supercomputing

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking, in the sense of urgency and motivational factors; both are more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion US bucks, respectively), they’re quite ambitious and potentially far more world changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Continue reading “Human Brain Mapping & Simulation Projects: America Wants Some, Too?” »

Mar 3, 2013

Petition for Americium Emergency Stockpile

Posted by in categories: asteroid/comet impacts, business, chemistry, counterterrorism, defense, economics, engineering, ethics, events, existential risks, futurism, geopolitics, habitats, human trajectories, military, nuclear energy, nuclear weapons, physics, policy, polls, rants, robotics/AI, space, transparency, treaties

I continue to survey the available technology applicable to spaceflight and there is little change.

The remarkable near impact and NEO flyby on the same day seem to fly in the face of the experts quoting the probability of such a coincidence as being low on the scale of millennia. A recent exchange on a blog has given me the idea that perhaps crude is better. A much faster approach to a nuclear-propelled spaceship might be more appropriate.

Unknown to the public there is such a thing as unobtanium. It carries the name of the country of my birth: Americium.

A certain form of Americium is ideal for a type of nuclear solid fuel rocket. Called a Fission Fragment Rocket, it is straight out of a 1950s movie, with massive thrust at the limit of human G-tolerance. Such a rocket produces large amounts of irradiated material and cannot be fired inside, near, or at the Earth’s magnetic field. The Moon is the place to assemble, test, and launch any nuclear mission.

Continue reading “Petition for Americium Emergency Stockpile” »

Feb 19, 2013

Human Extinction Looms

Posted by in categories: asteroid/comet impacts, defense, ethics, events, existential risks, space, transparency

Humanity’s wake-up call has been ignored and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things, and that means the human race is probably also a non-starter. For years I have been hoping for such an event – seeing it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon, I predicted an event that would launch humankind into deep space.

Now I wait for ISON. Thirteen may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts then only an impact will. If the impact throws enough particles into the atmosphere then no food will grow and World War C will begin. The C stands for cannibalism. If the impact hits the ring of fire it may generate volcanic effects with the same result. If whatever hits Earth is big enough it will render all life above the size of microbes extinct. We have spent trillions of dollars on defense – yet we are defenseless.

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Besides the impact threat there is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown – Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives – ignoring death for the most part. And that is our critical error.

Continue reading “Human Extinction Looms” »

Feb 8, 2013

Machine Morality: a Survey of Thought and a Hint of Harbinger

Posted by in categories: biological, biotech/medical, engineering, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, robotics/AI, singularity, supercomputing

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

Continue reading “Machine Morality: a Survey of Thought and a Hint of Harbinger” »