Archive for the ‘information science’ category: Page 322

Oct 5, 2012

Want to Get 70 Billion Copies of Your Book In Print? Print It In DNA

Posted by in categories: biological, biotech/medical, chemistry, futurism, information science, media & arts

I have been meaning to read a book coming out soon called Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. It’s written by Harvard biologist George Church and science writer Ed Regis. Church is doing stunning work on a number of fronts, from creating synthetic microbes to sequencing human genomes, so I am definitely interested in what he has to say. I don’t know how many other people will be, so I have no idea how well the book will do. But in a tour de force of biochemical publishing, he has created 70 billion copies. Instead of paper and ink, or PDFs and pixels, he’s used DNA.

Much as PDFs are built on a digital system of 1s and 0s, DNA is a string of nucleotides, each of which can be one of four different types. Church and his colleagues turned his whole book, illustrations included, into a 5.27-megabit file, which they then translated into a sequence of DNA. They stored the DNA on a chip and then sequenced it to read the text back. The book is broken up into little chunks of DNA, each of which carries a portion of the book itself as well as an address indicating where that portion belongs. They recovered the book with only 10 wrong bits out of 5.27 million. Using standard DNA-copying methods, they duplicated the DNA into 70 billion copies.
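To make the scheme concrete, here is a minimal Python sketch of the addressed-fragment idea. The one-bit-per-base mapping (A or C for a 0, G or T for a 1) and the 96-bit payload with a 19-bit address reflect my reading of the paper’s scheme, but the function names and details below are illustrative assumptions, not Church’s actual pipeline.

```python
import random

def bits_from_bytes(data: bytes):
    """Yield the bits of a byte string, most significant bit first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def encode_fragments(data: bytes, payload_bits: int = 96, address_bits: int = 19):
    """Split a bitstream into addressed DNA fragments.

    Fragments can be synthesized, stored, and sequenced in any order,
    because each one starts with a fixed-width binary address saying
    where its payload belongs in the book.
    """
    bits = list(bits_from_bytes(data))
    fragments = []
    for start in range(0, len(bits), payload_bits):
        index = start // payload_bits
        address = [(index >> s) & 1 for s in range(address_bits - 1, -1, -1)]
        payload = bits[start:start + payload_bits]
        # One bit per base: 0 becomes A or C, 1 becomes G or T; choosing
        # randomly between the two helps avoid long single-letter runs.
        fragments.append("".join(random.choice("AC" if bit == 0 else "GT")
                                 for bit in address + payload))
    return fragments

print(encode_fragments(b"Regenesis")[0])  # one fragment; base choices vary per run
```

Reading the book back is the reverse: sequence every fragment, map A/C to 0 and G/T to 1, sort the chunks by their address bits, and concatenate the payloads.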

Scientists have stored little pieces of information in DNA before, but Church’s book is about 1,000 times bigger. I doubt anyone would buy a DNA edition of Regenesis on Amazon, since they’d need some expensive equipment and a lot of time to translate it into a format our brains can comprehend. But the costs are crashing, and DNA is a far more stable medium than that hard drive on your desk that you’re expecting to die any day now. In fact, Regenesis could endure for centuries in its genetic form. Perhaps librarians of the future will need to get a degree in biology…

Link to Church’s paper

Source

Oct 2, 2012

In Conversation with Albert-László Barabási on Thinking in Network Terms

Posted by in categories: complex systems, information science

One question that has fascinated me in the last two years is: can we ever use data to control systems? Could we go so far as to not only describe, quantify, mathematically formulate, and perhaps predict the behavior of a system, but also use this knowledge to control it: to control a complex system, a social system, an economic system?

We have always lived in a connected world; we were just not so aware of it. We knew, at some level, that we are not independent of our environment, not independent of the people around us, not independent of the many economic and other forces. But for decades we never perceived connectedness as something quantifiable, something we can describe and measure, a process we have ways of quantifying. That has changed drastically in the last decade, at many, many different levels.

Continue reading “Thinking in Network Terms” and watch the hour-long video interview

Aug 19, 2012

Artilects Soon to Come

Posted by in categories: complex systems, counterterrorism, cybercrime/malcode, defense, engineering, ethics, events, evolution, existential risks, futurism, information science, military, neuroscience, supercomputing

Whether it arrives via spintronics or some quantum breakthrough, artificial intelligence, and with it the bizarre prospect of intellects far greater than ours, will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

Aug 13, 2012

The Electric Septic Spintronic Artilect

Posted by in categories: biological, biotech/medical, business, chemistry, climatology, complex systems, counterterrorism, defense, economics, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, homo sapiens, human trajectories, information science, military, neuroscience, nuclear weapons, policy, robotics/AI, scientific freedom, singularity, space, supercomputing, sustainability, transparency

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am not actually trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

May 25, 2012

OpenOffice / LibreOffice & A Warning For Futurists

Posted by in categories: complex systems, futurism, human trajectories, information science, open access, open source

I spend most of my time thinking about software, and occasionally I come across issues that are relevant to futurists. I wrote my book about the future of software in OpenOffice and needed many of its features. It might not be the only writing, spreadsheet, diagramming, and presentation tool in your toolbox, but it is a worthy one. OpenDocument Format (ODF) is the best open standard for these sorts of scenarios, and LibreOffice is currently the premier tool for handling that format. I suspect many readers of Lifeboat have a variant installed but don’t know the details of what is going on.

The OpenOffice situation has been a mess for many years. Sun didn’t foster a community of developers around its work; in fact, it didn’t listen to the community when it was told what to do. So about 18 months ago, after Oracle purchased Sun and made the situation worse, the LibreOffice fork was created, taking most of the best outside developers with it. LibreOffice quickly became the version embraced by the Linux community, as many of those outside developers were funded by the Linux distros themselves. After realizing what a mess it had made, and after watching LibreOffice take off within the free-software community, Oracle decided to fire all 50 of its engineers and hand the trademark and a copy of the code over to IBM / Apache.

Now it would be natural to imagine that all of this should be handed over to LibreOffice, with every interested party joining that effort. But that is not what is happening. There are employees out there whose job is to help Linux who are actually hurting it. You can read more details in a Linux blog article I wrote here. I also post this message as a reminder that working together efficiently is critical to faster progress on complicated things.

Apr 15, 2012

Risk Assessment is Hard (computationally and otherwise)

Posted by in categories: existential risks, information science, policy

How hard is it to assess which risks to mitigate? It turns out to be pretty hard.

Let’s start with a model of risk so simplified as to be completely unrealistic, yet one that still retains a key feature. Suppose that we managed to translate every risk into some single normalized unit of “cost of expected harm”. Let us also suppose that we could pool all of the payments that could be made to avoid risks. A mitigation policy under these simplifications sounds easy: just buy each of the “biggest for your dollar” risk reductions.

Not so fast.

The problem with this is that many risk mitigation measures are discrete. Either you buy the air filter or you don’t. Either your town filters its water a certain way or it doesn’t. Either we have the infrastructure to divert the asteroid or we don’t. When risk mitigation measures become discrete, allocating the costs becomes trickier. Given a budget of 80 “harms” to spend and risks of 50, 40, and 35, greedily buying the 50 exhausts too much of the budget to buy anything else: you avoid only 50 harms, when buying the 40 and the 35 together would have avoided 75, leaving 25 “harms” that you were willing to pay to avoid on the table, as the sketch below illustrates.
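This budget-allocation puzzle is the classic 0/1 knapsack problem, which is NP-hard in general. Here is a small Python illustration of the greedy-versus-optimal gap; the numbers come from the example above, but the code itself is my own sketch, not something from the post.

```python
from itertools import combinations

def best_mitigation(budget, risks):
    """Exhaustive search for the subset of mitigations avoiding the most harm.

    In the simplified model each mitigation's cost equals the harm it
    avoids, so we maximize total harm avoided subject to the budget.
    Brute force is fine for three risks; in general this is the NP-hard
    0/1 knapsack problem.
    """
    best = ()
    for size in range(len(risks) + 1):
        for subset in combinations(risks, size):
            if sum(subset) <= budget and sum(subset) > sum(best):
                best = subset
    return best

budget, risks = 80, [50, 40, 35]
print(best_mitigation(budget, risks))  # (40, 35): 75 harms avoided
# Greedily buying the biggest risk (50) first avoids only 50 harms,
# since the remaining 30 of budget cannot cover the 40 or the 35.
```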

Continue reading “Risk Assessment is Hard (computationally and otherwise)” »

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted by in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with the flexibility of an intelligent living being, but the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily applied to today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the upcoming technology for organizing knowledge on the net known as the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may be inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches draw on computer science, sociology, and current AI research, and promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to real-world concepts and their coherences more autonomously. Actually getting from expert systems to AGI will require approaches that bootstrap self-improving systems and more research on cognition, but it must also involve crucial security considerations. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].
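To give a concrete taste of what “machine-interpretable” means here, the semantic web’s underlying RDF model expresses knowledge as subject–predicate–object triples that a program can query and chain together. The tiny in-memory store below, with an invented vocabulary, is only a sketch of that idea, not any real semantic web API.

```python
# A toy in-memory triple store; the vocabulary is invented for this sketch.
triples = {
    ("GeorgeChurch", "is_a", "Biologist"),
    ("GeorgeChurch", "works_at", "Harvard"),
    ("Regenesis", "written_by", "GeorgeChurch"),
    ("Biologist", "subclass_of", "Scientist"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)]

print(query(subject="GeorgeChurch"))   # everything the store "knows" about Church
print(query(predicate="subclass_of"))  # a machine-usable concept hierarchy
```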

In the recent past we saw new kinds of security challenges: DoS attacks, email and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). To understand the security implications of strong AI, one must first realize that if AGI takes off hard enough, there probably won’t be any human-predictable hardware, software, or interfaces around for long.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating new cells based on the cells, generated by the same rule, observed in the previous step. Many of these rules can be written down in as little as 4 letters (32 bits), yet they generate astounding complexity.
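As a concrete illustration, here is a minimal sketch of one such rule: the elementary cellular automaton known as Rule 110, whose entire transition table fits in 8 bits yet whose behavior is rich enough to have been proven Turing-complete. The grid width, seeding, and text rendering below are arbitrary choices for the example.

```python
# Rule 110: the whole transition rule is the 8-bit number 110, yet the
# automaton it defines is Turing-complete.
RULE = 110

def step(cells):
    """Each new cell is looked up from its 3-cell neighborhood above."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 79 + [1]  # a single live cell at the right edge
for _ in range(40):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```

Even forty rows of output show the irregular, hard-to-predict texture that simple rules like this can produce.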

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Oct 25, 2010

Open Letter to Ray Kurzweil

Posted by in categories: biotech/medical, business, economics, engineering, futurism, human trajectories, information science, open source, robotics/AI

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but it has implications for things like how we can have driverless cars and other amazing things faster. I believe that we could have had all the benefits of the singularity years ago if we had done things like start Wikipedia in 1991 instead of 2001. There was no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists have been terrible for the computer industry and the world, and the greater use of free software has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Continue reading “Open Letter to Ray Kurzweil” »

Jun 5, 2010

Friendly AI: What is it, and how can we foster it?

Posted by in categories: complex systems, ethics, existential risks, futurism, information science, policy, robotics/AI

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Continue reading “Friendly AI: What is it, and how can we foster it?” »

Apr 18, 2010

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

Posted by in categories: biological, biotech/medical, business, complex systems, education, events, existential risks, futurism, geopolitics, human trajectories, information science, media & arts, neuroscience, robotics/AI

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

After the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

Continue reading “Ray Kurzweil to keynote ‘H+ Summit @ Harvard — The Rise Of The Citizen Scientist’” »