A.I.


Nick Bostrom: What would happen if machines surpassed human intellect?
Bostrom: By 2050 we may have a 50/50 chance of achieving human-level A.I.
He says we want an A.I. that is safe and ethical, but it could get beyond our control
Bostrom: Superintelligent machines could present major existential risks to humans
Editor’s note: Nick Bostrom is professor and director of the Future of Humanity Institute at the Oxford Martin School at Oxford University. He is the author of “Superintelligence: Paths, Dangers, Strategies” (OUP). The opinions expressed in this commentary are solely those of the author.
(CNN) — Machines have surpassed humans in physical strength, speed and stamina. What would happen if machines surpassed human intellect as well? The question is not just hypothetical; we need to start taking this possibility seriously.
Most people might scoff at the prospect of machines outsmarting humanity. After all, even though today’s artificial intelligence can beat humans within narrow domains (such as chess or trivia games), machine brains are still extremely rudimentary in general intelligence.
Machines currently lack the flexible learning and reasoning ability that enables an average human to master any of thousands of different occupations, as well as all the tasks of daily life. In particular, while computers are useful accessories to scientists, they are very, very far from doing the interesting parts of the research themselves.

But this could change. We know that evolutionary processes can produce human-level general intelligence, because they have already done so at least once in Earth’s history. How quickly engineers achieve a similar feat is still an open question.
By 2050 we may, according to a recent survey of leading artificial intelligence researchers, have a 50/50 chance of achieving human-level machine intelligence (defined here as “one that can carry out most human professions at least as well as a typical human”).
Even a cursory glance at technological development reveals multiple paths that could lead to human-level machine intelligence in this century. One likely path would be to continue studying the general properties of the human brain to decipher the computational structures it uses to generate intelligent behavior. Another path would be the more mathematical “top-down” approach. And if somehow all the other approaches don’t work, scientists might simply brute-force the evolutionary process on computers.
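To make the last of those paths concrete, the sketch below shows the kind of loop that “brute-forcing the evolutionary process on computers” refers to: a toy genetic algorithm in Python that evolves a bit string toward a fixed target through mutation, crossover and selection. The target, population size and mutation rate are arbitrary choices for illustration, and nothing this simple would yield general intelligence; it only shows the mutate-select-repeat structure that such an approach would have to scale up enormously.

```python
import random

# Toy genetic algorithm: evolve a bit string toward a fixed target.
# The target, population size and mutation rate are arbitrary illustrations.
TARGET = [1] * 20
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.02

def fitness(genome):
    # Score a genome by how many of its bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: the fitter half survives and breeds the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if fitness(population[0]) == len(TARGET):
        print(f"Target matched after {generation} generations.")
        break
```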
Regardless of when and how we get there, the consequences of reaching human-level machine intelligence are profound, because human-level machine intelligence is not the final destination. Machine intelligence would reach a recursive tipping point after which the design and improvement of such intelligence would no longer be in human hands.
The next stop from human-level intelligence, just a short distance farther along the tracks, is machine superintelligence. The train might not even decelerate at Humanville Station: It is likely instead to swoosh right past.
This brings us to what I think may well be the most important task of our time. If there will eventually be an “intelligence explosion,” how exactly can we set up the initial conditions so as to achieve an outcome that is survivable and beneficial to existing persons?
In “Superintelligence: Paths, Dangers, Strategies,” I focus on the dynamics of an intelligence explosion: what will happen if and when we gain the ability to create machine superintelligence? This topic is largely ignored and poorly funded. But we must keep at it: How could we engineer a controlled detonation that would protect human values from being overwritten by the arbitrary values of a misbegotten artificial superintelligence?
The picture that emerges from this work is fascinating and disconcerting. It looks like there are major existential risks associated with the creation of entities of greater-than-human intelligence. A superintelligence wouldn’t even need to start with a physical embodiment to be catastrophically dangerous. Major engineering projects and financial transactions on Earth are mediated by digital communication networks that would be at the mercy of an artificial superintelligence.

Placing an online order for an innocent-looking set of advanced blueprints, or fooling its creators into thinking it is benign, could be an initial step; permanently altering the global biosphere to pursue its preferences could follow.
The control problem—how to engineer a superintelligence to be safe and human-friendly—appears to be very difficult. It should be solvable in principle, but in practice it may not be solved in time for when the solution is needed. The difficulty is compounded by the need to get it right on the first try. An unfriendly superintelligence would not permit a mulligan. Remember HAL from “2001: A Space Odyssey”? Let’s try to avoid that.
If we could solve the technical problem of constructing a motivation system that we can load with some terminal goal of our choosing, a further question remains: Which goal would we give the superintelligent A.I.? Much would hinge on that choice. In some scenarios, the first superintelligence becomes extremely powerful and shapes the entire future according to its preferences.
We want an A.I. that is safe, beneficial and ethical, but we don’t know exactly what that entails. Some may think we have already arrived at full moral enlightenment, but it is far more likely that we still have blind spots. Our predecessors certainly had plenty — in the practice of slavery and human sacrifice, or the condoning of manifold forms of brutality and oppression that would outrage the modern conscience. It would be a grave mistake to think we have reached our moral apogee, and thus lock our present-day ethics into such powerful machines.
In this sense, we have philosophy with a deadline. Our wisdom must precede our technology, and that which we value in life must be carefully articulated—or rather, it must be pointed to with the right mathematics—if it is to be the seed from which our intelligent creations grow.

-This actually does need to be addressed due to human greed and laziness. People will sign off on this if it saves them money and if it makes life easier. Let me tell you, I love my smart phone but am very aware that ‘The Man’ knows my every move because of it and because of my computer. I have just given up the protest because the computer makes my life easier. 

Tell me, what the fuck do you people actually think a sentient machine will do to an inferior being it considers irrelevant? Read up on the process of logical thought taken to extremely literal standards, and you have a machine that would not give a second thought to our extinction!

Artificial life: it will happen. Good or bad?


Scientists create “artificial life” – synthetic DNA that can self-replicate

Scientists create "artificial life" - synthetic DNA that can self-replicateSEXPAND

 

In one of the biggest breakthroughs in recent history, scientists have created a synthetic genome that can self-replicate. So what does this mean? Are we about to become gray goo?

Led by Craig Venter of the J. Craig Venter Institute (JCVI), the team of scientists combined two existing techniques to transplant synthetic DNA into a bacterium. First they chemically synthesized a bacterial genome; then they used well-known nuclear transfer techniques (used in IVF) to transplant that genome into a recipient bacterium. The bacterium then replicated itself, creating a second generation of cells carrying the synthetic DNA. The process is being hailed as revolutionary.

How to make a synthetic genome

Researchers created a synthetic genome by copying an existing one — Mycoplasma mycoides — and transplanting it into Mycoplasma capricolum. How can we be sure the M. mycoides genome is synthetic? When recreating it, the team added a number of non-functional “watermarks” to the genome, making it distinct from the wild version. Once implanted, the M. mycoides genome “booted up” the recipient cells, deleting or disrupting 14 genes. The bacterium went on to function normally, meaning the transplant worked.
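To make the idea of a genome “watermark” concrete, here is a purely illustrative Python sketch of how a short text message could be written into, and read back out of, a DNA sequence. The two-bits-per-base mapping and the sample message are assumptions made up for this example; they are not the actual encoding scheme JCVI used, which was more elaborate.

```python
# Illustrative only: encode a short text watermark as DNA bases and decode it
# back. This is NOT the actual JCVI watermark scheme, just a simple mapping of
# each character's bits onto the four bases A, C, G, T.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode_watermark(text: str) -> str:
    # 8 bits per character, 2 bits per base -> 4 bases per character.
    bits = "".join(f"{ord(ch):08b}" for ch in text)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_watermark(dna: str) -> str:
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

watermark = encode_watermark("HELLO WORLD")   # hypothetical message
print(watermark)                              # prints 4 bases per character
assert decode_watermark(watermark) == "HELLO WORLD"
```

A stretch of DNA like this carries no biological function, but finding it in a sequenced genome shows the sequence came from a synthesizer rather than from a wild organism, which is exactly the role the watermarks play here.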

Scientists create "artificial life" - synthetic DNA that can self-replicateSEXPAND

“This is the first synthetic cell that’s been made, and we call it synthetic because the cell is totally derived from a synthetic chromosome, made with four bottles of chemicals on a chemical synthesizer, starting with information in a computer,” said Venter. “This becomes a very powerful tool for trying to design what we want biology to do. We have a wide range of applications [in mind].”

“If the methods described here can be generalized, design, synthesis, assembly and transplantation of synthetic chromosomes will no longer be a barrier to the progress of synthetic biology,” write the authors in the paper (available free online from Science).

Proof of concept

At present this is a proof of concept, but it has immense potential for the future. The research team at JCVI has been working on this technology for approximately 15 years and now has a number of applications planned: algae that suck up carbon dioxide and excrete hydrocarbons for biofuels; faster vaccine production; water cleaning; and using light energy to create hydrogen gas from water.

As anyone with even a glancing familiarity with sci-fi knows, self-replicating technology could lead to disaster. JCVI has done its due diligence here: all of its engineered creations require nutrients found only in the lab to survive. The team also has the technology to create “suicide genes” that will prevent the synthetics from living outside a controlled environment.

Aware of the ethical and security issues involved, JCVI has also been in talks with the U.S. government since 2003, as well as being reviewed by independent bioethics groups since 1997.

Scientists create "artificial life" - synthetic DNA that can self-replicateSEXPAND

Ethics of synthetic life

So what does this all mean? Beyond the applications I already mentioned, it’s also helping us understand how life works – specifically, how it’s transmitted through DNA. “This is an important step we think, both scientifically and philosophically. It’s certainly changed my views of the definitions of life and how life works,” Venter said.

Nature has compiled a number of opinions from prominent academics on the project. Everyone acknowledges that this is just the first step in what could be a very interesting development.

“We now have an unprecedented opportunity to learn about life. Having complete control over the information in a genome provides a fantastic opportunity to probe the remaining secrets of how it works,” says Mark Bedau of Reed College, Oregon. “A prosthetic genome hastens the day when life forms can be made entirely from non-living materials. As such, it will revitalize perennial questions about the significance of life — what it is, why it is important and what role humans should have in its future.”

Jim Collins of Boston University reminds us that there’s still much left we don’t know:

Frankly, scientists do not know enough about biology to create life. Although the Human Genome Project has expanded the parts list for cells, there is no instruction manual for putting them together to produce a living cell. It is like trying to assemble an operational jumbo jet from its parts list – impossible. Although some of us in synthetic biology may have delusions of grandeur, our goals are much more modest.

There’s a long way to go with this technology, but this advance is incredibly significant, and from it we may see the dawn of a new revolution in molecular biology and genetic engineering.

Press Release | Article in Science

 

A.I.


UPCOMING AAAI EVENTS

July 2013

The Seventh International AAAI Conference on Weblogs and Social Media begins July 8 in Cambridge, Massachusetts, USA.

The Twenty-Seventh AAAI Conference will be held in Bellevue, Washington, July 14–18.

The Twenty-Fifth IAAI Conference will be held in Bellevue, Washington, July 14–18.

The Fourth AAAI Symposium on Educational Advances in Artificial Intelligence will be held in Bellevue, Washington, July 15–16.

October 2013

The Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment will be held October 14–18 in Boston, Massachusetts, USA.

November 2013

The First AAAI Conference on Human Computation and Crowdsourcing will be held November 7–9 in Palm Springs, California, USA.

The AAAI Fall Symposium begins on November 15 near Washington, DC.

March 2014

The AAAI Spring Symposium begins on March 24 in Palo Alto, CA.

RESOURCES & LINKS

• AAAI Is on Facebook!• AAAI Affiliates
• AAAI Press
• AAAI Press Room
• AAAI Fellows
• AI Magazine
• Author Pages
• Awards
• Calendar
• Digital Library
• International AI Site
• Job Bank
• Meetings
• Membership Chapters
• Resources
• Sponsored Journals
• Workshops

The biggest task for the religious will be to explain how artificial life forms will be able to have personalities and appear to have individuality. I know this is a long way off, but they need to realize that it WILL happen. The more pressing issue that threatens them is when a human clone is psychologically tested and deemed to be no different from any other human being. Clones will have ‘souls,’ which will fly against everything that is said in most religious texts. Hopefully, we will be good stewards of this technology. Religious leaders, on the other hand, are going to have to put on their thinking caps and come up with more creative fairy tales to hoodoo the disenchanted beliebers. (Yes, I called them beliebers because they are just as idiotic.)