DANGEROUS FUTURE: Artificial intelligence found to take on the mind of its creator


Recognizing the Pandora’s box of uncertainty surrounding the controversial technology, experts familiar with the progression of artificial intelligence (A.I.) computing, and with where it is heading, have some serious concerns. One of them is the very real possibility that A.I. computers will one day take on the minds of their creators and programmers, for better or for worse.

New research out of Princeton University suggests that, given what we already understand about how it “learns” and functions, A.I. technology is looking more and more like the prototype for Frankenstein’s monster. As they are fed inputs and information about how to act in ways similar to humans, giving them the capability to perform human-like functions without humans present, A.I. computers are learning how to become human.

Along with a team of faculty from the Center for Information Technology Policy (C.I.T.P.) at Princeton, postdoctoral research associate and C.I.T.P. fellow Aylin Caliskan argues that more focus is needed to understand how A.I. technology has the potential to adopt some very undesirable personality traits, among them the capability of unleashing chaos that, as it propagates, becomes increasingly difficult to contain.

Published in the journal Science, her paper, entitled “Semantics derived automatically from language corpora contain human-like biases,” addresses these and other issues with A.I. technology that many of those pushing its development seem to be ignoring. These A.I. learning machines are designed to pick up whatever they are exposed to, which includes anything from the good to the bad to the ugly.

“Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording,” explains Science Daily. “These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.”

Princeton worries that A.I. technology might learn traditional gender roles – isn’t the prospect of A.I. robots becoming KILLING MACHINES a little more concerning?

To perform their research, the Princeton team used an experimental program developed at Stanford University that functions much like the Implicit Association Test (I.A.T.), a social psychology tool developed at the University of Washington in the late 1990s. The purpose of that test is to evaluate human biases based on a series of questions – for example, whether words like “rose” and “daisy” conjure up pleasant imagery, and conversely, whether words like “ant” and “moth” conjure up unpleasant imagery.

The goal of the I.A.T. is to identify how humans respond to certain words or phrases and, more importantly, what they associate with those words or phrases. Using the Stanford tool, which applies similar evaluative techniques to computers, the Princeton team analyzed a huge mass of internet content containing 840 billion words to identify areas where A.I. robots might pick up and learn “bad” things, or develop biased “opinions.”
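To give a rough sense of how this kind of association test works on word vectors, here is a minimal Python sketch of a differential association score in the spirit of the study. The vectors below are random placeholders used purely for illustration; the actual research computed scores like this over pretrained word embeddings built from that 840-billion-word corpus, and the function names here are hypothetical rather than taken from the researchers’ code.

# Minimal sketch of an association test over word vectors (illustrative only).
# A real experiment would load pretrained embeddings; the toy vectors below
# merely stand in for words like "rose", "daisy", "ant" and "moth".
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, pleasant, unpleasant):
    # Mean similarity to the "pleasant" words minus mean similarity
    # to the "unpleasant" words.
    return (np.mean([cosine(w, a) for a in pleasant])
            - np.mean([cosine(w, b) for b in unpleasant]))

def differential_association(targets_x, targets_y, pleasant, unpleasant):
    # How much more strongly one target set (e.g. flowers) leans toward
    # "pleasant" than the other target set (e.g. insects) does.
    return (np.mean([association(x, pleasant, unpleasant) for x in targets_x])
            - np.mean([association(y, pleasant, unpleasant) for y in targets_y]))

# Toy example with random stand-in vectors.
rng = np.random.default_rng(0)
make_vec = lambda: rng.normal(size=50)
flowers, insects = [make_vec(), make_vec()], [make_vec(), make_vec()]
pleasant, unpleasant = [make_vec(), make_vec()], [make_vec(), make_vec()]
print(differential_association(flowers, insects, pleasant, unpleasant))

A clearly positive score would mean the first target set (the flowers) sits closer to the pleasant words than the second set (the insects) does; measured over real embeddings rather than these toy vectors, that is the sort of pattern the Princeton team reported.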

What constitutes “bad” or “biased” opinions, though, is subjective in and of itself. In this case, the Princeton researchers seem less concerned about A.I. robots learning how to kill people for sport than about them potentially learning traditional gender roles, such as the husband being the provider and the wife being the rearer of children.

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” says Hanna Wallach, a senior researcher at Microsoft Research in New York City, who was not involved with the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Sources for this article include:

ScienceDaily.com

Collective-Evolution.com


