Thursday, April 15, 2010

The view from above

Hale's Moon is not exactly the prettiest of worlds from orbit. It's a largely gray ball of rock with large patches of brown and smaller patches of green. There's virtually no surface water, and what little there is is usually transient. It's dry, dusty, and not especially inviting.

Yes. I do ask myself why I call it home, but always come back with the same answer: the other people who also call it home. That doesn't stop me from wistfully dreaming about a long vacation on the beaches of Surfer's New Paradise or in the forests on Ariel. Both are worlds far more pleasant to see from orbit.

But that wasn't the view I had. While I didn't have an actual office aboard the Orbital to call my own, the single big office going to 'Brina, I'd staked out a personal 'quiet spot' on the center deck in front of the big reinforced observation dome.

The view really was spectacular. The dome was so large that, curled up on a bean bag, it felt like you were floating free in space. It was an ideal spot to let myself drift into the stream-of-consciousness thought process I needed to sort out far, far too much information.

Research into Artificial Intelligence had a long and rocky history. The farther back you looked, the sketchier the details got, but there were fragmentary records that went back even before the Exodus from Earth that Was. Expert Systems, a term coined to differentiate really intuitive assistance functions and user interfaces from truly Sentient systems, were commonplace. My own Nora project in college could pass a Turing test, but wasn't actually conscious. Nora was sapient, after a fashion, but not sentient.

There were philosophical debates laced through the research. What rights would a sentient machine have? Would they be treated the same as organic "people"? Could you own a sentient machine, or would it be slavery? Did Humanity have the right to create independent intelligence? What would happen when the Singularity came and our synthetic children started to design their own descendants? Would Humanity become obsolete? Would we go to war? Would we coexist, or would we merge into something else?

My own personal feelings on the matter weren't hidden. People were people, regardless of whether they were flesh and blood or metal and plastic. It was the spark, the Ghost, that made us what we were. The shell didn't matter. To me, Lily, Krenshar, Blue and Raids were all people. Each a form of artificial intelligence. Artificial life. Individuals in their own right.

Or were they?

Lily often claimed she wasn't really sentient. Her behavior was just the result of a complex program. Neither Krenshar nor Raids had started out as fully self-aware machines, but both had shown degrees of sentience that rivaled any organic I knew. They'd gained the Ghost Lily claimed she didn't possess.

Not everyone shared my view. Many thought war with our artificial children was inevitable. Given the existence of self-replicating combat machines attacking at least two or three known worlds, the view wasn't surprising. They were the same machines that had spawned Raids and had killed more than their share of folk. They were real. They were now. But the attitude was as old as the Exodus.

Someone had once written "Thou shalt not build a machine in the likeness of a Human mind." The origin of the quote was obscure, but the implication was clear. Artificial Intelligence and Machine Sentience were a Bad Idea®. My ideal of peaceful coexistence had been a minority viewpoint for a long time.

Lily's interaction with Ardra had changed the equation for me. While I'd known there were multiple AI research projects spread through the 'Verse, mainly in academia and corporate circles, the revelation of Ardra brought home just how complex the issue was.

Humans didn't get along with each other. We had thousands of years of history to show that. Would machines be any better? If the war machines gained sentience, would their genocidal mission against organic life become the driving force for all AI? Or would they side with Blue, who had his own agenda but wasn't bent on our destruction? What about machines like Krenshar? He'd been unstable since I'd known him, swinging from genocidal tendencies under outside influence to risking his own existence to save members of the colony. Would he side with the malevolent war machines or the benign Blue?

The Singularity might be closer than anyone realized, or it might still be generations off. The only thing I could be sure of was that there was a lot going on behind the scenes. No one seemed to know the whole picture, and the deeper I dove, the more complex the picture became.

From here, feeling like I was floating in open space above Hale's Moon, I could almost . . . almost . . . bring it into focus.
