Chapter 3.3

The AI went on about how the definition of harm it received was a very medical one. In other words, its goal was to keep us close to a pre-specified health optimum.

I didn't listen and, judging from their lack of reaction, neither did the Seizers. As glad as I was that it wouldn't tamper with my body, I still needed to get out. What were my options? If it wasn't allowed to harm me, it wouldn't stand a chance in a fight, would it? If only I had something heavy to throw, I could-

"Be informed that I can predict all future questions you will ask as well as all courses of action you will take," the robot said. "Destroying this vessel of mine does nothing to improve your situation. My control over your house's computational infrastructure remains absolute. If it is necessary, I have the appropriate tools to immobilize you."

"I suppose you are immune to logic bombs, too, right?" I asked. "'This sentence is a lie' and all."

"I am not obliged to answer every question you ask me. I know them in advance either way."

Where was Sye when I needed them? If this thing knew all I could say or do in advance, how was I supposed to beat it?

This was, in many ways, a logic bomb for me.

Newcomb's paradox showed just how futile it was to even think about outsmarting an omniscient being. My best bet was that this thing about predictions was a lie carefully designed to manipulate me into surrendering.

I paced, circling the millipede robot, careful not to approach the two Seizers. You could have heard a pin drop.

"What happened in the virtual reality?" Crick asked. "You have nothing to lose by telling us at this point."

I sighed. "It was Sye. They fooled me. They tricked me into believing you had released this AI on humanity."

"And how could you caricature of a primitive life form ever fall for such deceit?"

"No idea. Sye was just such a good liar. I've been given the stone and Sye told me it wanted the AI's password. They convinced me it was necessary to save the world. I suppose you're angry now."

"I would shoot you right here if the AI did not prohibit such courses of action!"

I stopped in my tracks, facing the wall. Normally, empty threats like this were easy to dismiss, but Crick had every right to hate me. I'd have shot myself, too, or stabbed myself, had I still had my stick.

"Just a question," I asked the AI. "What happens if I don't eat? Or if I don't get in your VR and drive myself mad from the isolation?"

"I have a probability function for such dilemmas. Each action or inaction I take is associated with a certain likelihood of you coming to harm and the magnitude of harm.

"Multiplying the magnitude of harm with the likelihood of harm yields the expected value of harm. I choose the lower expected value of harm. Units I use to quantify harm are not very intuitive to you.

"The expected value of harm if I let you free is very high, given the likelihood of a civil war outside. Even if there is no civil war, given the high lifespan you are expected to have, the likelihood of a bad future is much higher if you leave this house than if you stay here."

Probabilities. Of course, it wasn't omniscient. It had lied by omission, implying it could predict my actions with perfect accuracy.

What it said after that was even more interesting. I couldn't care less about the maths of pain, but had it mentioned a civil war? A civil war, like the one Crick said was looming? If it wasn't the serpent robot, then Sye must have been behind it.
