On Sun, Jun 23, 2024, 4:52 PM John Rose <johnr...@polyplexic.com> wrote:

>
> This type of technology could eventually enable a simultaneous
> multi-stream multi-consciousness:
>
> https://x.com/i/status/1804708780208165360
>
> It is imperative to develop a test for consciousness.
>

Yes. We distinguish conscious humans from unconscious ones by the ability
to respond to input, form memories, and experience pleasure and pain.
Animals are clearly conscious by the first two criteria, but so are all
the apps on my phone. We know that humans meet the third because we can
ask them whether something hurts. We can test whether animals experience
reward and punishment by whether we can train them by reinforcement: if
an animal does X and you reward it with food, it will do more of X; if
you give it an electric shock, it will do less of X. By this test, birds,
fish, octopuses, and lobsters feel pain, but insects mostly do not.
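
A minimal sketch of that operational test (illustrative only: the action
"X", the initial probability, and the learning rate are made-up values,
not a claim about how any real animal learns):

#include <iostream>

// Toy version of the reinforcement test described above: an agent keeps a
// probability p of doing action X; a food reward nudges p up, an electric
// shock nudges it down.
int main() {
    double p = 0.5;              // initial probability of doing X
    const double rate = 0.1;     // arbitrary learning rate

    auto reinforce = [&](double signal) {  // +1 = food reward, -1 = shock
        p += rate * signal;
        if (p < 0) p = 0;        // keep p a valid probability
        if (p > 1) p = 1;
    };

    reinforce(+1);               // feed the animal after it does X
    reinforce(+1);
    std::cout << "p(X) after two rewards: " << p << "\n";  // goes up
    reinforce(-1);               // shock it after it does X
    std::cout << "p(X) after one shock:   " << p << "\n";  // goes back down
}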

>
> If qualia are complex events, that would be a starting point: qualia
> split into two things, impulse and event, the event as a symbol emission
> and the stream of symbols analyzed for generative "fake" data. It may
> not be a binary test; it may be a scale like a thermometer, a
> Zombmometer, depending on the quality of the simulated p-zombie
> craftsmanship.
>
>
> https://www.researchgate.net/publication/361940578_Consciousness_as_Complex_Event_Towards_a_New_Physicalism
>

I just read the introduction, but I agree with what I think is the
premise: that we can measure the magnitude (but not the sign) of a
reinforcement signal by the number of bits needed to describe the state
change, i.e. the length of the shortest program that outputs the trained
state given the untrained state as input. This agrees with my intuition
that a strong signal has more effect than a weak one, that repetition
counts, and that large-brained animals with large memory capacities are
more conscious than small ones. We can't measure conditional Kolmogorov
complexity directly, but we can search for upper bounds, for example with
a standard compressor, as sketched below.
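
One crude way to get such an upper bound (a sketch only: zlib stands in
for the ideal compressor, and the two byte strings are placeholders for
the memory state before and after the signal) is to compress the
untrained state alone, then the untrained state followed by the trained
state, and take the difference in compressed lengths as an estimate of
the bits needed to describe the change:

#include <iostream>
#include <string>
#include <vector>
#include <zlib.h>   // link with -lz

// Compressed size in bytes, used as a stand-in for shortest program length.
static long zsize(const std::string& s) {
    uLongf destLen = compressBound(s.size());
    std::vector<Bytef> dest(destLen);
    compress(dest.data(), &destLen,
             reinterpret_cast<const Bytef*>(s.data()), s.size());
    return static_cast<long>(destLen);
}

int main() {
    // Placeholder byte strings standing in for the memory state before and
    // after the reinforcement signal.
    std::string untrained = "010101010101010101010101";
    std::string trained   = "010101010101000000000000";

    // Crude upper bound on K(trained | untrained): how much longer the pair
    // compresses than the untrained state alone.
    long bound = zsize(untrained + trained) - zsize(untrained);
    std::cout << "upper bound on state change: " << 8 * bound << " bits\n";
}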

By this test, reinforcement learning algorithms are conscious. Consider a
simple program that outputs a sequence of alternating bits 010101... until
it receives a signal at time t. After that it outputs all zero bits. In
code:

// t = time step when the signal arrives; needs <iostream>, namespace std
for (int i = 0;; i++) cout << ((i < t) && (i % 2));

If t is odd, then it is a positive reinforcement signal that rewards the
last output bit, 0. If t is even, then it is a negative signal that
penalizes the last output bit, 1. In either case the magnitude of the
signal is about 1 bit. Since humans have 10^9 bits of long term memory,
this program is about one billionth as conscious as a human.

I don't have a problem with this definition so much as with the assumed
moral obligation not to harm conscious agents. Is there really such a
rule, and what do you mean by harm? If a state of maximum utility is
static, because any thought or perception would move it to a different
state and therefore be painful, then how is this different from death?

The argument comes from effective altruism. You know, the ones who try to
rationalize ethics and always pull the lever in the trolley problem. What
is wrong with doing the most good possible, you or SBF might ask? To find
out, I went on the Facebook EA forum and asked a simple question: does
life have net positive or negative utility? Should we strive for as much
conscious life as possible, seeding the galaxy with von Neumann probes, or
seek the extermination of all life, so that carnivores could never again
evolve? Should we become vegan, or should we raise as many farm animals as
possible because they have objectively better lives than wild animals,
being well fed and protected from predators and disease? Or does it even
matter? I could not get a consistent answer.

So yes, blobs of lab-grown neurons are conscious in the same way that LLMs
are conscious. That doesn't mean we can't do what we want with them. Ethics
is not rational. It is whatever evolution, culture, and your parents
programmed you to feel good about doing.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Med8706f3e05447bcb2817ad4
Delivery options: https://agi.topicbox.com/groups/agi/subscription
