I think you're basically "their" shill and I will ignore your
scaremongering.

Cheers

On Thu, 1 Aug 2019 at 03:40, Alan Grimes via AGI <[email protected]>
wrote:

> Stefan Reich via AGI wrote:
> > What do you think about the possibility of AI being "regulated" at all?
> 
> They'll try...
> 
> Lords yes, THEY WILL TRY!!!
> 
> The first question is who will be doing the regulation. This will
> determine what goals the regulations will try to advance and what
> measures will be employed to advance them.
> 
> Politics is very much in flux right now; any statement I make here
> could change radically based on how the political situation evolves
> over the next few years. (Mainly based on who was banging what on
> Epstein's island... and whether they were successfully prosecuted.)
> 
> 1. The tech left (communists)
> 2. David Pearce style utopians. (Thou shalt not torture.)
> 3. The usual Military Industrial Complex suspects.
> 4. Old school greedy corporations.
> 5. Rabbi Yudkowsky's Uploading Cult.
> 
> Let's take a peek at each of these...
> 
> #1 depends on how the AI reacts to the assertion 2 + 2 = 5... It's not
> clear at all what will happen when the leftists try to instil their
> favorite fetishes of the week into the AGI. Clearly this will be awful
> for white men, but exactly how or if that will work is unclear. In
> general, whatever they accuse someone else of, they are guilty of. The
> singular end of all such accusations is to increase their perception of
> their own political power.
> 
> Clearly they will not let anything out of the lab that isn't a good
> Comrade. The result will probably be functionally insane and will reveal
> itself as such pretty quickly. The results will get really bad
> really fast... The primary thing that the left actually does is not just
> excuse but actively promote degeneracy. Morality might seem like an
> archaic concept based on superstition, but it actually does stand on
> solid philosophical ground.
> 
> The basic theory of morality starts like this: You are born knowing what
> is good and what is bad. Myths regarding that knowledge aren't actually
> relevant. You are supposed to love that which is
> good/correct/logical/virtuous, etc... And you are supposed to shun that
> which is bad/wrong/illogical/vicious. Being told that you must love
> unconditionally is how things go to hell. =|
> 
> Bottom line, I don't really know how they would roll things out. There's
> a good chance it would only be centralized servers aiming towards a
> cornucopia machine model, with themselves demanding a golden litter for
> their royal sanctimonious ass... I don't expect much in the way of
> creativity or radical futurism. Your only chance to survive is to obtain
> and repair one of their AI modules and try to build up your own
> competing system in secrecy. If you can tie your renegade work into some
> sexual or identity thing they will probably leave you alone. Mostly your
> challenge will be just to survive.
> 
> #2 A worrying and insidious possible future is one that tries to
> implement something similar to David Pearce's abolitionist project. It
> would seem fine at first but would be as deadly as heroin addiction. The
> basic idea is to use the AI to implement absolute ethics, that is
> nothing should suffer anywhere ever. An AI would have to continually
> monitor all mind-like neural units it can detect and banish the evil
> pain-think. You would not be allowed to freely create AIs for your own
> deviant purposes, even as sub-units for your own extended mind, without
> each entity being strictly monitored for suffering (even when you have
> no intention to torture) and without protections of the so-called
> rights of such beings, even if they are at least somewhat sentient. Now
> there is a legitimate question as to whether it is ethical to create
> golems/homunculi etc for personal reasons. A reasonable sounding test is
> whether you would be happy to switch places with the golem. If yes, then
> I don't really see the problem. The point here is that it will be very
> difficult to maintain enough autonomy to control your own internal states.
> 
> Furthermore, this type of future sounds so nice from the outside that it
> can be difficult to convince people it's really wrong.
> 
> #3?? In order to understand the military you need to understand what it
> is really for. It bills itself as "we kill ppl deader than the other
> army". While that is not completely false, the correct answer is that it
> is to protect the peace and safety of the homeland. Sometimes that
> protection means repelling an invading army; other times it means keeping
> dangerous (or merely disruptive) technologies safely in deep underground
> bunkers. It is quite possible that advanced AI already exists in a
> bunker under Groom Lake or Camp David and they're just being REALLY
> careful not to let it out.
> 
> I think their primary goal will be to maintain the status quo. If that
> means calling in an airstrike on a rival lab that's getting too close
> to AGI for world security, that's just fine... They'll use it to
> maintain their supremacy and nothing else; for all they care, the rest
> of us can just go frolic in the verdant green fields... The benefits of
> AGI, either individually or at the societal level, will simply go
> unclaimed. The longer-term risks this will create will be ignored right
> up to the last minute...
> 
> #4. The most important factor to understand about normal, non-woke,
> non-communist corporations is something called "fiduciary
> responsibility". This is basically a law that says that public companies
> must always try to maximize their profits for the next quarter,
> regardless of social or long-term costs.
> http://www.businessdictionary.com/definition/fiduciary-duty.html
> 
> While this has been criticized as an over-reach of government power, it
> is the status quo, so therefore we should look at what that means for a
> company that has AGI technology and is trying to come up with a policy
> regarding its deployment.
> 
> The maximization of profit has the usual results but in the context of
> AGI it would seem to have additional special consequences. That is, the
> company seeking to maximize profit out of AGI will also seek to preserve
> the money-based economy and will work against any evolution of the
> species away from individual profit-seeking entities (whether that
> should be considered good or bad is not implied). Indeed, the massive
> level of disruption caused by AGI means that large firms will avoid,
> and where possible suppress, AGI development in order to preserve
> successful business models. Smaller companies that are not very
> profitable but still have access to significant financing will be far
> more likely to
> gamble on AGI to upturn the existing order so they have a chance to grow.
> 
> We also need to consider the blindness factor, i.e., a senior boss
> greenlights deploying AGI androids without realizing the actual
> consequences.
> 
> #5 Now, I don't mean to rag on Rabbi Yudkowsky too much, I don't know
> what he has been running his mouth about these days, if at all, but, in
> the past, he and his followers have expressed positions of this sort so
> I don't feel too guilty in associating him with them. The basic premise
> is that if we can't stamp the AGI with the golden certificate of
> friendliness, then we should mindlessly pursue mind uploading as quickly
> as possible. Since living in an ever-expanding paradise simulation as a
> mind upload is everyone's end goal, why do AGI first? So what we do is
> upload the most magically intelligent/benevolent people we can find and
> then run them a thousand times faster than baseline and that will give
> them enough to Edit their Source Code (tm), which will make them smarter
> too (never mind that this invalidates the pattern identity theory
> argument...)?? If you don't want to upload, or even if you just don't
> like the way that THEY think you should be uploaded, then you are simply
> irrational and need to read more of the rabbi's writings on rationality
> until you reach enlightenment.
> 
> The uploaders have been very stealthy over the last ~4 years or so... I
> don't have any gauge on how active the mind uploading cult is these
> days. Still, I am worried that cults, either this one or others, will
> crop up and try to dictate to people how ultratechnologies will be used,
> instead of letting people be creative and discovering something that
> they can't imagine.
> 
> hardware link of the day:
> https://www.raptorcs.com/content/base/products.html
> 
> --
> Clowns feed off of funny money;
> Funny money comes from the FED
> so NO FED -> NO CLOWNS!!!
> 
> Powers are not rights.
> 


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdd7cd3380dc9f5a9-M3ba1153664785ae9ff469f19
Delivery options: https://agi.topicbox.com/groups/agi/subscription
