Hi,

Comments inlined below:
On 22/03/23 7:34, in_pharo_users--- via Pharo-users wrote:
> Offray, and to all others, you are missing the issue. The problem we face is not to measure the 'intelligence' of a system, but its ability to verbally act indistinguishably from a human. This ability is already given, as chatbots are accepted by millions of users, f.i. as user interfaces. (measurement = 'true', right?) ChatGPT has the ability to follow a certain intention, f.i. to convince the user to buy a certain product. For this purpose, chatbots are now being equipped with lifelike portrait pictures, speech input and output systems with lifelike voices, and phone numbers that they can use to make calls or be called. They are fed with all available data on the user, and we know that ALL information about every single internet user is available and is being consolidated when necessary. The chatbots are able to use this information to guide their conversational strategy, as the useful aspects of the user's mindset are extracted from his internet activity. These chatbots are now operated on social network platforms under lifelike names, 'pretending' to be human. These bots act verbally indistinguishably from humans for most social media users, as the most advanced psychotronic technology to manufacture consent. The first goal of such propaganda will naturally be to manufacture consent about humans accepting being manipulated by AI chatbots, right?
I don't think I have missed the point, as we agreed (I think) that chatbots are not intelligent, but only have that appearance. That's why I'm calling "AI" #ApparentIntelligence (in the sense of a look-alike, not the real thing). Of course, something that looks like the real thing without being the real thing can be used for manipulation, as it has been since the first times of gossip, then the printing press and now automation, with the changes in scale/danger that such changes of medium imply.
I don't think that manufactured consent is so easy to achieve, as this very thread shows. What is being automated is manufactured polarization (but we humans can do pretty well on our own at polarization).
> How can this be achieved? Like always in propaganda, the first attempt is to:
> - suppress awareness of the propaganda, then
> - suppress awareness of the problematic aspects of the propaganda content, then
> - reframe the propaganda content as acceptable, then as something to wish for, then
> - achieve collaboration of the propaganda victim with the goals of the propaganda content.
> Interestingly, this is exactly the schema that your post follows, Offray.
On the contrary, my post is advocating for a critical reading of Apparent Intelligence, by reframing the terms and the uncritical techno-utopic / techno-apocalyptic readings/discourses that are spreading rapidly on the wider web, as I think this community has historically shown a different position, beyond/resisting hype and current trends. So I don't see how my post follows, as a blueprint, any of the steps you mention, and I think they will be difficult to locate without specific examples.
> This often takes the form of domain framing, like we see in our conversation: the problem is shifted to the realm of academics - here informatics/computer sciences - and thus delegated to experts exclusively. We saw this in the 9/11 aftermath cover-up. Then, Offray, you established yourself as an expert in color, discussing aspects that have already been introduced by others and including the group's main focus, 'Smalltalk', thus manufacturing consent and establishing yourself as a reliable 'expert', and in reverse trying to hit at me, whom you have identified as an adversary. Then you offered a solution in color to the problem at hand with 'traceable AI', and thus tried to open the possibility of collaboration with AI proponents for the once-critical reader.
Heh, heh. On the contrary, it seems that the one seeing schemes and locating/confronting enemies, with deep plots and tactics, is you. Providing external, credible sources beyond opinion, belonging to an established, falsifiable discursive tradition (i.e. one that you can criticize instead of blindly accept), is a way to enrich discourse/argumentation beyond conspiracy theories. You could also quote your sources instead, which would allow the community to see where our positions are held/sustained, even if we use different domain frames, which is better than claiming no domain or expertise in pursuit of openness. So instead of "these are my opinions, without any external source or reference", pretending there is no expertise or domain framing, we could advocate for openness by welcoming different expertise and argumentation and by making our sources/biases as evident as possible.
> I do not state, Offray, that you are knowingly an agent to promote the NWO AI program. I think you just 'learned' / have been programmed to be a successful academic software developer, because to be successful in academics it is necessary to learn to argue just like that, since the downfall of academic science in the tradition of, let's say, Humboldt. So, I grant that you may be a victim of propaganda yourself, instead of being a secret-service-sponsored agent. You took quite some time to formulate your post, though. You acted to contain the discussion about AI in this vital and important informatics community to technical detail, when it is necessary that academics and community members look beyond the narrow borders of their certifications and shift their thinking to a point of view from which they can see what technology does in the real world.
I offered a viewpoint with sources. In no way was the discussion about Apparent Intelligence contained; on the contrary, I tried to offer arguments from cognition and philosophy, beyond technical details, which have been pretty absent in the general discourse and in the gold rush of techno-utopianism / techno-apocalypse, conspiracy and paranoia. Other sources could be added to enrich and inform the conversation, and they would be very welcome (at least by me and by several members I've known from this community).
Best, Offray