I’d note first that your analogy with chess and engine moves ignores an 
asymmetry: a grandmaster can often sense the “non-human” quality of an 
engine’s play, whereas engines are not equipped to detect distinctly human 
patterns unless explicitly trained for that. Even then, the very subtleties 
grandmasters notice—*intuitive plausibility, certain psychological 
hallmarks—are not easily reduced to a static dataset of “human moves.”* 
That’s why a GM plus an engine can typically spot purely artificial play in 
a way the engine itself cannot reciprocate—particularly over longer 
sequences of moves, though sometimes a single move suffices.

Chess self-play is, by the way, trivial to scale because it operates in a closed 
domain with clear rules. You can’t replicate that level of synthetic data 
generation in, say, urban traffic, nuclear power plants, surgery, or modern 
warfare.  
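To make the self-play point concrete, here is a minimal, purely illustrative sketch (tic-tac-toe standing in for chess; no real engine's pipeline is implied): because the rules fully define the domain, labeled training games can be generated without limit and without any human data.

```python
import random

# Toy closed-domain self-play: the game rules alone are enough to
# generate unlimited labeled training data. Tic-tac-toe stands in
# for chess purely to keep the sketch short.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    """Play one random-vs-random game; return (positions, outcome)."""
    board, positions, player = [" "] * 9, [], "X"
    while True:
        positions.append("".join(board))
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if not moves:
            return positions, "draw"
        board[rng.choice(moves)] = player
        if winner(board):
            return positions, player
        player = "O" if player == "X" else "X"

rng = random.Random(0)
dataset = [self_play_game(rng) for _ in range(1000)]  # scales trivially
print(len(dataset))
```

No analogous loop exists for urban traffic or surgery, where the "rules" are open-ended and a failed episode is not free.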

On self-driving cars, relying on “training” alone to handle 
out-of-distribution anomalies can lead to catastrophic, repetitive failures 
in the face of curbside anomalies of various kinds (unusual geometries, 
rock formations, obstacles not in the training data, etc.). It is frankly 
disturbing to see a vehicle costing x thousand dollars repeatedly slam 
into an obstacle, change direction, only to do so again and again and 
again… 

Humans, even dumb ones like yours truly, by contrast, tend to halt or 
re-evaluate instantly after one near miss. Equating “education” for humans 
and “training data” for AI ignores how real-world safety demands robust 
adaptation to new, possibly singular events. 
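The behavioral difference I mean can be sketched as a hypothetical controller (the names and structure are mine, not any real autonomy stack): halt and escalate after a single unhandled anomaly, rather than retrying the action that just failed.

```python
def drive(perceive, act, max_retries=0):
    """Halt and escalate after the first unhandled anomaly.

    perceive() -> observation ('done' ends the run);
    act(obs) raises RuntimeError on an unmodeled obstacle.
    max_retries=0 mimics the 'one near miss and stop' response,
    as opposed to retrying the same failed action indefinitely.
    """
    failures = 0
    while True:
        obs = perceive()
        if obs == "done":
            return "completed"
        try:
            act(obs)
        except RuntimeError:
            failures += 1
            if failures > max_retries:
                return "halted: escalate to human review"

# Tiny demo: the controller stops after the first obstacle hit.
events = iter(["clear", "obstacle", "obstacle", "done"])

def perceive():
    return next(events)

def act(obs):
    if obs == "obstacle":
        raise RuntimeError("unmodeled obstacle")

print(drive(perceive, act))  # prints "halted: escalate to human review"
```

The point is not the code but the policy: one near miss triggers re-evaluation instead of repetition.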

Now you will again perform one of the following rhetorical strategies to 
obfuscate: 

1. Equate AI with human/Löbian learning through whataboutism or symmetry 
arguments

2. Undermine the uniqueness and gravity of AI limitations by highlighting 
human failures/mistakes, thereby diluting AI risks

3. Cherry-pick examples of dramatic-seeming AI advantages

4. Employ historical allusions to show parallels between AI and scientific 
discoveries, e.g. the references to Planck’s “quantized atom” and the slow, 
iterative acceptance of quantum mechanics cast doubt on the suggestion that 
AI’s ad hoc solutions are a unique liability. The argument is that 
scientists, too, often start with incomplete or approximate theories and 
refine them over time. *While this is valid to a point, it conflates two 
different processes: humans can revise and expand theories with continuing 
insight, whereas most current AI systems cannot autonomously re-architect 
their own approach without extensive retraining or reprogramming.*

As well as other strategies I won’t bother to list.

But this is irrelevant because in the last post, which wasn’t meant for you, 
I prepared a trap regardless, as I will only engage good-faith arguments 
that push the discussion in new directions (I will engage more deeply with 
Brent’s reply for this reason). And, being your usual modest self, you took 
the bait, which ends my conversation with you on this subject for now, as I 
will illustrate in the concluding section of this post. *Even if I 
appreciate your input to this list, your posts hyping AI advances are at 
this point becoming almost all spam/advertising for ideological reasons, as 
I will illustrate.*

The bait was luring you to clarify your stance that “a problem is solved or 
it isn’t” while simultaneously implying that AI failure is more tolerable 
than human failure. *That is self-contradictory: if correctness is 
correctness, then the consequences of AI mistakes should weigh at least as 
heavily as those of human missteps—yet you are clearly content to overlook 
AI’s more rigid blind spots.*

*That’s rigid ideology/personal belief validation masquerading as 
discussion. It does not respect the sincere and open discussions that make 
this list a resource, imho. Employ your usual rhetorical tricks and split 
hairs as you wish in reply. Your horse in this race is revealed. You are 
free to believe in perfect AIs and let them run your monetary affairs, 
surgery, life, etc. I simply don’t care and wish you all the best with 
that.*

On Wednesday, December 25, 2024 at 8:57:46 PM UTC+8 John Clark wrote:

> On Wed, Dec 25, 2024 at 12:06 AM PGC <[email protected]> wrote:
>
> *> You see what I'm getting at. It’s true that from a purely outcome-based 
>> perspective—either a problem is solved, or it isn’t—it can seem irrelevant 
>> whether an AI is “really” reasoning or simply following patterns. This I'll 
>> gladly concede to John's argument. If Einstein’s “real reasoning” and an 
>> AI’s “simulated reasoning” both yield a correct solution, the difference 
>> might appear purely philosophical.*
>
>
> *Thank you.  *
>  
>
>> *> However, when we look at how that solution is derived—whether there’s 
>> a coherent, reusable framework or just a brute-force pattern assembly 
>> within a large but narrowly defined training distribution—we begin to see 
>> distinctions that matter. Achievements like superhuman board-game play and 
>> accurate protein folding often leverage enormous coverage in training or 
>> simulation data.*
>
>
> *In other words an AI, like a human being, needs to be educated. No human 
> being can be introduced to Chess for the very first time and immediately 
> play it at a grandmaster level, it takes a human years of study and it 
> takes an AI hours of study.    *
>
>> * > This can conceal limited adaptability when confronted with radically 
>> unfamiliar inputs, and even if a large model appears to have used 
>> “higher-order abstraction,” it may not have produced a stable or 
>> generalizable theory—only an ad hoc solution  that might fail under new 
>> conditions.*
>
>  
> *The same thing is true for human scientists. When in 1900 Max Planck 
> first introduced the concept that the atom was quantized his idea fit the 
> data for the blackbody radiation curve but it caused all sorts of other 
> problems; for example the electron should constantly lose energy by 
> radiating electromagnetic waves and crash in the nucleus, but that doesn't 
> happen.  Even Planck thought his idea was just a mathematical trick that 
> allowed him to predict what the blackbody radiation curve would look like 
> at a given temperature. Quantum Mechanics wasn't fully fleshed out until 
> 1927 and even today some think it's not entirely complete. Usually before 
> you find a theory that can give you a perfect answer all of the time, you 
> find a theory that can give you an approximate answer most of the time. *  
>      
>
> *> in high-risk domains like brain surgery, we risk encountering 
>> unforeseen circumstances well beyond any training distribution.*
>
>
> *Regardless of if your surgeon is human or robotic if, during your brain 
> surgery, he or it encounters something never seen before in his or its 
> education or previous practice then you're probably toast.  *
>
>  
>
>>  * > AI’s specialized “intelligence” is inefficient compared to a 
>> competent human team*
>
>
> *Compared with humans current AIs are certainly inefficient if you're 
> talking about energy consumption, but not if you're talking about time 
> consumption. No human being could start with zero knowledge of Chess and 
> become a grandmaster of the game after two hours of self study, especially 
> if during those two hours he couldn't talk to other Chess players and had 
> no access to Chess books; but an AI can.  *
>
>  
>
>> *> Other domains off the top in which this robust kind of generalization 
>> is critical are Self-Driving Vehicles in unmapped terrains or extreme 
>> weather:*
>
>
> *Self-Driving cars are already safer than the average human driver 
> in unmapped terrains or extreme weather, but for legal and societal reasons 
> they will not become ubiquitous until they are MUCH safer than even the 
> very safest human driver.  *
>
>  
>
>> * > Another one would be financial trading during market crises: Extreme 
>> market shifts can quickly invalidate learned patterns, and “intuitive” AI 
>> decisions might fail without robust abstraction and lose billions.*
>
>
> *In those extreme circumstances where seconds and even milliseconds become 
> important I would feel far more comfortable with an AI managing my money 
> than a human. *
>
>  
>
>> *> How about nuclear power plant operations? Rare emergency scenarios, if 
>> not seen in training, could lead to catastrophic outcomes*
>
>
> *And that's exactly what caused the Chernobyl disaster and the 3 mile 
> island meltdown, and in both cases the operators of the reactors were human 
> beings. The Fukushima nuclear accident was not caused by a reactor operator 
> error but by the human reactor designers who thought it was a good idea to 
> build a nuclear plant very near the ocean and put the emergency diesel 
> generators in the basement even though it was directly above a major 
> earthquake fault line. *
>
> * John K Clark    See what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>*  
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.