I do not believe this will actually happen, but I do believe it is one of
the ICT industry's wishes: to replace the programming process, which
presents several problems of scale, with something more industrially
manageable.
<https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext>
Viewpoint
The End of Programming
By Matt Welsh
Communications of the ACM, January 2023, Vol. 66 No. 1, Pages 34-35
10.1145/3570220
[...]
*Programming will be obsolete.* I believe the conventional idea of
"writing a program" is headed for extinction, and indeed, for all but
very specialized applications, most software, as we know it, will be
replaced by AI systems that are /trained/ rather than /programmed./ In
situations where one needs a "simple" program (after all, not everything
should require a model of hundreds of billions of parameters running on
a cluster of GPUs), those programs will, themselves, be generated by an
AI rather than coded by hand.
I do not think this idea is crazy. No doubt the earliest pioneers of
computer science, emerging from the (relatively) primitive cave of
electrical engineering, stridently believed that all future computer
scientists would need to command a deep understanding of semiconductors,
binary arithmetic, and microprocessor design to understand software.
Fast-forward to today, and I am willing to bet good money that 99% of
people who are writing software have almost no clue how a CPU actually
works, let alone the physics underlying transistor design. By extension,
I believe the computer scientists of the future will be so far removed
from the classic definitions of "software" that they would be
hard-pressed to reverse a linked list or implement Quicksort. (I am not
sure I remember how to implement Quicksort myself.)
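(For the record, the "classic exercise" is short enough to recall. A minimal Python sketch of Quicksort, purely illustrative and not from the article:)

```python
def quicksort(xs):
    """Sort a list with the classic recursive Quicksort scheme."""
    if len(xs) <= 1:
        return xs  # a list of 0 or 1 elements is already sorted
    pivot = xs[0]
    # partition the remaining elements around the pivot
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    # sort each side recursively and splice the pivot between them
    return quicksort(smaller) + [pivot] + quicksort(larger)
```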
AI coding assistants such as Copilot are only scratching the surface of
what I am describing. It seems totally obvious to me that /of course/
all programs in the future will ultimately be written by AIs, with
humans relegated to, at best, a supervisory role. Anyone who doubts this
prediction need only look at the very rapid progress being made in other
aspects of AI content generation, such as image generation. The
difference in quality and complexity between DALL-E v1 and DALL-E
v2—announced only 15 months later—is staggering. If I have learned
anything over the last few years working in AI, it is that it is /very/
easy to underestimate the power of increasingly large AI models. Things
that seemed like science fiction only a few months ago are rapidly
becoming reality.
So I am not just talking about things like GitHub Copilot replacing
programmers.^1
I am talking about /replacing the entire concept of writing programs
with training models./ In the future, CS students are not going to need
to learn such mundane skills as how to add a node to a binary tree or
code in C++. That kind of education will be antiquated, like teaching
engineering students how to use a slide rule.
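(As a point of comparison, the "mundane skill" in question takes only a few lines today. A minimal sketch of inserting a node into a binary search tree, in Python rather than the C++ Welsh mentions, and not part of the article:)

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None   # subtree holding smaller values
        self.right = None  # subtree holding values >= value

def insert(root, value):
    """Insert value into a binary search tree; return the (possibly new) root."""
    if root is None:
        return Node(value)       # empty spot found: attach a new leaf here
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root
```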
The engineers of the future will, in a few keystrokes, fire up an
instance of a four-quintillion-parameter model that already encodes the
full extent of human knowledge (and then some), ready to be given any
task required of the machine. The bulk of the intellectual work of
getting the machine to do what one wants will be about coming up with
the right examples, the right training data, and the right ways to
evaluate the training process. Suitably powerful models capable of
generalizing via few-shot learning will require only a few good examples
of the task to be performed. Massive, human-curated datasets will no
longer be necessary in most cases, and most people "training" an AI
model will not be running gradient descent loops in PyTorch, or anything
like it. They will be teaching by example, and the machine will do the rest.
[...]
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa