This gentleman declares himself an Agilist, a follower of one of the modern
software development methodologies, but he seems not to adhere to some of its
fundamental principles, such as:

1. Pair programming: here a copilot serves very well as the second member of the pair
2. Deliver working software frequently: which means spending a large part of
one's time writing code

— Beppe


> On 29 Feb 2024, at 12:32, nexa-requ...@server-nexa.polito.it wrote:
> 
> From: Daniela Tafani <daniela.taf...@unipi.it>
> To: nexa@server-nexa.polito.it
> Subject: [nexa] LMs and AI make software development harder
> Message-ID: <1ff9c3d20cd643eabb1bbe07a054d...@unipi.it>
> Content-Type: text/plain; charset="Windows-1252"
> 
> LLMs and AI make software development harder
> 
> LLMs and AI make software development harder. Wait, what? Isn't the whole 
> point of AI to make writing code easier? Well, yes. But writing code is the 
> easy part of software development. The hard part is understanding the 
> problem, designing the business logic and debugging tough bugs. And that is 
> where AI code assistants like Copilot or ChatGPT make our job harder: they 
> strip away the easy parts of our job, leave us only with the hard parts, and 
> make it harder for new developers to master the craft of software development.
> Coding is the easy part?
> 
> Is coding really that easy? No, not exactly easy - mastering a programming 
> language still takes years of practice. But when looking at software 
> development as a whole, writing code is one of the easier parts, and it is no 
> wonder that ChatGPT and Copilot can write decent code. First, they have been 
> trained on millions of lines of code, and second, code is by its nature very 
> easy for a machine to understand, as programming languages are highly 
> structured languages with limited vocabularies. For an LLM, they are probably 
> much easier to learn than natural language.
> 
>    Programming languages are just very powerful tools that we use to solve 
> problems
> 
> In the end, programming languages are just very powerful tools that we use to 
> solve problems. And the hard part is not learning the tool, but understanding 
> the problem and designing a solution for it. This is instantly obvious, as 
> most software engineering problems could be solved in many different 
> programming languages; which one to pick is a matter of context or even 
> personal preference.
> 
> Another indicator that programming is the easy part is that the more senior 
> a software developer gets, the less time they usually spend writing code. 
> Instead, seniors spend more time understanding the problem, designing the 
> solution, jumping in to debug tough bugs, making design decisions and, of 
> course, mentoring junior team members. While this might not be true for every 
> senior developer, within my software development bubble it is a clear trend.
> The hard parts of software development
> 
> Copilot and other AI assistants are a great help for developers, but they are 
> not flawless. Part of this is natural, as they are trained on existing code 
> without its context, and they may also have picked up some bad habits from 
> the training data. While this might get optimized over time, at the moment it 
> means that developers still have to review the code generated by the AI code 
> assistants. And reviews are hard - especially if one cannot query the author 
> of the code for their intent.
> 
> And even if the code is good enough, it might still introduce flaws into the 
> control logic of a program, miss edge cases or introduce a regression bug 
> when integrated into an existing code base. This means that developers have 
> to debug the code generated by the AI code assistants in case of an error. 
> And debugging is hard - especially for the kinds of problems where it is 
> difficult to recreate the circumstances that caused the bug in the first 
> place.
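A toy illustration of the edge-case problem described above (the function names and the bug are hypothetical, purely for illustration): plausible-looking generated code that works on typical inputs but fails on an empty list, next to a reviewed version that handles the case explicitly.

```python
# Hypothetical "generated" code: looks fine for typical inputs...
def average(values):
    return sum(values) / len(values)  # ...but raises ZeroDivisionError on []

# Reviewed version: the empty-list edge case is handled explicitly.
def safe_average(values, default=0.0):
    return sum(values) / len(values) if values else default

print(safe_average([2, 4, 6]))  # -> 4.0
print(safe_average([]))         # -> 0.0
```

Nothing in the first function looks wrong at review time; only a test (or a production failure) on the empty input exposes the gap.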
> 
> As the generated code heavily depends on the context we give the AI code 
> assistants, we have to be very precise in our descriptions. That in turn 
> means we have to understand the problem very well, which requires domain 
> knowledge and context awareness on the side of the developer. Even if we 
> just focus on the technical part, being aware of the surrounding 
> architecture and the existing code is crucial to get good results.
> 
> Granted, we could ask LLMs like ChatGPT for help with integration into the 
> codebase, or we could just pass them the whole codebase and let them redesign 
> everything. But apart from requiring a lot of input to give enough context, 
> debugging in an unfamiliar codebase is even tougher than debugging code that 
> you wrote yourself.
> 
> And then there is the whole matter of figuring out what exactly our product 
> should do, how it should behave and what it should look like. At the moment 
> this still requires a lot of human smarts, and while AI tools might let us 
> iterate faster on figuring out what we want to build, in the end it is still 
> a human who has to make the decision.
> AI-assisted software development is exhausting
> 
> It seems a given that AI assistants will change our job by automating away 
> writing code and even helping us with some design decisions. It is very 
> convenient that we can ask ChatGPT questions about system design and get 
> reasonable answers. What is left to us is deciding which answer to pick and 
> which prompt to give the LLM to get the results we need. And this is very 
> exhausting - decision fatigue is a thing, and it is very real. Even before AI 
> code assistants, the limiting factor in the speed of delivering software was 
> often the decision-making process of an organization or a team - not writing 
> the code.
> 
>    The limiting factor in delivery speed is decision making, not writing code
> 
> On top of that, current company structures will most likely still hold 
> software developers accountable for the code running in a product, not the 
> AI code assistants that wrote it in the first place. This adds another layer 
> of stress: not only do we need to make more decisions faster, we are also to 
> blame if the AI code assistants make a mistake.
> 
> And if there is a mistake, then the debugging needs to be done, which often 
> requires a lot of context and background knowledge to be efficient. AI tools 
> are of less help there, because they cannot figure out context changes by 
> themselves. They might help us with the easy parts of debugging, like 
> running tests with different variations to narrow down the cause, but 
> finding the prompt for an LLM to generate the fix will still be on us.
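The "narrowing down the cause" step mentioned above can itself be mechanized. A minimal sketch in Python, in the spirit of `git bisect`: the revision names are invented, and the `is_bad` predicate is a hypothetical stand-in for "run the test suite at this revision".

```python
def first_bad(revisions, is_bad):
    # Binary search for the first failing revision.
    # Invariant: everything before lo is good; revisions[hi] is known bad.
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid        # the regression is at mid or earlier
        else:
            lo = mid + 1    # the regression came after mid
    return revisions[lo]

# Toy history: the regression landed in revision "r6".
history = [f"r{i}" for i in range(10)]
print(first_bad(history, lambda r: int(r[1:]) >= 6))  # -> r6
```

With 10 revisions this needs only 4 test runs instead of 10; the hard part left to the human is writing a reliable `is_bad` check and understanding why the offending change breaks things.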
> Are AI tools replacing developers?
> 
> AI assistants might lower the initial hurdle to get into software 
> development, but they will not make it easier to become a good, experienced 
> software developer. Most of the senior developers I know gained the 
> background knowledge and context needed to formulate complex solutions from 
> years of slogging through (bad) code and learning from their mistakes. This 
> might be an inefficient way of learning, but it is very effective in 
> building up the domain knowledge needed for software development. This 
> knowledge is also very hard to teach in a formal way, as books and online 
> tutorials are by nature somewhat generic, and adapting them to real-life 
> situations still requires hands-on experience.
> 
> As I see it, broad usage of AI tools will change the skill distribution of 
> software developers. We might end up with a lot more junior developers who 
> are able to write code - or at least to prompt the LLMs to write the code - 
> but who lack the deep understanding of software development needed to be 
> efficient in decision making. On the other hand, senior developers who have 
> acquired the context and domain knowledge will become fewer and fewer, as 
> the effort to acquire that knowledge grows: AI tools hide away the parts 
> that would enable us to learn, unless the generated code is reviewed in 
> depth, which then raises the question of whether we gain that much 
> efficiency through the tools at all.
> 
> So are AI tools replacing developers? Currently, no: they will transform the 
> job of a developer, but they will not replace them. The question is how we 
> as an industry will make sure that we retain the knowledge and experience 
> that we have gained over the years. It also raises the question of how we 
> handle the human side of software development, as the job may become more 
> boring, because we just feed machines with prompts, and at the same time 
> more stressful, because we have to make more hard decisions faster. Or maybe 
> AI tools are really just a hype and a fad, and nothing will change at all.
> Written on November 3, 2023 
> 
> 
> https://dominikberner.ch/ai-tools-make-our-job-harder/

_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
