On Thursday, 14 January 2021 at 10:28:13 UTC, Basile B. wrote:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
I've always heard programmers complain about Garbage Collector GC. But I never understood why they complain. What's bad about GC?

Semi serious answer:

In the domain of hobbyism and small companies, programmers who work with statically typed languages all believe that they are superheroes in the domain of memory management. When they see "GC" they feel they are being treated as 2nd grade students ^^

It's basically snobbism.

Hi Basile,

My experience:

In the 90s I worked with Pascal, C and C++ with rudimentary memory management: there was basically no difference between working with memory and working with files in terms of life-cycle management: you had to alloc/free memory just as you had to open/close files. The secret to "stability" was a set of conventions determining who was responsible for a given resource handle or memory pointer. I developed some ERP/CRMs, some multimedia products and some industrial applications (real-time ones).

At the end of the 90s I began to work with VB and the COM model (which uses reference counting), and I discovered that the best way to manage memory (avoiding deadlocks) was to treat objects as "external" unmanaged resources: the VB6 "WITH" statement was key to applying ARM techniques (similar to the later "using" in C#).

And then GC arrived with C#, Java and Scala. I have found GC good enough for all the applications and services I have developed over the last 20 years, because these languages (and their frameworks and base libraries) never crossed certain limits: they always kept managed and unmanaged resources separate: the developer is responsible for unmanaged resources, and memory is managed by the GC. The language itself gives you good tooling for ARM (like "using" in C#, "try-with-resources" in Java, ...); see the sketch below.
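A minimal D sketch of that split, since we are on a D forum (the file path and function name are just illustrative): the file handle is released deterministically, while the GC keeps ownership of the memory.

```d
import std.stdio;

void appendLog(string path, string msg)
{
    // The file handle is an unmanaged OS resource: close it
    // deterministically, the same way C#'s "using" or Java's
    // try-with-resources would.
    auto log = File(path, "a");
    scope(exit) log.close();

    // The string/array memory, on the other hand, is left to the GC.
    log.writeln(msg);
}
```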

Finally, the last actors arrived on the scene: mainly JavaScript and its derivatives (when working in a browser context), where the developer is abstracted away from how memory and resources are really managed (I can remember critical bugs in Chrome, like Image object memory leaks, caused by this "abstraction").

GC introduced a "productive" way of working that removed the old memory problems of large-scale projects (and, in some scenarios, of other kinds of resources too), but, as developers/architects, we have the responsibility to recognize the limits of each technique and when it fits our needs.

After all, my opinion is that if I had to develop something like a real-time app (industrial/medical/aeronautics/...) or a game where a large number of objects must be mutated ~30 times per second, the GC's "unpredictable" or "large" time cost would be reason enough to stop using it. There are other reasons too (like "efficient" memory management when we need to handle large amounts of memory or run in memory-constrained environments); see the sketch below.
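Just to illustrate what I mean by the "unpredictable" cost: as far as I know, one common mitigation in D is to switch off automatic collections during the hot loop and collect only at moments you choose (the frame counts here are made up):

```d
import core.memory : GC;

void frameLoop()
{
    GC.disable();              // no automatic collections mid-frame
    scope(exit) GC.enable();

    foreach (frame; 0 .. 10_000)
    {
        // ... mutate game objects ~30 times per second ...

        // collect only at a point we choose, e.g. between levels
        if (frame % 300 == 299)
            GC.collect();
    }
}
```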


I understand perfectly the people in the D community who need to work without the GC: **it is not snobbish**: it is a real need. But not only a "need"... sometimes it is simply the way a team wants to work: explicit memory management vs GC.

D took the GC path without "cutting" the relationship with C/C++ developers: I really don't have enough knowledge of the language and libraries to know the level of support that D offers for non-GC-based development, but I find it completely logical to try to maintain this relationship (on the basis that the GC must remain the default way of working). A rough sketch of what I mean is below.
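As far as I know (people here know this far better than me), D lets you opt out per function with @nogc and fall back to C-style allocation; a rough sketch, with made-up helper names:

```d
import core.stdc.stdlib : malloc, free;

// @nogc means the compiler rejects any GC allocation in here;
// the buffer lives on the C heap and must be freed explicitly.
@nogc nothrow
int[] makeBuffer(size_t n)
{
    auto p = cast(int*) malloc(n * int.sizeof);
    if (p is null)
        return null;
    return p[0 .. n];
}

@nogc nothrow
void releaseBuffer(ref int[] buf)
{
    free(buf.ptr);
    buf = null;
}
```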

Sorry for my "extended", maybe unnecessary, explanation (and my "poor" English :-p).
