Hacker News

Having spent years writing code in both GC’ed and non-GC’ed languages, it’s pretty clear that programmers spend about the same amount of time worrying about memory management with either paradigm.

The main differences are that people using GC systems spend their time doing magical incantations to manage the GC wizard, and they’re a lot more smug about the goodness of their system.



>programmers spend about the same amount of time worrying about memory management with either paradigm

I have no idea how you've come to that conclusion. For the vast majority of memory allocation in any GC'd language I can think of, you spend zero time thinking about memory management 98% of the time.

whereas in a manual language, you must consider it every time you allocate memory. That's not a bad thing, it's often dead simple in the vast majority of contexts, but still.

I've also very rarely seen professional code that's meant to poke or prod the GC into running.


>For the vast majority of memory allocation in any GC'd language I can think of, you spend zero time thinking about memory management 98% of the time.

>whereas in a manual language, you must consider it every time you allocate memory.

This is like bizarro world to me. I genuinely can't comprehend that.

Here's what memory management looks like to me, using concrete examples in languages featuring RAII like C++ or Rust:

- I have a function that needs to compute a hash so I need to allocate a hash context for it. If it's small I'll put it on the stack, otherwise it'll go on the heap. In either case when the function returns the destructor is automatically called.

- I have a function that takes the name of a file as parameter and returns its contents in a buffer. Clearly the buffer needs to be allocated on the heap, so I allocate a large enough vector or string, read into it, and return that. The caller will then either use it and drop it immediately or store it somewhere to be dropped later, either on its own or as part of the structure it belongs to. In either case the destructor will be called automatically and the memory will be freed.

That's easily 99.9% of what memory management looks like in the programs that I write. I have absolutely no idea what using a GC would change at any point here. Note that while this is technically "manual" memory management I never have to explicitly free anything and in the vast majority of cases I don't even have to bother implementing the destructor myself (I basically only need to implement them if I need to release external resources like, for instance, raw OpenGL handles, file descriptors and the like. The GC wouldn't help either here).
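The two examples above can be sketched in Rust. This is a minimal illustration, not anyone's real code: `HashContext` is a toy stand-in for a real hashing crate, and the "hash" it computes is deliberately trivial.

```rust
use std::fs;
use std::io;

// Toy stand-in for a real hash context. Its memory (the Vec) is freed
// automatically when the value is dropped; no explicit Drop impl needed.
struct HashContext {
    state: Vec<u8>,
}

impl HashContext {
    fn new() -> Self {
        HashContext { state: Vec::with_capacity(64) }
    }

    // Toy "hash": just accumulate the bytes.
    fn update(&mut self, data: &[u8]) {
        self.state.extend_from_slice(data);
    }
}

// First example: allocate a hash context inside the function.
fn hash_message(msg: &[u8]) -> usize {
    let mut ctx = HashContext::new(); // allocated here
    ctx.update(msg);
    ctx.state.len()
} // ctx dropped here: destructor runs, memory freed, no explicit free

// Second example: return the file's contents in a heap buffer.
// The caller owns the String and it is freed whenever the caller
// (or whatever structure it gets stored in) drops it.
fn read_file(name: &str) -> io::Result<String> {
    fs::read_to_string(name)
}

fn main() {
    println!("{}", hash_message(b"hello world")); // prints 11
    let _ = read_file("Cargo.toml"); // buffer dropped at end of scope
}
```

Note that neither function contains a single line of memory-management code; ownership and scope determine where the frees happen.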

It's genuinely a non-issue as far as I'm concerned. I literally never think "uh, I have no idea when this object won't be used anymore, I wish I had a GC to figure it out". I can't even imagine when such a scenario would crop up.


Here's your example with GC:

- I have a function that needs to compute a hash so I need to allocate a hash context for it. At some point after the function has returned the destructor is automatically called.

- I have a function that takes the name of a file as parameter and returns its contents in a buffer. I allocate a large enough vector or string, read into it, and return that. The caller will then either use it or store it somewhere. In either case the destructor will be called automatically and the memory will be freed.

See how it's simpler?


>The caller will then either use it and drop it immediately or store it somewhere to be dropped later, either on its own or as part of the structure it belongs to.

You're glossing over a lot of complexity there. That's the entire point of GC, you don't have to think about when it's dropped.

Also, I wasn't arguing for the blanket necessity of GC in all contexts and for all problems.


Can you point out a specific example then? I'm genuinely not trying to play dumb. I honestly don't see the complexity that should supposedly be obvious to me, given that I spend most of my time writing code in non-GC languages.

Maybe it's just Stockholm syndrome and I'm so used to working that way that I don't even see the problem anymore, but there is now a long string of replies talking abstractly about the overhead of programming in a non-GC environment and I simply don't relate at all. It's a complete non-issue as far as I'm concerned.


>store it somewhere to be dropped later, either on its own or as part of the structure it belongs to.

Wherever you're storing the reference, you have to eventually free the memory. That propagates memory-management code throughout your codebase, and is error-prone.

I totally agree that if you're allocating on the stack it's a non-issue. If you're returning a reference and always freeing in the caller, that's pretty easy too, but you have to have that logic in every single caller. If you're storing a reference on the heap somewhere, you suddenly have dynamic lifetimes with indeterminate reference counts, and it gets quite hard to 1) correctly manage memory and 2) verify that you're correctly managing memory.
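In Rust, the "dynamic lifetimes with indeterminate reference counts" case is typically handled by opting into reference counting with `Rc` (or `Arc` across threads). A minimal sketch of what that looks like, with hypothetical `Cache` and `Worker` types standing in for two unrelated parts of a program that both hold the same buffer:

```rust
use std::rc::Rc;

// Two owners of the same heap buffer; neither knows which of them
// will be the last one standing when it's time to free the memory.
#[allow(dead_code)]
struct Cache { data: Rc<Vec<u8>> }
#[allow(dead_code)]
struct Worker { input: Rc<Vec<u8>> }

fn main() {
    let buf = Rc::new(vec![1u8, 2, 3]);
    let cache = Cache { data: Rc::clone(&buf) };
    let worker = Worker { input: Rc::clone(&buf) };
    // Three live owners: buf, cache.data, worker.input.
    assert_eq!(Rc::strong_count(&buf), 3);

    drop(worker); // count: 2
    drop(cache);  // count: 1
    assert_eq!(Rc::strong_count(&buf), 1);
} // last owner dropped here; the Vec is finally freed
```

The count bookkeeping is automatic, but the programmer still chose `Rc` over plain ownership, which is exactly the kind of decision a GC makes unnecessary.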

The alternative is that in all three instances in a GC language, I don't have to care about memory management. Barring unusual edge cases, I can't forget to deallocate something and end up with a memory leak.


”If you're storing a reference on the heap somewhere, you suddenly have dynamic lifetimes with indeterminate reference counts, and it gets quite hard to 1) correctly manage memory and 2) verify that you're correctly managing memory.”

It is axiomatic that GC programs leak memory. I’ve never encountered one that didn’t. The difference is that it’s harder to see, and most programmers who have grown up with GC systems don’t know how to fix it.
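The leaks described here are usually reachable-but-never-used references: objects parked in some long-lived structure that nothing ever evicts, which a collector must therefore keep alive. A minimal sketch of the retention pattern (the `EventLog` type is hypothetical; the same shape appears as caches, listener lists, or registries in GC'd codebases):

```rust
use std::collections::HashMap;

// Long-lived structure that only ever grows. Every entry stays
// reachable, so no collector (tracing or counting) can free it,
// even if the program never reads those entries again.
struct EventLog {
    entries: HashMap<u64, Vec<u8>>,
}

impl EventLog {
    fn new() -> Self {
        EventLog { entries: HashMap::new() }
    }

    fn record(&mut self, id: u64, payload: Vec<u8>) {
        self.entries.insert(id, payload); // nothing ever removes these
    }
}

fn main() {
    let mut log = EventLog::new();
    for id in 0..3 {
        log.record(id, vec![0u8; 1024]); // 1 KiB retained per event
    }
    println!("{} entries retained", log.entries.len()); // prints "3 entries retained"
}
```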

”The alternative is that in all three instances in a GC language, I don't have to care about memory management.”

Again, no. The alternative is that you aren’t paying attention, and don’t see the leaks until they become fatal. Just like cockroaches, you have them, somewhere.


For 95% of the code you write it's not a concern; that last 5% sure can eat up time if your software has any concern about performance.


Totally agree. I don't think that tradeoff is particularly outweighed by the time it takes to do manual memory management.


”I have no idea how you've come to that conclusion.”

30 years of experience.

”For the vast majority of memory allocation in any GC'd language I can think of, you spend zero time thinking about memory management 98% of the time.”

No, you think about 98% of the code 0% of the time. The remaining 2% ends up causing so many problems that your amortized time spent on memory management is the same.



