This is really odd. One of the biggest criticisms of Java is that it consumes so much memory, for which the rebuttal is that the JVM can be tuned to use less! But no one does this in practice, so I assume there must be a reason that renders the “tuning” argument penny wise and pound foolish, i.e., you end up giving up something more valuable in exchange for the lower memory footprint. It seems like these Java apologists are trying to give the appearance that Java competes with (for example) Go in memory usage, startup performance, and runtime performance, when in reality it’s probably more like “you get to choose one of the three”, especially with respect to the top-level comment about how the AOT story deceptively requires hidden tradeoffs.
Most developers don't think about tuning the runtime because performance is not one of their acceptance criteria... at best, what happens is a JVM-savvy ops engineer looks at it in production and recommends some tuning options... these often get rejected by the devs, because they don't understand the features and are afraid that tweaking things will break something and cause them problems. So they tell the ops team to throw more/bigger servers at the problem.
"nobody" was deliberately an overly extreme statement. As implied by my statement, obviously some people do tune their apps, but the people complaining that the JVM needs gigabytes of memory just to run are clearly not in that group.
In the late '90s I ran our JUG website with a homegrown CMS written in Java with servlets, on a Slackware Linux server that also ran MySQL, and it had only 16MB of physical memory for everything. We are _very_ spoiled nowadays, and tuning is simply not necessary for most tasks.
The current default collector doesn't give memory back to the OS. So if you have several apps with peaky memory usage, you can't get them to elastically negotiate heap-size tradeoffs with one another - you need to pack them in with max-heap limits manually. That requires a lot of tuning, and it's still less than theoretically optimal.
We fork a child JVM to run our peakiest jobs for just this reason. It also helps keep services up when something OOMs.
> The current default collector doesn't give memory back to the OS.
That's a pretty irrelevant point, as the current default collector in Sun's JVM does reduce the Java heap based on tuneable parameters. While it doesn't return the virtual address space to the OS, that generally doesn't impact memory consumption on the "current default" OSes. (Certainly there are specialized cases where you might care about that, and for those there are other collectors, and other JVMs for that matter.)
> So if you have several peaky memory usage apps, you can't try and get them to elastically negotiate heap size tradeoffs with one another - you need to pack them in with max heap limits manually.
That's simply not true. The default GC does adjust heap size based on utilization, so you absolutely can run peaky apps that manage to negotiate different times for their peaks in a constrained memory space.
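To make this concrete, here's a small illustrative program (class name is mine) that prints the heap figures the collector works with: maxMemory() reflects the -Xmx cap, while totalMemory() is what the GC has currently committed and will grow or shrink between -Xms and -Xmx as utilization changes.

```java
// Illustrative only: print the JVM's heap figures as the GC sees them.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects the -Xmx cap (or the JVM's default,
        // typically a fraction of physical RAM)
        System.out.println("Max heap (MB): " + rt.maxMemory() / mb);
        // totalMemory() is what the collector has currently committed;
        // the GC grows and shrinks this between -Xms and -Xmx
        System.out.println("Committed heap (MB): " + rt.totalMemory() / mb);
        System.out.println("Free within committed (MB): " + rt.freeMemory() / mb);
    }
}
```

Running it twice, once bare and once with something like "java -Xms64m -Xmx256m HeapInfo", makes the effect of the limits visible: the committed figure starts near -Xms and only grows toward -Xmx under allocation pressure.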
> We fork a child JVM to run our peakiest jobs for just this reason.
Well, I guess that's one way to address the problem, but you've unfortunately misunderstood how your tool works.
> Well, I guess that's one way to address the problem, but you've unfortunately misunderstood how your tool works.
No, I don't think you have the context.
The peaky process will be killed for OOM by Linux; we explicitly don't want services to die, which they would if they lived in the same process. So, the services live in the parent process, and the peaky allocation happens in the child process. For context, at steady state the services consume about 2GB, whereas the peaky process may consume 30GB for 30 minutes to a couple of hours. We use resource-aware queuing / scheduling to limit the number of these processes running concurrently.
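A minimal sketch of that pattern (class and job names here are made up for illustration): the parent builds a child-JVM command line with its own -Xmx cap, so if the kernel's OOM killer strikes, only the child dies and the long-lived services in the parent survive.

```java
import java.util.List;

public class PeakyJobLauncher {
    // Hypothetical sketch: construct the command line for a child JVM
    // that runs the memory-hungry job with an independent heap cap.
    static List<String> childCommand(String jobClass, String maxHeap) {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        return List.of(
                javaBin,
                "-Xmx" + maxHeap,                         // child's own heap limit
                "-cp", System.getProperty("java.class.path"),
                jobClass);
    }

    public static void main(String[] args) {
        List<String> cmd = childCommand("com.example.PeakyJob", "30g");
        System.out.println("Would launch: " + String.join(" ", cmd));
        // To actually run it (not done here, since the job class is fictional):
        // Process p = new ProcessBuilder(cmd).inheritIO().start();
        // int exit = p.waitFor();  // non-zero or killed => job died, services live on
    }
}
```

The design point is isolation: process exit returns every byte to the OS unconditionally, which no amount of GC tuning guarantees.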
It's true that G1 will, under duress (e.g. in micro-benchmark scenarios with explicit calls to System.gc()), give up some heap to the OS, but that's not what you see in practice without exceptional attention to tuning. Process exit is a particularly efficient garbage collector, though.
The OOM killer kicks in when the system exhausts physical memory and swap, not merely because a process has reserved a lot of virtual address space. If you genuinely have processes that only periodically need their heap to be large, but don't return unused memory to the OS, you can simply allow the OS to page out the address space that isn't currently used. There are subtle differences between returning address space to the OS and simply not using it, but they aren't the kind of differences that impact your problem.
G1's heap sizing logic is readily adjustable. The old defaults did rarely return memory to the OS, but you could tune them to suit your needs. Either way, this is no longer an accurate representation of G1's behaviour, as the runtime has adapted to changing execution contexts: https://bugs.openjdk.java.net/browse/JDK-8204089
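For reference, on recent JDKs (12+) G1's willingness to hand committed heap back can be nudged with a handful of flags; the values below are illustrative, not recommendations:

```
# Illustrative flags only; measure before adopting.
# MinHeapFreeRatio/MaxHeapFreeRatio make G1 shrink the committed heap sooner;
# G1PeriodicGCInterval (JDK 12+, JEP 346) triggers collections during idle
# periods that can return unused memory to the OS.
java -XX:+UseG1GC \
     -XX:MinHeapFreeRatio=10 \
     -XX:MaxHeapFreeRatio=30 \
     -XX:G1PeriodicGCInterval=60000 \
     -jar app.jar
```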
If the full amount paid for your developer (salary, taxes, benefits, office space) is $200k, and that pays for 48 weeks x 40 hours, you are paying about $104 per hour. RAM probably costs you 2 to 4 dollars per GB.
Saving 1GB of memory is worth it only if it doesn't cost your developer more than about 2 minutes to figure out.
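The arithmetic behind that break-even claim, as a quick sketch (the $3.50/GB figure is my assumed midpoint of the $2-$4 range above):

```java
import java.util.Locale;

public class RamVsDevTime {
    // Fully loaded yearly cost divided by billable hours.
    static double hourlyRate(double yearlyCost, double billableHours) {
        return yearlyCost / billableHours;
    }

    public static void main(String[] args) {
        double hourly = hourlyRate(200_000.0, 48 * 40);    // ~$104/hour
        double pricePerGb = 3.50;                          // assumed midpoint of $2-$4
        double breakEvenMinutes = pricePerGb / hourly * 60;
        System.out.printf(Locale.ROOT, "Hourly rate: $%.2f%n", hourly);
        System.out.printf(Locale.ROOT, "Break-even time to save 1 GB: %.1f minutes%n",
                breakEvenMinutes);
    }
}
```

At roughly $104/hour, 1 GB of RAM is worth about two minutes of developer attention, which is the whole argument in one number.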
RAM is billed by the hour (or minute?) by cloud providers, and it’s 1GB per process, not 1GB total. If you’re running 20 virtual servers, that’s 20 GB. Moreover, if you’re shipping a desktop app, it’s 1GB * number of licenses. Finally, the “it’s not worth tuning” argument proves my point—Java proponents will tell you that Java doesn’t need to consume that much memory—you just have to tune it, but no one tunes it because it’s too hard/not worth it.
Cloud providers generally don't charge for RAM independently of other resources like CPU... and RAM isn't generally purchasable in 1GB increments.
Accordingly, shaving 1GB off all your runtimes won't save you much money.
There are more recently developed exceptions to that rule: container packing & FaaS offerings like AWS Lambda. Unsurprisingly, this has led to the emergence of Java runtimes, frameworks, and libraries that are significantly more miserly with their use of memory (and are also designed for faster startup).
That said, while a lot of people complain about their cloud bill, most places I've seen have payrolls and/or software licensing costs that make their cloud bill look like a rounding error. Sure, when you reach a certain size it is worth trying to squeeze out some extra ducats with greater efficiency, but more often than not, your efficiency concerns lie elsewhere.
Saying "no one tunes it" was deliberately overstating the case. If "everyone thinks the JVM needs 1GB just to run", then yes, "no one tunes it". Neither statement is true, but they both likely reflect some people's context.
But this, of course, applies to every project in any language and is in no way limited to Java or OOP. It is always a balance between delivering functionality now with some solution, or later with one that is maybe better optimized.
Then in round two, the optimized solution may be harder to maintain and extend, or further optimization may be de-prioritized in favor of new functionality with higher business value. We all know it.
You are trying to project your belief onto all Java applications, and that simply does not work. There are both good apps and bad apps, and there are many metrics by which to evaluate "good".
It appears to be specific to Java. Other languages don’t seem to exhibit high memory usage with the same frequency or severity as Java, and that’s not because developers of other languages spend more time optimizing.
If indeed this observed memory bloat is just a matter of poorly written Java apps, then that’s even more interesting. Why does it seem like Java has such a high incidence of poorly written apps relative to other languages? Is it OOP or some other cultural element?
> It appears to be specific to Java. Other languages don’t seem to exhibit high memory usage with the same frequency or severity as Java, and that’s not because developers of other languages spend more time optimizing.
Clearly you haven't looked at the memory overhead in scripting languages. ;-) They generally have far more object overhead, but their runtimes are designed for a very different use case, so their base runtime tends to be simple and tuned for quick startup. There are JVMs designed for similar cases with similar traits. It's just not the common choice.
> If indeed this observed memory bloat is just a matter of poorly written Java apps, then that’s even more interesting. Why does it seem like Java has such a high incidence of poorly written apps relative to other languages? Is it OOP or some other cultural element?
Your prejudice is showing in the other possibilities you haven't considered: perhaps memory intensive apps are more likely to be written in Java than other languages? Perhaps Java is more often selected in cases where memory utilization isn't a significant concern?
You can find a preponderance of poorly written apps in a lot of languages... JavaScript and PHP tend to be the butt of jokes due to their notoriety. Poorly written apps aren't a language-specific phenomenon.
For a variety of reasons that don't involve memory (the design of the language, the thread vs. process execution model, the JIT'd runtimes, the market penetration of the language), as well as some that do involve memory (threaded GC offers a great opportunity to trade memory for faster execution), Java applications are often long running applications that execute in environments with comparatively vague memory constraints, and so the common runtimes, frameworks, libraries, and programming techniques, etc., have evolved to trade memory for other advantages.
But if you look at what people do with Java in constrained memory environments, or even look at the hardware that Java has historically run on, you'll plainly see that what you are observing isn't intrinsic to the language.
> It appears to be specific to Java. Other languages don’t seem to exhibit high memory usage with the same frequency or severity as Java, and that’s not because developers of other languages spend more time optimizing.
It's because Java has strict memory limits. The limit of a bad C++ app is your machine's whole memory (in theory more), so most people never notice if an app continues to leak memory, or has weird memory spikes where it needs ten GB instead of one for a minute before it goes back to normal. Java forces you to either look at it or go the lazy route and just allocate more RAM to the JVM. Whatever you choose, you at least have to acknowledge it, so people tend to notice.
Sun's JVM has a setting for maximum heap size, but there are of course lots of other JVMs, and there are lots of other ways to consume memory.
> The limit of a bad C++ app is your machine's whole memory (in theory more)
Well, that depends. Most people run operating systems that can impose limits, and you can certainly set a maximum heap size for your C++ runtime that works similarly to Java's limit. You just don't tend to do it, because you're already explicitly managing the memory, so there's no reason for setting a generalized limit for your execution environment.
> so most people never notice if an app continues to leak memory or has weird memory spikes where it needs ten GB instead of one for a minute before it goes back to normal
It also helps that short running apps and forking apps tend to hide the consequences of a lot of memory leaks, and in the specific case of C++, where memory mismanagement is often a symptom or a cause of severe bugs, you tend to invest a lot of time up front on memory management.
Just look at the outbreak of Electron apps. People choose a language they know and can deliver value with effectively, instead of C or assembler.
This is actually a very good point but I don't know how this breaks down exactly. Can you give an example of a virtual server suitable for go vs java and the respective price points from a common provider?
I think you misunderstand. The argument is simply that if it were as important as some suggest it is, there'd be an effort to use memory much more efficiently. Java does run on incredibly small memory footprints, but the runtime that most people use deliberately trades memory for other advantages, and even then people choose to operate with far more memory than it requires.
That seems like empirical evidence that other factors are far more important.
> One of the biggest criticisms of Java is that it consumes so much memory, for which the rebuttal is the JVM can be tuned to use less! But no one does this in practice, so I assume there must be a reason that renders the “tuning” argument to be penny wise and pound foolish
Nope, the main reason is simply that memory is cheap and plentiful, so there is simply no reason to spend any effort to tune base memory usage when writing the kind of applications Java is typically used for.