Hacker News | idbentley's comments

Just an appreciation post, not a question. I have reached for graphviz as my tool of choice for diagramming for my entire career. Sometimes years go by without me using it, but I always end up finding a place to use it.

Recently I started teaching Software Development, and once again reached for graphviz for a huge variety of classroom uses.

Thanks!


I don't see any information about the source code... no mention of open source on the website.


We have it at the bottom of the page, but I agree that it's not very visible. We'll add it at the top of the page. Thanks for your feedback! Appreciate it!


Thanks!


I've worked with developers who use this pattern frequently for code execution.

try { /* business logic */ } catch (NullPointerException e) { /* else branch */ }

Rather than a null guard. That's what occurred to me.
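A minimal sketch of the two styles, with a hypothetical method and made-up strings just for illustration:

```java
// Contrasting exception-driven control flow with an explicit null guard.
class Flow {
    // The anti-pattern: using NullPointerException as the "else" branch.
    static String describeCatching(String name) {
        try {
            return "Hello, " + name.trim();
        } catch (NullPointerException e) {
            return "Hello, stranger";   // control flow hides in the handler
        }
    }

    // The conventional null guard: same behavior, intent is explicit.
    static String describeGuarded(String name) {
        if (name == null) {
            return "Hello, stranger";
        }
        return "Hello, " + name.trim();
    }
}
```

The catch-based version also silently swallows any NPE thrown deeper inside the business logic, which is the real danger of the pattern.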


I've seen a horrifying adaptation of this pattern in C++, where a piece of code was using catch(...) to detect dereferencing an invalid (not even necessarily null, just an object that's gone!) pointer.


That's strange, as dereferencing an invalid pointer in C++ will not cause anything to be thrown. In the best case you get an immediate segfault; if you're unlucky the code just goes on reading random data from memory and crashes some time after that.


The standard says it's UB, so throwing is a valid response.

On Win32, a segfault / access violation - and similar low-level errors, like division by zero - is represented as something called "structured exception". These have a standard OS ABI such that they can be caught, and stack can be unwound, across different languages.

In MSVC these are normally not treated as C++ exceptions for the purpose of catching them, although they still participate in stack unwinding (if something further down the stack is catching them). However, there is an opt-in compiler switch (/EHa) that does make them look like C++ exceptions, in the sense that catch(...) will catch them. It's not something you'll see in most code written in the past 15 years or so, for obvious reasons.



I remember seeing sample code from WebLogic that returned values from methods using exceptions. Mind you this was in documentation rather than in production code - but hardly a good example to follow!


This is arguably more robust, because "foo.bar.quux.doTheThing()" is three potential null pointer exceptions in a row, and the code to do consecutive testing is ugly and verbose.
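The "ugly and verbose" consecutive testing for a chain like that looks something like this (hypothetical Foo/Bar/Quux types, made up to mirror the example):

```java
// Hypothetical classes; each link in the chain may be null.
class Quux { void doTheThing() { } }
class Bar  { Quux quux; }
class Foo  { Bar bar; }

class Chains {
    // Guarding every link explicitly; returns whether the call happened.
    static boolean tryDoTheThing(Foo foo) {
        if (foo != null && foo.bar != null && foo.bar.quux != null) {
            foo.bar.quux.doTheThing();
            return true;
        }
        return false;
    }
}
```

Every extra link adds another `!= null` clause, which is why people reach for the try/catch shortcut.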


I guess there are extremely few people in this thread who understand how NPE works internally.

There is no null check in the generated assembly code; a null dereference is effectively a kernel trap (as page 0 is not mapped into the process). The trap handler uses the instruction pointer to work out what was being executed and throws the exception. This may or may not involve a stack crawl (which is very expensive), depending on the JIT's ability to prove whether the stack trace would be unused.

Nulls should be avoided, all fields should be initialized, etc. Nulls are great for =very= high performance code as null checks are virtually free.


This is why Rust's '?' operator for error propagation is quite nice.


Or Kotlin's '?.' operator for null propagation.


Or C#'s '?.' operator for null propagation.


Or Swift's '?' and '!'.
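Java itself has no null-propagation operator, but `Optional` can approximate one; a sketch with hypothetical types:

```java
import java.util.Optional;

// Hypothetical types mirroring a foo.bar.quux-style chain.
class Engine { String serial() { return "E-123"; } }
class Car    { Engine engine; }
class Owner  { Car car; }

class Propagation {
    // Each map() short-circuits to empty if the previous link was null,
    // much like chained '?.' in Kotlin or C#.
    static String serialOrDefault(Owner owner) {
        return Optional.ofNullable(owner)
                .map(o -> o.car)
                .map(c -> c.engine)
                .map(Engine::serial)
                .orElse("unknown");
    }
}
```

Like the '?.' operators above, the chain stops at the first absent link instead of throwing.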


Having a try..catch round such code would seem sensible but to rely on that over normal checking seems spectacularly horrible.


It’s a code smell if you are calling a method chain that long at all.


This is largely not the case outside of the C-like languages that inherited nulls.

Long method chains in a strong type system, or in a fluent style, are extremely expressive and common in languages that obviate the need to handle null values at every member.


Regardless of safety guarantees (or lack thereof), a method chain that long indicates a lot of knowledge of the structure and hierarchy of the dependency. It makes refactoring/testing/mocking more difficult.


What does the length of the method chain have to do with anything? Let's say I define an interface Foo.bar.xyz with a method call, and I call the implementation foo.bar.xyzimpl.method().

Everything is just as testable and decoupled, it's just a different project structure.

Method chains have nothing to do with the things you stated.


It's usually a property chain, not a method chain - I don't think get...() is meaningfully a method in most cases, even if implemented as such.


Cannot agree with you more. I personally hate chained expressions. Awful for debugging too.


Java debugging tools are beyond excellent for debugging such code (not that it should be written that way)


I still have to select the sub-expression whose value I wish to inspect. This involves precisely aiming the cursor at text, which is a lot of mental burden (times millions of repetitions).

I'm thankful to the developer who creates a variable to hold such a reference, for it makes for a much better debugging experience.


It's only awful for debugging because the tools are awful.


The tools for Java are great (try C++ if you disagree).

It's the code that is the problem here.


The tools for Java are much better than those for C++. However, claiming the code is the problem is silly. I'm not going to introduce a bunch of local variables just to appease a debugger.

If splitting on separate lines makes code more readable, then sure, do that. But often it just makes the code longer and harder to follow.


Makes me think about the stratification issues I have on my phone these days - some conversations in WhatsApp, Telegram, Signal, Riot.im, not to mention text messages, email etc.

Becomes exhausting.


I just don't understand why this conversation seems to be dominated by financial discussion. The main problem that many New Yorkers had with the Amazon deal was dilution of our culture. Dumping 25,000 high-earning employees into a complex and vibrant city is following San Francisco into folly.

They probably oversell the economic benefit, and it's pretty foul the way a corporation can push around politicians. But the bottom line is, these jobs largely wouldn't go to existing New Yorkers, and that makes the economic benefit to New Yorkers difficult to reason about.


Do you... live in New York? I don't mean to be rude but you say "our" culture can't handle 25,000 high earning employees. 25,000 new high earners would be utterly unnoticed in the sea of Wall St.-ers. Honestly I think it'd improve the high-earner culture here if anything...


I live in Brooklyn.

In lower Manhattan they might be unnoticed, but that is not all of New York. Queens is not Manhattan. You're talking about a major gentrification event. "Can't handle" is bad wording, but it would seriously exacerbate the wealth distribution problem here, which is already the worst in the country.


I really don't get this as an indictment of MongoDB, or their OpsManager product really.

They used the version of OpsManager that doesn't manage the deployment - it is specifically not a deployment manager. Mongo does offer a managed version of this software, which the author mentions, along with a justification for why they couldn't use that offering. However, I think this was the main mistake The Guardian made. As the author notes: "Database management is important and hard – and we’d rather not be doing it ourselves." They underestimated the complexity of managing database infrastructure.

If they had attempted to set up and manage a large-scale redundant PostgreSQL system, they would have spent an enormous engineering effort on that as well. Using a fully managed solution - like PostgreSQL on RDS - from the beginning would have saved them time. Comparing a fully managed solution to an unmanaged one is an inappropriate comparison.

Full disclosure - I used to work at MongoDB. I have my biases and feelings w.r.t. the product & company. In this case I felt that this article didn't actually represent the problem or its source very accurately.


Fair criticisms! It's true that if we'd used Mongo Atlas or something similar it would likely have been a different story - often MongoDB support spent half the time on the phone trying to work out which versions of mongo, opsmanager etc. we were running.

Re criticism of OpsManager - I think this is fair, given the sheer number of hoops we had to jump through to get a functioning OpsManager system running in AWS - no provided cloudformation, AMIs etc. £40,000 a year felt like a lot for a system that took 2 or more weeks of dev time to install/upgrade. The authentication schema thing was a bit of a pain as well, though we were going from a very nearly EOL version of Mongo (2.4 I think).


> often the MongoDB support spent half the time on the phone trying to work out what version of mongo, opsmanager etc. we were running.

That sounds awful. Reading these stories I'm happy I work with small companies without such a huge infrastructure.


Support at this scale is a hard game. I know some of the guys who work over there, and they make a valiant effort.

Training is exceptionally hard. Databases are hard to manage, and it takes years to learn to diagnose their function on unknown hardware / software.

This all being said, "no provided cloudformation, AMIs etc." is no bueno - not a good experience for the user.

If you haven't used Mongo with WiredTiger, you really haven't used it at its best.


I do like the article, but it sounds like they're in over their heads; this whole (very risky) project could have been avoided if they had just brought in someone who knew what they were doing.

> Clocks are important – don’t lock down your VPC so much that NTP stops working.

> Automatically generating database indexes on application startup is probably a bad idea.

> Database management is important and hard – and we’d rather not be doing it ourselves.

This is true of any database infrastructure with redundancy/scalability requirements.

What they did was take a technical problem and solve it by buying an off the shelf solution. Which is fine, of course, but I’m a bit surprised by the reaction here on HN.


The pricing of hosted MongoDB solutions is high, especially as volumes increase. The last I checked, disk space doesn't free up when you drop documents or collections, and that adds to hosting cost unless you find time to fix the issue through manual intervention. This cost is moving us away from MongoDB in the future.


MongoDB will reuse free space when documents or collections are deleted (even though the storageSize stat will remain the same). You can compact the database to release free space to the OS. You can read more about how MongoDB's WiredTiger storage engine reclaims space here: https://docs.mongodb.com/manual/faq/storage/#how-do-i-reclai...

I work for MongoDB so if you have any questions about storage, feel free to reach out.


YouTube works fine in Firefox. I use it all the time. It works well in mobile firefox as well.


I was affected by this bug for example: https://news.ycombinator.com/item?id=10877810


That bug was YouTube's fault.


That's why I said that YouTube has quite a few bugs in Firefox ;)


TLDR: all standard applications for working with files are unaware of resource forks. This is confusing, and hurts new computer users. #consideredharmful


No.

All apps designed for Macs by people who actually know what they are doing are resource fork aware, even though resource forks have gone out of fashion.

As was said before, resource forks have been around since the first version of the Macintosh (I believe MacOS was called "System" at that time), and they were a rather clever way to keep data such as dialog boxes, message strings and icons out of the executable code while keeping the application a single file.


In 15 years of working with Macs (as a programmer and as a semi-pro DAW/NLE user), I've never been confused once by them.

So it sure might be confusing (and I see how) but it's absolutely not very common.


> In 15 years of working with Macs (as a programmer and as a semi-pro DAW/NLE user), I've never been confused once by them.

Gosh, are you saying that the common use of ADS was a common mac pattern and people very familiar with the history of macs would understand this well?

Do you really think that refutes the point that they're confusing to everyone else? While many OSs have implementations of ADS, almost no one uses them.


>Gosh, are you saying that the common use of ADS was a common mac pattern and people very familiar with the history of macs would understand this well?

It was quite common, yes, and even more widespread in the past, but I'm saying something else: that noticing it and having issues with it wasn't that common. It's a leaky abstraction, but you don't often run into that leak.

Case in point: TFA's issue. He has a zero-sized font file where all the data is in the resource fork. All fonts I've dealt with in OS X have been proper files that you can copy over to other filesystems normally.

>Do you really think that refutes the point that they're confusing to everyone else?

No, as I wrote: "it sure might be confusing (and I see how)".

But it's not that often that it has a chance to be confusing (at least in my experience - I've also not seen much discussion in support forums, or questions from friends/colleagues with Macs, about such issues, whereas I have for many other issues).

>While many OSs have implementations of ADS, almost no one uses them.

Wouldn't that make them even MORE confusing in those OSs, the times they're finally used? As opposed to an OS that regularly uses them?


You act like that's not intentional. If it's a feature that's there for legacy reasons only and it's used as little as possible by the system, and not one iota more, why the heck should Finder, et al expose it to users? In the (very, very, very) rare case that a user actually needs to get into the resource fork of some antiquated file, that user (who is going to be very technical by definition, otherwise how the heck would they even stumble across such a file or care about what's in it?) can simply use the very widely known and documented command line tools or APIs for dealing with it. Anything more than that and you'd just be encouraging people to use a feature that you don't want anyone to use in the first place.

There's a difference between having a feature you want everyone to use, vs. having a feature that only exists for very specific legacy reasons in very specific systems-level backwards-compatibility scenarios, that you don't want anyone to use under any circumstances. Exposing the feature more than it already is exposed would be far more confusing and hurtful to new computer users, most of whom don't understand working with filesystems in general much less specific low-level filesystem features like resource forks.


... if you're not used to a system that has them.


None of the applications listed in the blog post are standard applications from the old Mac days, except for Finder.


Seems like an indictment of poor porting of unix tools to work on OSX, rather than "all standard applications" being unaware of them.



Find a better job.

