
The thing that I find odd about Google is that it feels like they drink their own Kool-Aid. Reports I've seen suggest that nearly no one knew this was coming. Teams who had been working on ports for months were blindsided. Employees generally did not know.

It just seems like a very poor way to run a business. It feels sloppy and needlessly messy. Especially when providing soft landings is so possible for a company with the absurd revenue that Google gets. It feels like a double warning: Google will withdraw suddenly, whenever they like, and they won't use one iota of their largess to help you deal with the consequences of their actions.



Obviously this isn't the same thing, but yes - this seems right. But, I remember my first time going to Google's campus in 2010 and connecting to their guest wifi network. 1Gbps! Insanely cool! Pulled up gmail and google docs and it worked PERFECTLY! My first thought was "...oh this is why their webapps suck - they have no idea how it's being used in the real world where latency is above 10ms"


For a specific example of latency incompetency that immediately came to mind while reading this: Chrome.

Chrome will not run properly on first execution (as in, run for the first time after a cold start of the computer) when executed off an HDD. Why? Because the HDD takes too long to read data. Chrome expects SSD latency and fuck your computer if it's not residing on one.

When executed off a HDD, I've found Chrome only runs properly from second execution onwards after the underlying operating system has cached most of the stuff Chrome wants in RAM in anticipation of subsequent executions.

I want to say this is optimization for ever more powerful hardware, but I'm inclined to say it's also sheer incompetence that Chrome literally can't fall back gracefully if it doesn't get data as quickly as it wants.


Made some offhanded comment about Chrome perf on Twitter earlier this year and a Google friend replied something like "Well, pretty much the whole Chrome team just got upgraded to local test machines with at least 32gb of RAM. Godspeed everyone."


Makes you wonder what would happen if companies occasionally did the exact opposite to their engineers.

"Oh, you know that 32 GB machine you've got? We're replacing it with this new 16 GB one. If the test suite is too slow on your new machine, I guess you'll just have to make the tests faster."


What would happen is those engineers would rightly be concerned that their leadership had lost their marbles, and would quickly find new jobs elsewhere. These kinds of “fun” thought experiments don’t pan out in the real world.


You having fun / being able to develop fast isn't your customer's problem / the problems of people actually using the things you build. Windows Vista devs with 8 GB of DDR2, when real-world customers had 512 MB of DDR, learned this lesson the hard way.

EDIT: Also - client-side native software and web dev are insanely different. Web/server-side people seem to disregard this. Constantly.


> You having fun/ being able to develop fast isn't your customer's problem/ the problem's of people actually using the things you build.

I cannot parse this sentence. What does vista having its minimum requirements poorly defined have to do with being forced to develop on underpowered hardware?

If my boss says “we are giving you a worse machine because we think that will make you write better code” I am out of there. There are plenty of ways to emulate weaker hardware and do performance testing and to make it a development priority that don’t involve intentionally hamstringing your engineers.


>I cannot parse this sentence. What does vista having its minimum requirements poorly defined have to do with being forced to develop on underpowered hardware?

Basically: an end-user has a computer with 4GB of DDR3. Devs for <software> wrote for and tested on a machine with 64GB of DDR5. <Software> ends up running like shit on the end-user's computer.

It isn't the end-user's problem that the software runs like shit, because the devs programmed to an unrealistic common denominator. The end-user is going to find <software> that doesn't run like shit on his computer, and the devs only have themselves to blame for losing a customer because they were so out of tune with reality.


> It isn't the end-user's problem that the software runs like shit

It may not be their fault but it almost certainly is their problem…


Tell me you've never developed native applications outside an iOS simulator, or done OS development, without telling me...


Do you actually have something to add to the discussion? Or you just want to take potshots at me?

You’re all over the place. Please explain why you think using underpowered hardware is the only legitimate way to write software that works on that hardware.


You can write your code on a nice fast machine. A dev machine should be as fast as possible. Those devs with their big fast machines should be required to run and test on much lower spec machines though.

Testing only in a VM on a beefy dev box leads to terribly performing software on customer machines. There's a multitude of performance problems that only come up when a system starts paging to disk, a machine has an HDD, or a CPU gets maxed out. These issues will be completely hidden on a dev machine with tons of RAM, 16 cores, and an NVMe disk.

Far too many developers have the beefy dev box with no requirements to test on more prosaic configurations. Even limiting a VM's CPU and memory isn't a good environment for performance testing because it's still faster than actual low end hardware.
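One cheap complement to real low-end hardware is putting a memory budget directly in the test suite. A minimal Python sketch (the workload names are hypothetical, and `tracemalloc` only sees Python-heap allocations, so this is no substitute for testing on an actual cheap machine):

```python
import tracemalloc

def peak_memory_mb(fn, *args):
    """Return fn's peak Python-heap allocation in MB, via tracemalloc."""
    tracemalloc.start()
    try:
        fn(*args)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return peak / (1024 * 1024)

# Hypothetical workloads: identical output, very different peak footprints.
def build_index_naive(n):
    # Materialises every item in memory at once.
    return sum(len(s) for s in [str(i) * 10 for i in range(n)])

def build_index_streaming(n):
    # Processes one item at a time.
    return sum(len(str(i) * 10) for i in range(n))

BUDGET_MB = 5  # pretend this is what the low-end target can spare

assert peak_memory_mb(build_index_streaming, 100_000) < BUDGET_MB
assert peak_memory_mb(build_index_naive, 100_000) > BUDGET_MB
```

A budget like this at least catches the "works on my 196 GB box" regressions before they reach customers, even if it can't reproduce paging or slow-disk behaviour.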


And it can't be a cost issue; the crappiest machines are cheap laptops from any big-box retailer. Okay, that fact itself might make getting them harder, but still. Just buy a cheap few-hundred-euro laptop once a year and add it to the pile. Rotate them out in 5-10 years or as they fail.


Preach my friend


Yes, I started this discussion thread you're responding to. No, there isn't really a way to do native development without experiencing what your customers do. Build locally on the high powered one, run on your lower spec'd machine. This isn't a potshot. This is me being annoyed that 15 years later, people keep making the same mistakes of not testing on "real world" hardware. Your VM isn't a real user representation. Stop thinking so.


I think the problem is the either or nature. One dev one box is a mistake. The team should have a few they share.

In particular I often fight to keep the slowest machine from leaving the office. It should stay for a long time, set aside for testing.


It's a matter of tact. They could say, "Here are your 32GB RAM machines. Your old one? You get to keep it too. Make sure Chrome works perfectly on your old machine."


This is how it should go. Not setting the high perf machine as a benchmark.


At least not every engineer. I love using older machines and any chance to make things work in a hardware friendly way.

I have to say, I appreciate the insane expertise browser engine developers have in making JS and layout run fast.


This is what CI (Continuous Integration) and CD (Continuous Delivery) systems were designed for. If certain tests exceed the performance budget of a low-resource environment in CI/CD, the engineer responsible will be required to fix it before a release can ship.
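A toy sketch of such a gate in Python (names are made up; real CI setups use benchmark runners with stored baselines and multiple iterations rather than a single timed run):

```python
import time

def assert_within_budget(fn, budget_s):
    """Fail loudly if fn blows its wall-clock budget (a toy CI perf gate)."""
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_s, (
        f"{fn.__name__} took {elapsed:.3f}s, budget was {budget_s}s"
    )

def fast_path():
    pass             # stand-in for code that meets its budget

def slow_path():
    time.sleep(0.2)  # stand-in for a perf regression

assert_within_budget(fast_path, 0.1)      # would ship

try:
    assert_within_budget(slow_path, 0.1)  # would block the release
    blocked = False
except AssertionError:
    blocked = True
assert blocked
```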


You’d have less fallout if you just delay giving them upgrades long past the point they are due.


This is what CI can be used for


32 GB is hilariously low for Google; my machine has 196 GB of RAM.


bragging about having 196GB in a thread about real world performance is exactly why this thread happened


No, this thread happened because not enough effort was put into automating performance testing.

Which, unlike making everyone develop on palm-pilots, is the correct solution to performance problems.


Wasn't meant to be a brag, just letting you know that 32 GB of RAM is on the low end for most employees.


I can tell from the build time memory requirement of Blink.


I was just whining about how, when Chrome-based browsers first open up on my 5400 rpm HDD, I paste in the URL and press Enter, then it loads the default home page and wipes out what I pasted... "you're goin' nowhere!"


Opening Google Maps, the text entry is editable far, far earlier than it should be. As soon as the page loads, I can start typing. Then the JavaScript starts running and helpfully selects the text entry, placing the cursor at the beginning of the text field. Then some cached suggestion loads, inserting a suggested search for the area being viewed.

The end result is that anything I type gets jumbled or overwritten multiple times before the page settles down and can actually be used.
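The fix is conceptually simple: a late-arriving suggestion should check whether the user has already touched the field before writing to it. A toy Python model of the failure mode (not how Maps is actually implemented):

```python
class SearchBox:
    """Toy text field: a late async suggestion must not clobber user input."""

    def __init__(self):
        self.text = ""
        self.user_has_typed = False

    def user_types(self, s):
        self.text += s
        self.user_has_typed = True

    def suggestion_arrives_naive(self, suggestion):
        self.text = suggestion          # blindly overwrites (the bug)

    def suggestion_arrives_safe(self, suggestion):
        if not self.user_has_typed:     # only prefill an untouched field
            self.text = suggestion

box = SearchBox()
box.user_types("coffee near me")
box.suggestion_arrives_naive("restaurants nearby")
assert box.text == "restaurants nearby"   # the user's input is gone

box = SearchBox()
box.user_types("coffee near me")
box.suggestion_arrives_safe("restaurants nearby")
assert box.text == "coffee near me"       # the user's input survives
```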


This is the general curse of async UI these days, and it's everywhere from the web to native desktop apps to the OS itself.

Obviously, it can be done right, but it seems that most devs who are so eager to jump on the async bandwagon have no idea that they have to make that effort now.


That's far from being only a Chrome problem. That extremely irritating behavior is going to be with us until OS developers see the light and we get hard/soft realtime GUIs.


I've seen this exact behaviour in a few Electron apps on a Raspberry Pi 400.

Make sure nothing else is running, start the application, expect failure, start it again, works as expected.


In the first-boot-on-HDD scenario, why would latency make the program fail to start at all? I'd expect it to just start slowly.


Unanticipated race condition perhaps? A process takes 2 minutes instead of 5 seconds, and then a later part of the startup fails because it has no way to handle the totally unexpected lack of data
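A minimal sketch of that guess in Python (purely illustrative; the delays and the "profile data" step are invented, not Chrome's actual startup path):

```python
import threading
import time

def load_startup_data(disk_delay_s, timeout_s):
    """Simulated startup step: read 'profile data' from a (slow) disk.

    Returns the data, or None if the SSD-tuned timeout expires first,
    leaving later startup code with an unexpected lack of data.
    """
    result = {}
    done = threading.Event()

    def read_from_disk():
        time.sleep(disk_delay_s)           # stand-in for seek/read latency
        result["profile"] = "user profile"
        done.set()

    threading.Thread(target=read_from_disk, daemon=True).start()
    if not done.wait(timeout=timeout_s):   # fragile: a fixed timeout
        return None
    return result["profile"]

# SSD-like latency fits the timeout; HDD-like latency blows straight past it.
assert load_startup_data(disk_delay_s=0.01, timeout_s=0.1) == "user profile"
assert load_startup_data(disk_delay_s=0.5, timeout_s=0.1) is None
```

The graceful alternative is to keep waiting (or degrade visibly) instead of silently proceeding without the data.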


For starters, Chrome will not wait for extensions to load/initialize. It's possible that in the case of an HDD there are more things that simply time out, or that Chrome disables because it doesn't want to wait.


JIT. It's using a "known state"/cached blob to start, then quickly falls apart as it does its "SSD expected" memory-management voodoo.


This seems weird to me, can you define "run properly"?


Chrome also runs a full built-in antivirus scan of your whole PC on first launch after updating.


Okay, something I can speak to! (Though I left in 2009, it's close enough.)

I was an SRE on Gmail, and I can assure you that the experience for devs was intentionally not great. We had a pool of dedicated machines that ran various versions specifically used for development. I made sure those were always on our worst clusters (slowest CPUs and disks) so the dev experience was always the worst-case scenario for performance. :-)


Would love to hear more about that! I feel like there was a very specific period between 2010 and 2012 when web app stuff just sucked but Google Docs etc was great fun ~2006-2009


How did they have a 1Gbps Wi-Fi AP in 2010, and how did you have a 1Gbps Wi-Fi client in 2010? Wi-Fi is barely 1Gbps over a decade later.


Dual-band bonded 2.4 GHz and 5 GHz WiFi 4. No idea if it was effective speed or reported.


Wouldn't matter much for the use case presented in the parent. Latency can still be at ~1ms.


You'd think everyone working from home where they have real-world internet speeds would fix that.


Well people weren't working from home in 2010


You have to be pretty naive to not realize that Google could flip directions any day. Look at what happened to Nest: they went from being a part of Google, to getting kicked out, to being a part of Google again in a short time frame. Or the Pixel, which went from being a flagship phone to being a mid-tier phone and then back to a flagship phone in a three-year period. There are more examples, like Google Fiber, which stopped expanding in 2016 only to add a bunch of new cities this year. Leadership flips positions all the time.

They don’t have a real vision for the company. All they know how to do is search and ads.


They do have video streaming down. YouTube is still a great product IMO.


YouTube is not great by any objective measure, but they are the only worthwhile player in town.


That seems like it is great, along the measure of “economically viable”.


> Reports I've seen suggest that nearly no one knew this was coming

When you're making a decision like this, you have to go full steam ahead until you decide to stop. Because not going full steam ahead compromises you in the event that you choose not to cancel.

It's like a negotiation. If you're thinking of completely folding to your counterparty, the last thing you ever want to do is tell them that they have you on the ropes.


And like in any game of minds, it goes into a guessing game: ”Are they thinking of X? They would never admit, but perhaps this and that can be interpreted as signal for X.”

This is fine for some situations, like games, and can even be fun.

It is not ”fun” if you are a paying customer and X is ”Suddenly kill the service I’m using.”

The only remedy to this is trust. (Which a simple short term zero sum game theoretic analysis does not account for.)


>they won't use one iota of their largess to help you deal with the consequences of their actions.

Google's refunding all Stadia game and hardware purchases.

Disclosure: I work at Google, but not on Stadia.


IMO no need for a disclosure if you're just stating a single fact.


Company policy requires me to state this anytime I say something positive about Google. Otherwise there's the risk of it coming off as astroturfing.


I think it's better to err on the side of caution. By stating that fact you're defending Google, so you might as well be open that you work for the organisation.


It’s intentional; it’s a way to brag. They don’t need to at all. There’s a reason why nobody at other companies does that - people at other companies are normal and don’t think they’re God’s gift to the world.


If you search for "Disclosure I work at Microsoft" site:news.ycombinator.com you get 300+ results.


Though Xicrosofter returns none.


It's required by company policy. Otherwise the comments could be seen as astroturfing. The policy is so strict that it specifically says you need to put a disclosure in every individual tweet, a disclosure in your Twitter bio isn't sufficient. I think that part of the rule is broken frequently though.


Interesting. Is it limited to Twitter? I can see the company fearing a backlash on social media where you're operating under your own name (and thus can be traced back to Google), but what about anonymous and semi-anonymous forums like HN?


I re-read it, and it actually doesn't explicitly mention Twitter. It does mention how you can disclose using hashtags, which is why I misremembered it as talking about Twitter, because Twitter and hashtags are so linked in my mind.

It looks like it applies to basically any online post/comment, regardless of whether it's anonymous.


Yes, but what I am talking about is the professional side. Sibling comments are talking about the Nov 1st launch games who were in the dark. Obviously Google can make it right - but that doesn't seem like what they are going to do.


It looks like Google is going to make it right:

https://nitter.net/OldeSkuul/status/1575863134857793536

https://nitter.net/burgerbecky/status/1575721820904632320

I don't really know if there's a better way to do it. If you're going to shut down and refund everyone's purchases, you need to disable purchases at the exact moment you make the announcement, otherwise people will buy things for semi-free knowing they'll get refunded in the future. If you tell game devs before the general public, it's going to leak out. If it leaks out you either need to lie and say it's not shutting down, or announce that it is shutting down, in which case the game devs didn't really get much advance notice.


Reader, Wave, G+, Hangouts, Stadia and soon RCS.

Never build a business on a Google project, only ever use Google as a side hustle.


RCS is a failed telco standard, not a Google invention, and Apple refusing to adopt it is probably the single biggest reason it never took off.

https://en.wikipedia.org/wiki/Rich_Communication_Services


RCS is (was?) designed so telcos can charge per message, requires (and is tied to) a phone number, and end-to-end encryption is an afterthought.

Is there even an RCS client for iOS? There is for signal…


What's happening with RCS?


why use it for a side hustle?


OP said "as a side hustle", not for. Perhaps OP resells Google services or Cloud offerings?


If staff were told early it would be at the top of HN and Blind 10 minutes later. Same for if developers were told to wind down porting. In both cases, customers would be upset about being able to spend money on a service without being told it's circling the drain. There's no clean way of doing it without telling everyone at once and in that case, immediately shutting the store and giving a deadline in the future is the option that gives the most advance warning.

I also guess there are legal complications about telling employees early. Google employees are Google investors.


Everybody at Google knew it was doomed but they still collected their $200k a year.[1][2]

[1] https://helios-i.mashable.com/imagery/articles/06gGKvZPUYACk...

[2] https://mashable.com/article/google-engineer-manu-cornet-com...


The announcement seems sudden, but it actually comes with about three months' notice. That does seem a little short, but how much would it help to drag it out further? How much time do you want to spend on a project that's shutting down?


There were games preparing to launch on November 1st.

Those should never have been approved, if the thing was shutting down. In those cases, it's normal to stop making approvals for new games, for anything expected to finish within six months of your termination date. You don't necessarily need to make an announcement yet, but you shouldn't have anyone expecting a launch when the service is dead or dying.


Big companies are quite paranoid about leaking shutdowns even internally, because they don't want the press to know, customers to start asking questions, team members leaving, etc etc.

I once casually speculated to a director that product X felt like it was going to get the axe. He was visibly freaked out and responded with a fervent Shakespearean lady-doth-protest-too-much denial, complete with demanding to know where I'd heard this from. Inevitably, it turns out that X's days were already numbered, but he already knew and I didn't.


That's true... But when you act as a publisher, you're obliged to accept the risk of leaks about a product's final days.

When you approve something for launch, and then kill the product before it can either launch or launch effectively, you become liable for the investment in the failed launch. Google can absolutely be sued by those who just had a failed launch; it misrepresented itself. This risk tends to be higher for the company than letting people guess that the product is about to die, because game products tend to be rather large investments.


Were there Stadia exclusives? What were people doing to make it ready?

I can “launch” (pun intended) AAA title on a cloud VM in the time it takes for steam to download the game.

I haven’t used stadia; honestly curious.


I honestly don't know. But every developer I've seen talking about publishing on Stadia, also talked about working with the team, so there were definite considerations that they needed to take into account.

There's stories like this [0] which suggests that the performance for Stadia was different than other platforms you might deploy for. The threading behaviour is a little bit different than just a VM, which is to be expected, but can come out surprising. So there's probably Stadia-specific patching for different games.

[0] "Stadia Adventures in slow server code on Unity" https://www.youtube.com/watch?v=s-SpWSEWYbU


This strains credulity. Do you want to cite these reports you mention?



