Hacker News | danudey's comments

Like "never work at Meta unless you can out-toxic your coworkers".

Yeah, I knew Meta was toxic, but publicly beefing over something from over a decade ago is a whole other matter. I can't even remember what I was working on 10 years ago, and even if I did, I wouldn't be bringing people down that much later.

The problem is that a lot of very strong engineers are also very difficult to work with. I worked at Meta too, and I can tell you the other side of the coin is that people who were too toxic could get canned as well!

Yes, I have worked with the strong but arrogant/snarky engineers. Luckily most of them got canned or forced out because the environment they create around themselves more than negates the positive impact they have. The strongest engineers I have worked with are all humble and kind.

It is their loss; I cannot imagine letting a minor work quarrel live rent-free in my head for over a decade. I feel bad enough when something is stuck in my mind for a week.


Funny, I was thinking what a relief it was to see people making their arguments frankly like on the HN of 10+ years ago.

Like "Hey, I wonder if Conway's Law works both ways. Huh. Wow. It looks like that is indeed the case."

If you're interested in implementing this directly in your Dockerfiles with some minimal changes, Docker already supports this to a degree:

https://docs.docker.com/reference/dockerfile/#copy---link

The TL;DR:

If you change your Dockerfile to use `COPY --link <foo> <bar>`, then Docker will create a layer containing only the files that would be copied, and that layer is treated as independent of the layers coming before it. The only caveat is that you need to have a build cache with previous builds and use `--cache-from` to specify it, which means saving build state.

That said, there are a lot of benefits you can get very quickly if you can implement it. For example, say you have a Dockerfile that creates a container, builds your Go application in it, and then copies the result into a fresh alpine:3.23.3 image, and you use a local cache for that build. When you update to alpine 3.23.4, Docker will see that the build layers have not changed, and therefore the `COPY --link` layer has not changed, so it can just apply that layer directly on top of the new alpine image without doing any extra work.
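A minimal sketch of that setup (image tags, module paths, and binary names here are illustrative, not from the original comment):

```dockerfile
# Build stage: source and toolchain live here and are never shipped
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: because of --link below, bumping this base image
# does not invalidate the copied layer; the cached layer is reused
FROM alpine:3.23.3
COPY --link --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]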

Apparently it can even be smart enough to realize that it doesn't need to pull down the new alpine:3.23.4 image at all: it can just create a manifest that references its layers and upload that. The new alpine image layers are already in the registry, the original 'my application' layers are already there, so it just creates a new manifest and publishes it. No bandwidth used at all!

> How many copies of `python3.10` do I have floating around `/var/lib/docker`.

Well, if you use 'FROM python:3.10' for your images then only one.

If you're careful, you can sort of pull together the contents of multiple images by using `COPY --link`, and then even if you have 10 layers, changing from python:3.10 to python:3.14 only changes one of them.

Again, this does require that you maintain a cache, but that cache can live in a lot of places that don't have to be the local filesystem: https://docs.docker.com/reference/cli/docker/buildx/build/#c...
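For example, with a registry-backed cache (the registry host and image name below are placeholders), a build can push its cache alongside the image and reuse it from any machine:

```shell
docker buildx build \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  -t registry.example.com/myapp:latest \
  --push .
```

`mode=max` exports cache metadata for all layers (including intermediate build stages), not just the ones in the final image.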


I'm well aware of `COPY --link`; it doesn't solve the problem. I'm a heavy, heavy user of it, combined with throwaway build stages. `COPY --link` won't help my `apt install` commands.

The use case here isn't `FROM python:3.10`, it's `FROM ubuntu; RUN apt install -y vim wget curl software-properties-common python3.10`/`RUN rosdep install`/`RUN --mount=type=cache,target=/root/.cache/uv --mount=type=bind,source=uv.lock,target=uv.lock --mount=type=bind,source=pyproject.toml,target=pyproject.toml uv sync --locked --no-install-project`. All of those dependencies get merged onto a single layer that isn't shared with anything else. You'd better hope something like tensorflow isn't one of those dependencies.


Meta: I think your example code would benefit from being a code block; on HN this is done by prefixing lines with two spaces.

eg.

  FROM ubuntu
  RUN apt install -y vim wget curl software-properties-common python3.10
  RUN rosdep install
  RUN --mount=type=cache,target=/root/.cache/uv --mount=type=bind,source=uv.lock,target=uv.lock --mount=type=bind,source=pyproject.toml,target=pyproject.toml uv sync --locked --no-install-project

They were intended to be three separate examples, but point taken, yes, I should have.

> Well, if you use 'FROM python:3.10' for your images then only one.

Negative, there can be multiple versions of an image with the same tag and a different SHA.
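One way to see which digest a tag currently resolves to on a given machine (this assumes a local Docker install; the digest value is whatever your pull fetched):

```shell
# Two pulls of the same tag, weeks apart, can resolve to different digests
docker image inspect --format '{{index .RepoDigests 0}}' python:3.10

# Pinning by digest in the Dockerfile forces a single, shared copy:
#   FROM python:3.10@sha256:<digest>
```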


And which supports DICOM calibration, which normally costs you >$5k for a smaller (e.g. 21") display.

It's now vastly cheaper to buy a Mac and a 27" Studio Display than it is to buy a single 21" DICOM display for your clinic. Heck, it's not much more expensive to buy two Studio Displays than to buy one standard DICOM display.


If you're a radiologist making $300k+ you're going to want to use certified displays so that you don't get sued for using non-approved devices for diagnostic use, and that's going to cost you maybe $6k for a 21" monitor.

https://www.monitors.com/products/jvc-cl-s500-rn?variant=427...

$3300 for a 27" display is ridiculous in comparison.

(Acknowledging that the link I provided is for a pair of monitors, but also those monitors are half price because they're refurbished)


So throughput was already good but TTFT was the metric that needed more improvement?

To add to the sibling's "good is relative": it also depends on what you're running, not just your tolerance for what counts as good. E.g., in an MoE, the decode speedup means the prompt-processing delay is more noticeable for the same size model in RAM.

Good is relative, but first token was clearly the biggest limitation.

Let's say TTFT needed the most improvement. At some point, loading the model with a large enough context size may take tens of seconds on some Macs.

Yeah TTFT was terrible. I don’t think it’s unreasonable to benchmark the most-improved metric.

Overtime for police can often be largely attributed to special events or occasions. For example, a city might have an entirely adequate number of officers for most of the year, but during the Super Bowl, presidential visits, Fourth of July parties, Pride, etc., it has a much higher need for patrol units, escorts, traffic management, and so on in the denser areas where more is going on. It can't simply transfer officers around, because that would leave other areas of the city under-patrolled, which runs the risk of unacceptably higher response times.

As a result, officers who worked Saturday through Thursday might also come in for a shift on Friday or Saturday, or might work a longer shift or a split shift that day.

So the problem might not be that the police force needs 30% more staffing, but that the police force needs 80% more staffing on extremely rare occasions.


So you get 40 years of "sewers cost us almost nothing to maintain woo" and then five years of "sewer maintenance is costing us hundreds of millions of dollars this year".


> New "Xbox" games run best on PS5 Pro (for consoles at least).

I dunno, the way Windows 11 is going these days that caveat is getting a lot of scrutiny. I would say that, in many cases, they run best on PS5 Pro full stop - at least until the GabeCube releases, for those who can afford it.


> Sounds like a concept of a plan.

It's safe to assume that her first e-mail to staff is not going to include a comprehensive breakdown of every action she plans to take.

Not saying she does or doesn't know what she's doing, but it would be weird if she went into much more detail at this point before she's even ramped up.


It would be good to provide clarity to the trenches, though.


Clarity would be "you're all fired", but they can't write that, can they?


Except that Satya whined about people using the term "AI slop" and Sharma specifically uses that term in her e-mail. An interesting contradiction.


Her press release and tweets sound like someone asked AI what to say to make Xbox fans like you. I tested what it would tell me to say if I was CEO and it gave me very similar talking points.


To be fair, that is a great social engineering technique for trying to win back the market.

I'll take it as a social engineering technique, because that is the sector she came from, and it's still what's being pushed by the CEO.

