
Along similar lines, there's no way I would buy an OLED at this price point. If I'm dropping $3k on a monitor, it needs to be a technology that lasts, not a technology that wears out over time.

Current gen OLEDs almost don't wear out (saying this as an OLED owner). To see the wear you need to have a completely black room and the wear is unnoticeable unless you're specifically looking for it. You don't need to spend 3k, 1k is enough.

Ah, you should update wikipedia then: https://en.wikipedia.org/wiki/OLED#Lifespan

> In 2016, LG Electronics reported an expected lifetime of 100,000 hours

23 years for an older generation OLED seems fine to me, I don't understand the problem here?


The US Department of Energy report from the same year reports far lower numbers, which I'd be more inclined to trust since they are impartial / not trying to market a product.

True, but those numbers are from 2016, 10 years ago. For a more apples to apples comparison see [1] [2] [3].

[1] https://youtu.be/H43wnV-v7V0

[2] https://youtu.be/RbEgQrigiLc

[3] https://youtu.be/AZfwHcMLorY

In my case, 3205 hours of use:

- 428 pixel cleans

- 1/3 brightness (my room is pretty dim and I often code during the night)

- static control on

- pixel shift on

- apl low

- sub-logo dim on

- corner dim on.

During the day I am not able to see any burn in. During the night it's unnoticeable unless you're looking for it. And it's only visible on gray backgrounds, unnoticeable during normal use. My phone (Nothing Phone 2) fails to capture it no matter how hard I try (even during the night).

The only issue I had was at 2417 hours and it was vertical white stripes like this: [4] but they were completely gone after a manual pixel clean. No issues since. I am never going back, worth every penny I spent.

[4] https://www.reddit.com/r/gigabyte/comments/1gyv1db/fo32u2_ve...


That doesn’t sound very reassuring. 3205 hours, or a little over a year at 8 hours a day. Be generous and call it two years of use. You’re babying it with low brightness, dynamic dimming, etc. etc. and the fact that there’s anything, even if you have to “look for it”, is not a good sign.

I bought an LG 32" 4k OLED for $999 and it's hands down the best display I've ever used. No burn in even with lots of static browser/terminal windows for days and days. The fact that it's $3k and _not_ OLED is insulting.

I believe these monitors are meant for professionals, which means it is going to be used in bright office buildings. That means running the display at high brightness which is the worst case for OLED since they degrade faster at higher brightness. Quoting wikipedia:

> A US Department of Energy paper shows that the expected lifespans of OLED lighting products goes down with increasing brightness, with an expected lifespan of 40,000 hours at 25% brightness, or 10,000 hours at 100% brightness


> If I'm dropping $3k on a monitor, it needs to be a technology that lasts, not a technology that wears out over time.

I bought my OLED TV 8 years ago, when the fearmongering was at its peak, and it still works perfectly with zero burn-in. So it is definitely possible.


Yeah my LG C9 looks great, minor dimming where the captions are, but that’s it.

In the 7 years since they’ve gotten better, with micro lens arrays and stuff to improve brightness without heat causing faster decay.

RTINGs has some great content on TV longevity, but I haven’t seen anything for monitor workloads.


It was always "Microshit" to me

How would you have a user-configurable switch that physically disconnects things? The mechanism for that sounds complex. I'm not a hardware person, but I imagine you'd need to route the traces for each possible component to the switch and then have like a dip switch panel to control which behaviors are controlled by the switch. Either that or a software-controlled equivalent to a dip switch panel that can only be configured in the bootloader, otherwise the software-controlled physical disconnect would be no safer than a software disconnect.

I'm not a hardware person either, but e.g. the button physically turns off the camera, and software polls for camera power and can respond.

Ah I think I understand what you're suggesting now. This hypothetical switch is both a physical and software disconnect. Some features like the camera would be physically disconnected by the switch and therefore would not be user-configurable but then some other features (for example, GPS) could additionally be software disconnected at the same time.

That seems like a neat idea, but IMO I wouldn't trust the software-controlled half of it, so I'd end up only using the non-configurable physical portion of it.


That makes sense but it's also just a guess. The quote above would be equally applicable to an entirely software option that is toggled with a physical switch rather than an option in a menu.

Since these things are almost certainly digital devices, just having a switch that cuts power to them could work.

As soon as it is sufficiently complex, it becomes a "trust me bro" switch.

The list of replacement institutions in the memo states at the bottom:

> These institutions meet the following criteria: intellectual freedom, minimal relationships with adversaries, minimal public expressions in opposition of the Department, and Graduate-level National Security, International Affairs, and/or Public Policy Programs.

So it is definitely political and not based on merit.




This is just made up bullshit. Pete could list this info with examples but instead just hand-waved excuses.



Making these excuses is unreal.

I definitely agree that is weird and off-putting, but I recently moved to an area with a grocery store that is the complete opposite: the cashiers stand there silently through the whole order. That's also off-putting despite my introversion. I think we need a middle ground with a simple mandatory polite greeting like "Welcome to Hank's" and then after that leave it up to being organic/authentic.

Along similar lines, when I was reading the article I was thinking "this just sounds like a slightly worse version of nix". Nix has the whole content-addressed build DAG with caching, the intermediate language, and the ability to produce arbitrary outputs, but it is functional: 100% of the inputs must be accounted for in the hashes/lockfile. Docker, by contrast, lets you run commands like `apk add firefox` that pull data from outside sources which can change from day to day, so two docker builds can end up with the same hash but different output, making it _not_ reproducible like the article falsely claims.

Edit: The claim about the hash being the same is incorrect, but an identical Dockerfile can produce different outputs on different machines/days whereas nix will always produce the same output for a given input.
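A minimal sketch of the distinction (the package and the pinned version string are just examples, not from the article):

```shell
# The same Dockerfile text, built on two different days, can pull two
# different firefox builds: "apk add firefox" resolves against whatever
# the Alpine package index serves at build time. Pinning a version
# narrows the drift but still trusts the remote index, unlike a
# nix-style content hash over the actual fetched bytes.
cat > Dockerfile <<'EOF'
FROM alpine:3.20
RUN apk add --no-cache firefox
# pinned variant: RUN apk add --no-cache firefox=128.0-r0
EOF
```

Nix instead records a hash of every fetched input, so a rebuild either reproduces the same bytes or fails loudly.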


> so two docker builds can end up with the same hash but different output

The cache key includes the state of the filesystem so I don’t think that would ever be true.

Regardless, the purpose of the tool is to generate [layer] images to be reused, exactly to avoid the pitfalls of reproducible builds, isn’t it? In the context of the article, what makes builds reproducible is the shared cache.


It's not reproducible then, it's simply cached. It's a valid approach but there's tradeoffs of course.

it's not an either or, it can be reproducible and cached

similarly, nix cannot guarantee reproducibility if the user does things to break that possibility


The difference is that you can blow the Nix cache away and reproduce it entirely. The same cannot be said for Docker.

That's not true

Docker has a `--no-cache` flag, even easier than blowing it away, which you can also do with several built-in commands or an `rm -rf /var/lib/docker`.
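For reference, the documented ways to do that with Docker's own tooling (the image tag is hypothetical):

```shell
docker builder prune --all --force   # delete the entire BuildKit cache
docker build --no-cache -t app .     # or just ignore the cache for one build
```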

Perhaps worth revisiting: https://docs.docker.com/build/cache/


That will rebuild the cache from upstream but not reproducibly.

Ah you're right, the hash wouldn't be the same but a Dockerfile could produce different outputs on different machines whereas nix will produce identical output on different machines.

Producing different outputs isn't the Dockerfile's fault. Dockerfiles don't enforce reproducibility, but reproducibility can be achieved with them.

Nix isn't some magical thing that makes things reproducible either. nix is simply pinning build inputs and relying on caches. nixpkgs is entirely git based so you end up pinning the entire package tree.


If you are building a binary on different arches, it will not be the same. I have many container builds that I can run while disabling the cache and get the same hash/bytes in the end, i.e. reproducible across machines, which also requires whatever you build inside be byte reproducible (like Go)

> whereas nix will always produce the same output for a given input.

If they didn't take shortcuts. I don't know if it's been fixed, but at one point Vuze in nix pulled in an arbitrary jar file from a URL. I had to dig through it because the jar had been updated at some point but not the nix config and it was failing at an odd place.


This should result in a hash mismatch error rather than an output different from the previous one. If there is a way to locate the original jar file (hash matching), it will still produce the same output as before.

Flakes fixes this for Nix, it ensures builds are truly reproducible by capturing all the inputs (or blocking them).

Apparently I made a note of this in my laptop setup script (but not when it happened, so I don't know how long ago this was). In case anyone was curious: the jar file was compiled with Java 16, but the nix config was running it with Java 8. I assume they were both Java 8 when it was set up and the jar file was upgraded later, but I don't really know what happened.

No it doesn't. If the content of a url changes then the only way to have reproducibility is caching. You tell nix the content hash is some value and it looks up the value in the nix store. Note, it will match anything with that content hash so it is absolutely possible to tell it the wrong hash.

Not having a required input, say when you try to reproduce a previous build of a package, is a separate issue to an input silently changing when you go to rebuild it. No build system can ensure a link stays up, only that what's fetched hasn't changed. The latter is what the hash in nix is for. If it tries to fetch a file from a link and the hash doesn't match, the build fails.

Flakes, then, run in a pure evaluation mode, meaning you don't have access to stuff like the system triple, the current time, or env vars and all fetching functions require a hash.
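As a concrete sketch of that workflow (the URL is a stand-in, not the real Vuze source):

```shell
# Print the sha256 that a fixed-output fetch must be pinned to. If the
# bytes behind the URL ever change, the rebuild fails with a hash
# mismatch instead of silently using the new file.
nix-prefetch-url https://example.com/releases/Vuze.jar
```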


Buildkit has the same caching model. That's what I'm saying. It doesn't force you to give it digests like nix functions often do but you can (and should).

You can network-jail your builds to prevent pulling from external repos and force the build environment to define/capture its inputs.
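With BuildKit that can be as blunt as cutting the network for every RUN step (the tag name is hypothetical):

```shell
# Any step that tries to fetch from the network simply fails, which
# forces all inputs into the build context or the pinned base image.
docker build --network=none -t hermetic .
```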

just watch out for embedded build timestamps

Are you on a phone? I loaded the article with both my phone and laptop. The ascii diagram was thoroughly distorted on my phone but it looked fine on my laptop.

Firefox on a 27" display. Could be the font being used to render.

The only ASCII image I see on that page is actually a PNG:

https://tuananh.net/img/buildkit-llb.png

Maybe the page was changed? If you're just talking about the gaps between lines, that's just the line height in whatever source was used to render the image, which doesn't say much about AI either way.


Looks fine to me, but since it messed up for some, I replaced it with a PNG.

There isn't really a single way to define a country. For background, I would recommend this video from the Map Men: https://www.youtube.com/watch?v=3nB688xBYdY

But following their conclusion: the thing that makes you a country is being recognized as one by other countries. Most of the world recognizes Palestine as a country (including 157 UN member states). Here is a map where the green countries recognize Palestine, and grey do not: https://upload.wikimedia.org/wikipedia/commons/0/08/Palestin...


The Librem 5 would be eliminated by the additional requirements of:

> "I want a CPU that isn't crap while being expensive"

> "I don't want to pay full flagship prices for sub flagship performance"

Adding my own experience: the battery life is also atrocious[0] and simply running a software update on a completely stock librem 5[1] managed to send it into an infinite boot loop that I was only able to recover from by flashing the factory image.

[0] Sitting on a shelf, with the screen off, not connected to cellular networks, not being used at all except to check the battery % periodically throughout the day: I got ~11 hours of battery life. My pixel 10 has been operating under the same conditions for 4 days and is still at 71% battery life (I'm intentionally draining it down to ~50% for long term storage while I wait for the bootloader to unlock in 2 years).

[1] The phone had been sitting on a shelf gathering dust for years. No software had been installed, no accounts had been set up, it had never actually been used as a phone. Could not get more "stock" than that.


> "I don't want to pay full flagship prices for sub flagship performance"

First, it is a flagship GNU/Linux phone. Second, https://puri.sm/posts/the-danger-of-focusing-on-specs/

> I got ~11 hours of battery life

Looks like you didn't enable the suspend. Later updates brought it to >20 hours.

> simply running a software update on a completely stock librem 5[1] managed to send it into an infinite boot loop that I was only able to recover from by flashing the factory image.

When was it? I never experienced this. It could be a problem in the first years though. Current PureOS Crimson is stable.


> Looks like you didn't enable the suspend

This was with the default settings after flashing Crimson (which I did to recover from the infinite boot loop), so if there is some active step that needs to be taken to enable suspend, then I had not done it.

> When was it?

This was within the past month. I see two possible reasons you didn't run into it:

1) You have been applying the updates as they come out, whereas I took a dusty phone that hadn't been turned on in years and ran the update.

2) You were already on crimson, so maybe they only broke byzantium (or whatever version it was on from years of sitting unused and then hitting update in the software center).


> This was with the default settings after flashing Crimson

This is strange. See this post concerning the battery life: https://puri.sm/posts/librem-5-battery-life-improved-by-100/. Have you updated the modem firmware?

You are right, I have case 1). It is quite likely that Byzantium is (was) much less stable, as it required a lot of hacks and relied on a very old Debian version.


I had some time to check today, and it seems the suspend feature is disabled by default. I just enabled it so I'll see how that goes. Thanks!

> Have you updated the modem firmware?

Nope, but I just looked it up and Purism's site states:

"These files are controlled by a third-party and are not publicly accessible. Contact Purism Support to request these files for a firmware update"

Yikes....


Purism is probably not allowed to freely distribute that firmware. With the old modem firmware, the modem cannot wake the phone on an incoming call.

If you don't want to use the base system (and docker is NOT the base system on Linux), then Bastille offers a pretty much identical workflow to docker, but built on FreeBSD jails: https://github.com/BastilleBSD/bastille
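For a taste of the workflow (jail name, release, and address are hypothetical; assumes Bastille is installed and its networking is configured):

```shell
bastille bootstrap 14.1-RELEASE            # fetch the base release once
bastille create web 14.1-RELEASE 10.0.0.10 # thin jail from that release
bastille start web
bastille pkg web install nginx             # run pkg inside the jail
bastille cmd web service nginx onestart    # arbitrary commands, docker-exec style
```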

> I don’t know why i keep hearing about jails being better

Jails have a significantly better track record in terms of security.

I can delegate a ZFS dataset to a jail to let the jail manage it.
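Roughly like this (pool and dataset names are made up; the jail's config also needs `allow.mount.zfs`):

```shell
zfs create -o jailed=on zroot/jaildata   # mark the dataset as jailable
zfs jail web zroot/jaildata              # hand it to the running jail "web"
# root inside the jail can now mount, snapshot, and clone it freely
```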

Do Linux containers have an equivalent to VNET jails yet? With VNET jails I can give the jail its own whole networking stack, so they can run their own firewall and dhcp their own address and everything.
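For comparison, the FreeBSD side is a few lines of jail.conf(5) (all names here are hypothetical):

```
# A VNET jail gets one end of an epair and runs its own firewall,
# routing table, and dhclient against it.
netjail {
  vnet;
  vnet.interface = "epair0b";
  path = "/jails/netjail";
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
}
```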


You've been able to set up separate firewalls, network interfaces, IP addresses, etc. for probably 20 years using network namespaces. How do you think container networking is implemented? But you can also use it through other tools; for example, I use firejail to isolate a couple of proprietary desktop applications such that they cannot contact anything on my desktop (or network in general) except the internet gateway.
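A bare-bones sketch (interface names and the address are made up; needs root):

```shell
ip netns add demo                                # a brand-new network stack
ip link add veth0 type veth peer name veth1 netns demo
ip -n demo addr add 10.0.0.2/24 dev veth1
ip -n demo link set veth1 up
ip -n demo link set lo up
ip netns exec demo nft list ruleset              # its own nftables state
```

Container runtimes build exactly this plumbing under the hood.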

> If you don't want to use the base system (which docker is NOT the base system on Linux)

There are many ways to manage "containers" on linux. I might agree with the fact that docker is not the base system (although it really depends on what distro you're using).

But I might also use something like systemd-nspawn or systemd-machined (see https://wiki.archlinux.org/title/Systemd-nspawn or https://en.opensuse.org/Systemd-machined) to handle those.

> I can delegate a ZFS dataset to a jail to let the jail manage it.

I could probably do the same.

> Do Linux containers have an equivalent to VNET jails yet? With VNET jails I can give the jail its own whole networking stack, so they can run their own firewall and dhcp their own address and everything.

I'm not sure, but most likely yes. Maybe not through docker. Docker isn't the only way to run containers in GNU/Linux though.


Is there a docker-compose analogue in Bastille? I like being able to spin up an isolated local copy of my infrastructure, run integration tests, and then tear it all down automatically. I'd like to be able to do a similar thing with jails. I wonder if there's a straightforward way to achieve something similar with VNET jails?

Not that I'm aware of. FreeBSD did recently gain support for OCI containers and therefore has podman. I see podman-compose is in the ports tree, but I haven't tried it myself.

  https://freebsdfoundation.org/blog/oci-containers-on-freebsd/
  https://www.freshports.org/sysutils/podman-compose/
