Even with closed-source clients, MITMing every conversation would likely be detected before long: various people take memory dumps of clients and so on, and some academic would flag it up soon enough.
I think you would be hard pressed to find any big tech company that doesn't do some kind of A/B testing. It's pretty much required if you want to build a great product.
A big tech company has ~10k experiments running at once. Some engineers will be kicking off a few experiments every day. Some will be minor things like font sizes or wording of buttons, whilst others will be entirely new features or changes in rules.
Focus groups have their place, but cannot collect nearly the same scale of information.
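At that scale, experiment assignment is typically deterministic per (user, experiment), so a user stays in the same arm across requests and the ~10k experiments stay independent of one another. A minimal sketch of how that is often done (function and experiment names invented for illustration):

```python
import hashlib

def variant(user_id: str, experiment: str, arms=("control", "treatment")):
    """Deterministic, per-experiment assignment: the same user gets a
    stable arm for one experiment, but can land in different arms of
    different experiments, keeping them statistically independent."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return arms[int.from_bytes(digest[:8], "big") % len(arms)]

# Stable per user and experiment:
assert variant("u1", "button_wording") == variant("u1", "button_wording")
# Roughly half of users land in each arm:
n = sum(variant(str(u), "font_size") == "treatment" for u in range(10000))
assert 4500 < n < 5500
```

Hashing on both the experiment name and the user id is the detail that lets thousands of experiments run concurrently without correlating with each other.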
As someone who works in these orgs: only a small fraction are about user experience metrics. 90+% are about extracting more short-term value, with unknown second-order effects on usability.
Yeah, that's why we didn't have anything anyone could possibly consider as a "great product" until A/B testing existed as a methodology.
Or, you could, you know, try to understand your users without experimenting on them, like countless others have managed to do before, and still shipped "great products".
I know this is a salty take, but reliance on A/B testing to design products is indicative of product deciders who don't know what they are doing and don't know what their product should be. It's like a chef who wants to make a pancake but tries 50 different combinations of ingredients until one of them ends up being a pancake. If you have to test whether a product works, is good, or is profitable, then you didn't know what you were doing in the first place.
Using A/B tests to safely deploy and test bug fixes and change requests? Totally different story.
I do wonder if thousands of years ago it might simply have been more socially acceptable to be nude in day-to-day life?
Cloth was presumably expensive, and so perhaps you would only own a few items of clothing, and therefore might not wear clothes every day? Or maybe only clothes for some purposes like for fighting (armour)?
Kinda sad that this was done less than 1000 years ago, at presumably great human effort, yet we have forgotten why we even did it.
Lack of record keeping is the key problem.
Will someone 1000 years from now know what you spent your lifetime working on? Will your lifetime's work also be a mystery to future generations, and will they shrug and say "all this computer code must be for religious sacrifice, we can see no other purpose for it"?
Is there any way we can make the current era of humanity the 'well documented one', for example by etching all our digital data into diamonds to last millions of years?
> Will your lifetime's work also be a mystery to future generations and will they shrug and say "all this computer code must be for religious sacrifice, we can see no other purpose for it"?
I doubt any computer code will survive for 1000 years.
One should consider how futures prices affect this economic model.
This factory could afford to scale up because it could sell its product for a much higher price, and pay its workers much more to do so.
However, if they had already sold the factory's output at the regular price via a futures contract (i.e. 1 ton of polypropylene for delivery in June at $2000), then the story would be different.
Futures contracts are widely used as a way to take the risk out of doing things, but the side effect is your economy loses all incentive to be flexible to changing needs.
If demand for PPE increases (i.e. a rightward shift of the demand curve) because of a pandemic, then prices increase, but quantity demanded increases as well.
They may have existing futures contracts at the prior price and quantity, but if quantity demanded has increased, that means there are new orders as well, at the new, higher market price.
The article is about them increasing their output. They could have sold the factory's previously expected output via a futures contract; that doesn't stop them increasing output and selling in excess of any futures contracts.
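A toy calculation makes this concrete (all prices and quantities invented for illustration): the committed tonnage earns the locked-in futures price, but every additional ton from expansion earns the new spot price.

```python
# Hypothetical numbers: a factory sold its usual output forward, then a
# demand spike pushed the spot price up and it expanded capacity.
committed_tons = 100    # tons sold forward via futures
futures_price = 2000    # $ per ton, locked in pre-pandemic
spot_price = 5000       # $ per ton after the demand shift
expanded_tons = 150     # output after scaling up

extra_tons = expanded_tons - committed_tons
revenue = committed_tons * futures_price + extra_tons * spot_price
baseline = committed_tons * futures_price  # revenue without expanding

# Expansion still adds 50 tons * $5000 = $250,000 of revenue, so the
# futures commitment does not remove the incentive to be flexible.
print(revenue - baseline)
```

The futures contract fixes the price of the old output, not the new output, which is exactly the parent comment's point.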
I swear, I respect Vinge more and more based on how well he seems to understand human tendencies to plot some plausible trajectories for our civilization.
There's a little throwaway thing in the book (or maybe it was in the prequel) that I always liked, re understanding human tendencies. They're still using Unix time, starting in Jan 1st 1970, but given that their culture is so space-travel-focused they assume the early humans set it to coincide with man's first trip to the moon.
I wish he could have seen the current state of GenAI. Several times in the book he talks about how the ship understands context clues and sarcasm, and that effective natural language translation requires near-sentience.
Legitimately listening to this book for the first time after a coworker recommended it. It's rapidly becoming one of my favorite books that balances the truly alien with the familiar just right.
Not so ironically, it came up when we were discussing "software archeology".
I've only just heard of it. But, I already knew to not run random scripts under a privileged account. And thank you for the book suggestion - I'm into those kinds of tales.
Are you sure?
Are you $150 million ARR sure?
Are you $150 million ARR, you'd really like to keep your job, you're not going to accidentally leave a hole or blow up something else, sure?
I agree, mostly, but I'm also really glad I don't have to put out this fire. Cheering them on from the sidelines, though!
Honestly, since I'm never really in a position to see much of that money, at this point I'd be more concerned about my coworkers. And while that typically correlates with the amount of money you either have or receive, they're often out of balance one way or the other.
Once you get big enough… there comes a point where you need to run some code and learn what 100 million people hitting it at once looks like. At that scale, "1 in a million" class bugs and race conditions literally happen every day. You can't do that on every PR, so you ship it and prepare to roll back if anything even starts to look fishy. Maybe you even roll it out gradually.
At least, that’s how it worked at literally every big company I worked at so far. The only reason to hold it back is during testing/review. Once enough humans look at it, you release and watch metrics like a hawk.
And yeah, many features were released this way, often gated behind feature flags to control rollout. When I refactored our email system, which sent over a billion notifications a month, it was nerve-wracking. You can't unsend an email, and hundreds of millions would likely have been sent before we noticed a problem at scale.
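The gradual, flag-gated rollout described above is often implemented with deterministic hashing, so each user lands in a stable bucket and a user who is in at 10% stays in as the flag ramps to 50% and beyond. A minimal sketch (flag and user names invented):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Place the user in a stable bucket 0-99 for this flag; the flag is
    live for the user when their bucket is below the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % 100 < percent

# Ramping from 0% to 100%: no user is in at 0, every user is in at 100,
# and anyone in at a lower percentage remains in at a higher one.
assert not in_rollout("user-12345", "new_email_pipeline", 0)
assert in_rollout("user-12345", "new_email_pipeline", 100)
```

Because the bucket is a pure function of (flag, user), rolling back is just lowering the percentage; no per-user state needs to be stored.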
However, this is a different situation, as we're talking about running arbitrary third-party scripts found in the wild. I can't imagine that was ever intended to be done in production.
Fun story, when I worked at Facebook in the earlier days someone accidentally made a change that effectively set the release flags for every single feature to be live on production. That was a day… we had to completely wipe out memcached to stop the broken features and then the database was hammered to all hell.
I would say you can get to this point far below 100 million people, especially on web. Some people are truly special and have some kind of setup you just can't easily reproduce. But I agree, you do really have to be confident in your ability to control rollout / blast radius, monitor and revert if needed.
Or just restore from backup across the board. Assuming they do their backups well, this shouldn't be too hard (especially since it's currently in read-only mode, which means no new updates).
I think there will be good headway in using the part-trained model to generate more training data for itself: making itself tasks, completing those tasks with many different approaches, evaluating which solution is best (using the same LLM as judge), and then differentially training on the best solutions vs the worst ones.
The challenge is that such an approach almost certainly requires a model with RLHF post-training, yet this would need to happen during the pre-training phase. But with infinite compute, this isn't an issue: you simply do the post-training many times.
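The loop above can be sketched in a few lines. Here the "model" and "judge" are mocked with trivial arithmetic so the sketch actually runs; in the real setting both roles would be played by the LLM itself:

```python
import random

# Toy self-improvement loop: generate tasks, sample several candidate
# solutions, rank them with a judge, keep (chosen, rejected) pairs for a
# preference-style (DPO/RLHF-like) differential training step.

def generate_task(rng):
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return (a, b)  # "add these numbers" stands in for a real task

def sample_solutions(task, rng, n=4):
    a, b = task
    # Imperfect mock "model": usually right, sometimes off by one.
    return [a + b + rng.choice([0, 0, 1, -1]) for _ in range(n)]

def judge(task, solution):
    a, b = task
    return -abs((a + b) - solution)  # higher score = better answer

rng = random.Random(0)
preference_pairs = []  # (task, chosen, rejected)
for _ in range(100):
    task = generate_task(rng)
    ranked = sorted(sample_solutions(task, rng), key=lambda s: judge(task, s))
    preference_pairs.append((task, ranked[-1], ranked[0]))
# preference_pairs would now feed the differential training step.
```

The open question the comment raises is whether the judge is reliable enough before post-training; in this mock it is perfect by construction, which real LLM judges are not.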
Interestingly, the latest generation of LED lights is so efficient that it makes little financial sense to bother having a light switch: 330 lumens per watt from recent Philips bulbs!
The cost of having an electrician wire that switch is probably more than the electricity a little 2.5-watt light will use in its lifespan, particularly when you account for the fact that lights in hallways are in use most of the time anyway.
Add in the effort of flipping the light switch a few times a day for many years and it's certainly the case! Or the risk of fumbling in the dark for a light switch at the far end of a room, or in a house you aren't familiar with.
Obviously you might still want to turn it off for maintenance, but you have the breaker for that.
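A back-of-envelope check, with the electricity price, bulb lifespan, and electrician cost all assumed for illustration:

```python
watts = 2.5
hours_per_year = 24 * 365   # worst case: the light is never switched off
price_per_kwh = 0.30        # assumed electricity price in $
years = 20                  # assumed (optimistic) bulb lifespan

lifetime_cost = watts / 1000 * hours_per_year * years * price_per_kwh
print(round(lifetime_cost, 2))  # about $131 of electricity over 20 years

electrician_cost = 150  # assumed call-out cost to wire a switch
# Even running 24/7 for two decades, the bulb's electricity costs about
# the same as the switch installation, which supports the comment's claim.
```

With more realistic partial duty cycles the electricity side only gets cheaper, so the comparison is conservative.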
They are direct AC bulbs, no inverter needed. They do have smoothing capacitors so they don't flicker, and I think a slight redesign of the drive circuitry could add ~10% more efficiency...
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE
That's interesting. You think all the firms that audited the Signal protocol used by WhatsApp, and all the programmers who have worked there for decades and could see and leak a lie if there were one, are all crooks? A valid opinion, I guess, but I wouldn't call it "no one should believe for a second".
(Curious that you didn't mention Telegram; it is actually marketed as secure and E2E, and it has completely gimped "secret chats" that are off by default and used by almost nobody.)
I forget if it's WhatsApp that technically lets you sync chats in unencrypted form to iCloud, which is the "loophole" around this. You can lock down your iCloud even tighter, though; I'm not sure Apple can do much if you fully lock it down, and I'm not sure this has been legally tested. It's not a well-advertised feature, just a setting.
iCloud backups are encrypted, and can be end-to-end encrypted.
Also, backups have nothing to do with the messages being end-to-end encrypted. Even if you don't use a passcode on the phone, the messages are still encrypted.
You mean you will read all the code, including dependencies, and compile it yourself to make sure? ;) Good for you. But good luck creating a popular E2E messenger that way.
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE.
Meta still tracks analytics which isn't good for privacy, but I'm not aware of any news of them or 3rd parties reading messages without consent of one of the 1st parties? Signal is probably much better though
> Meta still tracks analytics which isn't good for privacy, but I'm not aware of any news of them or 3rd parties reading messages without consent of one of the 1st parties? Signal is probably much better though
Correct. WhatsApp uses the Signal protocol, and there is zero evidence of them reading message contents except with the consent of one of the users involved (such as a user reporting a message for moderation purposes).
(And before anyone takes issue with that last qualifier, consent from at least one party is the bar for secure communications on any platform, Signal included. If you don't trust the person you are communicating with, no amount of encryption will protect you).
Discovering a backdoor in WhatsApp for Facebook/Meta to read messages would be a career-defining finding for a security researcher, so it's not like this is some topic nobody has ever thought to investigate.
Yet. Until they say "we delete these messages after X time and they are gone gone, and we're not reading them," assume they are reading them, or will read them, and the information just hasn't gotten out yet.
I mean, we keep finding more and more cases where companies like FB and Google were reading messages years ago, and it wasn't until now that we found out.
Whether Facebook/Meta can read the plain text of the messages depends on whether that encryption is "zero knowledge" or not. That is: does Facebook generate and retain the private encryption key, or does it stay on the users' devices only, never visible to Facebook or stored on Facebook's servers?
In the former case, Facebook can decrypt the messages at will, and the e2ee only protects against hackers, not Facebook itself, nor against law enforcement, since if Facebook has the decryption key they can be legally compelled to hand it over (and probably would voluntarily, going by their history).
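The distinction is whether key material ever leaves the device. A toy Diffie-Hellman exchange (with a tiny 64-bit group, NOT secure, purely for illustration; real protocols use X25519 or the RFC 3526 groups) shows how two clients can derive a shared secret while the server relays only public values:

```python
import secrets

# Toy group parameters, far too small for real use.
p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime
g = 2

# Each device generates its private key locally; it never leaves the device.
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1

# Only these public values ever transit the provider's servers.
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Both endpoints derive the same secret. The server, seeing only the
# public values, would have to solve the discrete log problem to get it.
assert pow(bob_pub, alice_priv, p) == pow(alice_pub, bob_priv, p)
```

If instead the provider generates and retains the private keys, nothing in the math protects users from the provider itself, which is the "former case" described above.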
This viewpoint isn't a slippery slope, it's a runaway train.
"You moved into a neighborhood with lead pipes? That's on you, should have done more research"
"Your vitamins contained undisclosed allergens? You're an adult, and it didn't say it DIDN'T contain those"
"Passwords stolen because your provider stored them in plaintext? They never claimed to store them securely, so it's really on you"
Legislating that everyone must always be safe regardless of what app they use is a one-way ticket to walled gardens for everything. This kind of safety is the rationale behind things like secure boot, Apple's App Store, and remote attestation.
Also consider what this means for open source. No hobbyist can ship an IM app unless they go all the way and E2E-encrypt (and security-audit) the damn thing. The barriers to entry this creates are huge, and very beneficial for the already powerful, since they can afford to deal with this stuff from day one.
Doesn't have to be a law. Can just be standard engineering practice.
WebSockets over wss://, for example, are always encrypted in transit (not e2e). That means anyone who implements a chess game over them gets encryption at no extra effort.
We just need e2e to be just as easy. For example, imagine a new kind of encrypted string type: your application just deals with 'unicode' strings, and the OS handles encryption and decryption for you, including when you send those strings over the network to others.
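A sketch of what that API could feel like: a string-like type that holds only ciphertext, with plaintext visible only to endpoints holding the key. Everything here is hypothetical, and the SHA-256 counter-mode keystream is a readable stand-in for real authenticated encryption like AES-GCM:

```python
import hashlib
import secrets

class SecureText:
    """Hypothetical transparent-encryption string: apps pass these around
    and over the wire, and only key-holding endpoints see plaintext."""

    def __init__(self, plaintext: str, key: bytes):
        self.nonce = secrets.token_bytes(16)
        self.ciphertext = self._xor(plaintext.encode(), key, self.nonce)

    @staticmethod
    def _xor(data: bytes, key: bytes, nonce: bytes) -> bytes:
        # Toy keystream: SHA-256 over (key, nonce, counter) blocks.
        stream, i = bytearray(), 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
            i += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    def reveal(self, key: bytes) -> str:
        return self._xor(self.ciphertext, key, self.nonce).decode()

key = secrets.token_bytes(32)          # held by the "OS" layer, not the app
msg = SecureText("meet at noon", key)  # what the app and network handle
assert msg.reveal(key) == "meet at noon"
```

The hard parts such a design would still have to solve are key distribution and verification, which is exactly what e2e protocols like Signal's spend most of their complexity on.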
I once publicly stated it's understandable that someone would post an ad that says "No YouTubers" because people don't want to be content for others. The reply I got was "but you're being recorded all the time anyway", as if those are remotely related.
This isn't anything new, however. No messaging has ever been actually private; that's why encryption was invented: to keep secrets, and to pass those secrets in a way that can be observed without revealing them.
Telephones can be tapped, people sold special boxes that would encrypt/decrypt that audio before passing it to the phone or to the ear. Mail can be opened, covertly or not. AIM was in the clear (I think at one point, fully in the clear, later probably in the clear as far as the aol servers were concerned)...
Unless the app/method is directly lying to users about being E2EE, it's not a slippery slope, it's the status quo. Now, there are some apps out there that I think are lying: they claim they are 'encrypted' but fail to clarify that it's only private on the wire, like the AIM story. The message is encrypted while it flies to the 'switchboard', where it's plain text, and then it's wrapped in encryption again on the wire to send it to the recipient.
The claim here that actually makes me chuckle is somehow trying to paint e2ee as 'unsafe' for users.
If you are a grown adult and don't do research on "<insert any topic that could have a material negative impact on your life, but that is not currently on your radar as being a topic that could have a material negative impact on your life>" then that's really on you.
It definitely ignores that many people don't have time. If someone is working over 40 hours per week, plus maybe doing unpaid labor taking care of kids or elders, where are people supposed to find the time and energy to brush up on a million different topics they don't even know they might not know enough about? Especially if they might also have medical issues, or hobbies, or want to have any time at all to relax.
Obviously, one way to improve the situation would be to make sure people are paid fairly and not overworked and have access to good and affordable or free childcare and elder-care and medical care, but corporations don't want that either. If anything, they're incentivised to disempower workers and keep them uninformed, and to get as much time out of them as they can for as little money as possible.
80% of the population does not and will never do that level of deep dive on apps
same discussion for any form of technology be it TVs or changing their car's oil
the deliberate app-store-ification of all things computer is also designed to keep people from asking those questions -- just download in and install, pleb.
it's why the Zoomers can't email attachments or change file types: all of the computers they grew up with were designed so they never had to understand what happens under the hood.
Most people couldn't tell you how their car works, at least not enough to fix it. Is that handholding, too?
People can't be knowledgable about everything. There's just too much information in the world, and too many different skills that could be learned, and not enough time.
A carpenter can rely on power tools without understanding fully how the tools work, and it's fine, as long as the tools are made to safe standards and the user understands basic safety instructions (e.g. wear protective eyewear).
To me, making sure that apps don't screw with people, even if they don't understand how the apps work, is roughly the equivalent of making sure power drills are made safely so they don't explode in peoples' hands.
“As long as the user understands basic safety instructions”
Yes, the internet has basic safety instructions too (and probably just as many bother to read them); number one or two is "almost nothing online is ever really private". I learned it by the mid-2000s; not knowing it in 2026 is not excusable with "people don't need to know how everything works".
And I never said that people should be knowledgeable about everything...
... and this is not what I was referring to either.
Less handholding -> more learning... but even then, what I meant is that you do not have to be knowledgeable to know that your "private messages" are not really encrypted and can be read by the admin (in the case of forums, for one), and so forth.
Honestly I'm tired with every app trying to become the everything app.
Now TikTok wants to be a messaging app. Snapchat has a short-video feed just like TikTok. WhatsApp only has a text feed; how long until they also add a video feed?
In my experience most forums have private messaging.
Additionally I think it is fine to say "we don't support e2ee". I prefer honesty to a bad (leaky) e2ee implementation, at least the user can make an informed choice.
>In my experience most forums have private messaging.
Yeah, but it's kind of accepted that the forum owner could read it all if they so chose. Maybe this is a holdover from the old days, when encryption was nowhere near the default and forums arose.
I agree. At least saying "Yes, messages are stored on our servers" is honest. And whether they are accessed by anything other than a limited subpoena is a policy or legal issue.
Lots of other consumer services such as Strava have direct messaging without e2e encryption. No privacy is guaranteed. This is fine, they're not deceiving anyone about how it works.
Adding that private self hosted forums can permit uploads of encrypted files, encrypted with a pre-shared secret or a secret shared over a private self hosted Mumble voice chat server.
You can encrypt the content but not the metadata, not even the subject, unless you use a customized client that encodes it (like Delta Chat, which doesn't use a subject at all), but then you still have your email address exposed.
Email encryption is sufficient for most people even if the metadata is exposed. One can simply write "Bing Bing Bong" or "Why did you not put the trash out?", which might mean to the recipient: "check the second SFTP server", or "let the cat outside", or "jump on my private Mumble chat server", or "get on my private self-hosted IRC server". The email message need not even be encrypted, for that matter.
The intended payload can be in a headerless encrypted file on a throwaway SFTP server in a tmpfs RAM disk.
I have never considered metadata a part of the term E2EE. It has always been about the message contents.
I understand that metadata is valuable information for spies/governments and that encrypting or hiding it is valuable for privacy. But if you use that definition, there are almost no E2EE protocols on the planet in use.
First and foremost, any protocol that uses Apple or Google push notifications is giving metadata to those organizations. Even WhatsApp, iMessage, Signal, and Telegram private messages leak metadata, though the contents of the messages are hidden from the provider.
I know, right? I admit that is mostly for people on Linux desktops. People on smart phones are 100% monitored regardless of encryption or fake E2EE that platforms pinky promise is really E2EE like Signal. Shame on Moxie, he knows better.
Ovaltine has a crapload of sugar. Don't drink that horse piss.
You can bring your own encryption to ANY messaging platform; it just won't be easy to use. E2EE really just makes it handy, so that users don't need to pre-share any keys.
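For instance, two users who pre-share a random key out of band can paste ciphertext through any messaging platform. A one-time pad keeps the sketch short (the key must be as long as the message and never reused; real BYO setups would use something like PGP instead):

```python
import base64
import secrets

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # pre-shared out of band, used once

# Sender: XOR with the pad, base64 so it pastes cleanly into any chat box.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
pasteable = base64.b64encode(ciphertext).decode()

# Recipient, holding the same key, reverses it:
recovered = bytes(c ^ k for c, k in zip(base64.b64decode(pasteable), key))
assert recovered == message
```

This is exactly the "not easy to use" part: key exchange, key length, and one-time use are the burdens that built-in E2EE protocols lift from the user.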
E.g. the Debian random number generator bug.