> For the longest time (and for good reasons), floating point operations were considered unsafe for deterministic purposes. That is still true to some extent, but the picture is more nuanced than that. I have since learned a lot about floating point determinism, and these days I know it is mostly safe if you know how to navigate around the pitfalls.
If you're only concerned about identical binaries on x86, it's not too bad because AMD and Intel tend to have intentionally identical implementations of most floating point operations, with the exception of a few of the approximate reciprocal SSE instructions (rcpps, rsqrtps, etc). Modern x86 instructions tend to have their exact results strictly defined to avoid this kind of inconsistency: https://software.intel.com/en-us/articles/reference-implemen...
If you want this to work across ARM and x86 (or even multiple ARM vendors), you are screwed, and need to restrict yourself to using only the basic arithmetic operations and reimplement everything else yourself.
At least in the early 2000s, Bloomberg had strict requirements about this. Their financial terminal has a ton of math calculations. The requirement was that they always had live servers running with two different hardware platforms with different operating systems and different CPU architectures and different build chains. The math had to agree to the same bitwise results. They had to turn off almost all compiler optimisations to achieve this, and you had to handle lots of corner cases in code: can't trust NaN or Infinity or underflow to be portable.
They could transparently load balance a user from one different backend platform to the other with zero visible difference to the user.
> If you want this to work across ARM and x86 (or even multiple ARM vendors), you are screwed, and need to restrict yourself to using only the basic arithmetic operations and reimplement everything else yourself.
Is this problematic for WASM implementations? The WASM spec requires IEEE 754-2019 compliance with the exception of NaN bits. I guess that could be problematic if you're branching on NaN bits, or serializing, but ideally your code is mostly correct and you don't end up serializing NaN anyway.
I'm sure you know, but for others reading: even on the same architecture, there is more to floating point determinism than just running the same "x = a + b" code on each system. There's also the state of the FPU (eg: rounding modes) that can affect results.
On older versions of DirectX (maybe even in some modern Windows APIs?) there were cases where it would internally change the FPU mode, causing chaos for callers trying to use floats deterministically[1].
As far as I know, the ARM (at least aarch64) situation should be about the same as x86-64. Anything specific that's bad about it? (there's aarch32 NEON with no subnormal support or whatever, but you can just not use it if determinism is the goal)
that RECIP14 link is AVX-512, i.e. not available on a bunch of hardware (incl. the newest Intel client CPUs), so you wouldn't ever use it in a deterministic-simulation multiplayer game anyway, even if you restrict yourself to x86-64-only; so you're still stuck to the basic IEEE-754 ops even on x86-64.
x86-64 is worse than aarch64 in one very important aspect - baseline x86-64 doesn't have fused multiply-add, whereas aarch64 does (granted, the x86-64 FMA extension came out not long after aarch64/armv8, but it's still a concern, such is life). Of course you can choose not to use fma, but that's throwing perf away. (Regardless, you'll want -ffp-contract=off or equivalent to make sure compiler optimizations don't screw things up, so any fma use will need to be explicit fma() calls anyway.)
The Steam hardware survey currently has FMA support at 97%, which is the same level as F16C, BMI1/2, and AVX2. Personally, I would consider all of these extensions to be baseline now; the amount of hardware not supporting them is too small to be worth worrying about anymore.
We use floating point operations with deterministic lockstep, with a server compiled with GCC on Linux, a Windows client compiled with MSVC, and an iOS client running on ARM which I believe is compiled with Clang.
Works fine.
This is not a small code base, and no particular care has been taken with the floating point operations used.
> I was expecting a unified interface across all architectures, with perhaps one or two architecture-specific syscalls to access architecture-specific capabilities; but Linux syscalls are more like Swiss cheese.
There's lots of historical weirdness, mostly around stuff where the kernel went "oops, we need 64-bit time_t or off_t or whatever" and added, for example, getdents64 to old platforms, but new platforms never got the broken 32-bit version. There are some more interesting cases, though, like how until fairly recently (i.e. about a decade ago for the mainline kernel), on x86 (and maybe other platforms?) there weren't individual syscalls for each socket syscall, they were all multiplexed through socketcall.
and, depending on how you define the rationals, -0.
https://en.wikipedia.org/wiki/Integer: “An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...)”
According to that definition, -0 isn’t an integer.
Combining that with https://en.wikipedia.org/wiki/Rational_number: “a rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q”
means there’s no way to write -0 as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.
The compiler sees that foo can only be assigned in one place (that isn't called locally, but could be called from other object files linked into the program) and its address never escapes. Since dereferencing a null pointer is UB, it can legally assume that `*foo` is always 42 and optimizes out the variable entirely.
Compilers can do whatever they want when they see UB, and accessing an unassigned and unassignable (file-local) variable is UB, therefore the compiler can just decide that *foo is in fact always 42, or never 42, or sometimes 42, and all would be just as valid options for the compiler.
(I know I'm just restating the parent comment, but I had to think it through several times before understanding it myself, even after reading that.)
> Compilers can do whatever they want when they see UB, and accessing an unassigned and unassignable (file-local) variable is UB, therefore the compiler can just decide that *foo is in fact always 42, or never 42, or sometimes 42, and all would be just as valid options for the compiler.
That's not exactly correct. It's not that the compiler sees that there's UB and decides to do something arbitrary: it's that it sees that there's exactly one way for UB to not be triggered and so it's assuming that that's happening.
Although it should be noted that that’s not how compilers “reason”.
The way they work things out is to assume no UB happens (because otherwise your program is invalid, and you wouldn't ask to compile an invalid program, would you?) and then work from there.
Describing it as 'Google' is misleading, because different arms of the company might as well be completely different companies. The Chrome org seems to have had the same stance as Firefox with regards to JPEG XL: "we don't want to add 100,000 lines of multithreaded C++ because it's a giant gaping security risk", and the JPEG XL team (in a completely separate org) is addressing those concerns by implementing a Rust version. I'd guess that needing the "commitment to long-term maintenance" is Chrome fighting with Google Research or whatever about long-term headcount allocation towards support: Chrome doesn't want the JPEG XL team to launch and abandon JPEG XL in Chrome, leaving Chrome engineers to deal with the fallout.
It doesn't seem obvious to me that this is actually a bug in the Android implementation, it seems like this is due to AirPods violating the spec and requiring a special handshake before responding to standard requests. It doesn't seem reasonable to expect Android to work around a device that appears to be intentionally breaking the spec for vendor lock-in purposes: the possibility of them just OTAing an update that breaks in some other way means that you'd have to be entirely bug compatible with iOS's bluetooth implementation.
It's not that hard to imagine Apple going out of their way to do something that would break functionality on Android, honestly. Although I believe Fluoride is also to blame here, because a simple timeout cannot possibly cause any issues (it seems that a timeout is there, but never called - at least from my tinkering). I am not planning to spend a single second tracing back the actual problem and suggesting a fix, given that Google just asked me to reproduce it twice (!!) and did nothing about it.
when you’ve worked long enough in any given industry you know that all companies "violate" standards to satisfy requirements of their product management.
Apple have been ‘extending’ the Bluetooth stack for quite a while. They introduced some BLE features before the spec was finished (I think some 3rd party hearing aids were also compatible).
I haven’t used non-Apple earphones for a while, but the seamless connectivity performance of AirPods would suggest this was done for performance, not to deliberately lock in devices.
> They introduced some BLE features before the spec was finished
In their defence, they went with Lightning shortly before the USB-C spec was finalized. Then, to avoid their customers being screwed over by constantly changing the connector, they kind of had to stick with it for a decade.
People will complain if they push features that are ahead of the spec, and they'll complain if they wait for the spec to be finalized before they use it. Being guided by "What's the best we can do for UX, assuming our users are our users in every product category we enter" seems to be their reasonable middle ground.
The only reason Apple ditched the Lightning port and finally put USB-C in its iDevices is because the EU forced Apple to do so. But do you think your oh-so-common USB-C cables will work with a new iPhone?
In my country (India), Apple still doesn't sell charger and cable along with its new iDevices, even though those gadgets are exorbitantly expensive. And Apple doesn't allow custom repair here, even though my country mandated the Right to Repair, like EU did so. My old Mac Mini 2012 is gathering dust in a cupboard, because Apple service center refused to upgrade it to new RAM and new SATA SSD, citing Apple policies.
Couldn't you just upgrade it yourself in the pre-Apple-silicon days?
Like within minutes, with no big changes?
I don't think it's rare that a company refuses to do any work on devices they no longer support. Their employees will no longer be trained to do this work, hence they'd have a nontrivial chance of causing damage. That's exactly why a right to repair is so important, so that other people can pick up their slack.
"Seem". Until they don't. I've had multiple instances of Airpods stopping to connect with phones until I charged them at least once with original Apple cables. They might work fine for months, then stop ehaving unless connected through an all-Apple power pipeline (cable and charger). It's probably firmware updates requiring some sort of validation every now and then.
Sounds like you have a flaky / damaged device or bad cables. If there really was some kind of conspiratorial timer requiring you to use 1P cables it would certainly be documented. Can’t hide that stuff. Loads of people use Apple devices with 3P cables all the time and they work just fine, as long as the cables aren’t junk. There really are quality and capability differences in USB C cables. Just because it looks right and physically connects doesn’t mean it can electrically do all the things.
I know that Apple MFI certified Lightning cables work well with iDevices, but I found that third-party non-MFI-certified Lightning cables to be finicky with iDevices. But I never faced such problem with USB cables for non-Apple devices (Android phones, cameras, etc.).
Apple MFI certifies USB-C cables also, so I'm not sure if it is throttling its iDevices to be finicky with non-MFI USB-C cables.
I know for a fact that Apple did software updates to older iPhones to make them sluggish and drain battery quickly. I realised this when I went to the Apple Genius Bar to get my iPhone 7 Plus battery replaced after it started draining too quickly daily, but even with the new battery the same problem persisted. The friendly staff member unofficially told me it is because of the recent software updates by Apple for older iPhones, and advised not to hold out hope that any future software update will fix the problem. Even a year later, his warning remained true. I gave away the iPhone to my nephew as a backup device for his studies, but he sold it soon, as it was a nightmare to keep charging it frequently.
Apple has faced multiple fines for deliberately slowing down older iPhones without informing users, including a €25 million fine in France and a $41 million fine for deceptive marketing practices. The company admitted to slowing down devices to prevent unexpected shutdowns due to aging batteries, but critics argued it was misleading.
These days, I wouldn't trust Apple with a barge pole, let alone the money from my wallet.
>Apple has faced multiple fines for deliberately slowing down older iPhones without informing users, including a €25 million fine in France and a $41 million fine for deceptive marketing practices. The company admitted to slowing down devices to prevent unexpected shutdowns due to aging batteries, but critics argued it was misleading.
These cases are much less convincing than they may seem if you just take a moment to read about them. iDevices would throttle the CPU to make the battery last longer as its capacity falls; this kind of throttling is not uncommon and not malicious.
This wasn't misleading, and isn't something that warrants any genuine criticism.
In my experience, the only 2 mobile phone companies whose phones drain battery too fast are Apple and Samsung. Apple does this deliberately for older phones, whereas Samsung has this problem even for new phones.
You will not find this quick battery drain problem in Motorola, Nokia, Oppo, Sony, etc. Their phones last several years even with ageing batteries. A 10+ year old Oppo phone I have still holds almost a full charge at idle throughout the day.
As batteries get older, their capacity to hold charge reduces, but if a phone battery is draining too fast even in idle mode, it is likely due to software, not hardware. And if it is due to software, then the manufacturer company is to blame.
I don't think you can find any evidence of Apple actually deliberately doing things to make batteries drain faster on older models.
That would either require hurting the battery life on all models or require distinguishable behaviours that only occur on specific models and would be relatively simple to prove through reverse engineering.
Apple has been fined for the throttling, but hasn't ever been credibly accused of actually deliberately taking steps to reduce battery life on older devices.
That iFixit guide to upgrade the Mac Mini is daunting for newbies.
But you've inspired me to gather courage and do the DIY upgrade myself next month during the holidays. No use having a working PC lying unused, merely because it is very sluggish due to old hardware. Wish me luck (for the upgrade), I think I'll need it.
You’re just limiting yourself for no reason. It’s not Apple’s fault that you are sitting in front of an un-upgraded computer that is tool-less (for one of your tasks, at least) and has step-by-step instructions meant for beginners.
That's because SMS is a horribly broken, hacky standard, and RCS has to inherit and deal with all the horrifying edge-cases of SMS, MMS, and legacy cruft going back prior to the turn of the millennium.
Then it has to accommodate every other interested party, many of which hate each other. Apple has always been a bit of an odd duck ("Think Different" has been internalized for some time), but Verizon actively hates OTT messaging as they can't charge for it. Samsung would rather run their own RCS implementation to create and advertise "Samsung RCS", and Google can't push too hard without getting EU attention for antitrust (again).
RCS has been stuck in limbo-hell for years for multiple reasons, none of which are easy.
The specific issue I'm talking about is how Apple for some reason ties the presence of RCS persistently to a contact that requires the user to manually go in and adjust, otherwise the conversation switches back and forth between SMS and RCS as each participant texts back and forth.
This is a problem no other vendors have, and is solely caused by Apple.
Why is that on Apple instead of the hundreds of other manufacturers and Google? If Google wants a better ecosystem, it’s on them, since according to them Android was supposed to be the “definition of open”.
Because while Android is "open", Google has no carrot (Verizon can't charge for OTT messaging and has no major incentive to push it), and no stick (pushing too hard will draw regulators' attention again)
RCS has been stuck in limbo-hell for several years, and I expect it to stay that way (to your point, I expect it to stay that way even if Apple chips in)
In general, rigidity of stack is a malfeasance. Over-protecting the user brings fragility and un-adaptability that curses the world. Android certainly is a rigid, narrow, protective stack that refuses to accommodate, again and again. Different genre, but decades later it still won't work on many IPv6 networks because, for no clearly stated reason, it won't support DHCPv6: Android is full of these weirdly unstated "principled" anti-compatibilities, and I can't excuse blaming the devices or networks for being what they are: it's the unbending rigid OS that offends me.
I do rather hope perhaps perhaps perhaps the EU & DMA or other may perhaps bend Apple off their rotten course of making non-standard bespoke systems. It seems like very recently the EU is getting ready to cave & abandon all their demands for trying to use standards, that their fear of the US is about to make them fold on insisting upon better. Demanding Apple stop doing everything in bespoke incompatible ways is something that should have happened a long time ago, imo, and it's so horrifying to see one of the only stands in my lifetime against the proprietarization & domination of systems by a bespoke corporate lord abandoned.
There's some rays of hope here & there. Seemoo Lab has a ton of amazing reverse engineering efforts, figuring out how many many many undocumented locked down Apple systems & protocols work & trying to give control back. This is the highest virtue, the best hacker nature. Here's Open Wireless Link, but they have so many other amazing projects they've similarly figured out how to pry open. Amazing best human spirit.
https://github.com/seemoo-lab/owl
is there evidence it’s for vendor lock-in purposes? airpods have a pretty stellar connection for bluetooth, wouldn’t be surprised if there were performance reasons for them going off spec
I doubt it’s for any reason at all. The obvious explanation is that they just developed and tested these extra firmware features against Apple devices because that was the product decision. Since nobody was tasked with targeting Android they might not have even noticed that it wasn’t perfectly spec-compliant if those states were never encountered, nor expected to be encountered.
No there isn’t. I’ve said this a million times before, but usually just downvoted: this is about reducing support costs, not increasing revenue from lock-in. This is not a theory, I’ve sat in meetings at Cupertino and been told first hand.
Support is very expensive. Say what you want about Apple, but they provide absolutely stellar support, especially with the stupidly inexpensive Apple Care insurance. This is only cost effective if they can make reasonable predictions about how their devices will behave in any given scenario. Interfacing Apple hardware with non-certified (MFi, BLE, etc) third party hardware has a non-trivial risk of unpredictably high support costs, either from excessive Apple Care claims, customer support communications, or just overloading the Genius Bar.
Reducing support cost could easily explain the motivation of the entire walled garden if they are sufficiently high.
That's tautological. Everything that is not supported is so because supporting it has a cost. The question is what is the cost? It seems quite obvious that the marginal revenue from airpods would be overshadowed by the revenue of getting a user in the ecosystem.
Having to test the AirPods with more standards compliant devices, having to waste time to tell customers to fuck off if their phone/laptop/toaster is not standards compliant, having to waste engineering time to investigate non compliant aliexpress phones/laptops/toasters, wasting time to implement additional functionality for Apple customers because it has to go into the spec first
Yes, all that is a part of the cost equation, which points to the same thing, namely, that $200-$300 widgets are not worth selling to the general public; they would rather sell them to a customer who will spend a lot more in the ecosystem. Same as razors and blades or consoles and games.
Customer support costs are higher at Apple than its competitors, because they provide a better support experience. This is not a tautology, it’s one of their core value propositions
They couldn't just write (and make people aware at point of sale, ofc) 'no support for using devices with non-Apple Computers products' into Apple Care. They had to purposely break compatibility?
You can still connect AirPods to an android device using Bluetooth, you just don’t get the seamless connection or support for Spatial Audio that use the extended protocols
> Why use Bluetooth at all if they don't actually use it properly?
Because they needed a way to get audio to the AirPods wirelessly and to work with their devices? That’s a pretty good reason to use Bluetooth.
I doubt they got together and tried to scheme a way to break Bluetooth in this one tiny little way for vendor lock in. You can use the basic AirPod features with other Bluetooth devices. It’s just these extended features that were never developed for other platforms.
HN comments lean heavily conspiratorial but I think the obvious explanation is that the devs built and tested it against iPhone and Mac targets and optimized for that. This minor discrepancy wasn’t worked around because it isn’t triggered on Apple platforms and it’s not a target for them.
It reminds me of the USB keyboard extender that came with old Macs. There’s a little notch in the socket so you can only use it with Apple keyboards. At the time I thought it was a petty way of preventing you from using it with any other device, but apparently the reason they didn’t want you to use it with other devices is because the cable didn’t comply with the USB spec.
Yes, USB extenders are not spec-legal (because the device isn't built expecting to be extended).
But you can have an extension cord which accepts USB on one end but doesn't accept USB on the other.
So the keyboard has a superset connector so that it can go in regular USB and notched USB, because it is verified to work right when using the extension cord.
This design also means you can't plug one extension cord into another to get an even longer distance (which the keyboard wouldn't expect). Pretty clever solution.
Truth is, no one has the full facts so any reasons as to why this was made the way it was is pure speculation. Only a fool would move to condemn or endorse what is not yet fully understood.
As someone who's implemented custom Bluetooth protocols, it's actually quite easy to condemn an Apple manufacturer ID check to expose custom services.
And what do you mean by "conspiracy"? I would be shocked to find out this was done by some lone wolf and wasn't built with broad (even if grumbly) consensus in the relevant teams. That's how corporate software is built.
Every time someone opens an argument with the classic appeal to authority “as someone who has…” you can almost certainly expect to have that person miss the point of the discussion entirely.
Google works around a ton of out-of-spec hardware / driver quirks for Android's ExoPlayer media player stack. So it is more than reasonable to expect Google to add a workaround for this.
I've found 0.8mm to make much more reliable connections, since the specification says that the tongue should be 0.7mm. 0.6mm will disconnect if the cable is angled in any way.
0.8mm is definitely out of USB 3.0 official spec and might damage the plug. The Spec requires 0.7mm with contacts and 0.6mm without, i.e., 0.05mm for the contact. See:
It just feels smooth, like if you were in a modern vim. In most other editors that attempt implementing a vim mode, something constantly breaks the illusion. There are some little annoyances in Zed, but they are mostly behavior differences you can get used to. And they are still working on it, so I really see it as a reimagining of vim with many useful features built in, like TS-based motions, or the way AI edit predictions work without breaking the vim editing flow.