Not sure why people flagged you for this. It's very common for open source projects to make the details of security-related bugs private. One example is Firefox: nearly every security update references one or more bug tickets that the public doesn't have permission to view.
I wonder if Apple listed the wrong WebKit bug number; it almost looks like it.
I haven't seen a good answer to the question, "Does Lockdown on iOS 16 prevent whatever this exploited?"
In any case, there was a Chrome 0day recently patched too, an Element Desktop RCE... so... Qubes is looking less and less like "A good idea" and more and more like "The only way to safely use web browsers." :( Disposable browsing VMs should keep the nasties away.
> That is until someone comes up with a debilitating Xen 0-day
But you're adding layers.
A Xen 0day, alone, isn't useful. You have to be able to deliver it, which probably implies local root.
To get something useful out of a user's home directory on a typical OS install, you pop the browser, do what you want.
To get something useful out of a user in Qubes, assuming they're using an untrusted browsing VM, you have to pop the browser, then get local root, then deploy your Xen exploit... and then maybe do something useful.
There are also the standard anti-RE-sandbox techniques malware uses. Show up in a clean profile on a hypervisor? Maaaaaybe not a good idea to be evil. Lots of stuff will refuse to activate in something that looks like a malware RE sandbox, and a disposable Qubes VM certainly would look like that.
I won't claim it's impossible, but I will claim that doing a cross-Qube hop through Xen is a lot harder than using just one exploit to grab the goodies.
With Qubes you effectively have local root by default [0], because LPE is usually almost a foregone conclusion once the attacker has a sandbox escape.
> A Xen 0day, alone, isn't useful.
I don't think there are any attackers with the interest and capability to acquire a Xen sandbox escape who wouldn't readily have access to browser 0-days, unless the target is using something like Tor Browser Bundle with JS, SVG, and PDF.js disabled.
I would think yes, if you can accept the performance hit. You always have to be careful about what files you share between guest and host. Even browsing as another macOS user would offer a fair amount of isolation from your main account, if you make your main files unreadable by the rest of the system.
Generally speaking, yes, though if you are that concerned about browser exploitation then a more practical mitigation is to browse untrusted sites with JavaScript disabled.
Link is to the macOS patch notes, https://support.apple.com/en-us/HT213412 is the patch notes for iOS if anyone's curious. The only difference is "available for a bunch of iThings" instead of "available for Monterey", the CVEs are the same.
All macOS patches (whether minor or major) take 20-30+ minutes. Even on the new M1/M2 chips.
It makes keeping macOS up to date in an enterprise / corporate environment a pain (an employee updating their computer in the morning puts them out of commission for 45 minutes, followed by complaints about missed meetings, etc.)
Some of this may be related to system volume signing, in which cryptographic signatures for the entire system volume are recomputed following an update. The entire filesystem tree is re-hashed recursively. Once the signatures are recomputed, the signatures are checked at each boot (which goes much faster, of course), and if verification fails, the system declines to start up.
IIRC you're replacing the whole OS image on every update now.
Also, their edge caches are awfully throttled on large downloads if your ISP has one installed. The Akamai CDN performs a lot better. I ended up switching DNS servers just for this.
From a purely anecdotal perspective, I've noticed a vast decrease in the performance of the macOS updaters/installers since the point when they switched to using what is essentially the iOS mechanism for them, with the MobileAsset UpdateBrain stuff, and .ipsw files, and chunked pbzx archives nested inside zip archives nested inside xar archives. I don't remember the details of the previous system, but I remember it seeming a lot simpler, a lot less buggy, and certainly a lot faster on my hardware.
I’m still kind of confused why they don’t have an A/B update system for macOS now that Sealed System Volume is a thing. I feel like one of the major benefits of that would be that you can swap /System out and not worry about losing any user state, so why can’t they just download a new System volume, put it somewhere else (while you’re using your computer), then on reboot boot from the new one and throw away the old one? If you disable SSV then they can use the slow update process.
(Unless there’s too many system files that are updated a lot and not sealed?)
I'm trying to think how you could reliably, securely hash one volume while running a potentially untrusted system from another volume. I imagine this can be done, but I'd be guessing as to how. For now, I suspect Apple has thought this through and has determined that booting the system into a known-cryptographically-clean state is the best method at hand for reducing the risk of a compromise/failure in the process of signing the system volume.
FWIW, APFS snapshots do at least provide an instant rollback mechanism, whereby a failure to install and sign the updates to the new temporary snapshot do not destroy the previous system. So what you describe seems reasonable and potentially feasible.
The "brain" to which it refers (on x64 machines) basically is an iOS device (running a variant called bridgeOS) on an ARM coprocessor (T2) inside your computer. Same secure boot, same TSS signatures, etc.
It is an unpopular opinion on HN, but I have been saying this for quite some time. In the old days we were optimising for file size and transfer speed, making sure you got a smaller (compressed) download. That was when bandwidth was expensive, both in the datacentre and in the consumer last mile. Nowadays we have an abundance of capacity, we get 100Mbps if not 1Gbps internet, and the actual download happens in the background. We should be optimising for total installation time over bandwidth savings.
I wish they optimized for a 2.5-year-old computer not being slow as hell. I remember getting my first MacBook in 2013 and it was snappy. I bet they assume a given size of L2 cache and every non-top-notch processor must page like crazy.
macOS is sluggish, now, even on day #1 of a purchase.
The progress bars are also wholly unhelpful. There are like 3 in a row, sometimes with a time estimate (often wildly wrong) but only for the current bar, and sometimes no estimate at all. I'd like to say that at least it gives you the indication that something is happening, but sometimes the progress bar looks frozen for minutes on end, so not even that, really.
Updating Xcode on an M1 Mac Mini also takes a good couple hours.
Apple really needs to work on their update experience.
Went to Software Update on my iPhone; it told me 15.6.1 was available as a security update, started downloading, and said 1 minute left.
For some reason it said an error occurred when I went back to Settings. And now, for the same 15.6.1, the description says:
"This update adds the ability to unlock with Face ID while wearing a mask on iPhone 12 and newer. This update also includes new emoji, a new voice option for Siri, and other features and bug fixes for your iPhone.
Some features may not be available for all regions or on all Apple devices. For information on the security content of Apple software updates, please visit this website: https://support.apple.com/kb/HT201222"
And it now says 45 minutes left. I'm already on 15.6, so the Face ID/emoji part is irrelevant since I already have those features, and it didn't mention any of that when it first showed 15.6.1 as an update anyway.
My iPad did the same. I assume it failed to download the delta update (200MB-ish) for some reason and therefore defaulted back to downloading the full restore image (5GB-ish) instead which must have had different release notes. It installed fine after doing so though.
Why isn't there basic information available on this cve? What version range is affected? What applications or system utilities are affected? Is it remotely exploitable or local only? Does it require elevated privileges?
This is the level of support you get from a trillion dollar company?
It's always the same: they usually reveal this information a few days after the release, for two reasons: 1. they don't want to hand this information to attackers until a significant chunk of their users has updated; 2. they may also be preparing updates for users who are not on the latest macOS/iOS versions (as they usually do).
> This is the level of support you get from a trillion dollar company?
Apparently they care about their users not getting exploited. Remember that many macOS/iOS users are not subscribed to the debian-security list and running apt-get update ; apt-get dist-upgrade twice a day.
The level of support is: install this update if you want to be secure. The idea is that you don’t need to know all that other information. Install the update to be secure.
>The level of support is: install this update if you want to be secure.
That is as useless as it is passive-aggressive to someone who needs to plan and prioritize updates on a large number of machines. Pissing off your current customers with shitty support is a good way to lose future business, though.
> What applications or system utilities are affected? Is it remotely exploitable or local only? Does it require elevated privileges?
I thought it was clear. Any report that doesn't say a vulnerability requires elevated privileges means it doesn't. "An application" means any application. WebKit means potentially anything that embeds WebKit, including third-party apps. Applications are local; web content can be remote. Combining the exploits could give you kernel privileges remotely.
This is an "actively exploited" zero-day bug, which means there would be specific applications written to exploit this bug. Which application(s) did that? Who specifically crafted their application to exploit the OS X kernel?
> This is an "actively exploited" zero-day bug, which means there would be specific applications written to exploit this bug. Which application(s) did that? Who specifically crafted their application to exploit the OS X kernel?
It said Apple is aware of a report that this issue may have been actively exploited, not that it definitely was. And potentially any WebKit application can be exploited and then used to exploit the kernel.
Generally agree that whataboutism is often unhelpful and how Google operates doesn't excuse Apple but that's not what was asked. The question was "This is the level of support you get from a trillion dollar company?", so someone coming in and pointing out that it's the same or worse at other trillion dollar companies is answering the question.
I would interpret that as more generic than buffer overflow.
“Buffer overflow”, for me, is for when you read or write a short distance before or after a buffer (typically because you’re looping over it and got your ending condition wrong). “Out of bounds write”, for me, also includes when you read or write basically anywhere (say a function you call overwrites your local pointer to a buffer with ‘random’ bits, and you then use the corrupted pointer to read or write data)
> That's the hallmark of a nation state that has previously been exploiting these, but since decided - for whatever reason - that the vulns have become too risky to leave undisclosed to the vendor.
I think it just means the person doesn’t want to be named.
Yes, at this point I'd probably only start taking jobs in secure languages. Working on Rust web services has been a joy and I don't know if I could stand going back to insecure languages and finding critical issues out in the wild like this.
There is no such thing as a "secure language", it's very dangerous to think like that. You can have exploits like SQL injection, for example, in any language.
My seat belt also doesn't guarantee I won't die in a car crash, but I still put it on and disregard anyone telling me I shouldn't bother because it's not perfect.
Sure, but that's not what I (or the sibling commenter) was saying.
I'm a proponent of using more secure languages, my point is just that if you think it will mean a world in which you won't be "finding critical issues out in the wild like this", you're wrong. There will always be 0days; attackers will always find a way.
https://bugs.webkit.org/show_bug.cgi?id=243557 (leading to https://github.com/WebKit/WebKit/commit/1ed1e4a336e15a59b94a...)
Shouldn’t this issue have been made inaccessible in order to mitigate exploitation?