I’m confused, you’re talking about 16 GB of RAM but OP said:
> Having only 8 GB sucks unless you're using it as a terminal or media player.
I have the M1 MacBook Pro with 16 GB too, and it’s fine for normal web development and multitasking, but that … really isn’t surprising?
I still regularly use a five-year-old IdeaPad 14 Pro with 16 GB of RAM running Windows 11, and it’s also completely fine for dev work running servers/Docker/WSL2 VMs/etc. locally.
> I’m confused, you’re talking about 16 GB of RAM but OP said:
Having only 8 GB
Look at the list of things they said they have open. Divide it in half and it's still a lot, because that set of running software is very hungry. PostgreSQL, Slack, Docker, Brave, Cursor, and iTerm2 running on my system put RAM usage at 23.5 GB, and yet modern Macs have both very good memory compression and extremely fast swap. Most Mac users will never realize if they've filled RAM entirely with background software.
Thanks, I can see the point being that a smaller subset of that would work on 8 GB, but I don't think you can really just divide in half? (A much larger portion of the 8 GB would be dedicated to base OS/unified GPU needs compared to the 16 GB model.)
E.g., using hypothetical numbers: if base macOS/typical GPU usage requires 4 GB, then the 8 GB model would have 4 GB available for running apps (multiplied by memory compression/swap to a fast SSD), whereas the 16 GB model would have a much more comfortable 12 GB for multitasking in that scenario, especially with the multiplier effect of compression/fast swap on top.
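A quick sketch of that hypothetical arithmetic (all numbers are illustrative assumptions, not measurements):

```python
# Hypothetical headroom math: RAM left for apps once a fixed base-OS/GPU
# reservation is subtracted. All figures are illustrative, per the
# "4 GB base usage" assumption above.
def app_headroom_gb(total_gb: float, base_os_gb: float = 4.0) -> float:
    """RAM left over for user applications, before compression/swap help."""
    return max(total_gb - base_os_gb, 0.0)

print(app_headroom_gb(8))   # headroom on the 8 GB model
print(app_headroom_gb(16))  # headroom on the 16 GB model
```

Under these assumed numbers, going from 8 GB to 16 GB triples the app headroom (4 GB to 12 GB) even though total RAM only doubles, which is why halving a 16 GB workload understates the squeeze on the 8 GB machine.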
So it still feels like a bit of an apples-to-oranges comparison as far as what an 8 GB model could handle in real usage. I have a friend who does light dev work on an M1 MacBook Air, so I don't think an average user would have issues on the Neo day to day, but using the 16 GB machine as a yardstick doesn't seem that useful.
> Considering a much larger portion of the 8 GB would be dedicated to base OS
Sure, but by the numbers I'm seeing, their much heavier load than mine would be waaaay into swap territory for them, and it's still doing just fine. That's really my point. That's why I think it's actually pretty reasonable to look at half their load and say "man, even half their load is a pretty heavy load for most people, so half their RAM will almost certainly be more than plenty for the target market".
Also, just for reference, my Activity Monitor says that non-purgeable OS RAM (wired) usage is around 3 GB on Tahoe 26.3.
Guess what? Both Windows 10+ and Linux have memory compression too, yet 8 GB is good only for light usage unless you're willing to "destroy" the flash with intensive swapping.
Sorry, I should have said that running that same stack on Windows or Intel macOS with 16 GB resulted in tons of sluggishness in my experience. I would have considered it a 32 GB workload on Intel, so I was surprised that 16 GB was enough for it.
To the major point of whether it (the Neo 8 GB) can run multiple programs at the same time, my experience says it would have no issues doing so, given what one can do in 16 GB on lesser Mac hardware. (Maybe I'm wrong and macOS takes all 8 GB for itself, but that seems far-fetched.)
Shared iPad overview
Shared iPad allows more than one user to sign in to an iPad. The iPad needs to be supervised before Shared iPad can be used. Shared iPad can be used not only in education but also in business. Multiple users can use the iPad, and the user experiences can be personal even though the devices are shared.
Shared iPad requires a device management service and Managed Apple Accounts that an organization issues and owns. Users with a Managed Apple Account can then sign in to an organization-owned Shared iPad. Devices need to have at least 32 GB of storage and be supervised. The following devices support Shared iPad:
> Shared iPad requires a device management service and Managed Apple Accounts that an organization issues and owns
I don't want to have to do a bunch of sysadmin work just so my wife and I can both see our own YouTube subscriptions on an iPad. Again, you could do this with zero fuss in 5 minutes on Windows XP.
I remember my first internship, around 2010, was at a Boeing subcontractor that went all-in on thin clients for about half the company of 200 or so employees, with an on-prem Windows Terminal Server as the backend for RDP.
It was mostly fine-ish, except for some annoyances: streaming audio was fairly sketchy for the era, which bothered the techs who normally spent ten hours a day listening to Pandora on headphones while making repairs.
Audio streaming ended up getting blocked to maintain decent performance for everyone, because it bogged down the 100 Mbit LAN, which resulted in a lot of grumbling and unhappy people. I imagine it's more viable these days.
The clients themselves were pretty cool, though: cheap, booted almost instantly, and ran cold. Until that job I had no idea how efficient RDP was at providing a near-realtime experience even when bandwidth-constrained.
At my current job there are a couple VMs I can only use via RDP and I honestly forget I'm even using it most of the time until the occasional random glitch reminds me.
When I was a kid I used to pack my house's cable modem in a backpack and bring it to my friend's house a couple miles away when I'd visit to play Xbox Live. My dad had a backup dial-up connection for email, and mom didn't use the internet very much, so she usually wouldn't mind unless he needed it for work. I remember this working at greater distances in other places occasionally, too.
Earlier, in the dial-up era, my dad didn't feel like paying for internet at both home and work, so after school I would call his office and ask his secretary if he had left for his evening meetings yet. If so, she'd disconnect his dial-up connection and I'd get a couple hours to myself.
We didn't have two phone lines at home, so I'm not sure what happened if he needed it unexpectedly. I think he also had a by-the-minute service as a backup, or maybe his partner in the office had a separate plan. This was all done under agreed rules I only vaguely remember, so it must not have been a frequent problem.
Always funny to think back to that era when internet wasn't assumed to be a 24/7 thing and losing internet for a day wasn't the end of the world...
Perhaps; I can't say with 100% certainty that I wouldn't if offered $50k+ just for writing a blog post. But in doing so I would also have to accept being labeled a "crypto shill" instead of a "crypto critic" for the rest of my life.
OpenCode also has an extremely fast and reliable UI compared to the other CLIs. I’ve been using Codex more lately since I’m cancelling my Claude Pro plan, and it’s solid, but I haven’t spent nearly as much time with it as with Claude Code or Gemini CLI yet.
But tbh, OpenAI openly supporting OpenCode is the bigger draw of the plan for me, though I do want to spend more time with native Codex as a basis of comparison against OpenCode when using the same model.
I’m just happy to have so many competitive options, for now at least.
- better UI to show me what changes are going to be made.
The second one makes a huge difference, and it's the main reason I stopped using OpenCode (lots of other reasons too). In CC, I am shown a nice diff that I can approve or reject. In Codex, the AI makes lots of changes but doesn't pinpoint what changes it's making or going to make.
Yeah, it's really weird about automatically making changes. I read in its chain of thought that it was going to request approval from the user for something; the next message was "approval granted", and it went ahead and did it. Very weird...
That’s a separate tool though. You don’t want to have to open another terminal to git diff every 30 seconds and then give feedback. Much better UX when it’s inline.
My main hooks are desktop notifications when Claude requires input or finishes a task. So I can go do other things while it churns and know immediately when it needs me.
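For anyone curious, a settings fragment along these lines is how I understand Claude Code's hooks are wired up (the exact schema may differ between versions, and the `osascript` notification commands are just one macOS-specific choice; check the current hooks documentation):

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          { "type": "command",
            "command": "osascript -e 'display notification \"Claude needs input\"'" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command",
            "command": "osascript -e 'display notification \"Claude finished\"'" }
        ]
      }
    ]
  }
}
```

On Linux you'd swap the commands for something like `notify-send`.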
My favorite part about that is that Gas Town is supposedly so productive that this guy's sleep patterns are affected by how much work he's doing, yet he took the time to physically go to a bank to get a five-figure payout.
It makes it difficult to believe that Gas Town is actually producing anything of value.
I also lol at his bitching about the bank not letting him do the transactions instantly, when he himself describes how much of a scam this seems and how the worst-case outcome is his bank account being drained, as if banks don't have a self-interest in protecting their clientele from such scams.
Yes, this exact scenario has happened to me a couple times with both Claude and Codex, and it's usually git checkout, more rarely git reset. They immediately realize they fucked up and spend a few minutes trying to undo by throwing random git commands at it until eventually giving up.
Yep, this is why, when running it in a dev container, I just use ZFS with a one-minute auto-snapshot, set up as root, so the agent generally cannot blow it away. And cc/codex/gemini know how to revert from ZFS snapshots.
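A sketch of that setup, assuming a hypothetical dataset named `tank/dev` (adjust to your pool; the snapshotting runs from root's crontab so an unprivileged agent can't destroy the snapshots):

```sh
# In root's crontab: snapshot the dev dataset every minute.
# * * * * * /sbin/zfs snapshot tank/dev@auto-$(date +\%Y\%m\%d-\%H\%M)

# After the agent nukes something, find the most recent snapshot...
zfs list -t snapshot -o name -s creation tank/dev | tail -1

# ...and roll the dataset back to it (snapshot name is an example):
zfs rollback tank/dev@auto-20250101-1200
```

You'd also want a pruning job (`zfs destroy` on snapshots older than an hour or so) to keep the snapshot list manageable.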
Of course if you give an agentic loop root access in yolo mode - then I am not sure how to help...
The sneaky move I hate most (and it does seem to mostly be a Claude-ism; I haven’t encountered it with GPT Codex or GLM) is that when dealing with an external data source (API, locally polled hardware, etc.), as a “helpful” fallback on failures it returns fake data in the shape of the expected output so that the rest of the code “works”.
The latest example: I recently vibe-coded a little Python MQTT client for a UPS connected to a spare Raspberry Pi, for use with Home Assistant, and with just a few turns back and forth I got this extremely cool bespoke tool; it felt really fun.
So I spent a while customizing how the data displayed on my Home Assistant dashboard and noticed every single data point was unchanging. It took a while to realize, because the available data points wouldn’t be expected to change much on a fully charged UPS, but the voltage and current staying at the exact same value, to the decimal place, for three hours raised my suspicions.
After reading the code I discovered it had just used one of the sample command-line outputs from the UPS tool, which I had given it for writing the CLI parsing logic. When an exception occurred in the parser function, it returned the sample data instead so the MQTT portion of the script could still “work”.
To be fair, Claude did eventually get it over the finish line once I clarified that yes, using real data from the actual UPS was in fact an important requirement for me in a real-time UPS monitoring dashboard…
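A minimal sketch of this failure mode (function and field names are hypothetical, not the actual generated code): the "helpful" fallback silently substitutes canned sample data when parsing fails, so downstream code keeps "working" with frozen values, while the fail-loud version surfaces the problem immediately.

```python
# Canned sample pasted from a CLI example, the kind the model was given.
SAMPLE_OUTPUT = {"voltage": 13.6, "current": 0.42}

def parse_ups_output_bad(raw: str) -> dict:
    """Anti-pattern: on any parse failure, return fake-but-plausible data."""
    try:
        volts, amps = raw.split(",")
        return {"voltage": float(volts), "current": float(amps)}
    except Exception:
        return SAMPLE_OUTPUT  # error swallowed; dashboard shows frozen values

def parse_ups_output_good(raw: str) -> dict:
    """Fail loudly: a visible crash beats three hours of plausible constants."""
    volts, amps = raw.split(",")
    return {"voltage": float(volts), "current": float(amps)}
```

The bad variant is exactly what makes the bug hard to spot: nothing errors, the MQTT messages keep flowing, and only the suspiciously constant readings give it away.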
It's similar to early versions of autonomous driving. You wouldn't want to sit in the back seat with nobody at the wheel; that would get you killed, guaranteed.
Sounds to me like more evidence in favor of the idea that they're meant to play the golden retriever engineer reporting to you, the extremely intelligent manager.