There has to be a very niche market for people who want ChromeOS on their device but lack the technical know-how to install it themselves, or who don't have a spare device that can flash an ISO.
I guess for $3 it's not really a cash-grab or anything. Kinda nice to see vendor-supported live USBs honestly
I'd say the majority of people don't know how to install an OS on a device, and having the ability to run ChromeOS on what is likely e-waste is a good thing.
I applaud the efforts of people and groups like MrChromeBox who figure out how to flash Linux onto Chromebooks. There are great designs like the red Samsung Galaxy Chromebook with an AMOLED display and a thin metal body; unfortunately it only has 8GB of RAM.
For my homelab, I set up a Raspberry Pi running Pi-hole. Pi-hole includes the ability to set local DNS records if you use it as your DNS resolver.
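For reference, a sketch of what those local records can look like; the hostnames and IPs here are made up. In Pi-hole v5 they live in a hosts-file-style list (also editable under Local DNS in the web UI):

```
# /etc/pihole/custom.list -- hypothetical LAN records, hosts-file format
192.168.1.10  nas.home.lan
192.168.1.11  jellyfin.home.lan
```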
Then I use Tailscale to connect everything together. Tailscale lets you set a custom DNS server, which I point at the Pi-hole. My phone blocks ads even when I'm away from the house, and I can hit any of my services or projects without exposing them to the general internet.
Then I set up an NGINX reverse proxy, but honestly that might not be necessary.
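If you do want the reverse proxy, a minimal sketch of an NGINX server block; the hostname and the upstream port are placeholders, not anything from the setup above:

```nginx
# Hypothetical: forward jellyfin.home.lan to a local service on :8096
server {
    listen 80;
    server_name jellyfin.home.lan;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The main win is that every service gets a friendly name on port 80 instead of a memorized port number, which pairs nicely with the Pi-hole local DNS records.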
I'd hedge and say roughly the same, but that's comparing print in chicken-scratch handwriting (which is my norm) against under-practiced cursive. I suspect I'd speed up after using cursive for a while. Similar to home-row typing vs hunt-and-peck, or whatever they call it.
My phone would transcribe even quicker than that, though, which would probably be my go-to instead of handwriting.
Similar boat...I'd also like to swap off Plex but a few of my less techie friends use it and I'm worried about the compatibility/ease of setup of Jellyfin on their devices.
Thought about running both in parallel but that seems like a waste. Think I just need a migration day eventually
I've never run Plex, or Jellyfin for that matter. There's a video share on my NAS.
I point Infuse on Apple TV 4K's at it. It works, and cleanly.
Downsides: you have to pay for Infuse Pro to play some formats and deal with some audio codecs. It's IIRC $17 for a year, though, so pretty reasonable for continued development. Your non-technical friends and family can't do the initial setup themselves (it's shared over Tailscale, they can all use the same limited account on your plan), but anyone I'm going to let do this can ship me their Apple TV 4K and let me set it up for them.
It's just weird that it's this complicated. We should get a static IP from our ISP. We should use standard open-source streaming and conversion mechanisms. It should work with basic video codecs.
Lately I've been working towards just using a webserver to host video files. Sure, it's not adaptive, but for goodness sakes it's simple.
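In that spirit, here's a minimal sketch of the "just a webserver" approach using only Python's standard library. The media directory is a placeholder; note that `http.server` doesn't implement HTTP Range requests, so seeking within long videos can be rough:

```python
# Minimal sketch: serve a directory of video files over plain HTTP.
# MEDIA_DIR is a placeholder -- point it at your own share.
import functools
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

MEDIA_DIR = "/srv/media"

def make_server(port=8080, directory=MEDIA_DIR):
    # SimpleHTTPRequestHandler serves files and directory listings as-is;
    # it does not honor Range headers, so clients can't byte-seek.
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("0.0.0.0", port), handler)

if __name__ == "__main__":
    make_server().serve_forever()
```

Point VLC (or a browser) at `http://server:8080/` and you get a clickable file listing. No transcoding, no metadata scraping, but also essentially nothing to break.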
Well, my ISP doesn't support IPv6 for home use, at all. My IPv4 is essentially static - I can't recall the last time it changed - but while Tailscale is a single point of failure for my home network security, it's also one that I can expect to be updated faster than practically any other package.
VLC will play anything I throw at it, but it's not going to go and fetch all the metadata for me and present it in a nice way to the non-technically-oriented users around.
Yeah, I'm in the same boat. I just have an old laptop hooked up to the TV, which can access a shared folder on my main computer that holds all the media. I control it with a wireless mouse and get an actually fast UI in a web browser instead of the usability nightmare that is a smart TV UI. This is all Windows, though; I guess it's possible to have Linux access a Windows shared folder, and I've been meaning to look into it for a while.
> I guess it's possible to have Linux access a Windows shared folder
It is, and while it's not hard, this was really my first experience running Linux in a long while, and boy do I now understand why people did not like systemd when it came out. It's not bad, per se, but it's not just "stick a line in /etc/fstab". However, even Copilot can put together a couple of scripts for you.
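For what it's worth, mounting a Windows (SMB) share really can still be one `/etc/fstab` line plus the `cifs-utils` package; the share name, mount point, and credentials file below are all placeholders. The `x-systemd.automount` option is the systemd-era twist: it mounts the share lazily on first access instead of blocking boot on the network:

```
# /etc/fstab -- placeholders throughout; requires cifs-utils installed
//DESKTOP-PC/media  /mnt/media  cifs  credentials=/etc/cifs-creds,uid=1000,x-systemd.automount,nofail  0  0
```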
Any $10/mo VPN solves this, and probably advertises it as a selling point.
Of course, then you're spending $10 to save $10....
I have the whole *arr stack setup with Plex running in the US just fine, but that's for sure not for everyone and was a few headaches to get up and running
>Of course, then you're spending $10 to save $10....
Most VPN subscriptions run around $5/mo, whereas Netflix with ads costs $8, and $18 without ads. Even at $18, though, it's still not 4K, whereas you can easily pirate 4K versions with your VPN subscription.
Appreciate the reality check. Mullvad has been a bill I don't think about twice when it comes around, and I cancelled streaming services years ago.
To your point though, as I'm running my plex server on an old ~midrange laptop, 4K is pretty rough for me to stream as well. I'm sure better hardware fixes this, but that's higher cost. YMMV based on what hardware you have on hand to repurpose
>To your point though, as I'm running my plex server on an old ~midrange laptop, 4K is pretty rough for me to stream as well.
Unless you're doing reencodes processing power shouldn't matter. You can serve 4K video on a 2010s router if you wanted to. If you're doing reencodes, why bother? Download an encode that's appropriate for how you're watching it. 4K for the big screen and 1080p for mobile. Skip reencoding altogether.
For Private Cloud Compute specifically, the system is described as underpowered and perhaps more trouble than it's worth. Updating the software is apparently tricky and time-consuming, and more fundamentally the chips (currently believed to be modified M2 Ultra processors) are not powerful enough to run the latest frontier models like Gemini, which the new Siri will be based on.
> M2 Ultra processors ... are not powerful enough to run the latest frontier models
The local AI community would strongly disagree with that assessment. They may not be able to run them with low enough latency for interactive use, which is most likely the real blocker, but those chips have strong compute per watt compared to Nvidia GPUs.
You cropped the part of the quote that is relevant:
> like Gemini, which the new Siri will be based on.
The local AI community isn't evaluating the internal Gemini models. Apple's Private Compute hardware is specifically competing against Google's TPU hardware, which is a foregone conclusion if you've seen the inference economics. The money and electricity wasted on Mac inference at that scale isn't even attractive to Apple.
No, but they are in the "Device that runs apps" business right?
Just like they're looking to corner the "Device that runs models locally" business by focusing on onboard inference.
Gains in model performance aren't exactly cheap, and once one frontier model figures something out, the rest seem to copy it quickly. Let them figure out what works and what doesn't, then put the "Apple" touch on it, all while putting your devices in everyone's hands. That's been their business model for years.
Does destroying a data center count as a DDoS?