
Waterfox is dependent on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails, and Firefox goes away, Waterfox is unlikely to survive.


That's true, but as a Waterfox user, I'm not worried!

If Firefox really does completely fail, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle: Waterfox does what I need in the here and now, and that's my only criterion.


> I'll just find a new browser.

The problem is that if Firefox dies, there are no independent browser engines left. I don't want to use a re-skin of Chrome.


Yes, I agree. I suppose when I said "I'm not worried" - I meant in the context of "it doesn't put me off using Waterfox". I am worried from an overall software ecosystem point of view.


Luckily there is Ladybird in the making.


> The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.

Lynx is still not a re-skin of Chrome, unless I missed something changing.


Can you manage your bank account in Lynx?


If most people move from Firefox to Waterfox, then Waterfox can acquire Firefox devs, no? Obviously it comes down to money, but the first step to gaining funding is gaining popularity...


First off I’d say you can run models locally at good speed: llama3.1:8b runs fine on a MacBook Air M2 with 16GB RAM, and much better on an Nvidia RTX3050, which is fairly affordable.
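
For example, with Ollama (a rough sketch, assuming the Ollama runtime and its Python client are installed, and that you've pulled the model first with "ollama pull llama3.1:8b"):

    # Minimal local-inference sketch using the Ollama Python client
    # (assumes the Ollama server is running locally on its default port).
    import ollama

    response = ollama.chat(
        model="llama3.1:8b",  # the model mentioned above
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response["message"]["content"])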

For OpenAI, I’d assume that a GPU is dedicated to your task from the point you press enter to the point it finishes writing. I would think most of the 700 million users barely use ChatGPT, while a small proportion use it a lot and likely need to pay due to the limits. Most of the time you have the website/app open, I’d think you are either reading what it has written, writing something, or it’s just open in the background, so ChatGPT isn’t doing anything in that time. If we assume 20 queries a week taking 25 seconds each, that’s 8.33 minutes a week. That would mean a single GPU could serve up to about 1,209 users, meaning for 700 million users you’d need around 578,700 GPUs. Sam Altman has said OpenAI is due to have over a million GPUs by the end of the year.
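
Spelled out, the back-of-envelope arithmetic (using the assumed 20 queries/week at 25 seconds each) is:

    # Back-of-envelope capacity estimate using the assumptions above.
    busy_seconds = 20 * 25                       # 500 s of GPU time per user per week
    week_seconds = 7 * 24 * 3600                 # 604,800 s in a week

    users_per_gpu = week_seconds / busy_seconds  # 1,209.6 users per GPU
    gpus_needed = 700_000_000 / users_per_gpu    # ~578,700 GPUs for 700M users
    print(round(users_per_gpu), round(gpus_needed))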

I’ve found that the inference speed on newer GPUs is barely faster than on older ones (perhaps it’s memory-bandwidth limited?). They could be using older clusters of V100, A100 or even H100 GPUs for inference if they can get the model to fit, or multiple GPUs if it doesn’t fit. A100s were available in 40GB and 80GB versions.
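
That would fit: single-stream decoding is roughly memory-bandwidth bound, since every generated token has to stream the full weight set through the GPU. A rough upper bound, using illustrative spec-sheet bandwidth numbers rather than measurements:

    # Decode speed upper bound ~ memory bandwidth / model size in bytes,
    # because each generated token must read all the weights once.
    model_bytes = 8e9 * 2  # an 8B-parameter model in fp16, ~16 GB

    for gpu, bw_gb_s in [("V100", 900), ("A100 80GB", 2039), ("H100 SXM", 3350)]:
        print(f"{gpu}: ~{bw_gb_s * 1e9 / model_bytes:.0f} tokens/s upper bound")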

I would think they use a queuing system to allocate your message to a GPU. Slurm is widely used in HPC compute clusters, so they might use that, though they have likely rolled their own system for inference.


The idea that a GPU is dedicated to a single inference task is just generally incorrect. Inputs are batched, and it’s not a single GPU handling a single request; it’s a handful of GPUs in various parallelism schemes processing a batch of requests at once. There’s a latency-vs-throughput trade-off that operators make: the larger the batch size, the greater the latency, but the better the overall cluster throughput.
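
A toy cost model makes the trade-off concrete (the numbers are made up for illustration, not taken from any real system):

    # Toy batched-inference model: a forward step costs a fixed amount
    # (streaming the weights) plus a small marginal cost per request.
    fixed_cost = 0.050     # seconds per step, regardless of batch size
    marginal_cost = 0.002  # extra seconds per request in the batch

    for batch in (1, 8, 32, 128):
        step = fixed_cost + batch * marginal_cost
        print(f"B={batch:4d}  latency={step*1000:6.1f} ms  "
              f"throughput={batch/step:7.1f} req/s")

Bigger batches amortise the fixed cost over more requests, so throughput climbs steeply while per-request latency grows only slowly.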


We know how to use the apps, we just don’t want to. Not all change is for the better.


Palm was the market leader; it would have been the obvious choice. Palm had been around since 1996 and by 1998 had sold 30 million devices [1]. Pocket PC didn’t come out until 2000, and by 2001 it had sold only 1.25 million devices, equating to less than 10% market share [2]. From what I remember, Palm Pilots were the go-to choice for PDAs: they were simple and they worked. Other devices had come and gone. It would have been odd if they had chosen something else. I doubt anyone was thinking it would be used for 20 years, though I don’t think people at the time expected it to go away either.

[1] https://history-computer.com/palm-pilot-guide/
[2] https://www.zdnet.com/article/pocket-pc-sales-1-million-and-...


I was thinking it’s not actually an obvious choice for controlling hardware. Either it was an interesting choice, small and not needing a lot of components compared to the obvious PLC, or it seemed like an obvious choice to someone who didn’t know better. [1] Either way, someone probably made a good decision to keep the old system maintainable by emulating the Palm Pilot instead of replacing it.

Mind you, it’s not clear how much of the control is done by the Palm Pilot. For all I know, it’s not much more than a screen connected to a PLC. But my gut feeling is that it’s actually doing at least some of the control, or it wouldn’t have been worth emulating and keeping the original software.

[1] You see this a ton now, with people reinventing the wheel using Arduino, Raspberry Pi and SparkFun parts to automate something in the small business where they are employed, because they know these things as hobbyists while they, and anyone around them, were never exposed to PLCs. Soon after they leave, a newer employee will rebuild it from scratch, maybe using an ESP32. Overall, the lifetime cost is probably much higher. Meanwhile, a PLC from 1990 is fairly easy to maintain, repair or replace (including porting the software).


You just said with a straight face that the lifetime cost of a PLC was below that of an Arduino...

Arduino cost = $10 for hardware, and a few hours of amateur coding, and an expectation of a 25 year lifespan as long as no changes are needed.

PLC cost is $15k for the hardware, and $10k to hire an expert to code it, who probably forces you into a $10k/year maintenance contract.


The costs could be as you say. And an Arduino may last 25 years, but a cheap power adapter and janky wiring soldered to a hobbyist proximity switch will not.

I was thinking more of a scenario where a young engineer at least knows what a PLC is and buys $1k of stuff from AutomationDirect. Starting from a PLC, and googling about PLCs, will lead you down a path of PLC cabinets, high-quality power supplies, labeled wires and industrial limit switches, versus another engineer who only knows the world of Arduino and messes of wires in boxes.

In the first case, when he or she leaves and the thing breaks down, the next person can either call a pro or have a chance to connect to the PLC, do some troubleshooting with the ladder logic, and figure out which sensor needs to be replaced. In the second case, there’s probably no documentation and the source code is long gone, so the only thing to do is scrap it and start over, probably incurring a large cost because now it’s an emergency to get the thing working again, and/or it causes lost production. I failed to mention earlier that I wasn’t just talking about the cost of parts.

I’m not saying the IMAX solution is necessarily that bad in the “Arduino direction”; it just has me thinking about some professional experiences I’ve seen in both directions.


In my experience, hacked together arduino projects easily exceed a 25 year MTBF (if you exclude day-1 failures because someone did something stupid like wiring it backwards).

However, ESP32s do not (they seem to require a power cycle every few months - and in my view, that is a failure). Raspberry Pis certainly do not (they require human attention for software updates, which IMO is also a failure - and even if you don't update them, there is almost certainly some tiny memory leak and they'll need a reboot in a year or two anyway).


You are also getting high-quality components (i.e. industrial-grade) with a PLC.

And what makes you think an amateur can cobble something together in a few hours while a professional cannot?


I was also curious how much control it has, or if it's sort of a front-end to a PLC or microcontroller.


That device specifically was cheap and readily available. If it failed you could have gone to any OfficeMax or Circuit City and picked up a replacement.


I assume at least one engineer aggressively argued for DB9 serial along with a Windows and Mac app instead and lost.

It was clear that the longevity of the installations would far outstrip the longevity of the Palm Pilot.

If I were in the room I'd even argue for DOS. As a target it had stopped moving, was ubiquitous, wasn't going anywhere, and was in enough important places that it would survive even the demise of Microsoft, if they were to collapse in the future.


Yes but does it fit the form factor?

For all we know, the Palm is also just sending serial commands.


Let me play the role.

"Yes, there's plenty of Windows CE and DOS palmtops. You can make a palmpilot application if you want but that should be a port, just like to BeOS.

The pure serial binary option is fine but this is infrastructure. Like the bridges that run on 5 1/4" disks, this will outlive both us and Palm if we do it right. Hell, if this is still running when our grandchildren are old and grey, this will be one of our greatest achievements as a team.

When I walk down the street and I see a masonry stamp on the sidewalk from a contracting company that installed it 100 years ago, I appreciate the fine work they did that I'm still using a century later.

Let's hope people will feel the same way about what we decide to do in this room today.

We need to at least provide documentation on the protocol.

It has to be made so competent people in the future can easily make this system accessible to the computers of the future as well. That will not best be handled by a binary blob on a Palm Pilot."


Saying it should have been Windows CE is just survivorship bias IMO - and we don’t know that they didn’t write documentation on the protocol or that it’s poorly understood - it might just have been easier and safer to emulate an app that everyone is happy with rather than rewrite it (these film projectors might be more in “keep them alive” mode rather than “improve” mode while digital is growing for them).

I’ve put a DOS application running in an emulator on an Android device for a project to roll out new hardware, because that took a few hours to configure rather than a year of development.


O Captain! My Captain!


Did you ever attempt programming anything under PalmOS back then? It was quite fragile because of the extremely small amount of memory on board, which forced the use of relocatable memory handles, a bit like classic Mac OS.

https://www.fuw.edu.pl/~michalj/palmos/Memory.html

PalmOS, with its extreme focus on low-end hardware, was a super weird choice at the time. The one reason for using PalmOS was extreme battery life, which obviously was not a factor here.

There were plenty of better alternatives at the time.


I am not necessarily disagreeing with you.

I had a Z22 toward the end of the Palm era, back when the LifeDrive was at their higher end and webOS seemed to be where the future might end up.

I loved that thing. I read tons of books on it.


The PyTorch binaries from pip and conda won’t work on these GPUs, though there are some alternative binaries being maintained that still work: https://blog.nelsonliu.me/2020/10/13/newer-pytorch-binaries-...
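
If you're unsure whether a given build supports the card, a quick check looks like this (a sketch; assumes a CUDA build of PyTorch and that the K40 is device 0):

    import torch

    # The Tesla K40 is Kepler, compute capability 3.5; current stock wheels
    # ship without sm_35 kernels, so a trivial kernel launch will fail.
    print("built for CUDA:", torch.version.cuda)
    print("capability:", torch.cuda.get_device_capability(0))  # (3, 5) on a K40
    try:
        (torch.ones(1, device="cuda") * 2).item()  # forces a real kernel launch
        print("this build can run kernels on this GPU")
    except RuntimeError as err:
        print("no kernels for this GPU in this build:", err)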

The latest Nvidia driver no longer supports the K40, so you’ll have to use version 470 (or lower; officially Nvidia says 460, but 470 seems to work). That supports CUDA 11.4 natively, and newer versions of CUDA 11.x are supported via the compatibility scheme described here: https://docs.nvidia.com/deploy/cuda-compatibility/index.html though CUDA 12 is not.

In my testing, a system with a single RTX3060 was faster in TensorFlow than one with three K40s, and probably close to the performance of four K40s.

If you are considering other GPUs, there are some good benchmarks here (the RTX3060 isn’t included, though the GTX1080Ti had almost the same performance in the TensorFlow test they run): https://lambdalabs.com/gpu-benchmarks

As others have said, Google Colab is a free option you can use.


The industry was already moving away from the big 64-bit SMP machines made by Sun, SGI & IBM. In many cases a cluster of 32-bit x86 machines made more sense than one expensive big machine with high-priced support contracts and parts. 32-bit x86 machines already supported more than 4GB of total memory with PAE; it was just that one process couldn’t use more than 4GB. Other 64-bit chips were already well established (SPARC, POWER, MIPS), but most users probably couldn’t easily move to a new CPU architecture. For other users, by the time they needed the bigger machines, 64-bit x86 was already available, including from Intel themselves. AMD was limited to 8 sockets from what I remember, so there was still a small market for big Itanium systems (like SGI’s Altix).


The Sony RX100 series would be in most top lists, though personally I'm not sure why they went with a slower lens from the Mark VI onwards. The ZV-1 continues with a similar lens to the earlier models. I have the RX100 Mark III; it's still quite good.


I second the RX100 series. As a step up, there is the RX1 with a full-frame sensor and a fixed lens. And the HX99 goes the other way, with a smaller sensor but a zoom that goes all the way up to 720mm.


Same, RX100-M3 is my webcam now. (X-T30 for anything else)


It means you don't need to be root to run it.


You can also call docker commands by being part of the docker group IIRC.

Doesn't this have more to do with the daemon than with the user executing commands?


> You can also call docker commands by being part of the docker group IIRC.

Which effectively gives you root on the host.


Which is a horrible practice and has roughly the same attack surface as logging in as root all the time.


With podman there is no daemon; everything runs as you. The standard setup for Docker has a daemon running as root, which means that when you start a container it has root privileges.


A fast charger has an AC-DC converter providing DC power to the car, which is expensive. Slow chargers supply AC and leave the conversion to the car's onboard charger, which makes them very cheap in comparison.


I think it’s a number of things.

Interchangeable-lens cameras now all have video features, and increasingly most of the improvements are in that area. When SLRs are used for video, the mirror needs to be flipped up and the autofocus system that is used for photos can't be used, so the camera needs another one on the sensor. In that case the mirror is redundant and the viewfinder can't be used.

Tracking a fast-moving subject is difficult with an SLR: in viewfinder mode the camera cannot see the image, only a focus module can, which likely has only a few hundred focus points (or fewer), and those points often don't reach the edge of the frame. Additionally, mirrorless cameras are able to track a subject's eye using AI and keep it in focus. An SLR cannot do this in viewfinder mode, as the focus sensor does not have nearly enough resolution to recognise a small item like an eye, or to know that it is an eye.

Burst shooting is also difficult on an SLR: for each shot the mirror needs to flip up and down, and the focus module needs a brief period to change focus. Canon is/was the leader in sports photography cameras. The highest-end Canon SLR can do 16fps with autofocus, but 20fps with the mirror locked up. The Sony a1 (mirrorless) can do 30fps. These fast shooting rates are only possible with mirrorless cameras.

