Hacker News | Tepix's comments

Wow, the enshittification over at Dropbox has reached a terrible level. They make it super hard to just download a file in a browser, something that is supposed to be their core function. Why even use Dropbox these days?

Yeah, it's pretty bad.

I don't really use it any more -- sharing those files is about the only purpose, and I put them there years ago.


I'm still happy about FUZIX on the RP2040 (last discussed here two months ago https://news.ycombinator.com/item?id=46271115 ). A capable SoC that costs around $1. Only via (USB) serial so far, but that works for me.

> Tiny computers are like tiny homes

They totally suck like tiny homes? No, actually they are better than tiny homes. Browsers are the #1 reason why you want a computer that's better than a Pi 500. Wanting to play modern games is #2.


Heavy?

Two packages made from mycelium can behave very differently because “mycelium composite” is a category, not a single recipe. Particle size, fibre content, and the ratio of substrate to mycelium all change density. Higher density generally brings higher compressive strength and better edge definition, but it also increases weight and can reduce the springy cushioning that protective packaging needs.

Source: https://dirobots.com/en/mycelium-strength/


Sounds like this might be your area of expertise. For the rest of us, take a shoebox. How much ballpark extra weight we talkin’ to have a livable planet? (Maybe the mushrooms would be ~2x as heavy as standard shoeboxes for example, to meet existing spec.)

Or how about for the glasses box they show on the site in OP, or a plastic sleeve like Americans sell Oreo cookies in. Anybody have any guesses?


I've done some experiments with mycelium as a construction material, but I'm hardly an expert. Mycelium weighs anywhere between 50 and 950 kg/m3. Usually you won't have mycelium as thin as cardboard, because you want to use mycelium as a 3D buffer, replacing styrofoam. EPS (styrofoam) has densities of 15-30 kg/m3. So while it's more sustainable, it's also heavier.

Hopefully not used as packaging for Oreos, because unless the fungus has been highly adapted to the substrate, the mycelium will try to grow into the food. Oyster mycelium won't be toxic, but I don't want my Oreos to taste like mushrooms.
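For the shoebox question upthread, a rough back-of-envelope sketch using the densities mentioned above; the insert volume and the specific density values picked are illustrative assumptions, not measurements:

```python
# Weight of a shoebox-sized protective insert at various densities.
# Volume is a guess: two 2 cm pads lining a ~35 x 20 cm shoebox.
insert_volume_m3 = 0.35 * 0.20 * 0.02 * 2  # = 0.0028 m^3

densities_kg_m3 = {
    "EPS (styrofoam), low": 15,
    "EPS (styrofoam), high": 30,
    "mycelium composite, light": 50,
    "mycelium composite, dense": 950,
}

for name, rho in densities_kg_m3.items():
    grams = insert_volume_m3 * rho * 1000
    print(f"{name}: {grams:.0f} g")
```

So at the light end a mycelium insert might only weigh a few times more than EPS, while dense formulations come out an order of magnitude heavier.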

I don't think the packaging is supposed to be alive at the time of usage.

Heavy means more fuel to ship it. Maybe still a net-win, I don't know.

Do you limit it to human consciousness?

> consciousness

What else is there?


Many other species. (E.g. apes)

Man is just an animal.


Which is why I omitted "Human" when I quoted you.

You didn't quote me. Ex falso quodlibet.

Yea, you are right.

Your comment has a very strong AI stench.

I guess that's the point of it. It's kinda hard to doubt what they're saying anyway.

A comment about how nobody can tell facades from the real thing — and the first response is someone trying to tell. I appreciate the live demo.

People can tell. The premise is false. It’s sometimes hard to tell, obviously it’s hard to ascertain false negatives and false positives, but it’s usually pretty obvious.

It's a tell that you think people can't tell.

Touch grass

Concise URLs deserve more praise.

Also, when you look at a site and see URLs like /wiki/index.php/MyPage it tells you about the skill level and care of the site administrators.


And a bit about the skill of whoever made the wiki software; it needs better documentation and automation to help even less-skilled admins get clean URLs.


Ironically XWiki doesn't use their own short URLs: https://www.xwiki.org/xwiki/bin/view/Documentation/AdminGuid...

It's insane how much traffic HF must be pushing out of the door. I routinely download models that are hundreds of gigabytes in size from them. A fantastic service to the sovereign AI community.

My fear is that these large "AI" companies will lobby to have these open source options removed or banned; it's a growing concern. I'm not sure how else to explain how much I enjoy using what HF provides: I religiously browse their site for new and exciting models to try.

ModelScope is the Chinese equivalent of Hugging Face and a good back up. All the open models are Chinese anyways

Not true! Mistral is really really good, but I agree that there isn't a single decent open model from the USA.

Mistral is cool and I wish them success but it consistently ranks extremely low on benchmarks while still being expensive. Chinese models like DeepSeek might rank almost as low as Mistral but they are significantly cheaper. And Kimi is the best of both worlds with incredible benchmark results while still being incredibly cheap

I know things change rapidly so I'm not counting them out quite yet but I don't see them as a serious contender currently


Sure, benchmarks are fake and I use Mistral over equivalently sized models most of the time because it's better in real life. It runs plenty fast for me, I don't pay for inference.

> it consistently ranks extremely low on benchmarks

As general-purpose chatbots, small Mistral models are better than comparably sized Chinese models, as they have better SimpleQA scores and general knowledge of Western culture.


It’s really hard to beat qwen coder, especially for role play where the instruction following is really useful. I don’t think their corpus is lacking in western knowledge, although I wonder if Chinese users get even better results from it?

> It’s really hard to beat qwen coder, for role play

I am not sure if you actually tried that. Mistrals are widely accepted as the go-to models for roleplay and creative writing. None of the Qwens are good at prose, except for their latest big Qwen 3.5.

> I don’t think their corpus is lacking in western knowledge,

It absolutely is, especially in pop culture knowledge.


Instruct and coder just follow instructions so well, though. I guess I've just never been able to make Mistral work well.

Qwen3 30B A3B and that big 400+ B Coder were absolutely terrible at editing fiction. I would tell them what to change in the prose and they'd just regurgitate text with no changes.

Did you try asking Gemini what model to use and how to configure/set it up? It has worked wonders for me, ironically (since I’m using a big model to setup smaller local models).

> Did you try asking Gemini what model to use and how to configure/set it up?

That would be suboptimal, as Gemini's knowledge cutoff is too old. I am long past the need for such advice anyway, as I've been using local models since mid-2024.


Gemini will search the web for most things (at least if you are using it via the web search interface); it isn't limited to the knowledge it was trained on. Actually, I'm a bit mortified that not everyone knows this. If you ask Gemini (from the search interface) about a current event that happened yesterday, it will use search to pull in context and work with that. Same for a model that was released yesterday; it can handle that too.

It's only with very low-level model access that search isn't used. Local models also need to be configured to use search, and I haven't had a use case for that yet.

Gemini seems to call this “grounding with google search”. If you have Gemini installed in your enterprise, it will also search internal data sources for context.


> Gemini will search the web for most things (at least if you are using it via the web search interface), it isn’t limited to the knowledge it was trained on.

If it decides to do so, and even then the baked-in knowledge would influence the result.

In any case I do not need Gemini or any other LLMs to figure out setting for my llama.cpp, thank you very much.


It has always searched the web for me, and it can give me pretty good guidance about a model released in the last week. All models ATM are trying to reduce dependence on internal knowledge mostly through RAG. Anyways, this part of LLMs has gotten much better in the last 6 months.

If you are able to figure out the right settings for a model that was released last week, then great for you! But it sounds like you just don't trust LLMs to use current knowledge, and have some misconceptions about how they satisfy recent-knowledge requests.


Why are you talking price when we are talking local AI?

That doesn't make any sense to me. Am I missing something?


15 missed calls from your local power company

Your electricity is free?

Apple silicon is crazy efficient as well as being comparable to GPUs in performance for max and ultra chips.

If you have the hardware to run expensive models, is the cost of electricity much of a factor? According to Google, the average price in the Silicon Valley area is $0.448 per kWh. An RTX 5090 costs about $4,000 and has a peak power consumption of 1000 W. Maxing out that GPU for a whole year would cost about $3,924 at that rate, comparable to the cost of the hardware itself.

At that point it'd be cheaper to get an expensive subscription to a cloud platform AI product. I understand the case for local LLMs but it seems silly to worry about pricing for cloud-based offerings but not worry about pricing for locally run models. Especially since running it locally can often be more expensive
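The arithmetic above, spelled out (using the quoted figures; actual GPU draw and local rates will vary):

```python
# Annual electricity cost of a GPU pinned at 1000 W,
# at the quoted $0.448/kWh Silicon Valley average.
watts = 1000
price_per_kwh = 0.448
hours_per_year = 24 * 365  # 8760

kwh_per_year = watts / 1000 * hours_per_year
annual_cost = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${annual_cost:,.2f}")  # 8760 kWh/year -> $3,924.48
```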

for almost the entire year, yes.

Arcee is working on that, see a blog post about their newest in progress model here: https://www.arcee.ai/blog/trinity-large

It's still not fully post-trained and it's a non-reasoning model, but it's worth keeping an eye on if you don't want to use the Chinese models that are currently the best open-weight options.


To be fair there are lots of worse models than OpenAI's GPT-OSS-120b. It's not a standout when positioned next to the latest releases from China, but prior to the current wave it was considered one of the stronger local models you can reasonably run.

They can try. I don't think they'll be able to get the toothpaste back in the tube. The data will just move out of the country.

Many of the models on hugging face are already Chinese. It’s kind of obvious that local AI is going to flourish more in China than the USA due to hardware constraints.

How do you choose which models to try for which workflows? Do you have objective tests that you run, or do you just get a feel for them while using them in your daily workflow?

it’s only a matter of time. we have all seen first hand how … wrong … these companies behave, almost on a regular basis.

there’s a small tinfoil hat part of me that suspects part of their obscene investments and cornering the hardware market is driven by a conscious attempt to stop open source local from taking off. they want it all, the money, the control, and to be the only source of information to us.


Bandwidth is not that expensive. The Big 3 clouds just want to milk customers via egress. Look at Hetzner or Cloudflare R2 if you want to get an idea of commodity bandwidth costs.

Yup, I have downloaded probably a terabyte in the last week, especially with the Step 3.5 model being released and Minimax quants. I wonder what my ISP thinks. I hope they don't cut me off. They gave me a fast lane, they better let me use it, lol

Even fairly restrictive data caps are in the range of 6 TB per month. P2P at a mere 100 Mb/s works out to about 1 TiB per 24 hours.

Hypothetically my ISP will sell me unmetered 10 Gb service but I wonder if they would actually make good on their word ...
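A quick sanity check on those figures (assuming a 6 TB cap and a sustained 100 Mb/s):

```python
# How fast a sustained 100 Mb/s stream eats a 6 TB monthly cap.
mbit_per_s = 100
seconds_per_day = 24 * 3600

bytes_per_day = mbit_per_s * 1e6 / 8 * seconds_per_day  # 1.08e12 B, just under 1 TiB
days_to_cap = 6e12 / bytes_per_day                      # ~5.6 days at full tilt
print(f"{bytes_per_day / 2**40:.2f} TiB/day, cap hit in {days_to_cap:.1f} days")
```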


I have a 1.2TB cap before you start getting charged extra, so you might need to recalibrate your restrictive level.

Is that with a WISP by chance? Or in a developing country? Or are there really wired providers with such low caps in the western world in this day and age?

ATT once told me if I don't pay for their TV service then my home gigabit fiber would have a 1TB cap. They had an agreement with the apartment building so I had no other choice of provider.

Buy our off brand netflix or else we'll make it so you can't watch netflix. How is that legal?

The law is written by the highest bidder, and the telecom lobbyists are very generous

well it's my wired cap a stone's throw from buildings with google cloud logos on the side in a major us city, so...

Comcast.

Doesn't the blog state that it's now 4bit (the first gen was 3bit + 6bit)?

Both the RP2040 and the RP2350 are amazing value these days with most other electronics increasing in price. Plus you can run FUZIX on them for the UNIX feel.

Mmh... I think that the LicheeRV Nano is kind of a better value.

Around 20 bucks for the WiFi variant. 1 GHz, 256 MB RAM, USB OTG, GPIO and full Linux support while drawing less than 1 W without any power optimizations, and it even supports <$15 2.8" LCDs out of the box.

And Rust can be compiled to be used with it...

https://github.com/scpcom/LicheeSG-Nano-Build/

Take a look at the `best-practise.md`.

It is also the base board of NanoKVM[1]

1: https://github.com/sipeed/NanoKVM


I think the ace up the sleeve is PIO; I've seen so many weird and wonderful use cases for the Pico/RP-chips enabled by this feature, that don't seem replicable on other $1-class microcontrollers.

Wow thanks, this is definitely something I have to investigate. Maybe the Sipeed Maix SDK provides something similar for the LicheeRV Nano.

I'm currently prototyping a tiny portable audio player[1] whose battery life could benefit a lot from this.

1: https://github.com/sandreas/rust-slint-riscv64-musl-demo


I'd rather have the Linux SOC and a $0.50-$1 FPGA (Renesas ForgeFPGA, Gowin, Efinix, whatever) nearby.

> $0.50-$1 FPGA

no such thing, 5V tolerant buffers will run you more than that


The ICE40s start well under $2 even in moderate quantities. They’re 3V3, not 5V0, but for most applications these days that’s an advantage.

Amazing value indeed!

That said: it's a bit sad there's so little (if anything) in the space between microcontrollers & feature-packed Linux capable SoC's.

I mean: these days a multi-core, 64-bit CPU & a few GBs of RAM seems to be the absolute minimum for smartphones, tablets etc, let alone desktop-style work. But remember ~y2k masses of people were using single-core, sub-1GHz CPUs with a few hundred MB RAM or less. And running full-featured GUIs, Quake 1/2/3 & co, web surfing etc etc on that. GUIs have even been done on sub-1MB RAM machines once.

Microcontrollers otoh seem to top out at ~512 KB RAM. I for one would love a part with integrated:
- multi-core, but 32-bit, CPU (8+ cores cost 'nothing' in this context)
- say, 8 MB+ RAM (up to a couple hundred MB)
- simple 2D graphics, maybe a blitter, some sound hw etc
- a few options for display output, like DisplayPort & VGA

Read: relative low-complexity, but with the speed & power efficient integration of modern IC's. The RP2350pc goes in this direction, but just isn't (quite) there.


You might like the ESP32-P4

IIRC, you can use up to 16 MB of PSRAM with RP2350. Maybe up to 32 MB, not sure.

Many dev boards provide 8 MB PSRAM.


Eh, it's really not when you consider that the ESP32 exists. It has PCNT units for encoders, RMT LED drivers, 18 ADC channels instead of four, a ULP coprocessor and various low-power modes, not to mention WiFi integrated into the SoC itself, not optional on the carrier board. And it's like half the price on top of all that. It's not even close.

The PIO units on the RP2040 are... overrated. Very hard to configure, badly documented, and there are only 8 state machines in total. WS2812 control from the Pico is unreliable at best in my experience.


They are just different tools; both have their uses. I wouldn't really put either above the other by default.

> And it's like half the price on top of all that. It's not even close.

A reel of 3,400 RP2350 units costs $0.80 each, while a single unit is $1.10. The RP2040 is $0.70 each in a similar size reel. Are you sure about your figures, or are you perhaps comparing development boards rather than SoCs? If you’re certain, could I have a reference for ESP32s being sold at $0.35 each (or single quantities at $0.55)?

PIO units may be tricky to configure, but they're incredibly versatile. If you aren't comfortable writing PIO code yourself, you can always rely on third-party libraries. Driving HDMI? Check. Supporting an obscure, 40-year-old protocol that nothing else handles? Check. The possibilities are endless.

I find it hard to believe the RP2040 would have any issues driving WS2812s, provided everything is correctly designed and configured. Do you have any references for that?


> wifi integrated into the SoC

I really wish we would stop sticking wireless in every device. The spectrum is limited and the security concerns are just not worth it. And if you try to sell it, certifying will be a royal PITA even in the US (rightfully so!). Just had to redesign a little Modbus RTU sensor prototype for mass production, and noticed the old version used a BT MCU. So I immediately imagined the certification nightmare - and the sensor is deployed underwater, so it's not like BT would be useful anyway. Why? Quote: "but how do we update firmware without a wireless connection"… How do you update firmware on a device with RS-485 out, a puzzle indeed. In all fairness, the person who did it was by no means a professional programmer and wasn't supposed to know. But conditioning beginners to put wireless on everything - that's just evil. /rant

