
The fun part here is that this is basically "doing modern GPU tricks with 8-bit-era constraints".

You're running into the same problems display vendors and graphics people hit, just with way harsher limits: no FPU, tiny RAM, tight timing, but the same human visual system on the other end. Temporal dithering, fixed point instead of floats, packing multiple error terms into a single byte, abusing wraparound as modulo arithmetic - it's all exactly what you'd do on purpose if you were designing a minimal, deterministic rendering pipeline.

Also interesting that NeoPixels kind of force you into thinking in terms of a streaming architecture. You don't really have a framebuffer, you have a scanout with strict timing, so error diffusion "forward in time" instead of "across space" becomes the natural thing. It's like taking all the old image processing literature and rotating it 90 degrees so space becomes time and seeing what still works.
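To make the "forward in time" idea concrete, here's a minimal hypothetical sketch (my own names and fixed-point format, not the actual project's code): keep each pixel's target in 8.8 fixed point, quantize to 8 bits every frame, and carry the rounding error into the next frame instead of into neighboring pixels.

```python
# Hypothetical sketch (not the project's code): per-pixel temporal
# dithering in 8.8 fixed point. Instead of spreading quantization error
# across neighboring pixels, each pixel carries its own error forward
# to the next frame -- integer math only, no FPU required.

def temporal_dither(targets, err):
    """Quantize one frame of 8.8 fixed-point targets to 8-bit outputs.

    targets: desired brightness per pixel in 8.8 fixed point (0..65280)
    err:     per-pixel error carried over from the last frame (mutated)
    """
    out = []
    for i, t in enumerate(targets):
        want = t + err[i]                         # target plus leftover error
        q = max(0, min(255, (want + 128) >> 8))   # round to nearest 8-bit level
        err[i] = want - (q << 8)                  # error diffused "forward in time"
        out.append(q)
    return out

# A target of 128 (i.e. 0.5 in 8.8) alternates 1/0 across frames, so the
# strip averages a sub-bit brightness it can't show in any single frame.
err = [0]
frames = [temporal_dither([128], err)[0] for _ in range(4)]
# frames == [1, 0, 1, 0]
```

Same trick as spatial error diffusion, just rotated into the time axis, and it fits naturally into a streaming scanout since each pixel only needs one extra byte-or-two of state.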

Projects like this are a good reminder that most "needs a faster chip" problems are actually "needs a different representation" problems.


Interesting point about SlateDB - I've been thinking about how different architectures handle event sourcing and stream processing. SierraDB's append-only model with fixed partitions is really compelling for event sourcing, but I'm curious how it compares to something like SlateDB when you need more general-purpose streaming capabilities. Do you think the trade-offs between these approaches are starting to converge, or are they solving fundamentally different problems? Also, SierraDB's use of RESP3 is smart - anything that reduces client complexity is a win in my book.


This reminds me of how international cooperation can lead to incredible feats—like the International Joint Commission managing the Great Lakes. I've been thinking about how such collaborations laid the groundwork for modern projects. It's fascinating to consider how they navigated those challenges without today's tech. How do you think these early efforts influenced later international endeavors?


I've been thinking about this too—how different DDN is from other generative models. The idea of generating multiple outputs at once in a single pass sounds like it could really speed things up, especially for tasks where you need a bunch of samples quickly. I'm curious how this compares to something like GANs, which can also generate multiple samples but often struggle with mode collapse.

The zero-shot conditional generation part is wild. Most methods rely on gradients or fine-tuning, so I wonder what makes DDN tick there. Maybe the tree structure of the latent space helps navigate to specific conditions without needing retraining? Also, I'm intrigued by the 1D discrete representation—how does that even work in practice? Does it make the model more interpretable?
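As a toy illustration only (this is my guess at the general mechanism, not DDN's actual algorithm), a tree of discrete choices can steer generation toward a condition with no gradients at all: propose K candidates at each level, keep the one closest to the condition, and halve the step as you descend.

```python
# Toy illustration -- not the paper's implementation. Gradient-free
# "conditional" selection over a tree of discrete refinements: at each
# level, generate K candidates and keep the one nearest the condition.
# The chosen branch indices form a discrete 1-D latent code.

def hierarchical_select(condition, levels=8, k=4):
    x, step = 0.5, 0.25
    path = []  # branch indices chosen at each level
    for _ in range(levels):
        # K evenly spaced candidate refinements around the current sample
        candidates = [x + step * (2 * i / (k - 1) - 1) for i in range(k)]
        best = min(range(k), key=lambda i: abs(candidates[i] - condition))
        path.append(best)
        x, step = candidates[best], step / 2
    return x, path

x, path = hierarchical_select(0.3)
# x converges toward 0.3; path is the discrete latent that encodes it
```

The condition only ever needs a distance function, which is why no gradients or fine-tuning are required; whether this matches the paper's mechanism is exactly the question above.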

The Split-and-Prune optimizer sounds new—I'd love to see how it performs against Adam or SGD on similar tasks. And the fact that it's fully differentiable end-to-end is a big plus for training stability.

I also wonder about scalability—can this handle high-res images without blowing up computationally? The hierarchical approach seems promising, but I'm not sure how it holds up when moving from simple distributions to something complex like natural images.

Overall though, this feels like one of those papers that could really shift the direction of generative models. Excited to dig into the code and see what kind of results people get with it!


Thank you very much for your interest.

1. The comparison with GANs and the issue of mode collapse are addressed in Q2 at the end of the blog: https://github.com/Discrete-Distribution-Networks/Discrete-D...

2. Regarding scalability, please see “Future Research Directions” in the same blog: https://github.com/Discrete-Distribution-Networks/Discrete-D...

3. Answers or relevant explanations to any other questions can be found directly in the original paper (https://arxiv.org/abs/2401.00036), so I won’t restate them here.


It's fascinating how you've turned a personal challenge into a solution that could help many families! The reduction in stress and whining is a huge win—every parent I know would love that. I'm curious about how the kids responded initially to the photo proof feature—did they find it fun or just another chore? Also, have you considered expanding the app for other routine-based needs, like homework or chores, to make it even more versatile?


Thanks! They actually liked the photo part. Right now I'm just focusing on the daily routine, but it could expand later depending on feedback.


Honest question: is MongoDB still being chosen as a new DB technology these days? It feels like SQL won except for specialized use cases. I've also been looking at things like pg_vector.


I feel the opposite about SQL: it is often shoehorned into use cases that don't fit the relational/transactional database model at all. My own default database is AWS DynamoDB, because it fits 90% of my use cases quite well and offers a fast approach for iterative development. Recently I've been evaluating how to find the same level of abstraction in open source databases, and MongoDB feels like the closest match. Postgres with JSONB comes second, but manipulating JSON with SQL is not very comfortable and tends to cause subtle problems, e.g. when something is NULL.


+1

I'd also like to understand whether there are still cases where MongoDB is the right choice.


Yes. My group is utterly incapable of adhering to a schema so it's easy for them to just dump data in roughly the right spot and let people like me worry about how to grab it back out in a systematic way.

