Hacker News | zarzavat's comments

The reason that FPV drones are so easily disrupted is that they are too light to carry anything more than a radio, and they fly low.

Disrupting the signal for a normal-sized aircraft is much harder. If you're flying at tens of thousands of feet and have line of sight to multiple satellites, it's going to take some serious weaponry to disrupt that.


True. But the next rung up the escalation ladder is of course disrupting the satellites.

I envision them all gone seconds into any large-scale war.

The G-forces are another thing. I wonder why they aren't starting with missile platforms instead.

Sure, winged flight has its uses, but what about taking a missile platform and adding small munitions instead of a big bang?


> I wonder why they aren't starting with missile platforms instead

Price and ease of manufacture. Missiles are expensive and hard to build.


The latest FPV drones in Ukraine have become much more resistant to electronic countermeasures. Plus, other drones are used as signal relays.

Seems they are using kilometers of fibre optic cables, so they fly tethered and communication can't be disrupted.

I'd hate to be part of the clean-up crew when that war ends. Broken fibre is nasty stuff.


You mean it's just as much fraud as that. Fraud has a particular definition, which does not permit "someone else is committing fraud and getting away with it" as a defense.

Not to mention politicians begging for money and in return you get begged for more money by more politicians.

Not everybody is using their headphones on the go. 99% of my headphone use is at my desk while I work. Wired is more convenient than wireless since it's one less lithium battery to charge.

It's true that humans are not particularly sensitive to audio quality, but they are very sensitive to audio latency. If all you do is listen to buffered audio sources, then latency is not important, but the moment you need to use your headphones in an input loop, wired is the superior technology as it offers close to zero variance in latency.
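For the curious, here's a rough way to measure this yourself. A minimal sketch, assuming Python with the sounddevice package and an acoustic path from your headphones to your microphone: play a click, record at the same time, and time the echo. Run it a few times on a wired connection and then on Bluetooth, and compare the spread, not just the average.

    import numpy as np
    import sounddevice as sd

    fs = 48_000
    impulse = np.zeros(fs, dtype=np.float32)
    impulse[0] = 1.0  # a single click at t = 0

    # Play the click and record simultaneously, then block until done.
    rec = sd.playrec(impulse, samplerate=fs, channels=1)
    sd.wait()

    # The loudest recorded sample is (roughly) the click's arrival.
    delay = int(np.argmax(np.abs(rec)))
    print(f"round-trip latency ~ {1000 * delay / fs:.1f} ms")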


Yeah prompting doesn't work for this problem because the entire point of an LLM is you give it the what and it outputs the how. The more how that you have to condition it with in the prompt, the less profitable the interaction will be. A few hints is OK, but doing all the work for the LLM tends to lead to negative productivity.

Writing prompts and writing code takes about the same amount of time, for the same amount of text, plus there's the extra time that the LLM takes to accomplish the task, and review time afterwards. So you might as well just write the code yourself if you have to specify every tiny implementation detail in the prompt.


Makes me think of this commitstrip comic: https://i.xkqr.org/itscalledcode.jpg (mirrored due to TLS issues with the original domain.)

A guy with a mug comes up to a person standing with their laptop on a small table. The mug guy says, "Some day we won't even need coders any more. We'll be able to just write the specification and the program will write itself."

Guy with laptop looks up. "Oh, wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more!"

Guy with mug takes a sip. "Exactly!"

Guy with laptop says, "And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?"

"Uh... no..."

"Code. It's called code."


You know, this makes me wonder... is anybody actually prompting LLMs with pseudocode rather than an English specification? Could doing so result in code that's more true to the original pseudocode?

I’m not sure if it went anywhere, but I remember there was this attempt at one point called Sudolang:

https://medium.com/javascript-scene/sudolang-a-powerful-pseu...


You can give the macro-structure using stubs, then ask the LLM to fill in the blanks.
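For example, a hypothetical sketch (names and signatures invented for illustration): the human fixes the shape of the module, and the model only fills in the bodies.

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        customer_id: str
        amounts: list[float]

    def load_invoices(path: str) -> list[Invoice]:
        """Parse a CSV export into Invoice records, one per row."""
        ...  # LLM fills this in

    def totals_by_customer(invoices: list[Invoice]) -> dict[str, float]:
        """Sum amounts per customer, skipping negative (refund) rows."""
        ...  # LLM fills this in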

The problem is that it doesn't work too well for the meso-structure.

Models tend to be quite good at the micro-structure because they've seen a lot of it already, and the macro-structure can easily be prompted, but the levels in between are what distinguish a good vs bad model (or human!).


Goodhart's Law of Specification: When a spec reaches a state where it's comprehensive and precise enough to generate code, it has fallen out of alignment with the original intent.

Of course there are some systems where correctness is vital, and for those I'd like a precise spec and proof of correctness. But I think there's a huge bulk of code where formal specification impedes what should be a process of learning and adapting.


My dream antiprogram is a specification compiler that interprets any natural language and compiles it to a strict specification. But on any possible ambiguity it gives an error:

    ?
This terse error was found to be necessary so as not to overwhelm the user with pages and pages of decision trees enumerating the ambiguities.

Openspec does this, but instead of "?" it has a separate Open Questions section in the design document. In Codex CLI, if you first go into plan mode, it will ask you open questions before it proceeds with the rest.
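For illustration, such a section might look something like this (hypothetical questions, not Openspec's exact format):

    ## Open Questions
    - Should rate limiting apply per user or per API key?
    - Is eventual consistency acceptable for the audit log?
    - What should happen to in-flight requests during a deploy?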

The UX is there, and for small things it does work for me, but there's still something missing before LLMs can truly capture the major issues.


Bless our interesting times.

The goal would be to write a reusable prompt for it. This is what AGENT.md is for.

> the entire point of an LLM is you give it the what and it outputs the how

I'm still struggling to move past the magic trick of guessing which characters come next and to ascribe it any actual understanding of the "how".


To err is human. Let's embrace our humanity in the face of this proliferation of insipid perfection.

I want the raw tokens straight out of your head. Even if they are lower quality, they contain something that LLMs can never generate: authenticity. When we surrender our thoughts to a machine to be sanitized before publication, we lose a little of what it means to be human, and so does everyone who reads what we write.

Part of the joy of reading is to wallow in a writer's idiosyncrasies. If everybody ends up writing the same way, AI companies will have succeeded in laundering all the joy from this world.


The manufacturers don't care about display quality, because displays are hard and expensive. Apple has enough volume that they can get a custom panel.

Users on the other hand, they definitely care about display quality more than they care about RAM. The display is the part you look at!

If you're in a store and there's a Neo with a crisp 200 PPI screen and a Windows laptop with a cheap screen but more RAM, the vast majority of consumers will choose the laptop with the better display. People make purchasing decisions based on feels, and the Neo has great feels.


On the contrary, displays are commodity components. So much so that motivated enthusiasts have managed to swap better panels into their ThinkPads for a long time. Manufacturers don't prioritize display quality in cheap devices because it doesn't show up on the spec sheet and most customers don't care that much.

Consumers don't read spec sheets. My mother doesn't even know the difference between RAM and SSD - it's all just "memory" right? But she knows that when she goes to the Apple Store the computers are built to an impressive standard.

Quality speaks for itself, and the way that people buy computers is through their eyes and fingertips, not their heads.

Go to the Apple Store and just observe how people make their buying decisions. They don't just look at the spec sheet, they lift, type on, caress the computers. They want to know how it will feel to own one.


People in the Apple Store have already all but decided to buy an Apple computer, and those computers all have approximately equally great displays to the layperson (myself included -- I can't tell an important difference between my M1 MacBook Air's screen and the one on the nicest MacBook Pro in the Apple Store).

Go into a non-Apple space though, where money is not "no object," and see how many people would choose a 16-17" 1920x1080 screen over a 13" MacBook Neo purely because of the big screen, never mind that the Mac has roughly 4x the number of pixels. I guarantee you, it's more than you think.

My only point was that yes, the MacBook Neo wins on quality construction and aesthetics (but I'll argue NOT on durability since plastic laptops can take a lot more incidental bumps than Macs will), cool factor/perceived eliteness, and screen quality. I am sure there are plenty of people who care about those things, but I think most of those people are already buying a Mac today.

I suspect we'll actually see a modest cannibalization of those casual but cheap Mac users from the MacBook Air, since most people don't really understand how to evaluate RAM and storage size, but a lot of them will have a bad experience after filling the disk.


Apple has way more stringent quality gates for panel uniformity compared to even high-end Windows laptops. And uniformity is hard to achieve on LCD; they probably reject at least half of the panels.

Arguably with SIP a hardware indicator light is not strictly necessary, the OS could force the indicator pixels to be lit.

Isn't the argument that a hardware indicator light is (more) immune to bugs? If it's just software, you're a software exploit/bug away from finding a way to access the sensor without tripping the software light.

I might be misremembering, but wasn't Pegasus spyware able to bypass the camera indicator? Or was the issue that journalists were constantly seeing the light appear for no reason? I believe it was one of those.

Pegasus is primarily a mobile spyware toolkit, and the iPhone does not have a hardware light.

Yes, but also this has never been an issue on any phone (i.e. I've never heard a complaint), and you take that to the toilet. By comparison, a laptop camera has much less access to your private life.

People who are truly worried about cameras will cover it regardless of indicator.


This depends on how the light is implemented -- if it's implemented in the camera module itself it's pretty bulletproof, but I would bet it's just a GPIO to the processor on most of these devices, controlled by the OS anyway. I could be wrong about that, but I err on the side of caution. I keep my phone in a bag most of the time.

Treat every gun as if it's loaded, and every camera as if it's filming.


On modern Apple devices, the HW indicator light is wired directly between the power rail of the camera module and ground. Turning the camera on via software energizes the power rail. The only way the camera can be on while the LED is not is if the LED has burned out.

This is a "nothing-up-my-sleeves" implementation; there's not enough complexity to hide anything weird in. Apple clearly didn't just want a light that's always on when the camera is on, they wanted an implementation where they can point to it and clearly prove that the light is always on if the camera is on.


There's a real disconnect. I was talking to a junior developer and they were telling me how Claude is so much smarter than them and they feel inferior.

I couldn't relate. From my perspective as a senior, Claude is dumb as bricks. Though useful nonetheless.

I believe that if you're substantially below Claude's level then you just trust whatever it says. The only variables you control are how much money you spend, how much markdown you can produce, and how you arrange your agents.

But I don't understand how the juniors on HN have so much money to throw at this technology.


  > I was talking to a junior developer and they were telling me how Claude is so much smarter than them and they feel inferior.
Every time I talk to a wizard I feel like they're so much smarter than me and it makes me feel inferior.

So I take that feeling and use it to drive me to become a wizard like them. I've generally found that wizards are very happy to take on apprentices.

I'm not trying to call Claude a wizard (I have similar feelings to you), but more that I don't understand that junior's take. We all feel dumb. All the time. Even the wizards! But it's that feeling that drives you to better yourself, and it's what turns you into a wizard.

Honestly, so much of what I hear from the "AI does all my coding" crowd just sounds very junior. It's just like how a year or two ago they were saying "it does the repetitive stuff". Isn't that what functions, libraries, functors, templates, and other abstractions are for? It feels like we're back to that laughable productivity metric of lines of code or number of commits. I don't know why we love our cargo cults. It seems people are putting so much effort into their cargo cults that they could have invented a real airplane by now.


It's 20 dollars a month to use...

Yes, for the basic plan. However, there are people who claim to use the API and spend hundreds, or thousands, of dollars a month.

It feels like everyone's gone mad.

Here I am mostly writing code by hand, with some AI assistant help. I have a Claude subscription but only use it occasionally because it can take more time to review and fix the generated code than it would to hand-write it. Claude only saves me time on a minority of tasks where it's faster to prompt than hand-write.

And then I read about people spending hundreds or thousands of dollars a month on this stuff. Doesn't that turn your codebase into an unreadable mess?


Why read code when you are getting results fast? See https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

I am not kidding. People don't seem to understand what's actually happening in our industry. See https://www.linkedin.com/posts/johubbard_github-eleutherailm...


I'm not getting results. That's the point. Claude doesn't fucking work without human intervention. When left to its own devices it makes bad decisions. It writes bad code. It needs constant supervision to stop it from going off the rails and replacing working code with broken code. It doesn't know what it's doing!

It's about as far as you can get from being able to work independently.

Yegge is an entertainer. Gas Town is performance art, it's not meant to be taken seriously.


How much are you spending? See the initial post of the thread. My team has no problems with it; they are each spending $5-10k per month.

Use Codex

Why is everyone obsessed with Mac Minis? They're awesome, but for the work that these people are attempting to do? Just seems... nonsensical. Renting a server is cheaper and still just as "local" as any of this (they want "self hosted"; I don't think anyone cares about local. Like, are people air gapping networks? lol)

And a senior director at Nvidia? He had several Mac Minis? I really gotta imagine a Spark is better... at least it'll be a bit smarter of a cat (I'm pretty suspicious he used an LLM to help write that post)

No time to think, gotta go fast?


It seems like the monkey-ladder story. Someone probably just had one sitting around and it worked, or they needed to do something Apple-specific, and that message got lost along the way.

They want access to Apple Messages. That's all there is to it, AFAICT.

These are like, jokes right?

I've been thinking about this recently, and it seems like the most enthusiastic boosters always suggest the difference in results is a skill issue, but I feel like there are 4 factors which multiply out to influence how much value someone gets:

- The quality of model output for _your particular domain / tech stack_. Models will always do better with languages and libraries they see a lot of than with esoteric or proprietary ones.

- The degree to which "works" = "good" in your scenario. For a one-off script, "works" is all that matters; for a long-lived core library, there are other considerations.

- The degree to which "works" can be easily (better yet, automatically) verified.

- Techniques, existing code cleanliness, documentation, etc.

Boosters tend to lay all the different experiences at the feet of this last factor, yet I'd argue the others are equally significant.

On the other hand, if you want to get the best results you can given the first 3 (which are generally out of one's control) then don't presume there's nothing you can do to improve the 4th.


Let's just do leap minutes. If humanity survives long enough to witness a leap minute without destroying ourselves then that's ample compensation for the minor inconvenience.
