Hacker News | pfdietz's comments

I wonder if robots could be made to work better at cryogenic temperature, so superconductors could be used. The figure of merit would be much higher if resistance was zero. Or maybe this is another reason to want room temperature superconductors.

Power dissipation in the motor is not the limiting factor. Moving a robot requires work, and the motor already provides that work with high efficiency.

Plus, cryo temperatures require a lot of power to keep things cool, and copper and iron embrittle, so the forces acting on the windings could shatter them.


You would hit the saturation limits of electrical steel well before you needed to pump in enough current to justify superconductors.

Cooling in general is not a bad idea, since it lets you dissipate heat as you push motors toward their saturation limits.


Even copper has vastly lower resistance when cryogenically cooled. It's not a bad idea for some applications, and water cooling is already a good way to increase power density.
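
To put a rough number on that, here is a sketch using the simple linear resistivity model (an assumption; high-purity copper at 77 K actually does better, roughly 7-8x less resistive depending on purity):

```python
# Sketch: how much copper's resistance drops when cooled from room
# temperature to liquid-nitrogen temperature (77 K).
# Assumes the simple linear model rho(T) = rho_ref * (1 + alpha * (T - T_ref)),
# which understates the real improvement but shows the trend.

RHO_293K = 1.68e-8  # ohm*m, copper resistivity at 293 K
ALPHA = 0.0039      # 1/K, temperature coefficient of resistivity near 293 K

def copper_resistivity(temp_k: float) -> float:
    """Linear approximation of copper resistivity, in ohm*m."""
    return RHO_293K * (1 + ALPHA * (temp_k - 293.0))

ratio = copper_resistivity(293.0) / copper_resistivity(77.0)
print(f"Resistance at 293 K vs 77 K: ~{ratio:.1f}x higher")  # ~6.3x
```

Even this understated factor of several is why water- or cryo-cooled windings buy real power density.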

I learned recently that the induction heating coils used for metallurgy (smithing) are copper tubing with coolant flowing through them. The copper tries to heat up along with the bar you're heating in the coil, both from resistive losses and from radiative heating.


Economically I expect it wouldn't be that pure, since it doesn't have to be that pure to provide lift, and party balloons are not trying to maximize lift.

Out of curiosity I did a minor amount of research to get an idea.

Turns out you are right: some balloon gas is 80%. Specifically, the "Balloon Time" tanks you can buy at places like Target say "not less than 80%" helium.

On the other hand, I went to AirGas and a few other suppliers, and they seemed to use 95%-97% helium as their definition of balloon grade.


Perhaps "balloon grade" here is not "party balloon grade". Weather balloons? Research balloons?

My guess is that places like AirGas aren't really supplying many weather or research balloons. I suspect the easier answer is 'Balloon Time is low grade crap aimed at people who don't know any better and just want to pick up some balloon gas while grocery shopping.' It's like the difference between people who go to a gas station to refill propane tanks, and people who swap them at Home Depot. (though the smart fellers do swap at Home Depot occasionally, if they need a fresher tank...)

Definitely worth knowing what you're getting, in any case, so you don't get ripped off, and so you can actually get that lawn chair contraption into the sky.
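
A back-of-envelope comparison, as a sketch (densities are standard values at 0 C and 1 atm; the 80% and 97% mix fractions are assumptions based on the labels discussed above, with the balance taken to be air):

```python
# Back-of-envelope lift comparison: party-grade (assumed 80% He, balance
# air), supplier balloon-grade (assumed 97% He), and pure helium.

RHO_AIR = 1.293      # kg/m^3 at 0 C, 1 atm
RHO_HELIUM = 0.1786  # kg/m^3 at 0 C, 1 atm

def lift_per_m3(helium_fraction: float) -> float:
    """Buoyant lift in kg per cubic metre, assuming the balance is air."""
    rho_mix = helium_fraction * RHO_HELIUM + (1 - helium_fraction) * RHO_AIR
    return RHO_AIR - rho_mix

for frac in (1.00, 0.97, 0.80):
    print(f"{frac:.0%} helium: {lift_per_m3(frac):.3f} kg of lift per m^3")
```

Because the diluent is air, lift scales linearly with helium fraction: an 80% fill gives exactly 80% of the lift of pure helium, which matters a lot for that lawn chair.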


AirGas prioritizes industrial users; in the case of helium, that means copper welding. Argon is perfectly good enough for almost all welding purposes, but copper is different because of its heat conductivity: the heat from the weld really wants to go anywhere else. Helium has substantially higher heat conductivity than argon, which allows heat to flow from the electric arc into the metal faster, resulting in better welds.

Obviously you can't have oxygen in welding gas; it would oxidize the shit out of everything.

A little bit of oxygen in party balloon gas is beneficial. Some kid will breathe it, and when they do, you don't want them to asphyxiate.


Do you know if the other 20% is oxygen (as was claimed) or if it's air? I just think the latter seems cheaper and more likely.

I think it may be CO2. CO2 in the gas would cause all sorts of unpleasant effects that would discourage continuing to breathe it, and CO2 is probably cheaper to store and transport than oxygen.

The whole program is a joke.

Hacker News now runs on top of Common Lisp https://news.ycombinator.com/item?id=44099006 (435 comments)

(this was mentioned below but repeated here)


SBCL has fine type checking. Some is done at compile time -- you get warnings if something clearly can't be type correct -- but otherwise when compiled with safety 3 (which people tend to make the default) types are checked dynamically at run time. You don't get the program crashing from mistyping as one would in C.

> You don't get the program crashing from mistyping as one would in C.

Sorry, but I don't compare to C anymore; I want the same safety as in Rust or TypeScript: exhaustive checks, control-flow type narrowing, mapped types, and so on. Some detection at compile time is not enough: since there is a way to eliminate all type errors, I want to eliminate them all, not just some.


Why stop there? Why not demand proof of correctness? After all, that's now within reach using LLMs producing the formal specification from a simple prompt, right?

SBCL does a fine job in detecting type mismatches within the frame of ANSI Common Lisp, not Haskell. While I would agree that a strict type system eases long term maintenance of large systems, for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way. And if such proof-of-concept looks promising, then there is no shame in rewriting it in a language more suitable for scale and maintenance.


Proof of correctness would be fantastic, but I have yet to see it in action. LLMs could maybe do it for simple programs, but I'm pretty sure they will fail on large codebases (due to context limits), and types help a lot in that case.

> for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way

I would even argue that it's better to have a typed system even for POCs, because things change fast, and that very often introduces type errors that need to be discovered. At least when I did that, I often ran manual tests after changes just to check if things still worked; with typing in place, that time can also be minimised.


> You don't get the program crashing from mistyping as one would in C.

Uh, isn't that exactly what happens with runtime type checking? Otherwise what can you do if you detect a type error at runtime other than crash?

In C the compiler tries to detect all type errors at compile time, and if you do manage to fool the compiler into compiling badly typed code, it won't necessarily crash, it'll be undefined behavior (which includes crashing but can also do worse).


> Uh, isn't that exactly what happens with runtime type checking?

No, it raises an exception, which you can handle. In some cases one can even resume via restarts. This is versus C, where a miscast pointer can cause memory corruption.
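
The same contrast can be sketched by analogy in Python (not Lisp, but its runtime checking behaves similarly): the type error surfaces as a catchable exception pinpointing the fault, rather than as silent corruption:

```python
# By analogy in Python: a runtime type error raises a catchable exception
# with a precise location, instead of silently corrupting memory the way
# a miscast pointer can in C.

def total_length(items):
    return sum(len(item) for item in items)

try:
    total_length(["ab", "cd", 42])  # 42 has no len(), so this raises
except TypeError as exc:
    # The program keeps running; we can log, substitute a default, or re-raise.
    print(f"recovered from type error: {exc}")
```

Common Lisp's condition system goes further than this try/except analogy, since restarts can resume execution at the point of the error rather than unwinding.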


Again, a proper C compiler in combination with sensible coding standards should prevent "miscast pointers" at compile time / static analysis. Anyway, being better than C at typing / memory safety, is a very low bar to pass.

I'm curious in what situation catching a typing exception would be useful though. The practice of catching exceptions due to bugs seems silly to me. What's the point of restarting the app if it's buggy?

Likewise, trying to catch exceptions due to for example dividing by zero is a strange practice. Instead check your inputs and throw an "invalid input" exception, because exceptions are really only sensible for invalid user input, or external state being wrong (unreadable input file, network failures, etc.).


If "just don't do the bad things" is a valid argument, why do we need type checking at all?

Exceptions from type checking are useful because they tell you exactly where something has screwed up, making fixing the bug easier. It also means problems are reduced from RCEs to mere denial of service. And I find (in my testing) that it enables things like rapid automated reduction of inputs that trigger such bugs. For example, the SBCL compiler is designed never to throw an exception, even on invalid code, so when it does, one can automatically prune down the lambda expression passed to the COMPILE function to a minimal input that triggers the compiler bug. This also greatly simplifies debugging.

A general reason I look down on static type checking is that it's inadequate. It finds only a subset, and arguably a not very interesting subset, of the bugs in programs. The larger set of possible bugs still has to be tested for, and any testing procedure sufficient for that larger set will trigger the typing bugs as well.

So, yeah, if you're in an environment where you can't test adequately, static typing can act as a bit of a crutch. But your program will still suck, even if it compiles.

The best argument for static typing IMO is that it acts as a kind of documentation.
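
The automated pruning described above is essentially test-case reduction ("delta debugging"). A minimal sketch, with a hypothetical `triggers_bug` predicate standing in for "COMPILE throws an exception on this form":

```python
# Minimal sketch of automated input reduction ("delta debugging" style):
# repeatedly try deleting chunks of the input, keeping any deletion that
# still triggers the bug. `triggers_bug` is a hypothetical stand-in for
# "the compiler throws an exception on this input".

def reduce_input(items, triggers_bug):
    """Greedily shrink `items` while `triggers_bug` still holds."""
    assert triggers_bug(items)
    chunk = max(1, len(items) // 2)
    while chunk >= 1:
        i = 0
        while i < len(items):
            candidate = items[:i] + items[i + chunk:]
            if candidate and triggers_bug(candidate):
                items = candidate        # deletion kept the bug: accept it
            else:
                i += chunk               # deletion lost the bug: move on
        chunk //= 2
    return items

# Toy usage: the "bug" fires whenever both 3 and 7 are present.
minimal = reduce_input(list(range(10)), lambda xs: 3 in xs and 7 in xs)
print(minimal)  # prints [3, 7]
```

Real reducers for Lisp forms recurse into subexpressions rather than working on a flat list, but the accept/reject loop is the same idea.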


That was his confusion.

One approach is more integration of researchers with businesses. Fraud (or simple incompetence) by researchers negatively affects businesses, as they expend effort on things that aren't real. I understand this is a constant problem in the pharmaceutical industry.

It's quite possible to be very successful marketing and selling things that aren't real. The market consists of humans, not perfectly rational machines.

Even so, businesses largely compete based on whether their products are worth buying. That means bad research is bad for business.

And funding dedicated to replication studies.

> paid by the original authors if their study fails to replicate

In the US such a requirement would violate the 1st amendment. It's also probably counterproductive.

punishing failure is probably counterproductive? let me guess, participation trophies is the way...

Let me introduce you to the concept of "unintended consequences".

Constitutional protections aren't trumped by mere issues of governmental convenience.
