Hacker News | pealco's comments

It reminds me of "web3" marketing. My hackles are immediately raised.

Most of my time interacting with this site was spent in developer tools, trying to figure out where the scrolling behavior was coming from. (Couldn't figure it out.) I can't understand why people are still doing this in 2025.


Enter this in the console:

document.body.onwheel = (e) => e.stopPropagation();


Most likely the developer is using a Windows computer.


Cool, but as described, this wouldn't work on humans. The light sheet microscopy technique that the souped-up MRI data is paired with to create these images requires tissue to be "cleared" (made transparent) with solvents, which obviously you can't do to a living human brain. To be honest, I don't quite understand how light sheet microscopy works with living _mouse_ brains.


It doesn’t. The brain must be cleared, which generally involves removing lipids so the tissue has a homogeneous index of refraction throughout. Given that lipids are an extremely important component of the brain (myelin, for instance, is a lipid-rich material that surrounds axons), even if one were able to clear the brain without excising it from the skull, the clearing process would essentially destroy the brain by preventing it from functioning properly.


Correct. But that is where the light sheet adds sufficient resolution: MRI for the “mesoscopic” scale, light sheet for the microscopic scale down to 0.2 microns, and selective electron microscopy down as far as tissue quality will allow.


You’re not even responding to the original question, which is: is light sheet microscopy capable of imaging entire living brains? Pay attention.


Can you use light microscopy on a living brain? No


Two- and three-photon microscopy works on living brains down to depths of approximately 1.5 mm. A company even sells head-mounted miniature microscopes for mice. Try to refrain from commenting on topics outside your expertise.


In the video, you keep saying "Turing" (turr-ing), when I think you mean to say "truing" (troo-ing).


He's saying it properly, and it doesn't sound even remotely strange to me. Are you from the Midwest? You might be used to the Midwest accent's vowel lengthening.


It's possible I have a Canadian accent.

But now that I think about it, Turing Stand is a name with great potential. ;-)


I was so confused :) "Is the stand doing computations??"


In my experience, people like this are dilettantes, who actually have a very shallow understanding of these "ideas" that they're so in love with. They confuse _having heard_ of Obscure Subject with _understanding_ Obscure Subject. If you happen to have a deeper understanding of Obscure Subject and try to engage them in conversation about it, it goes nowhere.


Ted Petrou has written a very detailed critical review of this book. He finds it lacking in certain areas.

https://medium.com/dunder-data/python-for-data-analysis-a-cr...


This doesn't really address your teacher's claim about having to look words up, though. What you want to look at is the distribution of low frequency words across the book. What do the plots look like when you remove proper nouns, functional words (e.g., "the", "and", prepositions) and, say, the top 1000 most frequent words in English?
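A rough sketch of that analysis (hypothetical Python; the stopword/frequency list and chunk size are assumptions you'd supply, e.g. from a top-1000 English word list):

```python
import re

def rare_word_counts(text, common_words, chunk_size=1000):
    """For each consecutive chunk of `chunk_size` tokens, count how many
    tokens fall outside `common_words` (the proper nouns, function words,
    and high-frequency words you want to exclude). Plotting these counts
    shows how look-up-worthy words are distributed across the book."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = []
    for i in range(0, len(tokens), chunk_size):
        chunk = tokens[i:i + chunk_size]
        counts.append(sum(1 for t in chunk if t not in common_words))
    return counts

# Toy example: a tiny "common" list and 5-token chunks.
text = "the cat sat on the mat the quokka pondered ontology quietly"
common = {"the", "cat", "sat", "on", "mat"}
print(rare_word_counts(text, common, chunk_size=5))  # [0, 3, 1]
```

A flat curve would undercut the teacher's claim; spikes early in the book would support front-loading vocabulary study.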


It would be very interesting to see this applied to blogs in different categories, as a way to learn languages rapidly through reading: based on the words you currently know and the most frequent words in that language, it would always present you with an article that suits your level, so you'd get the benefit of learning the most new words.


It would also be interesting to see it applied to newspapers, with obvious slices like particular author, section (sports vs. world news, etc.), year-to-year distribution, and which paper. TV news broadcasts could be compared along the same dimensions, though the conversational style of some interview shows might make this less telling.


That's something worth trying.


I imagine it would look very much like the plots of unique words given in the article. As you suspect, the chances of coming across one of these are much more evenly distributed.


It probably would look more or less similar; such words are excluded very quickly. There is something I cannot assess, though: how important is the word to understanding the sequence?


In what space does the clustering occur? I wasn't able to tell from the post.


We tend to operate on server performance metrics, error rates, and networking metrics. We've found through practice that these metrics tend to reveal most of the issues we're targeting.

If you meant it in the more mathematical sense: we perform our clustering in a normalized Euclidean space.
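A minimal illustration of what "normalized Euclidean space" means here (a hypothetical Python sketch, not their actual pipeline): each metric series is z-scored so that, say, CPU percentages and raw error counts become comparable, and distances are then taken between the normalized series.

```python
import math

def zscore(series):
    """Normalize a series to zero mean and unit variance."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    std = math.sqrt(var) or 1.0  # guard against flat (zero-variance) series
    return [(x - mean) / std for x in series]

def euclidean(a, b):
    """Euclidean distance between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two servers with the same CPU-usage shape at different scales end up
# at distance ~0 after normalization; a flat series stays far away.
cpu_a = zscore([10, 20, 30, 40])
cpu_b = zscore([100, 200, 300, 400])  # same shape, 10x the scale
cpu_c = zscore([50, 50, 50, 51])      # mostly flat

print(euclidean(cpu_a, cpu_b))  # ~0.0
print(euclidean(cpu_a, cpu_c))  # clearly nonzero
```

The design point is that without normalization, whichever metric has the largest raw magnitude would dominate the distance and hence the clustering.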


It appears to cluster on a particular server metric (e.g. % CPU used), for readings of that metric across a group of servers. One of the example graphs had 'errors per servers,' and I'd presume each color on that graph represents a different server.


Right you are!


What sorts of things does it improve?


Release Notes: http://dl.dropbox.com/u/891742/Screenshots/tm2_release_notes...

Major changes include the new project drawer (though, I prefer ProjectPlus[1]), and the new bundle/theme updater[2], where you just tick the box and it auto-installs.

-

[1] http://ciaranwal.sh/2008/08/05/textmate-plug-in-projectplus

[2] http://dl.dropbox.com/u/891742/Screenshots/1hgh.png


Has anyone worked out where new bundles created within TM2 get saved to though? It's not obvious to me...


~/Library/Application\ Support/TextMate/Managed/Bundles


> I think Norvig acknowledges the point you are making here, namely that the statistical approach does not explain the cognitive systems behind language.

If that is the case, then the argument that Norvig is making is irrelevant to the argument Chomsky is making. Chomsky simply makes the point that statistical accounts lack explanatory adequacy. As someone who has worked closely with many of his students and who has received extensive training in his scientific program, I can say with confidence that Chomsky would have no objection whatsoever to the usefulness of statistical approaches to linguistic engineering problems. The results speak for themselves. He would go on to say, however, that how well a statistical approach solves a linguistic engineering problem is irrelevant to the question of how humans do what they do.

The answer to the question may well be statistically grounded. That is a valid hypothesis and a logical possibility which should be taken seriously. However, it is incumbent on the proponents of such an answer to provide evidence that it is what humans are doing. Here are some examples of the kinds of evidence necessary:

* evidence that humans are capable of performing the kinds of computations that the statistical approach requires,

* evidence that the statistical approach works with the relatively limited amount of data that a human receives,

* evidence that the statistical approach fails in the ways that humans fail.

How well a statistical approach succeeds at an engineering task is not an item on this list, simply, again, because engineering tasks are irrelevant to what humans actually do.

Let me specifically say that statistical approaches are not, from the start, ruled out as potential candidates for the algorithms underlying human language. It's just that a case has to be made for them using the right kind of evidence.

Finally, I'll reiterate what others have pointed out: from a scientific perspective, that something is hard to explain doesn't mean that we shouldn't try. And those who have given up (as you suggest Norvig has) shouldn't fault those who haven't for calling them out on it.

