
It looks impressive, but it would help to be able to tell the hype from reality:

"Sensors flood the device with terabytes of data every second" ... somehow I doubt the aggregate bandwidth of the device is > 1TB\s

Makes it harder to know how accurate the rest of the 'explanations' are.



I believe that the TB reference comes from listening to Alex Kipman at the Microsoft announcement event.

In speaking about the so-called "HPU" (around 01:50 in http://www.theverge.com/2015/1/21/7867593/microsoft-announce..., second video) Kipman mentions "processing terabytes of information from all of these sensors". This is straight from the proverbial horse's mouth, and while it seems hype-ish it should be looked into.

Anyone in here know about this HPU?


He didn't say "per second" though, so the OP's quote is just a piece of bad reporting.


I'm pretty sure he didn't say that the HPU was processing it either. He said something along the lines of "when we look around a room our brains process terabytes of data".


Could the sensors indeed be flooding the device with terabytes of data, while the device only samples that data at a more reasonable rate?


Well, from that perspective, an analog temperature sensor is flooding your ADC with infinite GB/s.

For further comparison: the fastest CPUs you can get nowadays have an aggregate memory bandwidth of ~90 GB/s across four memory channels.


If you have enough pins, a custom ASIC can do just about whatever you want. The data flowing into the HPU is likely huge, but it is processed down into something the CPU can deal with.


Yeah, well, 4x DDR4 DIMMs have 4 × 288 = 1152 pins. If you want to be two orders of magnitude faster than that, you're talking on the order of 100,000 pins, which is just absurd.
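
A rough sketch of that scaling argument, assuming bandwidth grows linearly with pin count and reusing the ~90 GB/s quad-channel figure from above (both assumptions, not measured numbers):

    # Back-of-envelope: pins needed if bandwidth scales linearly with pin count
    ddr4_pins = 4 * 288                  # four DDR4 DIMMs
    ddr4_bandwidth_gbs = 90              # ~90 GB/s aggregate (figure from the thread)

    target_bandwidth_gbs = 100 * ddr4_bandwidth_gbs   # "two orders of magnitude faster"
    pins_needed = ddr4_pins * target_bandwidth_gbs / ddr4_bandwidth_gbs

    print(int(pins_needed))              # 115200, i.e. on the order of 100,000 pins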


To give an example, a raw 4K stream at 12 bits is about 500 MB/s, so unless it has 2,000 4K cameras, that's unlikely.


I think you forgot to multiply by a frame rate; otherwise you don't have a "per second" unit. At 60 fps, it's 29.66 GiB/s.
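
For what it's worth, a minimal sanity check of the raw-bandwidth arithmetic, assuming a 3840x2160 frame at 12 bits per pixel and 60 fps (all three figures are assumptions; the result scales linearly with each):

    # Back-of-envelope raw 4K bandwidth
    width, height = 3840, 2160
    bits_per_pixel = 12      # 12 bits per pixel; 3 x 12-bit colour channels would triple this
    fps = 60

    mb_per_frame = width * height * bits_per_pixel / 8 / 1e6
    gb_per_second = mb_per_frame * fps / 1e3

    print(f"{mb_per_frame:.1f} MB/frame, {gb_per_second:.2f} GB/s at {fps} fps")
    # -> 12.4 MB/frame, 0.75 GB/s at 60 fps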


Original Kinect data rates, measured empirically by a third party:

Colour: 10.37 Mb/s
Depth: 29.1 Mb/s
Skeleton: 0.49 Mb/s

So roughly 40 Mb/s (4 × 10^7 bits/s).

Does this device produce > 1 Tb/s (1 × 10^12 bits/s), or 25,000 times as much data? I'd be surprised.
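
Taking those measured figures at face value, the comparison spelled out (reading the claim generously as 1 Tb/s rather than terabytes per second):

    # How far the measured Kinect rates are from a "terabytes per second" claim
    kinect_bps = (10.37 + 29.1 + 0.49) * 1e6   # colour + depth + skeleton, ~4e7 bits/s
    claimed_bps = 1e12                          # 1 Tb/s

    print(claimed_bps / kinect_bps)             # ~25,000x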


Maybe we're counting photons now.


Terabytes per second is out there; terabits per second, not so much... I'm developing a single chip with 3 Tbit/s (384 GB/s) aggregate external (chip-to-chip) bandwidth, and 8 TB/s aggregate internal (core-to-core) bandwidth.


I feel like you're the kind of guy (or girl) who could explain really complex nerdy things to me on a regular basis and I'd be ok with that



