I tuned out when I read this: "Sensors flood the device with terabytes of data every second, all managed with an onboard CPU, GPU and first-of-its-kind HPU (holographic processing unit)."
I'm sorry, but there is no wearable device which can handle terabytes of data per second. Heck, my brand new Haswell has a peak memory bandwidth of 17GB/s; even to the L1 cache, its theoretical max is 700GB/s.
They probably confused that with GB. I'm not sure which sensors would even put out terabytes per second. Let's assume they have a brand-new high-dynamic-range RGB-D camera with 32 bits per channel at 60 Hz. To reach 1 TB/s, that camera would need roughly 1 gigapixel of resolution --> not very likely.
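For what it's worth, here's the back-of-envelope math as a quick sketch, using the same assumptions as above (4 channels at 32 bits each, 60 Hz; the constants are just those assumptions, nothing from the article):

```python
# How many pixels would a single RGB-D camera need to produce 1 TB/s?
# Assumptions: 4 channels (R, G, B, depth) x 32 bits each, 60 Hz refresh.
TARGET_BYTES_PER_SEC = 1e12  # 1 TB/s
BYTES_PER_PIXEL = 4 * 4      # 4 channels x 4 bytes
FPS = 60

pixels = TARGET_BYTES_PER_SEC / (FPS * BYTES_PER_PIXEL)
print(f"{pixels / 1e9:.2f} gigapixels")  # ~1.04 gigapixels
```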
Completely understandable, given the fact that Wired Magazine is after all, pretty new to all this computing and technology business ... /s
I can't imagine any (tech-savvy) editor proofreading this without going "wait, what -- terabytes, really??" -- which is then presumably their job to double-check.
I understood it as saying the sensors are reading terabytes of information every second. The data would get selectively loaded into the device based upon the task at hand. I don't see that as far-fetched -- I can hook 100 digital cameras up to my computer and make the same claim.
They didn't mention a timeframe for the amount of data collected. The exact quote is "[...] all by processing terabytes of information from all of these sensors, all in real time."
I got the impression that it was a write-up of the demo video, not actually hands-on... It's a confusing article, and there's no real-time video or images of anyone actually interacting with it. I'm actually quite confused as to what this is. Bad PR piece -- more like a mockumentary on the Discovery Channel...
> I got the impression that it was a write up of the demo video, not actually hands on...
That's not really an excuse to dumbly repeat something that sounds so amazingly far-fetched (see my other comment elsewhere: what sensors even produce TBs of data per second?). It verges on the physically impossible, yet they report it without even blinking an eye -- no "yes, you read that right, we find it hard to believe too," or preferably some explanation of how it could even be possible. How did the reporter not go "Wait, what -- terabytes per second?!"
A current-generation Haswell chip can easily manage terabytes per second in on-chip cache and by manipulating registers.
Eight or more virtual cores, plus SIMD operations that can chew through large chunks of data per cycle, add up awfully fast on a 4GHz chip.
They're also probably counting the fact that data flows from one system to another in sequence, and adding up each stage. Eight streams of 150 gigabytes per second each, for example.
This doesn't count information that's captured but discarded at the source, processed away before it's transmitted downstream.
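To make that accounting concrete, here's a sketch; the per-stage figure of 150 GB/s across eight streams is just the hypothetical from the comment above, not anything from the article:

```python
# "Adding up each stage" accounting: count the same data at every hop.
# Eight hypothetical pipeline stages, each moving 150 GB/s.
streams_gb_per_s = [150] * 8

aggregate = sum(streams_gb_per_s)
print(f"{aggregate} GB/s aggregate -> {aggregate / 1000:.1f} TB/s")  # 1200 GB/s -> 1.2 TB/s
```

No single link carries anywhere near that much; the headline number comes purely from summing hops.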
It's probably more along the lines of the same data being processed repeatedly within the same chip, not actual memory bandwidth. This HPU they're talking about is processing depth information and eye lines (probably quite a bit more often than 60Hz); it's quite possible they're processing the same data multiple times over, achieving a theoretical bandwidth in the TBs. It's disingenuous, sure, but it's like saying a network with 20 100Gbps routers can handle terabytes of data.
A single 4K2K RGBD sensor at a high enough refresh could generate something in the 100Gbps range. The device has at least 2 (possibly 4?) forward-facing sensors. It's presumably also doing inward-facing gaze tracking, audio and IMU.
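Rough check of that 100Gbps figure, under my own assumptions (4096x2160 resolution, 16 bytes per pixel for four 32-bit channels, and a 90 fps refresh -- none of these are specified by the article):

```python
# Bandwidth of one hypothetical 4K2K RGB-D sensor.
# Assumptions: 4096x2160, 16 bytes/pixel (4 channels x 32 bits), 90 fps.
width, height = 4096, 2160
bytes_per_pixel = 16
fps = 90

gbps = width * height * bytes_per_pixel * fps * 8 / 1e9
print(f"{gbps:.0f} Gbps")  # ~102 Gbps
```

So one such sensor lands right around 100Gbps, and two to four of them (plus gaze tracking, audio and IMU) still only gets you into the hundreds of Gbps -- well short of terabytes per second.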
As a point of reference the Leap Motion Dragonfly has 2 x 3K sensors w/ 225fps color and 720fps tracking.
Presumably the "HPU" is an ASIC that bakes in some sort of SLAM/positional tracking, skeletal tracking, and gaze tracking.
> I'm sorry, but there is no wearable device which can handle terabytes of data per second. Heck, my brand new Haswell has a peak memory bandwidth of 17GB/s; even to the L1 cache, its theoretical max is 700GB/s.
This sounds like a puff PR piece.