I believe that the TB reference comes from listening to Alex Kipman at the Microsoft announcement event.
In speaking about the so-called "HPU" (around 01:50 in http://www.theverge.com/2015/1/21/7867593/microsoft-announce..., second video), Kipman mentions "processing terabytes of information from all of these sensors". This is straight from the proverbial horse's mouth, and while it sounds like hype, it should be looked into.
I'm pretty sure he didn't say that the HPU was processing it either. He said something along the lines of "when we look around a room our brains process terabytes of data".
If you have enough pins, a custom ASIC can do just about whatever you want. The data flowing into the HPU is likely huge, but it is processed down into something the CPU can deal with.
Yeah, well, 4x DDR4 DIMMs have 4x288 = 1152 pins. If you want to be two orders of magnitude faster than that, you're talking on the order of 100,000 pins, which is just absurd.
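The pin-count argument can be sketched as a quick back-of-envelope calculation. The 288-pin DIMM figure is from the comment above; the per-DIMM bandwidth is an assumed DDR4-2400 example, not something stated in the thread:

```python
# Back-of-envelope sketch of the pin-count argument.
# Assumption: DDR4-2400 with a 64-bit data bus per DIMM (~19.2 GB/s each).
DIMM_PINS = 288
dimms = 4

total_pins = dimms * DIMM_PINS              # 1152 pins across 4 DIMMs
bw_per_dimm_gbs = 2400e6 * 8 / 1e9          # 2400 MT/s * 8 bytes = 19.2 GB/s
total_bw_gbs = dimms * bw_per_dimm_gbs      # ~76.8 GB/s aggregate

# If pin count scales roughly linearly with bandwidth, then bandwidth
# two orders of magnitude higher needs ~100x the pins:
pins_for_100x = total_pins * 100

print(total_pins)       # 1152
print(total_bw_gbs)     # 76.8
print(pins_for_100x)    # 115200
```

The point being that ~115,000 pins is far beyond anything packageable, which is why "terabytes per second into one chip" doesn't pass the smell test.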
Terabytes is out there, terabits not so much... I'm developing a single chip with 3Tbit (384GBytes/s) aggregate external (chip to chip) bandwidth, and with 8TByte/s aggregate internal (core to core) bandwidth.
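A quick unit check on the figures above, confirming that 384 GB/s and "3 Tbit" aggregate bandwidth are the same claim in different units:

```python
# Convert the quoted 384 GB/s external bandwidth to terabits per second.
gbytes_per_s = 384
tbits_per_s = gbytes_per_s * 8 / 1000   # bytes -> bits, giga -> tera

print(tbits_per_s)   # 3.072, i.e. the "~3 Tbit" figure
```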
"Sensors flood the device with terabytes of data every second" ... somehow I doubt the aggregate bandwidth of the device is > 1TB\s
Makes it harder to know how accurate the rest of the 'explanations' are.