
From their marketing material, they claim to have figured out a way to fake depth. If true, that would be a huge step compared to classic stereoscopic technologies or other cumbersome devices.

However, I doubt this is the case, else they wouldn't just slap this announcement at the end of a boring Windows 10 presentation. I mean, all I could remember about this presentation was: "Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10 with HOLOGRAMS using Windows 10! Windows 10. Windows 10. Windows 10."



I don't think they are pretending they figured out a way to fake depth; it sounds like they have created a compact head-mounted light field display, which is absolutely a huge step forward. Unlike classic stereoscopic technologies, light field displays let your eyes refocus the image, because the displays recreate the direction of the light from the object as well as its color and intensity. NVIDIA demoed a compact head-mounted light field display recently that explains the concept. See it here: https://news.ycombinator.com/item?id=8451746
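The refocusing property of light fields can be sketched numerically. Below is a toy 1D example (my own illustration, not from the NVIDIA demo): each row of the array is the scene as seen from one viewpoint, and the classic shift-and-add trick brings a chosen depth plane into focus by shifting each view before averaging.

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy synthetic refocus of a 1D light field by shift-and-add.

    light_field: array of shape (num_views, width); each row is the
    scene seen from one viewpoint u. Shifting view u by alpha * u
    before averaging brings one depth plane into sharp focus;
    alpha selects which plane.
    """
    num_views, width = light_field.shape
    centered = np.arange(num_views) - num_views // 2
    shifted = [np.roll(view, int(round(alpha * u)))
               for u, view in zip(centered, light_field)]
    return np.mean(shifted, axis=0)

# A point at depth "disparity 1": view u sees it at position 4 + u.
lf = np.zeros((5, 9))
for i, u in enumerate(range(-2, 3)):
    lf[i, 4 + u] = 1.0

sharp = refocus(lf, alpha=-1.0)   # aligned: sharp peak at index 4
blurry = refocus(lf, alpha=0.0)   # misaligned: energy spread out
```

With the right alpha the views line up and the point is sharp; with the wrong alpha it smears — which is exactly what lets your eye (or a camera) pick its own focus plane after the fact.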


No need to fake it... if you can get the retinal projection accurate enough, with fast enough eye tracking, the depth is as real as anything else you'll see in real life...the goal would be to have it feel completely natural.


... and to do it, as well as motion tracking and environmental feature identification, in real-time with as little latency as possible, and do it on batteries.

I keep feeling like I'm watching the Longhorn demo at PDC 2003...


They added a third chip, besides the CPU and GPU; they call it the HPU (holographic processing unit), which could speed things up considerably. If they hooked it directly to the CCD, then they could really grok terabytes of data on battery.


Magic is not possible. Whatever the HPU does is still constrained by physics. They can't explain away the engineering problem with an invented new name for a device we know nothing about except that it defies the laws of physics WRT computation.


What we do know is that fixed-function ASICs can be 10s to 100s of times more power efficient than general purpose (von Neumann) computing.

So nothing they have described defies the laws of physics.


So, where would the terabytes per second come from? ... on a head-mounted device?


I really hope they get accurate tracking and depth, and get objects to "stick" where they belong in 3D space, without moving out of place or floating in a wrong way during quick head movement. If they can do that, most of the battle is won and it will be amazing.

Edit: although, of course they'll need some intelligence on the surroundings to identify surfaces and stuff. But imagine like re-decorating your work room, adding scifi textures or something, and maybe pipes or whatever ;p


They didn't, otherwise they would show you the eye view instead of a third-person impression of what it's supposed to look like to the user.

Most likely it suffers from the same shaky, snappy, laggy tracking as every other AR setup.


But they showed footage "through the eyes of the wearer" and they let press have a hands on demonstration, so it's not like they can really fake anything.

I did see a tiny bit of judder in the footage that was supposed to be exactly what the person wearing the glasses would see, but it was hard to tell.


In the video I saw at the conference presentation, the "holograms" were always in front of the person's appendages, obscuring things: https://www.youtube.com/watch?v=b6sL_5Wgvrg&spfreload=10


Peter Bright said it didn't suffer from that - see the Minecraft section of his review: http://arstechnica.com/gadgets/2015/01/hands-on-with-hololen...


> intelligence on the surroundings

In case any readers here aren't aware of how Kinect works, it sends the developer a 2D image of the depth. Of course, as walod says, there's work to do to identify surfaces (as you can see in the image below, background elements are excluded).

http://www.gadgetguy.com.au/hands-on-with-the-xbox-one/micro...
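That background exclusion can be surprisingly simple to start with. Here's a minimal sketch (the depth values and threshold are made up for illustration, not real Kinect output): threshold a Kinect-style depth map to split foreground from background, treating 0 as "no reading" the way depth sensors commonly report dropouts.

```python
import numpy as np

# Hypothetical 2D depth map in millimeters, as a Kinect-style sensor
# might report it (0 = no valid reading / sensor dropout).
depth_mm = np.array([
    [800,  820, 4000, 4000],
    [790,  810, 4000,    0],
    [805,  815, 3900, 4000],
])

# Crude foreground/background split: keep pixels with a valid reading
# closer than a chosen threshold.
THRESHOLD_MM = 2000
foreground = (depth_mm > 0) & (depth_mm < THRESHOLD_MM)

print(foreground.astype(int))
```

Real surface identification (plane fitting, segmentation) builds on top of exactly this kind of per-pixel depth data.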


You still need to blur objects that should not be in focus, or you're going to get mixed depth information.

E.g.: http://www.photographyblogger.net/wp-content/uploads/2011/06... Now picture an in-focus image behind the blurry background pens.
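The idea is simple to sketch if you have per-pixel depth: blur only the pixels far from the focal plane. This is a toy illustration of my own (a 3x3 box blur standing in for a real lens model, with made-up inputs), not how any shipping renderer does it:

```python
import numpy as np

def depth_of_field(image, depth, focal_depth, tolerance):
    """Blur pixels whose depth is far from the focal plane.

    image: 2D grayscale array; depth: per-pixel depth, same shape.
    A 3x3 box blur stands in for a real lens blur kernel.
    """
    # 3x3 box blur via edge-padded neighbor averaging.
    padded = np.pad(image.astype(float), 1, mode="edge")
    blurred = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    # Pixels off the focal plane get the blurred value.
    out_of_focus = np.abs(depth - focal_depth) > tolerance
    return np.where(out_of_focus, blurred, image.astype(float))

# Bright center pixel sits far behind the focal plane, so it blurs;
# the surrounding in-focus pixels are left untouched.
image = np.zeros((3, 3))
image[1, 1] = 9.0
depth = np.full((3, 3), 1.0)
depth[1, 1] = 5.0
result = depth_of_field(image, depth, focal_depth=1.0, tolerance=1.0)
```

A real system would scale the blur radius with distance from the focal plane instead of this binary in/out split, but the depth-dependent blending is the core of it.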



