Hacker News

If you pause at 9 seconds and look at the rig they're using to film the feed where the virtual objects are visible, it looks as though the same lenses that are on the goggles are being used to display the virtual objects in front of what the camera records. So I'd bet that what we're seeing in the video is exactly what the user sees, not something added afterward by other means.


That is amazing. So you have two (or more) people hooked into the same "scene", the kicker being that each sees it from a different angle. Wow, very nice.
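The idea above can be sketched in a few lines: one shared world state, with each player's device applying its own view transform. This is a toy illustration, not the actual system; the names and the simplified "view transform" (a plain translation) are assumptions for the sake of the example.

```python
import numpy as np

# One scene shared by all players, defined in world coordinates.
scene_points = np.array([
    [0.0, 0.0, 0.0],   # a shared virtual object at the table's center
    [1.0, 0.0, 0.0],   # another point of the same object
])

def view_from(eye):
    """Toy 'view transform': express the shared world points relative
    to one player's eye position (a real system would use a full
    camera matrix, but a translation shows the principle)."""
    return scene_points - np.asarray(eye, dtype=float)

# Two players stand on opposite sides of the same scene.
player_a = view_from([0.0, 0.0, -2.0])
player_b = view_from([0.0, 0.0,  2.0])

# Identical world state, but each device sees the object from its own
# side: the z offsets have opposite signs.
print(player_a[0])
print(player_b[0])
```

The point of the sketch is that only the viewpoint differs; the scene data itself is the same for everyone, which is what makes a shared multi-user view possible.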


This is something that I think will be a bigger deal: they can communicate with one another and potentially split the processing load. I'd love to play an RTS where my device processes my pieces, so I see their backs and my opponent sees their fronts.

Of course, the caveat there is that my command console would still need to be invisible or hidden from my opponent. Providing selective vision would make for interesting game possibilities.
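The "selective vision" idea amounts to filtering a shared scene per player before rendering, much like fog of war in an RTS. Here's a minimal sketch under assumed names (`SceneObject`, `visible_to` are illustrative, not any real AR API): every object records its owner and whether it is shared, and each headset renders only what its player is allowed to see.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    owner: str    # which player placed this object
    shared: bool  # False -> only the owner's headset renders it

# One scene replicated to every device.
SCENE = [
    SceneObject("tank", owner="alice", shared=True),
    SceneObject("command_console", owner="alice", shared=False),
    SceneObject("barracks", owner="bob", shared=True),
]

def visible_to(player: str) -> list[str]:
    """Objects this player's device should render: everything shared,
    plus the player's own private UI (e.g. their command console)."""
    return [o.name for o in SCENE if o.shared or o.owner == player]

print(visible_to("alice"))  # tank, command_console, barracks
print(visible_to("bob"))    # tank, barracks -- no console
```

Because the filter runs on each player's own device, the private objects never even need to be drawn on the opponent's side, which fits the comment's point about splitting the processing load per player.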



Nope. Those lenses are above the camera's main lens. Whatever those circles do, they are not generating the images for the video. Besides, only one would be needed for that; the camera has only one "eye".


I would like to see video recorded directly from the "eyeball's eye view", because I agree: the video looks generated. As you say, you can see areas where darker superimposed objects overlay lighter areas of reality in the visual field. Can this technology really do that? If so: wow. If not, still wow, just... less wow.



