I think this type of AR / holographic technology has many, many more potential real-world applications than VR. With VR, you're shutting yourself off from the outside world. Here, you're enhancing the outside world with technology. You still get to interact with others. Using HoloLens doesn't stop you from doing almost anything.
What I'm curious to find out is whether HoloLens will run into the same core problem as Glass. People are afraid of people wearing Glass. They're scared that they're being filmed, or worse. Unless HoloLens can avoid making you stand out - by looking like regular glasses, or even contact lenses - I'd guess that HoloLens will end up suffering the same fate as Glass.
It's hard to understand how AR / holographic technology could help people in their day-to-day life. There are a zillion potential uses, but all of them seem extremely complicated and hard to pull off.
Take the example of fixing your car. For example, performing your own oil change, or replacing your alternator. That seems like a perfect use case for holo, right? The goggles would tell you what needs to be done and what the next step is.
But that would involve so many technical challenges that it seems very difficult. You, as the creator of the FixYourCar holo app, would need to detect what type of car the user is looking at, what part of the car they're looking at, render an overlay with the correct orientation, and so on. And at the end of all of that, it's not entirely clear that your app is more helpful to them than if they'd just look up a list of steps for fixing their car on their mobile phone.
I guess what I'm asking is, what do you think holo's killer app would be?
I don't know about the consumer market, but I can think of numerous commercial applications.
Sony currently sell an HMD for surgical use, allowing for comfortable and convenient viewing of video from endoscopic cameras. A practical translucent HMD would be extremely valuable in surgical procedures guided by x-ray or ultrasonic imagery.
To give a trivial and easily-implementable example, I would have bought Google Glass without hesitation if it integrated with my electronics test equipment. Being able to view data from an oscilloscope or logic analyser without taking my eyes off the PCB would be a boon. PCBs are designed with fiducial markers as a necessary part of manufacture, and machine vision is already used extensively in many aspects of electronics manufacture and repair; it would be relatively straightforward to overlay all sorts of data that would be enormously useful to technicians and engineers.
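The geometry step behind that overlay idea can be roughed out without any vision library at all. Below is a minimal Python sketch of mapping known board coordinates to camera-image coordinates once two fiducials have been located; all the names and numbers are made up for illustration, and a real system would use a full homography (to handle perspective) rather than this 2D similarity transform.

```python
# Sketch of the overlay-placement geometry for the PCB idea above.
# Assumes some vision library has already located two fiducial markers
# in the camera image; every coordinate here is hypothetical.

def similarity_from_fiducials(board_pts, image_pts):
    """Derive the rotate/scale/translate mapping from board coordinates
    to image coordinates, given two corresponding fiducial positions.
    Points are (x, y) tuples; returns a mapping function."""
    b0, b1 = (complex(*p) for p in board_pts)
    i0, i1 = (complex(*p) for p in image_pts)
    scale_rot = (i1 - i0) / (b1 - b0)   # complex multiply = rotate + scale
    offset = i0 - scale_rot * b0

    def to_image(pt):
        z = scale_rot * complex(*pt) + offset
        return (z.real, z.imag)
    return to_image

# Board fiducials at (0, 0) and (100, 0) mm, seen in the image at
# (200, 300) and (400, 300) px -- i.e. 2 px/mm, no rotation.
to_image = similarity_from_fiducials([(0, 0), (100, 0)],
                                     [(200, 300), (400, 300)])
# Where should the scope reading for a test point at (25, 10) mm be drawn?
print(to_image((25, 10)))  # (250.0, 320.0)
```

The complex-number trick keeps the sketch short: multiplying by one complex number performs the rotation and scaling in a single step.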
Stereoscopic and volumetric displays are used extensively in petroleum and mining geophysics; this equipment is currently relatively niche due to extremely high cost, but could be used in a much greater range of geoinformation applications if costs fell.
Imagine a car mechanic remotely walking you through what needs to be done. This use case is showcased in one of the videos, and it's probably not as complicated as building an AI engine to help a user repair his car.
This is actually a relatively silly use case. A lot of the genuinely difficult things a mechanic can do for you involve either more strength than you have, or experience working with parts they can't even see.
Now, a mechanic using this to "see" things that are actually in control of a remote robot? Pretty cool. Showing you the thing that is right in front of you? Cute, but ultimately silly.
Yeah, I'm just not sure that I buy the idea that there's a big market for an expert coaching you through doing repairs via AR goggles.
How exactly does this work?
You still need to pay for an expert's time -- in fact, you probably need to pay more for it, because the expert is probably faster at doing the job than at explaining it to you while you do it. Also, the expert now needs to be someone with the additional skill of coaching someone through an operation.
Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?
I mean, maybe! Especially if you can locate the expert in some place where labor prices are much lower (so: India). But then you also need the person to buy the AR goggles. And how often does this use case come up? Is this like 3D printers where people try to sell me on the concept that I could pay hundreds or thousands of dollars for something that can make me things that cost less than $20 and which I need three of every year?
I agree this is highly unlikely to be a common use case. I might use it, because I try to do most things myself, and I could often use some expert advice. And I know the people who would help me, but it'd be inconvenient to have them come all the way out here. But overall, this is a one-in-a-thousand use case.
But:
> Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?
Almost all repairs on appliances, cars, houses, and so on can be done with tools you have lying around the house; they don't require anything more than a hammer, drill, screwdrivers, wrench sets, etc. The only thing you're typically not going to have on hand is replacement parts, which are usually not too difficult to source and which you would have to pay for anyway.
When people do this, it's not going to be an "expert", it's going to be something along the lines of "hey dad, look at this real quick."
> But then you also need the person to buy the AR goggles.
The Ars reporting showed someone using a Surface to view and annotate the HoloLens user's view, not another HoloLens. So the barrier is much lower; any Windows 10 PC should be good enough.
Have you ever worked on a car? Tons of specialized tools can be needed for some tasks. Do you have a set of triple square bits? A general-purpose puller? A bearing extraction tool? How about half-inch-drive Torx sockets? A 200 ft-lb torque wrench? When you work on a car for fun, you find that your collection of tools balloons just for all the things on the car that require one specific tool. If you think you can do everything on the car with just a simple socket set, you'll wind up stuck and having to buy a new tool for every task.
Yes, although granted when I said "almost all repairs" I did not have in mind major automotive work, but more along the lines of general maintenance and little things going wrong. Of course if you're doing something like rebuilding the transmission you'll need more tools than the average bear.
Fair enough, but my point was that for things where you'd benefit from a mechanic walking you through a task, you'll probably need some specialty tools to do the job.
None of those things require the augmented reality aspect of this: you don't need to place math and physics lessons into your local environment. They'd do just as well with VR, and indeed it doesn't sound to me like they'd do MUCH worse with just a plain old screen. What is it that you're imagining we couldn't do with a tablet that has swipe gestures to rotate the demo around all axes?
An electronics teaching kit might not work on a tablet (but would in VR), and note that any kind of really fluid manipulation of a virtual environment is going to involve a whole additional technology that gives precise locations of your hands (at least). The HoloLens allows a few simple gestures, not the ability to handle virtual objects in many degrees of freedom.
I'd approach this problem differently. Not from a service industry angle but from a product vendor angle. Imagine a world where AR-glasses are widespread.
I'd be pretty interested in buying the kitchen sink that comes with an AR repair guide or the furniture that comes with AR assembly instructions.
So I think the interesting market is in building the infrastructure/app that makes it easy for vendors to create the content and ship it as a (free) addon for their products.
You could draw on a much bigger pool than professionals. There are plenty of people who know a skill and don't professionally sell their services but could spare a few minutes occasionally to help someone with a problem. If you combine skill tracking with instant global availability of services, you have a lot of room for development.
I don't think 'strength' is the issue; better tools and a lot more experience are the core parts. I remember watching a mechanic change a light in my car: what took me ~15 minutes (I'm not kidding) took him ~30 seconds.
And that's like riding a bike: you cannot really tell someone how to do it.
But the hologram need not be a person! At least, it won't be once this use case makes sense. The hologram will be an AI hologram. Just like the light switch installation in TFA, it would be silly to have a human expert show you how to install a switch, or swap out a component in your car, once an AI expert will do.
If there's a market for this, why doesn't it already exist? You could take your smartphone under your car, and video chat with a mechanic anywhere in the world. The mechanic could even draw arrows or highlight areas on your video in real-time as it's looped back to your display.
That adblock thing would probably be great. Also, I often wish there was an easy way to compare all those specials in the store. Usually when you work out that 2-for-x offer, you see they gave a generous 10% discount that made you buy an entire extra thing for no real saving.
I noticed that the promo videos all show you doing things indoors, in more or less private settings. Your living room, your kitchen, your workspace. Contrasted with the initial Google Glass video (skydiving, jogging, meeting for lunch, etc.), I think it's safe to say Microsoft has learned from Google's mistakes.
I noticed this too. It avoids a whole class of problems interacting with other people, and seems like a good idea marketing-wise.
I was also thinking about battery life. If it's not designed for outside, then presumably you'll be near a charger, so you're less likely to run out of charge when you need it.
From the videos, HoloLens looks like an actually well-thought-out product, unlike Glass.
Many people would find tools like these incredibly useful at work, in the car or at home. But not in the street, at the beach or in restaurants whilst talking to other people. That's just socially awkward/insensitive.
I definitely want HoloLens to be real, too, but to avoid heartbreak I'll temper my hopes until more reports come in. Or, even better, a firsthand experience.
I've never used a Kinect; how does the promo video live up to reality? It looks almost identical to what I still assume Kinect is like, minus perhaps some of the highest-fidelity parts like the skateboarding and soccer, which I imagine have been attempted but turn out too clunky to be worthwhile. Am I wrong?
No. The problem is Google kept trying to force Glass as a consumer product for use in public.
And given that most normal people would know that it was socially awkward to use it in public only "glassholes" remained. This meant that buying/wearing Glass tarnished you with that label and associated you with that group.
This has become forgotten as the public perception of Glass became dominated by the whole "glassholes" phenomenon, but the Explorer Program was supposed to demonstrate that people could think up these kind of amazing life-altering apps that proved the utility of bothering to wear Glass.
They didn't. Years later, the reason to wear Glass remained "take pictures/videos hands free and shave 3 seconds off the time it takes you to check your text messages."
The MS product seems pretty clearly to be more broadly capable hardware, but I do still wonder if it will have actual applications.
The main application that sells it is likely to be less specialised than the cool demos, which are always a bit niche (modelling industrial design for motorbikes etc).
I wonder if its "killer app" might just be that a virtual big screen now takes up little physical space/weight.
Clear the big monitor off your desk, now your 11" laptop (or smaller) can effectively have a 40" screen, etc.
Unlike Oculus, you can still see the real world. Unlike Google Glass, it's a big display and not an awkward eye movement.
There's still the barriers of
- showing other people stuff
- social awkwardness of sitting with a keyboard seeming (to others) to be staring into empty space while working
- it might feel like wearing a hat
- what's the effective pixel density like?
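The screen-size claim and the pixel-density question can be roughed out with some simple geometry. A quick back-of-the-envelope sketch in Python; the 1920-pixel horizontal resolution is purely an assumption for illustration, since Microsoft hasn't published display specs:

```python
import math

# Rough geometry behind the "virtual 40-inch screen" idea. The panel
# resolution below is an illustrative assumption, not a HoloLens spec.

def screen_h_angle(diag_in, dist_in, aspect=(16, 9)):
    """Horizontal angle (degrees) subtended by a screen of the given
    diagonal at the given viewing distance (both in inches)."""
    w = diag_in * aspect[0] / math.hypot(*aspect)  # physical width
    return math.degrees(2 * math.atan(w / (2 * dist_in)))

# A physical 40" 16:9 TV viewed from 5 ft subtends about:
angle = screen_h_angle(40, 60)
print(round(angle, 1), "degrees")             # 32.4 degrees

# If a virtual screen fills that same angle with 1920 horizontal
# pixels, the effective density is:
print(round(1920 / angle, 1), "px/degree")    # 59.3 px/degree
```

For comparison, foveal vision resolves roughly 60 pixels per degree, so a 1080p virtual screen at TV-like angular size would be close to "retina" density; stretch the same pixels over a wider angle and the density drops proportionally.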
This is the clear winner for me. A portable, wireless keyboard + hololens = the biggest virtual desktop in the world that also doesn't shut you out from reality / coworkers / your desk / etc. Whether or not the more ambitious use-cases ever materialize, I'd be happy to trade in my macbook for this.
The focal point is a problem. It is advised to keep your screen at >65cm so your eye doesn't have to accommodate (coincidentally, the length of your arms). A big problem of Google Glass is that the focal point is a few cm away, and it is known to give headaches. The smaller the screen is, the more myopic you become.
It is absolutely possible to use a lens system to move the focal point to the distance, but hasn't been done yet, probably because you can't do it on 120x120 degrees.
I wouldn't work on a virtual screen for long hours until there's an answer to that. But once it is solved, I can see how we'll all become Holographic addicts ;)
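The accommodation numbers behind this are easy to sanity-check: the demand on the eye, in dioptres, is just the reciprocal of the focal distance in metres. A trivial sketch (the ~5 cm figure is the "few cm away" claim above, not a measured Glass spec):

```python
# Accommodation demand in dioptres = 1 / focal distance in metres.
# The ~5 cm near-eye figure below is illustrative, not a measured spec.

def dioptres(distance_m):
    """Accommodation demand for a focus target at the given distance."""
    return 1.0 / distance_m

for label, d in [("monitor at 65 cm", 0.65),
                 ("near-eye image at ~5 cm", 0.05),
                 ("image moved optically to 2 m", 2.0)]:
    print(f"{label}: {dioptres(d):.1f} D")
# monitor at 65 cm: 1.5 D
# near-eye image at ~5 cm: 20.0 D
# image moved optically to 2 m: 0.5 D
```

A 20 D demand is beyond what most adult eyes can sustain at all, which is why near-eye displays have to move the virtual image optically to a comfortable distance rather than leave it at the physical distance of the screen.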
Surely they must have sorted out the focus issue for HoloLens -- otherwise that Minecraft demo where the castle is on the table would have felt very trippy for the journalist (if you consider where the castle touches the table, you'd have a joint that is both several feet and a couple of centimetres from your eye).
I had an idea to do something similar once for a Uni dissertation, involving a rift mounted with two cameras to do a very hacky and cheap prototype version of what you've described.
My supervisor shot it down because "Google glass will do that" :(
I think it's hard to say whether nobody used it because there were no killer apps, or whether there were no killer apps because no one wanted to use it.
Ya, the screen wasn't that great and was awkward to look at. Seems like the photo taking is the best part of it -- for that you don't really need all the rest of the complexity. And too bad you also look like a douchebag wearing it, especially in SF where tech is stigmatized enough. Reminds me of a joke a comic told last night at an SF standup spot: "so I was on google last night... Do you guys know Google? It's this company making people homeless in SF"
"7. On that note, don't give one to Robert Scoble"
It's not that Google Glass intrinsically makes you look like a narcissistic douchebag, it's that the first people to show them off were narcissistic douchebags, posing with their smug self important "look at me I want your attention" expressions, who crystallized the image of the "glasshole" in everyone's minds.
What it reminds me of is Google Project Tango, which also has NASA's JPL listed as a partner[1]. Also worth mentioning is Johnny Lee, who worked on Microsoft Kinect and is now working at Google on Project Tango.
First thing, it crashes, a lot. We're talking 2-5 minutes active 3d scanning before the structure sensor driver bites the big one. Requires killing and restarting service along with all programs associated.
Also had hard freezes as well.
Its "google quality" in other words, crap. It might get better. It probably won't, given their track history regarding consumer devices in "google beta" (read as alpha).
As far as I can tell, a big difference is that Google's concept video looks very little like actually using it, whereas Microsoft's is clearly just a better version of their live demo. The live demo was amazing.
It's interesting how they go out of their way to describe this as NOT augmented reality when... that's exactly what it is. The only time the term appears on the product page is here:
"Microsoft HoloLens goes beyond augmented reality and virtual reality by enabling you to interact with three-dimensional holograms blended with your real world. "
I understand the marketing reasons for this, but contrast this with the fact that Oculus embraces the term "virtual reality" despite the baggage that comes with it and the fact that they can't trademark it. I guess AR never caught the public imagination like VR did.
AR is traditionally a 2D projection on 3D space. This is a 3D projection that you interact with. Sure, it's still AR on some level, but I think differentiating the product makes a lot of sense. My idea of AR is a boring HUD-like system that fits in with things like flying fighter jets. This holographic projection is different, and notably so.
MS could find the middle ground between lush 3D VR-like environments and the real world. I find things like the Oculus and other HMDs to be terribly claustrophobic and dizzying. Not to mention really asocial. I don't want to mount a tissue-box size thing to my face that removes the real world. I'd prefer having the real world still here with the digital world tied to it. There just seems to be something wrong with giving software my entire field of view. I don't want to stare into the same Unity3D generated environments. I want to augment my real world life, not replace it.
>MS could find the middle ground between lush 3D VR-like environments and the real world.
Back around 2000 a Slashdot article reported an attempt to create a human-sized hamster ball constructed of a semi-opaque projection-friendly surface. The ball would sit on some sort of roller mount. Five projection screens surrounding the ball would project a virtual environment over the "port", "starboard", "fore", "aft", and "north" surfaces. A human occupant would enter the ball, and, based on his movement detected through the roller base, be presented with a continually updating holodeck-like virtual environment.
Perhaps something like this is still in development somewhere.
When we say virtual reality, there is this implicit expectation (at least in my mind) that it's an always-on experience. The video here did show some of that too, but I think scoping it to specific tasks, at least initially, would be very powerful. So you don't wear these bulky, dorky glasses/headsets all day long, but only when you need to do specific things. And then you return to your normal life.
In the early days of computing, usage was like that: very task-oriented. When you were done, you went back to your non-digital life. It's only when the technology and public perception change that you start carrying PCs in your pocket all day long like we do today.
Right - but let's say they get less dorky and more comfortable - and really do have the visual quality we want - objects look solid and real.
That seems much more useful than a VR that you have to unplug from the real world and immerse yourself in - at least for collaboration with others on real-world things... like the example with the motorbike design.
Makes me think of something I read about from CES.
Basically a helmet of sorts where you dropped a smartphone into a slot at the top, and some semi-transparent lenses in front of wearer's eyes then made the phone screen appear to float in front of said wearer.
http://www.microsoft.com/microsoft-hololens/en-us