By now we’re assuming you are all familiar with Google’s “Project Glass”, an ambitious augmented reality project for which they revealed a promotional video last week. [Will Powell] saw the promo vid and was so inspired that he attempted to rig up a demo of Project Glass for himself at home.
While it might seem like a daunting project to take on, [Will] does a lot of work with Kinect-based augmented reality, so his Vuzix/HD webcam/Dragon NaturallySpeaking mashup wasn’t a huge step beyond what he does at work. As you can see in the video below, the interface he implemented looks very much like the one Google showed off in their demo, responding to his voice commands in a similar fashion.
He says that the video was recorded in “real time”, though there are plenty of people who debate that claim. We’re guessing that he recorded the video stream fed into the Vuzix glasses rather than recording what was being shown in the glasses, which would make the most sense.
We’d hate to think that the video was faked, mostly because we would love to see Google encounter some healthy competition, but you can decide for yourself.
[youtube=http://www.youtube.com/watch?feature=player_embedded&v=33wOKBMA2QA&w=470]
I doubt it’s portable yet. It would be nice to have a Linux-based device like this, since Windows CE won’t run anything useful.
More like-
New appointment: “Go shoop this video, then sync it up so that there is no delay for the voice recognition software.”
Project Glass is often referred to as augmented reality, but I wonder whether that description is really accurate. I would have thought that to count as augmented reality, the display would have to interact with your view of reality. As it is, Project Glass simply overlays information on your view, but the position and content of that information doesn’t seem to relate to what you’re looking at.
I suspect a better description than augmented reality would be a wearable HUD (head-up display).
Google doesn’t call it AR themselves.
Technically the name could be used, as it is an augmented view of reality.
However, the term has come to mean something a bit more specific, requiring some sort of real-world alignment.
In many ways, though, that’s more a software challenge than a hardware one.
While Google has only shown non-AR applications, if the glasses have an integrated camera I don’t see what’s stopping it from running WordLens, which is definitely real AR (even if buggy for now).
I think the take-away message in this whole thread is that we need better image recognition technology. We need cameras and software to function more like the human eye and brain. Imagine if the glasses in this video recognized all the text on the pages, along with the sources of all the advertisements. You could see who is trying to sell you something, and why. Informed consumers could see why certain companies choose certain tactics.
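Reading the text off the page is the tractable half of that. A minimal sketch of it, assuming OpenCV plus the Tesseract engine via pytesseract – both assumptions, not part of the rig in the video:

    # Sketch: OCR a magazine page from a head-mounted camera frame.
    # OpenCV and Tesseract (via pytesseract) are assumptions here,
    # not what the rig in the video uses.
    import cv2
    import pytesseract

    cap = cv2.VideoCapture(0)            # head-mounted webcam
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Otsu binarization helps the OCR engine cope with uneven page lighting.
        _, page = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        print(pytesseract.image_to_string(page))   # recognized page text
    cap.release()

Matching the recognized ad copy back to a sponsor is the genuinely hard part – that’s an information retrieval problem, not a vision one.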
I like this idea a lot. Going further, we start having it evaluate everything you are exposed to audio-visually for semantic content: you’re watching a commercial, and the display reminds you that the sponsor hasn’t actually made any substantive claims about the product at all, etc. Or it suddenly alerts you to a friend or relative’s face in a shot of the crowd at a concert or sporting event. Sort of a buddy-spotting second awareness.
@Scott
Computer vision technologies as they exist are pretty sophisticated. The bottleneck is processing power. Do a “Google Play” search for “OpenCV” and play with some of the apps available. Compare the frame rate on an LG Optimus V versus an HTC Evo 3D. The frame rate of something as basic as a Haar-cascade-based face detection app will tell you a lot about the horsepower requirements.
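That benchmark is only a few lines with OpenCV’s Python bindings – a rough sketch (the bundled frontal-face cascade path is an assumption; on a phone you’d use the Android port instead):

    # Rough frame-rate benchmark for Haar-cascade face detection,
    # along the lines of the phone comparison suggested above.
    import time
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    frames, start = 0, time.time()
    while time.time() - start < 10.0:      # measure for ten seconds
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        frames += 1

    print(f"{frames / (time.time() - start):.1f} fps")
    cap.release()

The detector spends most of its time sliding the cascade over an image pyramid, which is why the frame rate tracks raw CPU horsepower so closely.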
Moore’s law and the singularity still aren’t to the point where you can put a cerebral cortex in a pair of sunglasses and expect performance equivalent to the human brain.
IMO we won’t see “human brain performance” in our lifetime. Also, I think you’re using Moore’s law incorrectly here. Moore’s law does not describe processing power; it just describes the shrinking of transistors over time, which is finite and will start slowing soon.
Re: the singularity. In fact, the memristor breakthrough means that the singularity just accelerated by about 25 years. So expect near-human-level AI by about 2023 and full uploading by 2040.
You never see how they work, the contrast on the popups is so high, and there are no visible pixels or any shake despite the movement of the camera… it’s clearly fake.
Given the hardware and engines he specifies, it’s feasible. Might actually be easier to make than it would be to fake it.
Might be. But if his rig has a camera, it could just be recording that stream and overlaying the menu on top of it to give us an idea of what it looks like. The menu probably doesn’t look that good to him, as the Vuzix glasses don’t have very good resolution, if I recall correctly.
This isn’t trying to be a see-through display like Project Glass. He is overlaying the HUD on top of a webcam video and playing that back on the glasses’ display. We are seeing a recording of the feed (i.e. he didn’t even need the glasses).
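If that’s the setup, the whole pipeline is short to sketch: composite the HUD into each camera frame, display the result (the glasses simply mirror it), and record the very same frames. A minimal OpenCV version – the overlay text and placement are invented for illustration:

    # Sketch of a video see-through HUD: draw the overlay into the camera
    # frame, show it (the window stands in for the glasses' display), and
    # record the identical feed.
    import cv2

    cap = cv2.VideoCapture(0)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("feed.avi", cv2.VideoWriter_fourcc(*"XVID"),
                          30.0, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The HUD is injected before display, so it sits rock-steady in
        # screen space no matter how the camera moves -- exactly what
        # commenters noticed about the icons.
        cv2.putText(frame, "10:23  72F Sunny", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        out.write(frame)               # the recording == what the wearer sees
        cv2.imshow("glasses", frame)
        if cv2.waitKey(1) == 27:       # Esc to quit
            break

    cap.release()
    out.release()

This also explains the perfect contrast: the icons never pass through the glasses’ optics before reaching the recording.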
I hate to be that guy, but this isn’t impossible and looks real. But ironically, the Google one is just a glossy “what if” simulation.
I think it’s too good to be true. I think the Google glasses were too good to be true.
*IF* this is real, it was recorded by recording the displayed video. I believe these glasses are not see-through displays, but rather displays in front of your eyes. It is just showing video from a camera, overlaid with the information. Then the computer just records the displayed video.
Bio, you are incorrect. This is a recording of the stream, so the icons are getting injected into the video feed before it is displayed to the end user. There would be no movement of the icons, as they are drawn in exactly the same screen space. Same with the contrast. The fact that he doesn’t explain it probably means it’s fake, but not for the other reasons you suggested.
It is 3D glasses + webcam + microphone.
It is not see-through; it clones the image to the glasses.
I used to play with this 10 years ago, using i-glasses with an IR camera to walk in the dark.
I call fake. The video might be from the glasses, but the overlay is clearly just edited in. IMHO, though, Google did the same thing, and neither video proves that anything like the Google glasses actually works as well as Google wants us to believe.
If he had a web interface he’d have something more interesting to read than that magazine.
I see no reason why this should be faked – it isn’t groundbreaking. I don’t think those Vuzix glasses are proper see-through HUD technology, are they? Aren’t they just high-end VR glasses? I.e., a small display and optics to give a virtual screen, onto which he’s projecting a composite of the video feed from the cameras and some Adobe graphics controlled via Dragon.
I suspect he’s using the PC to do the work too, so not portable.
I’ve looked into this for my own (failed!) project – the optics are the truly challenging part. Getting a data feed wirelessly from a smartphone should be easy enough.
Brother have cracked the optics side with their retinal projection system, which I suspect is the route Google will take.
Vuzix’s STAR glasses have optically transparent displays – but they cost $4999 and I don’t think they’re out yet.
Nicely done mate.
I’ll take two.
This is either fake or he just stubbed the Google+ submission part because the G+ API is currently read-only. It would be impossible for him to post from that device.
The tech is possible, but the video is a fake. A Bluetooth smartphone and Bluetooth glasses with a camera, mic, earbuds, and an OLED screen are all the basics you need to make this, and all of it is available today.
What the guy had on in the video are HUD video glasses that cover the eyes – no seeing past them. Also, on the whole magazine thing: a live camera should also bring up links to web pages on the topic, and another option could be to have the magazine read aloud via text-to-speech.
The smartphones out today have more than enough power to handle the computing needs.
One day they will be all you need – cellphone, computer, TV, MP3 player, and camera could all be replaced with a pair of glasses.
What we don’t see is how good the image inside the glasses is. The main problem with augmented reality is the user experience. Previous incarnations were fuzzy and not completely aligned with the vision of the user.
I hate to be “that guy” but…
that picture totally looks like there’s a wang in it…
First thing I saw as well…
Yeah lol
and the girl staring at it with her legs open…
http://www.youtube.com/watch?v=Ma8NbpCvSwo
The best spoof ever.
I don’t get it. Brother Inc. already presented a prototype of an HMD in 2009, with a projector projecting an image onto your retina.
They even contracted with NEC to bring the device to market.
http://www.brother.com/en/news/2010/airscouter/
This is far more awesome than Google’s goggles.
Regards,
Ixbidie
Why so?
Brother’s work is also great – but at best it looks the same, and at worst it doesn’t look optically transparent like Google’s prototypes are.
Not sure that makes it more awesome – more “last gen’s version”.
(Ditto for Vuzix’s military ones.)
Anyone here seen Dennou Coil?
That’s set in 202X, which seems pessimistic at this rate 🙂
It’s fake.
Could be real – have a look at some of the other stuff he’s involved with – http://www.willpowell.co.uk/blog/?p=194
Clearly fake – he is obviously not reading that magazine if he’s looking at the weather, is he?
Reading a newspaper while wearing a video see-through HMD is like taking screenshots with a camera. Nevertheless, such a device should be possible to make in a day or two. There may be problems with low-light situations, latency, and limited field of view.
“Field of view” is the breaking point here. Wear this in the streets, and you are very likely to be run over by the next truck you can’t see because of digital tunnel vision. Not good.
All he did was copy what was already done by Steve Mann.
Sorry, but “project Glass” is based on Steve Mann’s work at the University of Toronto. He has been doing this stuff for well over 10 years.
Everyone else is simply copying his work and would get a hell of a leg up if they went and read all his papers and looked at his designs before starting their own copy.
One person? 10 years?
There have been dozens, if not hundreds, of researchers and people prototyping in this field for the past 50 years or more.
Lots of people have contributed to this field; there’s no one person being “copied”.
I’m glad to see others referencing the work of Steve Mann, who really is a seminal researcher in this area of computer-mediated vision. Google’s project is nothing new, and there are already people out there with years of experience using similar devices in their everyday lives.
The interesting question is what happens when, after years of depending on this technology, it is removed…
http://www.nytimes.com/2002/03/14/technology/at-airport-gate-a-cyborg-unplugged.html
I suggest you read up on Steve Mann and Thad Starner before you throw that out there.
So, who’s first to try and glue an LCD display onto his glasses?
The actual Android device could well be hidden in your pocket, but the cable connecting the two needs to be thin and flexible…
All in all, that shouldn’t be too difficult…
ME.
I had a HUD and a working system in 1998. I followed the designs of the two who invented it more than a decade ago…
http://en.wikipedia.org/wiki/Thad_Starner is one of the fathers of “Project Glass”.
http://en.wikipedia.org/wiki/Steve_Mann is the other one.
Project Glass is a direct ripoff of their work. Both of them have been walking cyborgs longer than most of you have been out of diapers.
In fact, I had a Handykey chording one-handed keyboard, a 486 pocket PC, and a camcorder viewfinder monitor on my head with a lens-and-mirror arrangement that looked like a sci-fi device.
Mine actually worked, and was in use for 3 years, unlike this guy’s fake demo.
Fake to get attention and possibly job offers.
Frankly, with off-the-shelf hardware, what this guy’s done isn’t difficult. He seems to already have these Vuzix glasses (which from a cursory glance seem like pretty uninteresting LCD display goggles with cameras mounted in front), and the rest uses Adobe AIR, which I’m pretty darn sure is a more or less full-horsepower platform that runs on a PC.
What I’m saying is that if you had these glasses lying around, had some experience with a language that could interface with them easily, and had some third-party voice recognition software, it’s probably a day or two’s worth of hacking to make software like what he showed work. Google Glass is more of a cool idea – honestly, there isn’t any single part of Google Glass which alone is really innovative. HUDs like Glass have been used by the military since forever, and are already in a number of consumer products (high-end cars). It’s a matter of putting that together with /good/ speech recognition, a /good/ UI, some smart way to retrieve information (’course Google already knows how to do that), and well-designed hardware (for Google, this means contracting to Samsung or someone who’s done plenty of this sort of thing before) for a well-polished product you would want to use.
Of course, Google has yet to demonstrate working hardware, so I’m just as skeptical about how well their product will work in practice until I see it for myself.
Back to this thing: I think it’s more or less a given that it’s only a demo, and it hardly has all the features Google Glass is supposed to have (most of the functionality, software-wise, is probably faked or limited to whatever he showed).
> and had some third-party voice recognition
> software, it’s probably a day or two’s worth of
> hacking to make software like what he showed work.
You can make the voice recognition software recognize a command AND fire up an action BEFORE you finish a sentence? 😮
That’s why it’s fake. It was made to promote himself, get some views, and maybe score a client/job/contract.
Just like the Arduino-controlled cellphone and Wiimote-operated flying-wing videos – FAKE.
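For what it’s worth, firing before the end of a sentence is exactly what streaming recognizers’ partial hypotheses are for – whether the setup in the video really did it is another question. A minimal sketch using the Vosk engine (an assumption for illustration; the video reportedly used Dragon NaturallySpeaking):

    # Sketch: act on a partial hypothesis before the utterance finishes.
    # Vosk and PyAudio are assumptions here, not what the video used.
    import json
    import pyaudio
    from vosk import Model, KaldiRecognizer

    model = Model("model")                 # path to a downloaded Vosk model
    rec = KaldiRecognizer(model, 16000)

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                     input=True, frames_per_buffer=4000)

    while True:
        data = stream.read(4000)
        rec.AcceptWaveform(data)
        partial = json.loads(rec.PartialResult()).get("partial", "")
        if "weather" in partial:           # fire before the sentence ends
            print("-> showing weather overlay")
            break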
PTAM http://www.robots.ox.ac.uk/~gk/PTAM/
Nice, but this video is not real-time, and it’s fake. Why? You see him put the glasses on, but where is the camera that films what he sees through the glasses? We’re asked to believe we’re seeing what he sees in real time, but that can’t be, because there is no camera at his eyes. I think the Google-style menu was added in with editing.
Remember the glasses-free 3D, where you put two things on your head and you could see 3D? That turned out to be fake too.
Here is the video debunked…
http://www.youtube.com/watch?v=b88Y83j7F_g
My big problem with this is that I didn’t think they made wearable monitors with enough resolution to read a magazine. Obviously you won’t see that in the video itself, since the video’s own resolution is limited. But this is why a transparent screen is a good idea – nobody could be expected to function with their vision coming purely through a monitor.
I’ve had an idea: use a monochrome LCD to switch areas of the display between transparent and black. On top of that, put an OLED or a side-lit LCD or something else that gives off light. That way, computer graphics can go on the blacked-out bits, while normal vision still comes through the bits left transparent.
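That layered idea reduces to simple image math: the monochrome LCD gets a binary mask that is opaque wherever graphics will be drawn, and the emissive layer gets the graphics themselves. A quick numpy sketch, with a made-up status bar standing in for real HUD graphics:

    # Sketch of the two-layer display idea: a monochrome LCD masks out the
    # regions behind graphics, while an emissive OLED layer draws them.
    import numpy as np

    h, w = 480, 640
    overlay = np.zeros((h, w, 3), dtype=np.uint8)   # the HUD graphics
    overlay[20:60, 20:220] = (255, 255, 255)        # hypothetical status bar

    # LCD layer: 0 = blacked out (behind graphics), 255 = transparent.
    lcd_mask = np.where(overlay.any(axis=2), 0, 255).astype(np.uint8)

    # OLED layer: lit only where graphics exist; everywhere else stays dark,
    # letting the real world show through the transparent LCD regions.
    oled = overlay

    print("opaque pixels:", int((lcd_mask == 0).sum()))

The mask is what would give you true black: without it, emissive graphics are always translucent against whatever is behind them.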