At this year’s Consumer Electronics Show, Microsoft Research unveiled one of its latest projects, called IllumiRoom. Like most of the company’s ongoing technological marvels of the future, this one is quite fascinating, so perk up. Imagine you’re playing a video game (let’s say Halo 4) and all of a sudden the on-screen action extends beyond the confines of your TV set. In an instant the mysterious jungles of planet Requiem surround you and you feel truly immersed in the captivating game crafted with precision by 343 Industries. Microsoft’s IllumiRoom attempts “to blur the lines between on-screen content and the environment we live in, allowing us to combine our virtual and physical worlds.”
So how does it all work? A peek behind the curtain reveals two devices: a Kinect for Windows camera paired with a projector. “Our system uses the appearance and the geometry of the room (captured by Kinect’s sensors) to adapt projected visuals in real-time without any need to custom pre-process the graphics.” Sounds simple now, doesn’t it? Unfortunately, like most Microsoft Research projects, IllumiRoom is only a proof of concept, but with engineers working hard to make Kinect even more powerful and projectors keeping pace with high-definition resolutions, the technology is there for this prototype to enter the marketplace. Let’s place IllumiRoom in the pile labeled “not if, but when.”
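To give a feel for what “adapting projected visuals to the room” might involve, here’s a minimal, purely illustrative sketch (not Microsoft’s actual pipeline): using a per-pixel depth map like the one Kinect captures to compensate the projector’s inverse-square brightness falloff, so content landing on far-away surfaces isn’t dimmer than content on near ones. The function name, parameters, and reference depth are all assumptions for the example.

```python
import numpy as np

def compensate_projection(frame, depth, ref_depth=2.0, max_gain=4.0):
    """Hypothetical sketch: brighten projected pixels that land on surfaces
    farther than ref_depth meters, compensating inverse-square falloff.
    `frame` is an HxW image with values in [0, 1]; `depth` is HxW meters."""
    gain = np.clip((depth / ref_depth) ** 2, 1.0 / max_gain, max_gain)
    return np.clip(frame * gain, 0.0, 1.0)
```

A real system would also warp the image geometrically to match the room’s shape; this only shows the photometric half of the idea.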
Watch IllumiRoom perform its magic in the video embedded above; Microsoft assures us the action was “captured live and is not the result of any special effects added in post-production.”
If you aren’t so content with present tech and are longing for the kind you see in the movies, your time has arrived. Today Google revealed a super-secret and highly advanced technology it has been working on for quite some time: a pair of augmented reality glasses, under an initiative called Project Glass. Here’s how the search giant made the announcement:
We think technology should work for you—to be there when you need it and get out of your way when you don’t.
A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.
In essence, this wearable device might be intended to replace the bulky brick in your pocket: your cell phone. It does everything your phone can do but in a more natural (read: human) manner. Reminiscent of a heads-up display you’d find in a first-person shooter video game, the glasses feature a small lens that projects text, images, video, and sound in the space in front of your eyes. The software inside the device alerts users to notifications like text messages and email and lets them respond with simple verbal cues and head gestures. Google Maps is built into the unit, naturally, so planning a route and following it becomes second nature when the precise directions are displayed right in front of you. Though exact specifications have yet to be released, it is confirmed that the smart spectacles feature a built-in camera for snapping photos, shooting video, and initiating video chat. Imagine you’re walking down the street and you see something that catches your eye; say “take a photo of this” and the camera will snap. Want to share the image with your friends? Say “share it to my Circles” and it’ll be instantly uploaded to your Google+ account. The possibilities are endless, really. And the potential is grand.
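The verbal-cue behavior described above boils down to mapping trigger phrases to device actions. Here’s a toy sketch of that idea; the phrases come from the announcement, but the command table, action names, and matching logic are entirely hypothetical, not Google’s implementation.

```python
# Hypothetical phrase-to-action table, inspired by the cues described above.
COMMANDS = {
    "take a photo": "camera.capture",
    "share it to my circles": "googleplus.upload",
    "start video chat": "video_chat.start",
}

def dispatch(utterance):
    """Return the action whose trigger phrase appears in the (already
    speech-recognized) utterance, or None if nothing matches."""
    text = utterance.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action
    return None
```

A production assistant would of course use real speech recognition and fuzzier intent matching; this just shows the shape of the dispatch step.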
You must be thinking, “This is great and all, but does Google actually have plans to release it to the general public?” Yes, they do. Now go pick up your brains that are scattered on the wall and continue reading… Project Glass is currently in beta (er, alpha really). Google is testing the prototype device in the field, sending company employees out into the wild wearing these nerdy bad boys to see how they handle real-world conditions. Besides making techies around the globe foam at the mouth, Google’s intent with today’s reveal is this: “We’re sharing this information now because we want to start a conversation and learn from your valuable input. Please follow along as we share some of our ideas and stories. We’d love to hear yours, too. What would you like to see from Project Glass?” In other words, they want your input! The conversation is taking place at the Project Glass Google+ page.
Now that you’re informed, here’s what you can do. Take a look at the glasses in the gallery below, bearing in mind that these are strictly prototypes and the final product will almost certainly look different. Then jump after the break to watch a two-and-a-half-minute video showing off Google’s vision of how augmented reality glasses could make us more efficient beings. Rumors are flying that the wearable device in its final form will come complete with 4G data capability for always-on Internet functionality, with a price tag anywhere between $250 and $600 when it launches later this year. But forget the speculation for now; feast on the video below and shiver in anticipation for more information to leak out surrounding Google’s latest concoction.
Located on the Redmond campus, the Microsoft Home first opened in 1994 and has since given private parties a glimpse into the future. Three to fifteen years into the future, to be somewhat more precise. Though the Home is closed to the public, Channel 9’s Steve Clayton was recently invited inside with a camera crew, and his all-too-brief tour is available for everyone to watch in the video embedded above. Microsoft’s Flora Goldthwaite leads the tour, and she admits that the technology inside the house is at the prototype stage; though Microsoft envisions these technologies becoming possible in the near future, they may not end up being used the way the Home demonstrates them. Anyhow, all of the quick demos are intriguing to say the least. From conductive charging plates that display UIs to next-gen media centers and kids’ rooms that project incoming text messages and Twitter updates, Microsoft’s home of the future certainly tickles the imagination.
Microsoft Research is back, and this time they are bringing a new technology to the table (hehe) that’s going to eliminate any desire you might have had to purchase an exorbitantly priced Microsoft Surface.
LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing.
In essence, LightSpace rips the multiple 3D depth cameras and projectors out of their secret cove beneath a table and places them up in the ceiling. In effect, this means that all Surface user interfaces and features can be displayed on virtually any flat surface; the actual Surface table is no longer required. You’re going to want to watch the video demonstration above; some of the LightSpace applications are quite extraordinary. In one example the Microsoft researcher “picks up” an object located on a table projection and transfers it in his hand to a second wall display. It’s drag-and-drop IRL. Now remember, this is a Microsoft Research project, so there’s no telling how long it’s going to cook in the labs before it makes its way to the general public (if ever).
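The “pick up an object here, drop it there” trick rests on knowing which calibrated surface a tracked hand is near. Here’s a minimal geometric sketch of that one decision, with made-up surface planes and a made-up snap distance; it is not LightSpace’s actual code, just the point-to-plane test such a system could use.

```python
import numpy as np

# Hypothetical calibrated surfaces: (name, a point on the plane, unit normal).
SURFACES = [
    ("table", np.array([0.0, 0.9, 1.5]), np.array([0.0, 1.0, 0.0])),
    ("wall",  np.array([0.0, 1.5, 3.0]), np.array([0.0, 0.0, -1.0])),
]

def nearest_surface(hand, snap_dist=0.3):
    """Return the surface whose plane the depth-tracked hand point is
    closest to, if within snap_dist meters; otherwise return None,
    meaning the carried object stays 'in hand'."""
    best, best_d = None, snap_dist
    for name, point, normal in SURFACES:
        d = abs(np.dot(hand - point, normal))  # perpendicular distance to plane
        if d < best_d:
            best, best_d = name, d
    return best
```

When the hand crosses the snap distance of a surface, the system would hand the projected object off to that surface’s display region.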
YouTuber x313xkillax somehow managed to get his hands on a prototype model of the HP Slate. As you can see in the video embedded above, the Slate has a myriad of ports and switches around the edges (including one that reveals an on-screen keyboard) and it boots fairly quickly into a full-fledged copy of Windows 7. IE8 seems to run Flash content without a hiccup (advantage HP over iPad). Since Windows 7 isn’t all that optimized for touchscreens, I am anticipating a future HP tablet running the multitouch-friendly webOS. HP bought Palm, so they can do that, you know.
Want more deets on the Slate? A second video surfaced that previews the tablet in greater detail. The back of the product box lists the following specs: 1.86GHz Intel Atom Z540 with GMA500 graphics and a Broadcom Crystal HD Enhanced Video accelerator, 8.9-inch WSVGA screen, 2GB DDR2 RAM, about 60GB of storage, Windows 7 Home Premium, 802.11b/g/n, Bluetooth, SD card reader, and a 2-cell 30WHr lithium-ion polymer battery. There are also back- and front-facing cameras. Apparently the Slate will ship with a dock that comes complete with a kickstand, two USB ports, HDMI out, and a headphone jack.
Update: Both videos have been “removed by the user” for obvious reasons. However, I was able to find another copy of the original preview video, and it’s embedded above; the more extensive preview is nowhere to be found, unfortunately.
Nicholas Negroponte, founder of the One Laptop Per Child initiative, says that the next prototype laptop to come out of the factory will be the XO-3 model discussed here. Back in December we were told that the XO-3 would be ready for 2012 with a $75 price tag. Apparently the folks at OLPC are ramping up production techniques. According to Negroponte, the dream tablet for developing countries will be put together in prototype form by December 2010 and will be formally revealed at the next Consumer Electronics Show in January 2011. Though the prototype model will feature a glass screen, the objective is for it to eventually be “100 percent plastic, unbreakable, and almost extruded out of a machine.” Listen to Negroponte divulge more details about the XO-3 in the brief but informative video above.
HP recently showed off its “wall of touch” concept to The Wall Street Journal. HP labels it a “large digital sign” that allows users to interact with it. Interestingly, HP gives the user two options for said interaction: you can touch it as you normally would with, say, a Microsoft Surface table, or you can simply point to specific locations on the wall. With the aid of integrated cameras and a magnetic strip, the wall can detect when a user approaches and intentionally interacts with it using hand gestures. For now HP is selling this technology to companies that plan on using it in large public spaces. In fact, Continental Airlines has one of the first walls installed at its Houston airport. HP does leave the door open and hints that it may turn into a “mainstream product” if there’s enough interest and demand for it. It would cost anywhere from “a couple thousand dollars” to $100,000, depending upon the built-in technologies (HD video cameras, etc.). Be sure to check out a demo of HP’s “wall of touch” in the video above.
Designed by Iñigo Manglano-Ovalle.
Kinda looks like the floating mountains from Avatar, dontcha think? Peek after the break for another look.
The awesome dudes at Gizmodo picked up this story earlier this week, and boy is it a fascinating one. What was a big secret for Microsoft has now been revealed to the public: a booklet with Microsoft-designed hardware and software.
Gizmodo has the details:
Until recently, it was a skunkworks project deep inside Microsoft, only known to the few engineers and executives working on it.
Courier is a real device, and we’ve heard that it’s in the “late prototype” stage of development. It’s not a tablet, it’s a booklet. The dual 7-inch (or so) screens are multitouch, and designed for writing, flicking and drawing with a stylus, in addition to fingers. They’re connected by a hinge that holds a single iPhone-esque home button. Statuses, like wireless signal and battery life, are displayed along the rim of one of the screens. On the back cover is a camera, and it might charge through an inductive pad, like the Palm Touchstone charging dock for Pre.
So, the MS Courier is in fact a real device, it packs two 7-inch multitouch displays and an integrated camera, and has a UI design that looks sleek, organized, and most importantly, simple. For more on the UI, check out the video below for a quick tour of the Courier user interface, still in development. One question: What’s with the stylus? That’s so 1990s!
Following the recent news of the upcoming pressure-sensitive keyboard is this new way to input information on a computer screen or some other device. The video above shows off a technique in which scratching a surface with your fingers generates sound waves that are picked up by a receiver. The receiver sends the signal to the device it is connected to and voilà! For example, you can “write” a letter on a flat surface, and the sound waves from your scratching will output the desired letter on screen. Or, you can create your own personal gestures to answer a phone call, enable speakerphone, and close out your email application all at the same time. Cool stuff. Keep it coming, innovators.
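At its core, scratch input means finding bursts of acoustic energy in the signal the receiver picks up. Here’s a minimal sketch of that first step: counting scratch events by thresholding short-time energy. The function, frame size, and threshold are assumptions for illustration; recognizing which letter or gesture was drawn would take far more than this.

```python
import numpy as np

def count_scratches(signal, frame=64, threshold=0.1):
    """Count scratch events in a sampled audio signal by thresholding
    short-time energy: each contiguous run of high-energy frames is
    treated as one scratch."""
    n = len(signal) // frame
    energy = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    active = energy > threshold
    if active.size == 0:
        return 0
    # Rising edges mark the start of each scratch burst.
    rising = np.sum(active[1:] & ~active[:-1])
    return int(rising + (1 if active[0] else 0))
```

A full gesture recognizer would then compare the timing and spectral shape of those bursts against stored templates.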
The above video shows off a pressure-sensitive computer keyboard from Microsoft. Basically, the harder you press a key or keys, the more varied the output on the computer screen. For example, say you are working on a paper in a Word document. If you hold a key down with more pressure than usual, you can output a capital letter without the need for a shift key. Microsoft demonstrated changing font size by the amount of force you exert on a key. Also shown was accelerated backspacing, where a user can delete whole words or sentences at a time rather than single letters, depending on the amount of pressure applied to the key(s). There’s also a gaming demo; the harder you press down on the key, the faster your character runs in-game. It is exciting to know that this type of technology is being experimented with and that new methods of computer input are being tested.
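The demos above all follow one pattern: a normalized pressure reading selects between tiers of behavior. Here’s a toy sketch of two of them, shift-free capitals and accelerated backspacing; the thresholds and function names are invented for the example and aren’t Microsoft’s.

```python
def key_output(char, pressure):
    """Map a letter key press to output based on pressure in [0, 1]:
    a firm press yields the capital letter without needing Shift."""
    return char.upper() if pressure > 0.6 else char.lower()

def backspace_scope(pressure):
    """Harder presses on backspace delete progressively larger units,
    mirroring the 'accelerated backspacing' demo."""
    if pressure > 0.8:
        return "sentence"
    if pressure > 0.5:
        return "word"
    return "character"
```

The gaming demo works the same way, except the pressure value feeds a continuous parameter (run speed) instead of discrete tiers.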
According to CNET: “Microsoft is also holding a contest for student developers to coincide with the UIST conference. Contestants get a sample keyboard and a month to come up with an entry. $2,000 prizes go to programs deemed the most useful, the best implementation, and the most innovative.” Let the creativity flow! (Check out a second video of this implementation after the break.)