Virtual reality is soooo last decade (the aughts)! Now, in 2010, we see virtual and physical experiences merge for the next phase of time-and-space-bending expression: Immersive Reality.
And the Ralph Lauren 4D shows are an early indicator of where it’s all headed. This video offers a taste, but for a more complete representation, you’ve got to go to the site: http://4d.ralphlauren.com/
The videos document major shows held last week ‘on’ Ralph Lauren’s flagship stores in NYC and London. They featured 3D holographic projections that brought the buildings to life, laser light shows, and a 4th dimension: scent. It’s a mash-up of ad, art installation, and fashion show.
Charlie Kemp was a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) when he really began to dig into the idea of so-called “wearables”. Not quite robots in and of themselves, wearables were more like a really smart backpack thingy, with sensors attached at various points of the wearer’s body (see image at left).
The wearable he developed, named “Duo,” had the ability to learn from sensory input – achieved via human guidance. The basic idea is this: a wearable rides along with a human guide, taking in that individual’s actions and learning from them. A camera mounted on the wearer’s head provides visual input, while sensors on the body capture certain movements, positions and actions.
It would be kind of like guiding another person’s hands through the motions of a certain task, whereby that person would then learn how to complete the task.
From Charlie’s website:
“Systems that better perceive and understand everyday human activity will be more capable of assisting people, coordinating with people, learning from people, and emulating human activity.”
If you’re looking for a more in-depth look at wearables, I suggest reading Charlie’s article “Wearables and Robots: A Shared View” from the publication Pervasive Computing. The image to the left is from that article.
Charlie’s work with Duo went on to inform the visual system for Domo, a helper robot that also has the ability to learn from human example – in addition to bearing a striking resemblance to Johnny 5, from “Short Circuit.”
Let’s Talk About Assisted Mental Fidelity
What really grabbed me about Charlie’s work was the idea of a wearable system, maybe even a compact robot, that would learn from you, your actions, and your environment. Imagine such a robot and then imagine that, after learning enough from you, it would be able to ‘learn’ on its own.
Suppose you got directions from someone at a gas station and are only 99 percent sure that you were supposed to take a left at that church on the corner. Human memory is fallible, and I have the wrong left turns to prove it, but a robotic memory … it could be almost like an external hard drive for your mind. Instead of simply storing the information it collects, it feeds it back to you, as needed, in real time.
Remembering the Things You Forget, Learning from Your Mistakes
I’m also terrible at remembering things like phone numbers, names, to-do lists, etc. Oh, if only I had a helper to remember those things for me, and to keep me on track if I start to drift away from the things I need to be doing.
My robot helper would have learned my tendencies by now and, thus, be able to tell when I am drifting off track. This thought chain doesn’t follow directly from Charlie Kemp’s work above and, to avoid the misapplication of purpose to someone else’s work, I’ll just note that this is my own pondering.
Still, if a robot can learn from visual, physical input – why not mental input? Last week, we talked about mind-controlled robots being developed in Japan. Given time, it seems entirely possible that mental connections could move from simple commands to actual experience and thought absorption.
The Mental Yak Bak, or, Nothing but the Truth
Plus, such a system could help ensure that you stay honest. That 10-inch fish you caught last weekend was really more like six inches, and the giant grizzly bear that stole it from you was actually a petite cub. And that party last night? I don’t remember a thing, but my robotic companion can tell me everything.
Then again, the best objective memory may be one that comes with an “Erase” function.
Remember Rosie the Robot from the Jetsons? Well, make room, ’cause Rosie and her relatives are planning to move in – for good.
From folding your laundry to washing your dishes, these newfangled ’bots do all the chores you’ve probably always hated. For years, scientists and engineers have been building simple robots to take over those menial tasks, but only recently have they perfected more intricate technology. South Korean scientists at the Korea Institute of Science and Technology have designed Mahru-Z, a humanoid robot with 3D vision that recognizes chores that need to be tackled, and they plan to introduce Mahru-Z to the wider public within the next ten years. Needless to say, the introduction of such technology into our lives would be a real weight off our shoulders.
But robots can play a much more integral role in our social lives than most of us would expect. While robots might start out by relieving us from such domestic drudgery as mopping the floor and vacuuming those nasty carpets, they may also be our friends. From retirees to only children, robotic companions could play a large part in keeping people company, whether it’s by playing chess or taking a stroll around the neighborhood. After all, robots won’t argue, won’t lie to you, won’t divorce you, won’t fall asleep mid-conversation, and won’t storm out angrily. And while such complex AI technologies will take a considerable amount of time to develop, we’re certainly getting closer.
In the meantime, let’s stick with sipping robot-made lemonade while robots make our beds and dust our rooms. So many new and diverse robotic technologies are being perfected, giving us time to stop and smell the roses.
SENSEable City’s Flyfire project, a collaboration with the Aerospace Robotics and Embedded Systems Laboratory (ARES Lab) at MIT, is about to make it possible for any empty space to become a fully interactive display environment. It does this by way of hundreds (maybe thousands) of tiny, “self-organized micro helicopters” – each with an LED light.
Think of these mini-copters as pixels in the sky. From here on, let’s refer to them as the “pixel swarm.” A remote controller designates the desired shape from the ground, or wherever, and the pixel swarm assembles into it.
The pixel swarm is self-organizing, which means that they’re smart and can adapt to directed changes in real time. As the team behind Flyfire points out, this allows viewers to experience an animated display – with the pixel swarm moving fluidly from one shape to another.
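How does a swarm decide which copter flies to which pixel? The SENSEable City and ARES teams haven’t published their control code in the material above, so here’s a purely hypothetical toy sketch of one piece of the problem: assigning each drone to a target point, greedily, by nearest distance. (A real swarm would work in 3D, avoid collisions, and likely solve the assignment optimally, e.g. with the Hungarian algorithm.)

```python
import math

def assign_drones(drones, targets):
    """Greedily assign each target pixel to the nearest unassigned drone.

    drones, targets: lists of (x, y) tuples. Returns a dict mapping
    drone index -> target point. Hypothetical illustration only.
    """
    unassigned = set(range(len(drones)))
    plan = {}
    for tx, ty in targets:
        # Pick the closest drone that hasn't been given a pixel yet.
        best = min(unassigned,
                   key=lambda i: math.hypot(drones[i][0] - tx,
                                            drones[i][1] - ty))
        plan[best] = (tx, ty)
        unassigned.remove(best)
    return plan

# Four hovering "pixels" told to re-form into a horizontal line:
drones = [(0, 0), (10, 0), (0, 10), (10, 10)]
line = [(0, 5), (3, 5), (7, 5), (10, 5)]
plan = assign_drones(drones, line)
```

Re-running the assignment as targets change over time is what would let the swarm flow from one animated shape to the next.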
To better understand what such a demonstration might look like, watch this brief video on Flyfire from the SENSEable City Lab.
Could projects like this spell the end of a fixed interface?
It’s certainly feasible that such technology could be developed to the point where it was possible to watch almost anything using the pixel swarm. Sure, it’s a long way off; until then, it’s probably more realistic to imagine such technology being used at events to sex up the user experience.
The potential for advertising is immense – for example, “mobile billboards” or other sponsored messages. Imagine being at a football game and watching an advertisement for an electric, turbo-charged sports car that zooms through the air, much like the Golden Snitch of the Quidditch game played in the Harry Potter stories. Perhaps the ‘Golden Snitch’-like pixel swarm would be a part of the half-time show, or programmed to hover over the seat of someone who just won the car…. the possibilities seem endless.
Pattie Maes’ Sixth Sense, featuring Pranav Mistry
Last year at TED, Pattie Maes premiered a new technology developed by Pranav Mistry in her MIT Fluid Interfaces Group. The physical hardware consisted of little more than a camera and projector, worn around the user’s neck. Functionally, it was a little bit Minority Report and a little bit RoboCop.
Say you’re looking for a book on CSS at Barnes & Noble. Having done this myself, I can safely say that there are about a dozen and, from what I can tell, each looks as good as the next. How do you decide which book is the best one for your needs?
If you have a smartphone, you can just look it up. If you don’t, you can ask one of the bookstore employees and hope they have a design background.
What if you could just pick up the book and have its Amazon rating projected right onto the cover? This would be much more efficient, no?
That’s just a start.
The goal of the Sixth Sense project is to allow any user to access relevant information wherever he or she happens to be. This is similar to augmented reality, save for the fact that it would be accomplished without a cellphone and, therefore, be much more seamless in regards to information gathering.
After consumer devices such as these are developed, our next step is surely embedded discovery tools, as we discussed in our post on augmented reality contact lenses last year.
It’s all terribly exciting, a little terrifying, and very promising. Stay tuned!
Innovations in alternative energy, always exciting and unpredictable, are certain bets for the future. But which technology is the biggest gamble – and which pays off the most? The latest and most promising, the Bloom Box, was unveiled this past Sunday. This “little power plant-in-a-box,” which can literally sit in your basement, could provide independent, clean energy for homes and small businesses alike. Within five to ten years, Bloom Energy hopes to make its Box available to individual residences for under $3,000 – quite affordable given the price of a furnace ($3,000+) or of installing a central heating system (up to $10,000!). Great, right?
But let’s bring this to scale. The Bloom Box won’t be available for at least five years. What do we do until then? In the energy lottery, certainly there are solar, biofuel, natural gas and wind resources, among others. We use everything from algae to manure to moon rocks – but instead of producing new technologies and new sources of energy, why don’t we use what’s right under our noses? Are we overlooking the most obvious source of energy – movement?
The great thing about capturing free energy is that it really is everywhere: from crashing (or lapping!) ocean waves to a busy thoroughfare, there are plenty of sources of kinetic movement. The only questions we face are which technologies we need to develop to harness kinetic force, and how to scale them out for wide – and thus more efficient – use. Even in the most unexpected places, possibilities are waiting to be tapped.
What caught my eye is a new keyboard on the market. Researchers have found a way to return the kinetic energy generated while typing to local utility providers, using nanotechnology that connects the keyboard to any standard 110-volt outlet. At $30 – and considering that Americans, especially young ones, spend ever more time at the computer – keyboards like the Dynamo are both cost-efficient and accessible.
Another variation on this theme is a keyboard that recharges the computer’s battery as you type. The goal is to one day develop a keyboard fully powered by the speedy clicks of a laptop’s keys. We could reduce external energy consumption while prolonging battery life – a pretty perfect situation.
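To get a feel for the numbers, here’s a rough back-of-envelope sketch. Every figure below is an assumption for illustration – none comes from the Dynamo’s makers or from any measurement – but it hints at why scale, not cleverness, is the hard part of kinetic harvesting.

```python
# Back-of-envelope estimate of harvestable typing energy.
# All figures are rough assumptions, not measurements.

force_n = 0.6          # assumed force to depress a key (newtons)
travel_m = 0.003       # assumed key travel (3 mm)
energy_per_key_j = force_n * travel_m          # joules per keystroke

keys_per_second = 5    # roughly 60 words per minute
power_w = energy_per_key_j * keys_per_second   # mechanical power while typing

hours = 2
harvested_j = power_w * 3600 * hours           # two hours of steady typing

laptop_battery_j = 50 * 3600                   # a ~50 watt-hour battery

print(f"Energy per keystroke: {energy_per_key_j * 1000:.1f} mJ")
print(f"Two hours of typing:  {harvested_j:.0f} J "
      f"({harvested_j / laptop_battery_j * 100:.3f}% of a 50 Wh battery)")
```

Under these assumptions, two hours of typing yields well under a tenth of a percent of a laptop battery – which is exactly why the piece below turns to bigger sources like revolving doors and braking cars.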
Where else might we be able to capture free energy? In big, high-pedestrian-traffic cities like New York and Chicago, design company Fluxxlab wants to harvest the movement created each time a revolving door spins to power that same building. Likewise, the movement generated by city walkers as they rush to their next destination could be harnessed to power traffic lights, street lamps, and other electrical needs. Private company M2E Power has designed a microgenerator for troops that replaces the 10-30 pounds of batteries a soldier typically carries: clipped onto the wearer, two hours of walking or shaking powers mobile devices for an hour and a half – an incredible prospect.
Capturing kinetic energy opens up innumerable opportunities, yet we also face challenges in cost-efficiency and scale. Engineers at Free Energy Technologies have developed plates, installed in streets, that capture the energy of decelerating cars – but while this might generate a great deal of electricity, it may not be enough to offset the costs of retrofitting old roads. Likewise, the Revolution Door only makes sense in big cities with heavy foot traffic. We need to make strategic, systematic use of new technologies, and imagine more cost-efficient ways of implementing them throughout our lives.
While all technological innovations push us toward a more progressive future, developing them takes time, funding, and determination. We certainly hope that the Bloom Box will bloom into our own (green) power plants, but at the same time, let’s keep in mind that the safest bet for the future is a portfolio of mixed energy-capturing and conservation measures. We need to rely on multiple sources of energy for maximal efficiency. And right now, it looks like the kinetic energy from flying fingertips and cooling brakes is our greatest untapped natural resource.
I remember waiting for an episode of “Family Matters” (long ago) that was going to be in 3D. We’re talking paper glasses with one red lens, one blue lens. You know what I mean — the type of 3D where, if you watch it without the glasses, it looks like you slipped and fell through the cracks in a table of RGB variants.
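That red-and-blue trick, for the curious, is called an anaglyph: the left eye’s view is printed into the red channel and the right eye’s into blue/green, so each lens filters out the other eye’s image. Here’s a hypothetical toy sketch of the idea, using plain 2-D lists of grayscale values in place of real photos:

```python
def make_anaglyph(left, right):
    """Combine two grayscale 'images' (2-D lists of 0-255 values)
    into a red/cyan anaglyph: the left eye's view goes into the red
    channel, the right eye's into green and blue. Viewed through
    red/blue paper glasses, each eye sees only its own image.
    """
    rows = len(left)
    cols = len(left[0])
    return [[(left[r][c], right[r][c], right[r][c])   # (R, G, B)
             for c in range(cols)]
            for r in range(rows)]

# A 1x2 toy "image" pair: the bright pixel is shifted between the
# two views, mimicking the parallax between your eyes.
left = [[200, 0]]
right = [[0, 200]]
pixels = make_anaglyph(left, right)
```

Without the glasses you see both offset copies at once – hence the smeared red/blue ghosting the show above was famous for.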
The good news is, the 3D TV of the future will be smoother, more efficient and come with cooler glasses. Better yet, it could be here by 2010!
Then again, maybe it shouldn’t be so surprising. Hollywood has been steadily releasing an increasing number of films formatted for 3D. Think Coraline, Up, the U2 concert film and James Cameron’s upcoming Avatar. While Beowulf may not have made a strong case for 3D in the home, one has to imagine that film companies are hungry for the DVD market these films could bring.
Whether or not consumers will be able to afford 3D-enabled televisions just after upgrading to HD and plasma TVs is another question entirely.
My first impression of Neave.com was that I had mistakenly stumbled across an online playground for stoners – the lights, the sounds, the painfully overdone British banter, complete with bizarre references to elephants named Dave. Huh?
Paul Neave, the London-based interactive designer behind Neave.com, explains his Flash-design work this way: “I love trying to dissolve the boundaries between code and design and exploring ways of making technology seem less scary and geeky, but more fun and human.” Undoubtedly, the site is amusing, but when it comes to dissolving boundaries – I have to disagree. Neave’s Flash graphics are fun, but in his attempts to make them more “human”, it’s hard to miss the irony that his use of technology is actually driving us away from the very definition of humanity: interaction with each other and the outside world.
Take, for example, any of the following: Imagination, Bounce, Dandelion, or Flash Earth. Immediately upon opening the Imagination screen, I am met with memories of playing Ribbon Dancer in my driveway. Bounce reminds me of the ball pits I would bury myself in at Chuck-E-Cheese; blowing the dandelion seeds in Dandelion is a practice I still indulge in; and as for Flash Earth or Planetarium – well, walk outside your front door and you can behold the real deal.
It is interesting to think that the catalyst behind Neave’s playful Flash design is the nostalgia of our own childhood. It works for most of us now, but what about the next generation of kids? With the introduction of computers coming earlier and earlier in life, their first exposure to these games might actually be Neave.com’s version instead of the real thing. Will the Flash games still engage them if they have no previous, more tangible memories to build upon for their understanding of fun? How do “real play” and “virtual play” overlap? Must one precede the other in order to be effective, or can we be engaged in a continual exchange?
Neave.com toys with this concept in a clever way. Paul Neave takes delight in his ability to use his flash-tech savvy to have fun at work, but I think we’re better off taking him up on his parting advice:
“Turn off the computer and go outside. Go hang with your friends. Make lots of new friends. Count your blessings. Smile like an idiot. Don’t think too much. Don’t worry about the future. Don’t take life too seriously. Don’t pay attention to a word I say.” – Done!