A look at what – and who – is pushing the future in new directions

Technology

Fuel cells placed in estuaries could help power cities like Boston and New York!

THE GIST:
  • Mixing freshwater with saltwater produces electricity. 
  • A process known as reverse electrodialysis could channel that electricity into the power grid. 
  • Right now the electricity would be relatively expensive to produce, but that could change.

Capturing electrical current from naturally flowing water seems pretty cool to me, especially when it has so little environmental impact. Another example of how solutions to global issues (of all kinds) are going local. Looks very promising!

From news.discovery.com:

As long as rivers of freshwater flow into a salty sea, rivers of electricity could flow from estuaries into the power grid.

Scientists from the Netherlands have found a way to harvest electricity from estuaries using devices similar to a fuel cell and they say that a full-scale production plant could provide significant amounts of power to nearby cities.

These small stacks have no moving parts to endanger wildlife or break, and could add electricity to the grid as fast as freshwater is dumped into saltwater.
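For a rough sense of scale (my own back-of-envelope, not the researchers’ numbers), the maximum work you can extract from mixing freshwater into seawater is set by seawater’s osmotic pressure, which the van ’t Hoff relation estimates nicely. The salt concentration and river flow below are round assumptions:

```python
# Back-of-envelope: theoretical energy from mixing river water into seawater.
# Assumptions: seawater ~0.6 mol/L NaCl (two ions per formula unit), ~25 C,
# and a hypothetical river discharging 1,000 cubic meters per second.

R = 8.314   # gas constant, J/(mol*K)
T = 298     # temperature, K
C = 600     # NaCl concentration of seawater, mol/m^3 (assumed)
i = 2       # van 't Hoff factor for NaCl (Na+ and Cl-)

osmotic_pressure = i * C * R * T        # Pa; also the max work per m^3 of freshwater
energy_per_m3 = osmotic_pressure        # J/m^3 (pressure x 1 m^3)

river_flow = 1000                       # m^3/s, assumed
max_power_mw = energy_per_m3 * river_flow / 1e6

print(f"~{osmotic_pressure / 1e5:.0f} bar osmotic pressure")
print(f"~{energy_per_m3 / 3.6e6:.2f} kWh per cubic meter of freshwater")
print(f"~{max_power_mw:.0f} MW theoretical ceiling for a {river_flow} m^3/s river")
```

A real reverse-electrodialysis stack captures only a fraction of that ceiling, but it shows why a big estuary is such a tempting power source.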


Theater upstages fashion, but who’s complaining? The Official Ralph Lauren 4D Experience

Virtual reality is soooo last decade (aught-ish)! Now, in 2010, we see virtual and physical experiences merge for the next phase of time-and-space-bending expressions: Immersive Reality.

And the Ralph Lauren 4D shows are an early indicator of where it’s all headed. This video offers a taste, but for a more complete representation, you’ve got to go to the site: http://4d.ralphlauren.com/

The videos document major shows held last week ‘on’ Ralph Lauren’s flagship stores in NYC and London. They featured 3D holographic projections that brought the buildings to life, laser light shows, and a 4th dimension: scent. It’s a mash-up of ad, art installation, and fashion show.


Robot Love

Scientists from the University of Hertfordshire recently unveiled Nao — the first robot allegedly capable of both developing and expressing emotions. This sensitive robot is the result of Feelix Growing (Feel, Interact, Express), a project aimed at socially situating robots in our society. According to Dr. Lola Cañamero, the computer scientist who is running the project, “Emotions foster adaptation to environment, so robots would be better at learning things.”

Nao is the emotional equivalent of a one-year-old child, showing emotion through non-verbal cues like posture and gestures, rather than more advanced facial or verbal expression. Non-verbal cues from actual human beings, body language and distance in particular, are also what guide Nao’s reactions and feelings. The robot learns from human interactions, can remember faces and is programmed to form close bonds with people who treat it (him? Pronoun struggle.) with kindness. This basic understanding of human body language, along with a programmed set of basic rules about what’s “good” and “bad” for it, allows Nao to indicate how it’s feeling.

This originally made me think that Nao is just another basic robot, BUT I found out that while the actions used to display each emotion are preprogrammed, Nao decides by itself which feeling to display, and when. (Robot agency!)

Hunching its shoulders when it’s sad and raising its arms for a hug when it’s happy, the robot really does emulate the physical expressions of a very young child. If frightened, Nao will only stop cowering in fear when soothed by gentle strokes on the head. Along with happiness, sadness and fright, Nao can also express anger, guilt, excitement and pride.

Beyond just being a novelty, Nao has several projected practical uses. The FEELIX team members in charge of creating Nao’s emotions believe that robots are absolutely going to act as human companions in the near future, and that responses from the robots will make it easier for humans to interact with them.

“If people can behave naturally around their robot companions, robots will be better-accepted as they become more common in our lives.”

Beyond serving as an ambassador for the ideal everyday companions of the future, Nao figures in one of the FEELIX project’s immediate aims: providing 24-hour companionship for young children and the elderly in hospitals, and support for their parents, carers, doctors and nurses. The robot would be capable of helping out with therapeutic aspects of their treatment, as well as providing companionship and helping their emotional well-being.

I don’t think we’re anywhere close to the point where robots will replace actual human attention, but they could be a great helper when no one else is available. The public might not be ready for robot companions with a mind of their own, but the technology is here, it’s consistently improving, and it can’t be ignored.

(All seriousness aside, I think my favorite thing about Nao is that he happens to be an awesome dancer, bringing a whole new meaning to ‘The Robot’.)


Futile Purism in the Oncoming Era of 3-D Movies?

“Action is more generally understood than words. Like the Chinese symbolism it will mean different things according to its scenic connotation. Listen to a description of some unfamiliar object—an African wart hog, for example. Then look at a picture of the animal and see how surprised you are.” Charlie Chaplin to Time Magazine, 1931

It started with a trickle, but with 60 planned over the next two years, 3-D movies certainly seem here to stay. Companies like Sony and Panasonic are betting on them sticking around after the theater, too, and both companies already have 3-D television sets on the market.

Early 3-D films like “Beowulf” may have proved that such technology could get people to the theater, but James Cameron’s “Avatar” proved that it could scale. It seems almost a given at this point that any animated feature worth its salt will be released in 3-D. While that’s not yet the norm for live-action films, it too seems only a matter of time.

Is there room for 2-D purists in an industry bent on throwing the kitchen sink at the medium (not to mention the audience)?

Last week, The New York Times covered the “2-D Résistance” in Hollywood. Directors like J.J. Abrams, Christopher Nolan and others have shown themselves unwilling to make the transition. Abrams, for one, has been very vocal in his opposition.

Are they risking obscurity in the face of progress? As Cecily has pointed out at many a Push meeting, we are inherently scared of change. We enjoy comfort, knowing what’s next and how to get from point A to point B.

When point A suddenly leads to point J, we grasp for what we know. Is it okay to just pick and choose your adaptations?

Maybe, but perhaps at the risk of irrelevance. Technology is communication, in many ways, and a lack of fluency can leave even the smartest people behind. To effectively interact, you have to be willing to do so on the same level as everyone else.

In 1931, Charlie Chaplin told Time Magazine that he had no desire to do “talkies.” The words just weren’t as expressive. In 1940, he released “The Great Dictator,” his first picture to feature speaking roles.

As for me, I don’t really care for 3-D movies, and I’d rather have a solid book than an iPad.

Still, I’m reconsidering the firmness of my stance. Who’s with me?


The Great Oceanic Oil Rush


“They paved paradise and put up a parking lot….don’t it always seem to go that you don’t know what you got ’till it’s gone…”

Forty years after Joni Mitchell wrote these lyrics, the crude swill lapping against shorelines of the Gulf of Mexico, the Atlantic Ocean, and now up the banks of the Mississippi (Tar Balls Reported on Mississippi Mainland), is paving a broader swath of paradise than we could have ever imagined.

From sea to shining sea, waters glisten with poisonous plumes of oil. Whatever its name — Black tide. Tar balls. Sludge. Mud Monster — the greasy, grimy goo threatens to kill off ecosystems and economies “across the land.” It’s like some kind of creepy B-movie horror flick.  Unfortunately, this horror is all too real.

Harvesting Oil for Profit

Might there be a silver lining in this profusely polluting oil slick? Perhaps, for wherever there’s need, there’s opportunity.

The most urgent need is to remove the oil. No, not dispersing the oil (a technique that compounds the matter by adding highly toxic chemicals to the mix), but collecting it. Tar balls and skimmed slicks could yield still-usable crude for further refining. Workable solutions have the potential to stimulate economic opportunities for coastline communities: oil-capture technologies would need to be manufactured at scale and would provide fishing fleets with a different kind of harvest. Sadly, the recent failure of the Deepwater Horizon won’t be the last time oil contaminates our oceans, and we (companies, government, communities) should invest in oil remediation and recycling strategies.

Scientists at BP and other oil companies, inventors, government agencies, and entrepreneurs are working on it. Here’s where it stands:

Current methods for purifying oil that has been mixed with water and other debris are fairly stringent as to what is salvageable and what is not. Depending on how long the oil has been out in the elements, how thoroughly it is mixed with water and other debris, and what exactly that debris is composed of, the oil may or may not be usable as raw fuel. Of all the contaminated crude, oil harvested from the sea has the best chance of separation, since it will likely be composed of only oil and salt water.

Current separation methods aside, with so much thought being poured into the question of oil separation, it’s possible that a solution for more advanced filtering and reclamation is just around the corner. Perhaps something lying in BP’s rejection box, cast aside because it seemed a poor prospect … or maybe something that hasn’t been fully ideated yet, but whose realization is closer than we think.

BP doesn’t own the oil spilling into the ocean, simply the means to get it out – for now. What’s stopping a few of these innovators, inventors and businessmen from creating small-scale operations, collecting spilled oil, purifying or adapting it, and then selling it back to BP or the highest bidder? Are there laws against this? Perhaps, but things change. Plus, laws never stopped some people anyway.

With tragedy comes the opportunity to innovate. With innovation comes opportunity to capitalize. Are we on the cusp of an oil rush in the Gulf? Maybe not quite, but who can say?

(image via Reuters / Colin Hackley)


Wolfram Alpha Refocuses – Now for the Common People.

It’s been about a year since Wolfram|Alpha (W|A) was unleashed on the Web and, after some initial confusion over just what to make of it, it seems to be hitting its stride. Or, as many have pointed out, it seems to be hitting a new audience. While the initial version was a bit too academic for folks like me, the new W|A is sleeker, easier to use and more exciting.

This, then, is an overview of that new Wolfram|Alpha.

Like any search engine, W|A wants to help you find what you’re looking for, though you won’t find any advertisements for used car dealerships in your area or new flavors of cat food.

Rather, W|A serves as more of a smart encyclopedia – more akin to Wikipedia than Google. It collects objective data, such as rainfall patterns in Brazil, the number of calories in a banana and the population of your hometown. Not only can it return information on rainfall patterns in places like La Crosse, WI and Gavaudun, Lot-et-Garonne, France – it can compare these blocks of information.

You are also given options for expansion on your search. For example, would you like rainfall data for one week or two?

As stated on the site, the goal with Wolfram|Alpha is to:

make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything.

Not only will W|A be able to use such “models, methods and algorithms” to return increasingly complex data sets, it will also be able to break these very functions down – hopefully in such a way that anyone can both understand the logic behind the data they’ve requested and benefit from it.

Obviously, the research implications are huge, but even for someone who just wants to count calories or find the weekend’s weather, W|A provides a valuable database.

Another key part of W|A’s unique value proposition is the way in which it presents information. For example, a search for “10 feet” yields:

  • the measurement’s equivalent in yards, inches and meters
  • its equivalent in average human height (about 1.8 stacked humans)
  • its electromagnetic frequency range (VHF, which is primarily used for broadcasting)

Essentially, no matter what you were thinking when you typed “10 feet” into W|A, the search engine wants to make sure that you find what you were looking for.
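Just for fun, here’s the same “10 feet” arithmetic done by hand, a reminder of what “computable” means in practice. The 1.7 m average human height is my own assumption:

```python
# Reproducing the "10 feet" pods with plain arithmetic.
feet = 10
meters = feet * 0.3048             # the foot is defined as exactly 0.3048 m
yards = feet / 3
inches = feet * 12

avg_human_height = 1.7             # meters, assumed
stacked_humans = meters / avg_human_height

c = 3.0e8                          # speed of light, m/s
frequency_mhz = c / meters / 1e6   # treat 10 feet as a wavelength

print(f"{feet} ft = {yards:.2f} yd = {inches} in = {meters:.3f} m")
print(f"about {stacked_humans:.1f} stacked humans")
print(f"as a wavelength: ~{frequency_mhz:.0f} MHz, squarely in the VHF band (30-300 MHz)")
```

Sure enough, the numbers land right where W|A says they should.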

By being smart in interpreting what we are asking for, and generous in what it returns, W|A was and remains a unique step in the field of semantic search. Rather than returning a list of potential sites that might answer your question, W|A draws on concrete data to give you the answers you are searching for – without leaving the search results page.

Wolfram|Alpha is also making a push for mass acceptance, with applications available on the iPhone and iPad, as well as dashboard widgets and browser plug-ins.


Human Computer Viruses – Hacking into Our Hardware

BBC reporter Rory Cellan-Jones: “So we’ve got a future where we could all become some sort of great big, walking computer, infected with a virus?”

Dr. Mark Gasson: “That’s very possible.”


Dr. Mark Gasson, senior research fellow at the University of Reading, made history last week as the first human to be infected with a computer virus. It was less a case of the sniffles than it was a case of a corrupted computer chip, specifically the one embedded in his hand.

RFID (Radio Frequency Identification) chips have been embedded in animals for a while now, and are used to identify and track them. For the most part, these chips only feed outward, like a beacon that can be read but cannot actively engage with its environment.

The chip embedded in Gasson’s hand could be used to access secure areas, engage with his mobile phone by identifying him as the proper user … and transfer computer viruses.

In the University of Reading’s press release “Could humans be infected by computer viruses?”, Gasson is quoted:

“Our research shows that implantable technology has developed to the point where implants are capable of communicating, storing and manipulating data,” he said. “They are essentially mini computers. This means that, like mainstream computers, they can be infected by viruses and the technology will need to keep pace with this so that implants, including medical devices, can be safely used in the future.”

In Michael Crichton’s book The Terminal Man, a cerebral pacemaker of sorts is implanted into an unstable patient’s head to combat seizures that tend to induce violent behavior. Obviously, the project goes awry and the patient escapes after learning how to manipulate the electrical stimulation, using it for his own ends.

What Dr. Gasson is talking about is so far behind Crichton’s vision that it barely warrants the reference, save for the notion of fusing man and machine and confronting what problems may come from this matrimony.

The scariest part? After being infected with a computer virus, Dr. Gasson was able to infect other computers. Ostensibly, assuming other humans were walking around with similar chips, they too could be infected. Hence Cellan-Jones’ question regarding a walking computer with an infection.

Consider the implications for medical devices, say, a cerebral pacemaker that delivers shocks in the event of a seizure. If such a device, or “mini computer,” could be infected by an outside source, the consequences could be dire. It’s not hard to imagine such bio-computer viruses being developed for some sort of high-stakes trolling.

Such chips could also, potentially, be used to increase memory. Imagine a human hard drive upgrade, an inserted computer chip connected to your brain and sensory inputs, recording information and serving as a mental reserve. We discussed this a bit in our post on wearable computers, but this certainly goes above and beyond even that scientific frontier.

If these mental hard drives were hacked, could we lose memories and be force-fed new ones? Could someone be forced to play the role of a high-tech Manchurian Candidate?


I Feel, Therefore I Am: Spinoza Meets the World Wide Web

For those who think the big, bad Internet doesn’t foster enough emotional connections between human beings, a new invention called the iFeel_IM might be exactly what they’re looking for. Developed by a technology professor in Japan, the iFeel_IM is an (only somewhat creepy) virtual hugging vest designed to inject a subtle effect of human touch into online chatting.

By retrieving emotional messages in online text, the device triggers a matching sensation in the vest that the individual is wearing. For example, if I told someone online – who was wearing the iFeel_IM – that I love them, the device would give them a gentle hug. The iFeel_IM can simulate a heartbeat, generate warmth, and produce the tickling sensation of butterflies in the stomach or a spine-tingling chill of fear, among other sensations. Imagine trying to ask someone out on a date online (whoo, 21st century!) and having your nervousness magnified tenfold by a vest giving you literal chills of fear and butterflies in your stomach.
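Tsetserukou hasn’t published his emotion parser, as far as I know, but the basic pipeline (scan the text, guess the emotion, fire the matching actuator) can be sketched with a crude keyword lookup. The keywords and actuator names below are purely illustrative, nothing like the real system:

```python
# Toy text-to-haptics mapper, loosely inspired by the iFeel_IM concept.
# The real device reportedly uses far more sophisticated emotion recognition;
# this just illustrates the message -> emotion -> sensation flow.

EMOTION_KEYWORDS = {
    "joy":     ["love", "happy", "yay", ":)"],
    "fear":    ["scared", "afraid", "nervous"],
    "sadness": ["sad", "miss you", ":("],
    "anger":   ["angry", "furious", "hate"],
}

HAPTIC_ACTIONS = {
    "joy":     "gentle hug + warmth",
    "fear":    "spine-tingling chill",
    "sadness": "slow heartbeat",
    "anger":   "fast heartbeat",
}

def detect_emotion(message):
    """Return the first emotion whose keywords appear in the message."""
    text = message.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return emotion
    return "neutral"

def respond(message):
    return HAPTIC_ACTIONS.get(detect_emotion(message), "no actuation")

print(respond("I love you"))      # -> gentle hug + warmth
print(respond("I'm so nervous"))  # -> spine-tingling chill
```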

The setup resembles the straps of a backpack and contains sensors, motors and speakers. Like I said earlier, the device retrieves emotions from written text and responds accordingly. Professor Dzmitry Tsetserukou, the inventor, says the iFeel_IM can distinguish joy, fear, anger and sadness with 90 percent accuracy, and can parse nine emotions — including shame, guilt, disgust, interest and surprise — nearly four out of five times. It was tested in Second Life, the online 3D virtual world, where the inventor’s predicted accuracy rates held true.

Presented at the first Augmented Human International Conference in France, Tsetserukou compared the system to the film Avatar, and especially the film Surrogates, set in a future when humans stay at home plugged into a cocoon while their healthier, more handsome doppelgangers venture forth into the real world.

“In a few years, this could be a mobile system integrated into a suit or jacket,” he said. “It’s not that far away.”

While I love that the technology behind this vest exists, the idea in general somewhat depresses me. I don’t think the Internet is inhumane enough that it drains individuals of the ability to experience feelings by themselves, without a digital prompter. (Ok, when this word is said, you’re supposed to feel happy!) Wearing your heart on your virtual sleeve is a little too robot-like for my liking.

But, at the very least, the iFeel_IM could cut down on the overabundance of emoticons littering the Web.

;)

A video demonstrating the iFeel in action:


Wearable Computers Expand the Reaches of Memory and Learning

Charlie Kemp was a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) when he really began to dig into the idea of so-called “wearables”. Not quite robots in and of themselves, wearables were more like a really smart backpack thingy, with sensors to be attached at various points of the wearer’s body (see image at left).

The wearable he developed, named “Duo,” had the ability to learn from sensory input, achieved via human guidance. The basic idea is this – a wearable rides along with a human guide, taking in that individual’s actions and learning from them. A camera mounted on the wearer’s head provides visual input, while sensors on the body learn certain movements, positions and actions.

It would be kind of like guiding another person’s hands through the motions of a certain task, whereby that person would then learn how to complete the task.
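I don’t know the internals of Duo’s learning system, but the “ride along and imitate” idea is often prototyped as simple learning-from-demonstration: record (sensor snapshot, action) pairs from the human guide, then replay whichever action was taken in the most similar recorded situation. A minimal sketch with made-up sensor values:

```python
import numpy as np

# Toy learning-from-demonstration -- not Duo's actual algorithm, just the idea.
# Each demonstration pairs a sensor snapshot (joint angles, camera features, etc.)
# with the action the human guide performed at that moment.

demonstrations = [
    (np.array([0.1, 0.9, 0.0]), "reach_forward"),
    (np.array([0.8, 0.2, 0.1]), "grasp"),
    (np.array([0.5, 0.5, 0.9]), "lift"),
]

def choose_action(current_sensors):
    """Pick the action from the closest recorded demonstration."""
    distances = [np.linalg.norm(current_sensors - snapshot)
                 for snapshot, _ in demonstrations]
    return demonstrations[int(np.argmin(distances))][1]

print(choose_action(np.array([0.15, 0.85, 0.05])))  # -> reach_forward
```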

From Charlie’s website:

“Systems that better perceive and understand everyday human activity will be more capable of assisting people, coordinating with people, learning from people, and emulating human activity.”

If you’re looking for a more in-depth look at wearables, I suggest reading Charlie’s article “Wearables and Robots: A Shared View” from the publication Pervasive Computing. The image to the left is from that article.

Charlie’s work with Duo went on to inform the visual system for Domo, a helper robot that also has the ability to learn from human example – in addition to bearing a striking resemblance to Johnny 5, from “Short Circuit.”

Let’s Talk About Assisted Mental Fidelity

What really grabbed me about Charlie’s work was the idea of a wearable system, maybe even a compact robot, that would learn from you, your actions, and your environment. Imagine such a robot and then imagine that, after learning enough from you, it would be able to ‘learn’ on its own.

Suppose you were getting directions from someone at a gas station and were only 99 percent sure that you were supposed to take a left at that church on the corner. Human memory is fallible, and I have the wrong left turns to prove it, but a robotic memory … it could be almost like an external hard drive for your mind, except that instead of simply storing the information it collects, it feeds it back to you, as needed, in real time.

Remembering the Things You Forget, Learning from Your Mistakes

I’m also terrible at remembering things like phone numbers, names, to-do lists, etc. Oh, if only I had a helper to remember those things for me, and to keep me on track if I start to drift away from the things I need to be doing.

My robot helper would have learned my tendencies by now and, thus, be able to tell when I am drifting off track. This thought chain doesn’t follow directly from Charlie Kemp’s work above and, to avoid the misapplication of purpose to someone else’s work, I’ll just note that this is my own pondering.

Still, if a robot can learn from visual, physical input – why not mental input? Last week, we talked about mind-controlled robots being developed in Japan. Given time, it seems entirely possible that mental connections could move from simple commands to actual experience and thought absorption.

The Mental Yak Bak, or, Nothing but the Truth

Being able to retrieve dusty old memories can come in handy every now and then – say, when you’re trying to remember how to say “Where is the bathroom?” in Spanish.

Plus, such a system could help ensure that you stay honest. That 10-inch fish you caught last weekend was really more like six inches, and the giant grizzly bear that stole it from you was actually a petite cub. And that party last night? I don’t remember a thing, but my robotic companion can tell me everything.

Then again, the best objective memory may be one that comes with an “Erase” function.


Japan Plans Mind-controlled Robots for the Masses by 2020

In the era of mind-controlled consumer electronics, epic battles will be waged over who controls the television. In Japan, home to seemingly all significant advancements in the field of robotics, scientists and engineers are hoping to have consumer-ready, mind-controlled robot helpers and other consumer electronics within the next decade.

According to sources cited in Popular Science, these thought-controlled robots would use the brain’s electrical signals and blood flow to interpret thoughts. As any diligent science fiction aficionado would imagine, users are expected to employ a sensor-loaded headset, probably looking like something out of X-Men, to control devices.
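The articles don’t describe the actual signal processing, but a common first step in hobbyist brain-computer interfaces is to watch the power in one EEG frequency band and treat a big change as a command. Here’s a toy sketch with a synthetic signal and an arbitrary threshold; it illustrates the shape of the idea, not how Honda’s system works:

```python
import numpy as np
from scipy.signal import welch

# Toy BCI step: estimate alpha-band (8-12 Hz) power from one EEG channel and
# turn a threshold crossing into a command. Synthetic data, arbitrary threshold.

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].sum()

THRESHOLD = 1e-10                          # would be calibrated per user in practice
command = "move_arm" if alpha_power > THRESHOLD else "idle"
print(command)
```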

It shouldn’t be too hard to think of potential problems with robots or electronic devices that react to your thoughts. Hot for teacher? Leave the helper robot at home.

Of course, there are more practical – and acceptable – uses for such technology. Helper robots for the elderly or disabled, for example, could be a huge boon to families that need help caring for relatives.

And at any rate, errant thoughts shouldn’t be a problem – at least initially. The fact that such robots and devices will likely be controlled using a helmet should keep most from unintentionally causing trouble.

Still, in 15-20 years? It’s not hard to imagine such robotics being controlled via chip implant. Late last year, Popular Science covered HP‘s plan to do just that. By 2020, the same year Japan’s mind-controlled robots are supposed to roll off the assembly line and into homes, HP hopes to be marketing chip implants that allow users to control electronics via thought.

Sure, the ability to produce isn’t always equal to the public’s willingness to purchase, but the simple fact that this technology is becoming a reality … it’s not science fiction anymore and Kansas is miles away, fading fast.


Trying to Tryvertise to Tryconsumers

Yes, I find these new “buzz words” just as annoying as you do. However, aside from the cringeworthy lingo, there actually IS a shift taking place in the way companies are trying to get consumers to interact with their products. Tech-savvy consumers of today are demanding more out of the products they use and the experience surrounding them, and the companies producing those products are scrambling to keep up. This is where augmented reality comes in.

Think about how many times you’ve ordered a product online in blind faith, only to grumble when you have to pay shipping fees to return it. Think about how many times you’ve stood awkwardly in the makeup aisle at Target trying to hold cheap makeup, still in its package, next to your face to find the shade that’s just right for you, only to look like a clown when you finally put it on your actual face. (Ok, maybe that’s just me, makeup extraordinaire.) Think about how many terrible tattoos you’ve seen that made you wonder if that’s actually what they wanted their back to look like for the rest of their life. Incredibly useful applications of augmented reality are now making it possible to avoid situations like these through virtual dressing rooms.

Tobi.com recently debuted its Augmented Reality Dressing Room, a fitting-room experience that is completely integrated into its online store. After watching a quick tutorial, customers print a ‘marker’ which they hold up to their webcam in order to activate the application.
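Tobi doesn’t say which tracking library it uses, but printed-marker AR generally comes down to finding a known fiducial pattern in each webcam frame and drawing the garment over it. Here’s a rough sketch using OpenCV’s ArUco markers as a stand-in for Tobi’s proprietary marker (the ArUco API differs slightly between OpenCV versions):

```python
import cv2

# Sketch of the marker-detection step behind an AR dressing room.
# OpenCV's ArUco fiducials stand in for the printed marker; the real
# Tobi.com system is proprietary and may work quite differently.

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)                 # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)
    if ids is not None:
        # A real dressing room would warp the garment image onto the marker's
        # pose here; we just outline what was detected.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("ar-dressing-room-sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```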

Following suit, Shiseido, a Japanese cosmetics company, has started inserting augmented reality makeup mirrors into its stores. You sit down in front of the mirror, it scans a picture of your face and … voila! You’re free to digitally try on makeup to see what colors are right for your face.

Polyvore.com is a fashion website that is “redefining how people around the world experience, create and shop for fashion on the Internet.” Their site features a virtual styling tool that lets people mix and match products from any online store to create their own outfits. You can then call upon the other members of the Polyvore community to rate and critique your ensembles.

TatMash lets users see what a real tattoo would look like on them before they really get it done. (Shouldn’t this have been dreamed up a long time ago? Here’s looking at you, old volleyball teammate with the foot-tall Tinkerbell tattoo on your back.) To see what one would look like on them, users simply upload a photo of themselves and can either upload their own tattoo image or use one of the pre-made designs on the site. They can then drag the tattoo’s image to the spot on their body where they’d like to have it done. This seems like such an obvious idea that I’m glad it was finally created.

The whole idea of tryvertising in general – a cross between advertising, product promotion and marketing communication – seems just as obvious. Integrating convenient product experiences like these into the online world – a space most consumers already inhabit – seems like a natural fit for most companies. If shoppers have the opportunity to digitally try on purchases, they will probably spend more, buy more and return far fewer items, which would result in higher sales, reduced shipping and handling costs, and happier customers. Win, win and win.

With the advent of some of these digital technologies, it seems likely that in the near future, consumers will just have a lifelike avatar, enabling them to try out and try on anything on behalf of their real world alter-egos. Expanding from that, there could also be 3D versions of spaces, not just people, enabling consumers to try out even more products before they buy them. All speculation, but I’m interested to see what happens in the next few years, and how companies will continue to evolve to better shopping experiences for consumers.


Transmedia Storytelling – Experience Assembly

Transmedia is quickly becoming a choice buzz word, its core elements and primary motivations on deck to be bulleted, retweeted and watered down. In fact, it’s already turning up quite a bit, most noticeably on Jawbone.tv, which seems to almost exclusively feed transmedia stories into my inbox these days. Not that I’m complaining – it’s interesting stuff.

The idea of transmedia storytelling transforms the very function of advertising. In the ‘olden days’, advertising served simply to broadcast a message (e.g. ‘Sale!’) as loudly and broadly as possible. Since the advent of online communities and instant communications, sophisticated advertising has shifted away from the in-your-face style of old to more of a conversation. And, like a good conversation, the exchange is unique to — and changed by — the participants. It also has a narrative structure and, in the case of transmedia, develops a different aspect of the story (storyline) that is specific to each medium. Here’s how it works:

Say a movie is followed up with a television series, a book series, special-edition comic books, YouTube mini-stories and a Twitter account featuring tweets in character. I would consider the movie to hold the central narrative, while the others constitute story units that pull from the movie and, at the same time, feed back into it – creating a much richer, much deeper experience. The marketing campaign around James Cameron’s Avatar (now the highest-grossing movie of all time) illustrates how transmedia engaged the movie’s avid fans.

Avatar tells the story of the indigenous people of the planet Pandora and their fight to defend its natural habitat from corporate exploitation. It’s an epic 3D movie, full of fantastical beings and special effects, that supplies great fodder for marketing. Developing characters and storylines (i.e. heroes, villains, fantastical creatures, flora, and fauna) across multiple platforms, the story came to life by integrating real-world items such as action figures with online games, movies, communities and more. Through its advertising partnerships, the world of Pandora emerged as a full-fledged adventure, both on and off the screen, such that the story lived on well beyond the initial theater experience. It became a two-way flow of information, intersecting storylines, and personalized adventures.

Take the Avatar action figures.

Traditionally, action figures have only been tied to the movies, shows or games they represent by way of the characters. The stories they exist in after they leave the shelves are up to the creative impulses of their new owners.

Avatar “Battle Packs” demolished that one-way stream of input by providing the toys with augmented reality content that can be unlocked using a computer camera. I could explain it, but I recommend watching the video below instead.

Yes, put two Battle Packs together and they will interact!

Augmented reality actually showed up a few times in the marketing around Avatar (e.g. McDonald’s Avatar “thrill cards”). This technology really helped enable the two-way flow of information between the movie and its extensions. Instead of having a film that simply fed out into toys and games, the toys and games fed back into the movie, adding dimensions, taking you down different rabbit holes.

All of these elements came together in the Avatar marketing campaign to enrich the movie’s plot, not just echoing the film outside of its run time, but really continuing to evolve the storyline in homes and online. While the idea of outside marketing elements is nothing new, the ability to interact with them (to this extent) really is.

This, then, is the true power of transmedia – at least in my opinion.

It’s the idea that we never have to leave the fantasy worlds we enter in the theater, on television or through books. The old one-way-street-model is fast becoming rather staid. We are moving towards a time when each piece of the marketing puzzle feeds into the story, collecting in the center of a network of tributaries, where the actual film or show – even brand – may be the largest, but far from the only stream.

And at the center of this flow of information will be where the story truly lives.

Of course, the increasing popularity of transmedia storytelling is also being motivated in part by the technology available. I would be remiss to go without giving some direct credit to the advent of augmented reality and ever-expanding mobile capabilities. (For a bit of both, read “Real Life, Plus: Metaio Goes Mobile.”) That may be a post for another time, but worth watching in the meantime.

For a great overview of transmedia storytelling, watch this 20-minute presentation by Jeff Gomez, of Starlight Runner Entertainment. It’s a bit long, but extremely interesting and insightful.

(image via MediaLAB Amsterdam)


Project the Future: SENSEable City's Flyfire and the End of Interface


SENSEable City’s Flyfire project, a collaboration with the Aerospace Robotics and Embedded Systems Laboratory (ARES Lab) at MIT, is about to make it possible for any empty space to become a fully interactive display environment. It does this by way of hundreds (maybe thousands) of tiny, “self-organized micro helicopters” – each with an LED light.

Think of these mini-copters as pixels in the sky. From here on, let’s refer to them as the “pixel swarm.” A remote controller designates a shape from the ground, or wherever, and the pixel swarm assembles into it.

The pixel swarm is self-organizing, which means the copters are smart and can adapt to directed changes in real time. As the team behind Flyfire points out, this allows viewers to experience an animated display – with the pixel swarm moving fluidly from one shape to another.
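The team hasn’t published its control code, but at its core that fluid shape-to-shape transition is an assignment problem: decide which helicopter flies to which target pixel so the swarm wastes as little motion as possible. A minimal sketch using the Hungarian algorithm, with made-up coordinates:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assign each "pixel copter" to a target position in the next shape so the total
# distance flown is minimized. Coordinates are arbitrary 2D points; Flyfire's real
# controller presumably also handles collision avoidance, altitude, timing, etc.

current = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)  # copters now
target = np.array([[2, 2], [2, 3], [3, 2], [3, 3]], dtype=float)   # next shape

# cost[i, j] = distance copter i would travel to reach target j
cost = np.linalg.norm(current[:, None, :] - target[None, :, :], axis=2)
copter_idx, target_idx = linear_sum_assignment(cost)

for i, j in zip(copter_idx, target_idx):
    print(f"copter {i} -> target {tuple(target[j])}, distance {cost[i, j]:.2f}")
```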

To better understand what such a demonstration might look like, watch this brief video on Flyfire from the SENSEable City Lab.

Could projects like this spell the end of a fixed interface?

It’s certainly feasible that such technology could be developed to the point where it was possible to watch almost anything using the pixel swarm. Sure, it’s a long way off, but until then, it’s probably more realistic to imagine such technology being used at events to sex-up the user experience.

The potential for advertising is immense – for example, “mobile billboards” or other sponsored messages. Imagine being at a football game and watching an advertisement for an electric, turbo-charged sports car that zooms through the air, much like the Golden Snitch of the Quidditch games played in the Harry Potter stories. Perhaps the ‘Golden Snitch’-like pixel swarm would be part of the half-time show, or programmed to hover over the seat of someone who just won the car … the possibilities seem endless.

Pattie Maes’ Sixth Sense, featuring Pranav Mistry

Last year at TED, Pattie Maes premiered a new technology developed by Pranav Mistry in her MIT Fluid Interfaces Group. The physical hardware consisted of little more than a camera and projector, worn around the user’s neck. Functionally, it was a little bit Minority Report and a little bit RoboCop.

Say you’re looking for a book on CSS at Barnes & Noble. Having done this myself, I can safely say that there are about a dozen and, from what I can tell, each looks as good as the next. How do you decide which book is the best one for your needs?

If you have a smartphone, you can just look it up. If you don’t, you can ask one of the bookstore employees and hope they have a design background.

What if you could just pick up the book and have its Amazon rating projected right onto the cover? This would be much more efficient, no?

That’s just a start.

The goal of the Sixth Sense project is to allow any user to access relevant information wherever he or she happens to be. This is similar to augmented reality, save for the fact that it would be accomplished without a cellphone and, therefore, be much more seamless in regards to information gathering.

After consumer devices such as these are developed, our next step is surely embedded discovery tools, as we discussed in our post on augmented reality contact lenses last year.

It’s all terribly exciting, a little terrifying, and very promising. Stay tuned!


Children's Brains 2.0

As the lives of younger generations become increasingly digitalized (the average 8-18 year-old spends 7 hours and 38 minutes using entertainment technology throughout a typical day), companies and older generations are desperately trying to keep up and understand this way of life.

The Disney Channel recently announced a brand-new movie titled Harriet the Spy: Blog Wars. No longer content with her old-school secret notebook, Harriet is forging boldly into 2010 and competing against the most popular girl in school to become the official blogger of their high school class. Jezebel poked fun at the update by re-naming other classic children’s books for the MySpace generation. Instead of abiding by “the only book I read is Facebook” mindset, they suggested titles such as “margaret 48267: are you there god?”, “Little Blog on the Prairie”, “From the Mixed-Up Tweets of Mrs. Basil E. Frankweiler” and “Wikipedia Brown, Boy E-tective” for the digital generation. Along with making me laugh, these updates to the pop culture of yesteryear made me wonder exactly what sort of impact this constant exposure to technology and social media sites is having on children’s brains.

Search results revealed that almost every article on the negative effects of social media on developing brains referred back to an article written by Baroness Susan Greenfield. (No, I didn’t know that baronesses still existed either.) Under the straightforward title of “Social Websites Harm Children’s Brains,” Greenfield argues that sites such as Facebook and Twitter shorten attention spans, encourage instant gratification and make young people more self-centered. Backing that up, a different study found that 30% more college students scored high on the Narcissistic Personality Inventory in 2006 than in 1982. (A fact that could potentially be proven just by looking at the sheer number of tagged photos some individuals have of themselves on Facebook.)

Anyway. The large majority of the baroness’s research seemed to be rather subjective, considering that one of her main points was that social interactions conducted through computer screens are fundamentally different from spoken conversations — which are “far more perilous” than electronic interactions because they “occur in real time, with no opportunity to think up clever or witty responses.” (I hate when that happens!)

On the opposite end of the spectrum are scientists who are studying brain plasticity – how the brain continues to dramatically change its wiring and function long after early development. Scientists are realizing that the brain never stops reorganizing itself in response to the world and that kids today need to learn new digital skills to survive and thrive in our fast-evolving society.

Research on young people shows that use of these sites is associated with a better social life in the real world, because users turn to the services to enhance their existing relationships — just as anyone would do with the telephone. When researchers at the University of Minnesota asked 16- to 18-year-olds what they learn from using social networking sites, the students listed technology skills as the top lesson, followed by creativity, being open to new or diverse views, and communication skills. Being a technophile, I could rave about the wonders of the Internet all day, but there are a lot of people who are genuinely concerned that this constant exposure to technology and social media sites is having a negative impact on children today.

Every generation is afraid of the effects of brand-new technology on the next. An article on Neuroanthropology.net sums it up best:

If we search for analogies, we can think of countless previous techno-moral panics that now seem positively quaint: the dangerous effects of rock ‘n’ roll, comic books, music videos, television, the wireless, air conditioning, trains… Mesopotamian parents were probably fearful of the impact of the newfangled chariot, and German parents no doubt fretted about what horrors Gutenberg’s movable type was about to introduce into their homes.

Cecily brought up the point that adults solidify what they know instead of taking in the new. (I like what I know and I know what I like.) It’s easier to stay inside comfort zones than it is to reinvent our notions of the world and the way it works. Younger generations, however, don’t have to reinvent their worldview — this much constant access to technology is all they’ve ever known. Youth today talk about communicating online with the same terminology and naturalness as real life. They “talk” to each other when they are on online messenger systems. There is no real life or digital life; it’s the same place.

Baroness Greenfield said that “it is hard to see how living this way on a daily basis will not result in brains, or rather minds, different from those of previous generations.”

And maybe, that’s the point.


Flying fingertips and cooling brakes: using what we have

Corporate Bloom Boxes

Innovations in alternative energy, always exciting and unpredictable, are certain bets for the future. But which technology is the biggest gamble – and pays off the most? The latest and most promising one, the Bloom Box, was unveiled this past Sunday. This “little power plant-in-a-box,” which can literally sit in your basement, potentially provides independent and clean energy for homes and small businesses alike. Within five to ten years, Bloom Energy hopes to make its Box available to individual residences for below $3,000, quite affordable given the price of a furnace ($3,000+) or of installing a central heating system (up to $10,000!). Great, right?

But let’s bring this to scale. The Bloom Box won’t be available for at least five years. What do we do until then? In the energy lottery, there are certainly solar, biofuel, natural gas and wind resources, among others. We use everything from algae to manure to moon rocks – but instead of producing new technologies and new sources of energy, why don’t we use what’s right under our noses? Are we overlooking the most obvious source of energy – movement?

The great thing about capturing free energy is that it really is everywhere: from crashing (or lapping!) ocean waves to a busy thoroughfare, there are plenty of sources of kinetic movement. The only questions we face are what technologies we need (to develop) to harness kinetic force, and how to scale out these technologies for wide – and so more efficient – use. Even in the most unexpected places possibilities are waiting to be tapped.

Its own powerhouse: the Dynamo

What caught my eye is a new keyboard on the market. Researchers have found a way to return the kinetic energy generated while typing to local utility providers, using nanotechnology that connects the keyboard to any standard 110-volt outlet. At $30, and considering that Americans, especially young ones, spend ever more time on the computer, keyboards like the Dynamo are both cost-efficient and accessible.

Another variation on this theme is a keyboard that recharges the computer’s battery the more you type. The goal is to one day develop a keyboard fully powered by the speedy clicks of a laptop’s keys. We could reduce external energy consumption while prolonging battery life – a pretty perfect situation.
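I couldn’t find hard specs for either keyboard, so here’s my own rough arithmetic on how much mechanical energy typing actually offers (key force, key travel and typing speed are all assumed, typical values):

```python
# Back-of-envelope: mechanical energy available from typing.
key_force = 0.5      # newtons to press a key (assumed)
key_travel = 0.003   # meters of key travel (assumed)
energy_per_press = key_force * key_travel            # joules, ~1.5 mJ

keystrokes_per_minute = 250                           # brisk typing, assumed
power_watts = energy_per_press * keystrokes_per_minute / 60

print(f"{energy_per_press * 1000:.1f} mJ per keystroke")
print(f"~{power_watts * 1000:.1f} mW while typing continuously")
```

Even with perfect conversion that’s only a few milliwatts, enough to nudge a laptop battery, which is why the battery-recharging variation above feels like the more realistic near-term goal.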

Where else might we be able to capture free energy? In big, high-pedestrian-traffic cities like New York and Chicago, design company Fluxxlab wants to harvest the movement created each time a revolving door spins to power that same building. Likewise, the movement generated by city walkers as they rush to their next destination could be harnessed to power traffic lights, street lamps, and other electrical needs. Private company M2E Power has designed a microgenerator for troops that replaces the 10-30 pounds of batteries a soldier typically carries: clipped onto the wearer, two hours of walking or shaking powers mobile devices for an hour and a half, an incredible prospect.

(image: where we might install MotionPower plates in roads to capture free energy from vehicles)

Capturing kinetic energy avails us of innumerable opportunities. Yet we also face challenges in cost-efficiency and scale. Engineers at Free Energy Technologies have developed plates, installed in streets, that capture the energy of decelerating cars; this might generate a great deal of electricity, but perhaps not enough to offset the costs of retrofitting old roads. Likewise, the Revolution Door only makes sense in big cities with high human traffic. We need to strategically and systematically make use of new technologies, and imagine more cost-efficient means of implementing them throughout our lives.

While all technological innovations push us toward a more progressive future, developing them takes time, funding, and determination. We certainly hope that the Bloom Box will bloom into our own (green) power plants, but at the same time, let’s keep in mind that the safest bet for the future is a portfolio of mixed energy-capturing and conservation measures. We need to rely on multiple sources of energy for maximal efficiency. And right now, it looks like the kinetic energy from flying fingertips and cooling brakes is our greatest untapped natural resource.


General Motors Hopes for a Battery-powered Recovery

We’re on the cusp of a battery revolution. On Thursday, General Motors will begin battery pack assembly at its plant in Brownstown Township, Michigan. It will be the first plant of its kind in the United States and, one can hope, start a trend rather than a flash in the pan.

Remember the stimulus package – that controversial, Titanic piece of legislation? Well, you can thank our government, at least in part, for this leap forward on the part of GM. Way back in March, the president announced plans to reward advances in battery technology in support of electric vehicle proliferation in the States.

General Motors was one of many companies that applied for some of the $2 billion+ in federal funding under the Electric Drive Vehicle Battery and Component Manufacturing Initiative.

The money wasn’t just to boost hybrid vehicles in the United States, but to boost our competitiveness in the “battery wars.” Most of the batteries that power your phone, laptop, and various mobile devices and pending tablets come from overseas. Companies like LG, in South Korea, currently hold a rather large market share. While General Motors will be using cells from LG, the actual manufacturing of the battery packs will be going on right here – or in Michigan, rather.

Just as the United States has a role to play in battery production, GM and the Obama administration are hoping there are also gains to be made in the area of more efficient automobiles. Having grown up on a steady diet of Buick LeSabres, Oldsmobile 98s and Cadillac DeVilles, I can say without reservation that I do not equate U.S. automobiles with either efficiency or the future of driving.

That’s not to say that I don’t love Buicks – just drop by and I’ll take you for a ride in mine.

But when I think compact efficiency, I think Honda, Toyota, Hyundai, etc. However, with the exception of the Prius, most of those models rely on squeezing more miles out of gasoline. Battery-powered cars, while not new in concept, have yet to reach any sort of critical mass. So cars like the Chevy Volt enter a race that is still very much anyone’s game.

The Volt’s lithium-ion battery pack will be able to charge both on board, by way of an internal combustion engine, and externally, from plain old household current. This means that, just as we now plug our phones in overnight, so too may we, in the future, charge our cars while we sleep. According to Discover Magazine, that’s a good 40 miles out of 80 cents’ worth of electricity. Not too shabby!
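For the curious, that “40 miles out of 80 cents” figure pencils out roughly like this (the 10-cents-per-kWh electricity rate is my assumption; it’s what makes the math land near Discover’s number):

```python
# Back-of-envelope check on the "40 miles for 80 cents" claim.
price_per_kwh = 0.10     # dollars per kWh, assumed residential rate
cost = 0.80              # dollars
miles = 40

energy_kwh = cost / price_per_kwh      # ~8 kWh drawn from the wall
miles_per_kwh = miles / energy_kwh     # ~5 miles per kWh
cost_per_mile = cost / miles           # ~2 cents per mile

print(f"{energy_kwh:.0f} kWh -> {miles_per_kwh:.1f} miles per kWh, "
      f"about {cost_per_mile * 100:.0f} cents per mile")
```

Roughly two cents a mile, charged while you sleep: hard to argue with that.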

Learn more about the Chevy Volt below.

[via earth2tech]


Nanotechnology and the Fight Against Cancer

Last week, MIT’s Technology Review published a piece on the development of new, nanotechnology-based drug delivery techniques for the treatment of cancer. Specifically, the studies focused on pancreatic cancer, which kills about 35,000 people every year.

These “nanotechnology-based drug delivery techniques” constitute a major breakthrough in the War on Cancer, which, since it was first declared in 1971, could use a boost. (*If you’re interested, a quick review of the biological mechanisms of cancer can be found at the end of this post.)

Currently, cancer therapy includes surgery to remove affected tissue, and chemotherapy and/or radiation to kill cancer cells. The problem with chemotherapy and radiation is that they expose all cells to their toxic load. An analogy: it’s kind of like having a poisonous snake problem in a certain section of the woods behind your house. Radiation, in this example, would be much like burning down that section of woods to combat said snakes. The result: you get rid of a lot of snakes, but you also kill many harmless animals along the way.

Now, imagine that you have a special beetle (just go with me) that can infiltrate the snakes’ defenses and inject them with an antidote, nullifying their poisonous bite. Not only would you be taking care of your snake problem, but you’d be doing so with minimal impact on the other species present.

This is kind of how nanotechnology works. In a nutshell, the drugs are contained in tiny, engineered particles which, when injected, fight the cancerous cells from the inside. Since nanotechnology works at an atomic level, these teeny-tiny agents of destruction can be customized for the particularities of a given type of cancer. This is what researchers accomplished in the labs at Massachusetts General Hospital, where they’ve designed two new types of agents to treat human pancreatic cancer cells.

Each nanocell fights the cancerous pancreatic tumor in a unique way.

  • The first cuts off blood supply to the cancerous tumor, starving it. The drugs used are already approved by the FDA, but have had much more success within the nanocell, which is able to deliver them directly, from inside the cancer cells.
  • The second nanocell is designed to prevent cancer cells from developing resistance to chemotherapy. They do this by targeting two specific proteins, which promote cancer growth, and blocking them.

Such advances in biotechnology (at the nano scale) open doors to all kinds of possibilities, from curing cancer to manufacturing new tissue. The latter has the potential to repair damaged tissue, such as exists in Parkinson’s, diabetes, heart disease, or spinal cord injuries, among other things.

Strides toward such a future are already being made: in early 2008, researchers at the University of Minnesota Center for Cardiovascular Repair created a functioning heart from a dead rat heart and new cells from baby rats. Sure, humans aren’t going to benefit much from an engineered rat heart – pig, maybe – but the point is that it’s possible. And what’s possible on a small scale can usually be adapted to work on a larger scale.

Like the researchers in the video below say, there’s no reason to stop at hearts – why not any organ?

*Every cell in our body has the same strands of DNA, the complete blueprint for every type of cell in the body. Nearly all of that sequence is blocked off except for the genes related to the cell’s particular function (i.e. nerve function, if it’s a nerve cell, or gut function if it’s a gut cell).
The unexpressed part of the DNA is blocked by “repressor proteins” that help coordinate the on/off switches that accelerate or slow cell division at different stages of development. If a repressor protein fails, it can uncover a growth gene and kick the cell back into a rapid multiplication state, which is what cancer is. “Carcinogens” are named as such because they have the capacity to break down repressor proteins (among other things), a process that also happens as a normal part of aging. - CS


Taking Your Life on the Road

Designing for reality: people won’t be hanging up their cell/smartphones anytime soon inside (what I refer to as) their traveling telephone booths. Microvision is working on ways to integrate social interactions while keeping your eyes on the road. It doesn’t help you focus your attention, though – just your sight lines.


The Stuff of Life

Where was this book when I was studying chemistry?!

There wasn’t an easy way around rote, brain-numbing memorization of the Periodic Table of Elements when I was in school. I’d practice filling it in like a crossword puzzle, putting the abbreviation and atomic weight of each element in the right square. There was absolutely nothing engaging about it; it was just a grind.

To transform a string of memorized data into something meaningful — the stuff of knowledge — is to give that data a context. Tell a story, show its utility, demonstrate something remarkable.  Which is what Theodore Gray’s new coffee table book, “The Elements: A Visual Exploration of Every Known Atom in the Universe” has done for the Periodic Table. Suddenly elements are sexy little beasts, each with its own history and a glossy two-page spread.

You’ll recognize some, of course, like Calcium (Ca) and Sodium (Na), but have you ever gotten a close look at Promethium (glow-in-the-dark paint on diving watches), Tellurium (even tiny amounts will leave you reeking of garlic for months), or how ’bout the honorific Einsteinium (which, unlike the man, has little utility)? Beautiful photos and interesting tidbits make the world, at the atomic level, engaging and comprehensible … and memorable.

[via Boing Boing]


Nature's Orchestra

“If we don’t take the opportunity to form a baseline understanding of natural soundscapes, we’ll lose part of our own humanity. These sounds taught us to dance, and they’re part of our language. I think we owe them something.” – Bernie Krause

Western culture has long favored sight over hearing. Bombarded with thousands of visual images every day, we pay very little attention to the subtle sounds that enter our ears. Middle school sleepover games of “Would You Rather?” always resulted in a unanimous group decision that being blind would be, like, WAY harder than being deaf. American bioacoustician Bernie Krause thinks otherwise and has devoted the last 40 years of his life to recording the earth’s rapidly disappearing “biophony” — a term he coined to describe what the world sounds like in the absence of humans.

He believes that biophony is unique all over the world; nowhere in nature sounds anything like anywhere else. He also believes that in a biophony, animal groups each communicate at a different frequency so they don’t interfere with one another’s voices. When the pitches are mapped out, it ends up looking like a musical score, with each instrument in its proper place.
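That “musical score” isn’t just a metaphor: it’s essentially a spectrogram, with time on one axis and frequency on the other, and you can plot one from any field recording. A quick sketch (the WAV filename is a placeholder):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

# Plot a spectrogram of a field recording to see which frequency bands
# different animal groups occupy. "dawn_chorus.wav" is a placeholder filename.

fs, audio = wavfile.read("dawn_chorus.wav")
if audio.ndim > 1:                    # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(audio, fs=fs, nperseg=2048)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Biophony as a 'musical score'")
plt.colorbar(label="Power (dB)")
plt.show()
```

In a healthy soundscape, insects, birds and amphibians tend to show up as distinct bands; traffic and machinery smear across the bottom of the plot.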

The problem with this lovely orchestra concept is that man-made noise (anthrophony) greatly intrudes on the natural symphony. The noises of machinery and cars interfere with parts of the sound spectrum already in use, and suddenly some animal can’t make itself heard, which Krause argues can have a significant impact on survival and, over time, evolution.

Today, there are fewer and fewer places on Earth where man-made noises don’t prevail — over 40 percent of his original field-recording locations have been lost to habitat degradation and human noise. To combat that, Krause has made it his mission to compile the largest private archive of natural sound anywhere — fittingly named Wild Sanctuary. The collection represents over 3,500 hours of wild soundscapes and nearly 15,000 species. Even more intriguing, Wild Sanctuary layers it all onto Google Maps and Google Earth, an innovative bridge between the virtual and the natural world that lets you click on a recorded location and hear exactly what it sounds like there.
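For the technically curious, pinning a recording to the globe takes little more than a geo-tagged placemark that points at an audio file. Here’s a purely hypothetical sketch; the coordinates, names and URL are invented for illustration and have nothing to do with Wild Sanctuary’s actual data.

```python
# Toy example: write a KML placemark that ties a soundscape recording to a point
# on the map, which Google Earth / Google Maps can then display.
# The coordinates and audio URL below are made-up placeholders.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Dawn chorus, example wetland</name>
    <description><![CDATA[<a href="https://example.com/dawn-chorus.mp3">Listen</a>]]></description>
    <Point><coordinates>-122.45,38.30,0</coordinates></Point>
  </Placemark>
</kml>
"""

with open("soundscape.kml", "w") as f:
    f.write(kml)
```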

It’s easy to see ecological problems. Now we need to learn to listen to them as well. Should we be focusing on developing quieter, as well as cleaner, technology and machinery? Would more noise ordinances benefit animals in nature? There isn’t really an answer — it’s just about using all of your senses when trying to make sense of the world around you.


Physicists Kept Awake by Seven Wonders

Stress, excitement, and indigestion are common causes of interrupted sleep — for most of us, that is. Not so for physicists, whose insomnia stems from far less pedestrian concerns, which this month’s New Scientist catalogs as the Seven Questions that Keep Physicists Awake at Night.

Number One: Why this universe?

“In their pursuit of nature’s fundamental laws, physicists have essentially been working under a long standing paradigm: demonstrating why the universe must be as we see it. But if other laws can be thought of, why can’t the universes they describe exist in some other place?”

I’ll be honest, I double-majored in philosophy because I was pumped up on French existentialist novels. Ironically, they were the last novels I ever read in a philosophy class. Instead, I ended up reading essays on multiple worlds and van Inwagen‘s Doctrine of Arbitrary Undetached Parts. While I’m flattered to have spent so many sleepless nights mulling over the same topics as professional physicists, I feel they may have been confused on a slightly higher level.

The question is this: if I postulate a set of laws for a world (other than this one), and I’m able to imagine and reason about how things work in that world without contradicting those laws, then who’s to say that world could not exist?

Though the physicists’ scenarios are rather complex, the exercise is as simple as imagining a world in which I start my day at 4:30 instead of 6:30 (oops). The concepts may be over most of our heads, but the ability to imagine different worlds is available to us all.

For a lighter look at parallel worlds, check out this Nova special “Parallel Worlds, Parallel Lives,” featuring Eels frontman Mark Everett. (The rest of the show is available on YouTube.)

Number Two: What is everything made of?

“Ordinary matter” is classified here as “atoms, galaxies and stars.”

Ponder this: if ‘ordinary matter’ only accounts for four percent of the universe’s total energy, what’s the other 96 percent?

As evil as it sounds, dark matter simply designates matter that doesn’t emit light to begin with, or whose light never reaches us. We know it’s out there because we can see its gravitational effects: galaxies spin faster than their visible matter alone would allow, and light bends around mass we can’t see. The rest of the missing 96 percent is chalked up to something stranger still: “dark energy,” the stuff thought to be driving the universe’s continued, accelerating expansion. It’s not the stuff from The X-Files that lets aliens take control of your body … or scientists are keeping that part on the down low.
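If you want the rough arithmetic, the ballpark figures being quoted around this time break the budget down roughly as follows (they shift a bit with each new measurement, so treat them as approximate):

```latex
% Approximate breakdown of the universe's energy budget (ballpark figures)
\[
\underbrace{\sim 4\%}_{\text{ordinary matter}}
\;+\;
\underbrace{\sim 23\%}_{\text{dark matter}}
\;+\;
\underbrace{\sim 73\%}_{\text{dark energy}}
\;\approx\; 100\%
\]
```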

So what else keeps physicists up at night?

3. How does complexity happen?
4. Will string theory ever be proved correct?
5. What is the singularity?
6. What is reality, really? (see also: The Sixth Sense, The Machinist, Identity, The Fountain and others of the “it was all a dream, or was it” ilk)
7. How far can physics take us?

This last question is the wildest one of all. What it suggests is that physics, the (scientific) language by which we make sense of our world, may have its own limits. I think I’ll mull that one over the next time I can’t get to sleep…


3D TVs: Coming to a Living Room Near You

I remember waiting for an episode of “Family Matters” (long ago) that was going to be in 3D. We’re talking paper glasses with one red lens, one blue lens. You know what I mean — the type of 3D where, if you watch it without the glasses, it looks like you slipped and fell through the cracks in a table of RGB variants.
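For anyone curious how those paper glasses actually work: the left eye’s view is encoded in the red channel and the right eye’s in the blue/green channels, so each tinted lens filters out the other eye’s picture. Here’s a quick sketch in Python using Pillow, assuming a stereo pair of photos shot a few inches apart; left.jpg and right.jpg are placeholder names.

```python
# Classic red/blue anaglyph: left eye -> red channel, right eye -> green + blue.
# "left.jpg" and "right.jpg" are placeholder filenames for a stereo pair.
from PIL import Image

left = Image.open("left.jpg").convert("RGB")
right = Image.open("right.jpg").convert("RGB").resize(left.size)

r, _, _ = left.split()         # keep only the red channel from the left view
_, g, b = right.split()        # keep green and blue from the right view

anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph.jpg")  # view through red-left / blue-right paper glasses
```

Viewed without the glasses, the merged image is exactly that color-fringed, double-edged mess the old broadcasts had.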

The good news is, the 3D TV of the future will be smoother, more efficient and come with cooler glasses. Better yet, it could be here by 2010!

Then again, maybe it shouldn’t be so surprising. Hollywood has been steadily releasing a growing number of films formatted for 3D: think Coraline, Up, the U2 concert film and James Cameron’s upcoming Avatar. While Beowulf may not have made the strongest case for 3D in the home, one has to imagine that film companies are hungry for the DVD market these films could bring.

Whether or not consumers will be able to afford 3D-enabled televisions just after upgrading to HD and plasma TVs is another question entirely.


Google Makes Us Smarter (Whew!)

After so much wailing about how computers are dumbing down a whole generation comes evidence that it’s the older generation that may benefit most: it turns out that computer activity helps keep dementia at bay.

Mental stimulation is the name of the game when it comes to keeping our wits about us, and the simple act of searching for information online (“Googling”) is great for keeping those synapses snapping. Even more than Sudoku or crossword puzzles, searching for new information online is a continuous learning experience.


The Boy Who Harnessed The Wind

William Kamkwamba was 14 years old when he was forced to quit school in his small village in central Malawi because his family couldn’t pay the required $80 student fee.

Eager to continue his education any way he could, he spent a considerable amount of time at the library where, one day, he picked up a tattered U.S. textbook and saw a picture of a windmill.

Malawi is short on many resources, but it does have an abundant supply of wind. Thinking “If somebody did this thing, I can also do it,” Kamkwamba set out to build his very own windmill. To get supplies for it, he salvaged all sorts of junk (another man’s treasure), from his father’s broken bike to old tractor pieces, and was often greeted with “Ah, look, the madman has come with his garbage.” Several people thought he was smoking marijuana, to which he replied that he was “only making something for juju [magic].”

The garbage-collecting madman succeeded in making magic when he managed to hack together a functioning windmill from strips of PVC pipe, rusty car and bicycle parts and blue gum trees.

Originally, all he wanted to do was power a small light bulb in his bedroom so he could stay up and read past sunset. That dream got bigger in a hurry and one windmill has turned into three, which now generate enough electricity to light several bulbs in his family’s house, power radios and a TV, charge his neighbors’ cellphones and pump water for the village’s fields and household use.
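A quick back-of-envelope estimate shows why even a scrap-built windmill can do all that. None of these numbers come from William’s actual machines; they’re just illustrative assumptions (roughly one-meter blades, a 5 m/s breeze and a generous-for-junk-parts 20 percent overall efficiency):

```latex
% Back-of-envelope wind power estimate (every value below is assumed, not measured)
\[
P \approx \tfrac{1}{2}\,\rho\,A\,v^{3}\,\eta
  = \tfrac{1}{2}\,(1.2\ \mathrm{kg/m^3})\,\bigl(\pi \cdot (1\ \mathrm{m})^2\bigr)\,(5\ \mathrm{m/s})^{3}\,(0.2)
  \approx 47\ \mathrm{W}
\]
```

Call it a few tens of watts per windmill: enough for a handful of small bulbs, a radio and a trickle of phone charging, which squares with what the story describes.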

TEDGlobal heard about him, invited him to speak at its conference and, inspired by William, started a non-profit called the Moving Windmills Project. Moving Windmills supports Malawian-run rural economic development and education projects in Malawi, with the goals of community economic independence and self-sustainability; food, water and health security; and educational success.

All this from a tattered library book, a few old bicycle parts and a boy with a seemingly impossible dream. Juju indeed.

(William will be a guest on The Daily Show tonight!)