
Flexible Paper Speakers on the Way

Taiwanese research institute set on commercializing the technology, likely in car-audio systems first

Photo: Industrial Technology Research Institute

BY Yu-Tzu Chiu // September 2009

30 September 2009—Scientists in Taiwan say that industrial production of an ultrathin, flexible loudspeaker made mostly out of paper could begin by the end of 2010.

"Aside from use in family stereo or automobile hi-fi equipment, it can also be used in earphones or for industrial antinoise purposes," says Johnsee Lee, president of Taiwan's Industrial Technology Research Institute (ITRI), where the technology has been under development since 2006.

The device, named fleXpeaker, is basically a sandwich of paper and metal filled with an electroactive polymer that contracts and expands with an audio signal’s electric field.

"It's soft [and can] easily fit in different curves," says Ming-Daw Chen, division director of ITRI's Electronics and Optoelectronics Research Laboratories. "Therefore, the product customization can be done in diverse fields, such as art for public facilities, interior design, ... costume accessories, and others."

Chen says the flexible paper speaker consumes less electricity than a conventional speaker with the same sound performance. However, because the speaker is so thin, reproducing very low-pitched sounds, at frequencies below 200 hertz, remains a challenge.

The paper speakers have caught the attention of a number of firms outside Taiwan, according to Chen, but ITRI has decided to work with domestic car-audio system producers first. "We've been fixing the production process, and the commercialization is likely to be launched in late 2010," he says, adding that ITRI will also develop a new audio-amplifier module for the thin-speaker system.

Meanwhile, ITRI is demonstrating the malleability of the high-fidelity speaker. Chen says a wall made up of fleXpeaker units will be displayed for the first time at the 2010 Taipei International Flora Expo.

Taiwan’s ITRI is not alone in inventing paper-thin flexible speakers. Last year, a research team led by Shoushan Fan and Kaili Jiang at Tsinghua University, in Beijing, announced the invention of a paper-thin speaker made from carbon nanotubes, or CNTs.

According to the Foxconn Enterprise Group, which backs the Tsinghua-Foxconn Nanotechnology Center, commercializing the nanotube speaker might take more than three years. ”We’ve been working hard to design the production process with full confidence in the CNT speaker,” says Shaoming Fu, who’s in charge of Foxconn’s intellectual property management department. According to Fu, the team has made the needed nanotubes on 4-inch wafers, and each 4-inch wafer can produce 6 square meters of the thin film.



DIY Street-View Camera

Create Google Street View-like panoramas with cheap webcams and open-source software


Photo: Roy D. Ragsdale

OFF THE GRID: A do-it-yourself camera array lets you create your own street views of places where Google's cameras don't go.

BY Roy D. Ragsdale // October 2009

If you use Google Maps, you're probably familiar with its Street View feature, which shows actual ground-level photos of many cities around the world. Google creates the images by mounting special cameras on vehicles and driving them around.

Now wouldn't it be great if you could have your own Street View–like camera? You could hike a trail and later share the photos with friends. The photos would carry GPS tags, so you could display them on Google Earth and include annotations—good water here, poison ivy there. Realtors could display whole neighborhoods to potential clients. A country club could offer a virtual tour of its golf course. Architects could monitor progress at a construction site.

Last year, as part of a "disruptive technologies" course at the United States Military Academy, in West Point, N.Y., I set out to develop a prototype. I thought such a system would have many applications on the battlefield, for example, helping soldiers patrol dangerous routes. My system, which I call PhotoTrail, uses off-the-shelf components and open-source software. It consists of webcams, a GPS receiver, a notebook computer, and imaging software.

For the camera system, I chose the Microsoft LifeCam NX-6000, which is small and has UVC (USB video class) compatibility. It was also cheap: although it lists for US $79.95, I got it for $25 new. It captures megapixel-resolution video and shoots 8-megapixel still images.

The NX-6000 has a lens with a 71-degree field of view. In order to stitch images together for 360-degree panoramas, I bought eight units, for a total of 568 degrees of coverage, allowing a healthy image overlap. To connect all the cameras to the notebook, I used two D-Link USB hubs ($25 each), which ran unpowered.

For the GPS receiver, I chose the GlobalSat BU-353, a self-contained waterproof device with good signal reception and accuracy, which costs a mere $37. Attach it to a USB port and its coordinates appear in a log file, encoded as standard NMEA sentences.
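
Reading that log back is straightforward. A minimal sketch in Python might look like this, pulling latitude and longitude out of the receiver's $GPRMC sentences (the log file name is illustrative):

    # Minimal NMEA parser: extracts decimal-degree coordinates from the
    # $GPRMC sentences the BU-353 logs. "gps.log" is an illustrative name.
    def parse_gprmc(line):
        fields = line.split(",")
        if not line.startswith("$GPRMC") or fields[2] != "A":  # "A" = valid fix
            return None
        lat = float(fields[3][:2]) + float(fields[3][2:]) / 60.0   # ddmm.mmmm
        if fields[4] == "S":
            lat = -lat
        lon = float(fields[5][:3]) + float(fields[5][3:]) / 60.0   # dddmm.mmmm
        if fields[6] == "W":
            lon = -lon
        return lat, lon

    with open("gps.log") as log:
        fixes = [fix for fix in map(parse_gprmc, log) if fix]

Each $GPRMC sentence encodes position as degrees and decimal minutes, so the parser converts to plain decimal degrees, the form Google Earth expects.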

Construction was straightforward. On a flat octagonal heavy-cardboard base, I glued small posts for the cameras' clips to latch onto. I aligned each unit and then placed the USB hubs and the GPS receiver in the middle. I secured the cables with Velcro and sandwiched everything with another piece of cardboard. The whole thing is the size of a small pizza box and weighs less than 1 kilogram. Excluding the notebook (a 2-gigahertz machine with 512 megabytes of RAM running Ubuntu Linux), the hardware cost about $300.

To start capturing images, I installed a UVC driver and a device driver compatible with the camera array. For the capture itself, I used luvcview, a small open-source webcam program by Logitech. (Uvccapture, also by Logitech, lets you take still shots, but it was incompatible with this camera.)

I had set the camera array on video capture, so I needed to tweak luvcview's source code to get still images from the video feed. The tweaked code has each camera capture a few throwaway frames first, giving the exposure time to stabilize so that the saved images are in focus and have good contrast. I wrote a Python script to capture the eight 1280-by-1024 JPEG files. That capture takes about 8 seconds, and images captured within that window can be treated as a single cluster to be stitched together.
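
The script itself is little more than a loop over the eight device nodes. A rough sketch follows; the luvcview-still binary stands in for a tweaked build, and its -o output flag is an assumption, not a stock luvcview option:

    # Grab one still from each of the eight cameras. The tweaked binary
    # name and its -o output flag are hypothetical; -d and -s mirror
    # luvcview's stock device and size options.
    import subprocess

    for i in range(8):
        subprocess.call([
            "./luvcview-still",
            "-d", "/dev/video%d" % i,    # one device node per camera
            "-s", "1280x1024",           # still-image resolution
            "-o", "cam%d.jpg" % i,       # output file (hypothetical flag)
        ])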

Digital cameras normally embed data about each photograph, but because luvcview operates at the file level, these images have no such metadata. So I wrote a Python script to read the date and time each file was created. I then used ExifTool, a command-line image-metadata editor, to write the date and time into the file.
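
That step reduces to a few lines. Here is a minimal sketch, assuming exiftool is on the path and reusing the cam*.jpg names from the capture sketch above (the file's modification time stands in for its creation time, which Linux doesn't track):

    # Stamp each JPEG with the date and time it was written, using
    # ExifTool's standard DateTimeOriginal tag.
    import os
    import subprocess
    import time

    for i in range(8):
        name = "cam%d.jpg" % i
        stamp = time.strftime("%Y:%m:%d %H:%M:%S",
                              time.localtime(os.path.getmtime(name)))
        subprocess.call(["exiftool", "-DateTimeOriginal=%s" % stamp, name])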

The images also need to be GPS-tagged. Gpicsync, an open-source tool, can automatically get the latitude and longitude data from a GPS receiver's log and add the coordinates to the image's metadata field. Gpicsync also lets you transform this image set into a single file that you can view using Google Earth.
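
In practice that's two commands. The sketch below is hedged: GPicSync's console script and option names follow its documentation but may vary by release, and since it expects a GPX track, the raw NMEA log is converted first with GPSBabel. All paths are illustrative.

    # Convert the NMEA log to a GPX track, then geotag the images.
    # Option names for gpicsync.py are assumptions based on its docs.
    import subprocess

    subprocess.call(["gpsbabel", "-i", "nmea", "-f", "gps.log",
                     "-o", "gpx", "-F", "track.gpx"])
    subprocess.call(["python", "gpicsync.py",
                     "--directory=./shots",   # folder of captured JPEGs
                     "--gps=track.gpx"])      # track to interpolate from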

I used two tools to generate panoramas. The first, autopano-sift, identifies common features in the different images and aligns them along a horizon line. The second, hugin, uses those common features as control points to stitch the images into a single panorama. I again used gpicsync to GPS-tag the panorama and generate a Google Earth file. To view the panorama as a 360-degree image and zoom in and pan about the scene, you can use PTViewer.
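
Both tools can be driven from the command line, so the whole chain scripts neatly. A sketch, using autopano-sift-c (the console front end of the C port) and hugin's stock helpers; exact options vary by version, so treat these as illustrative:

    # Feature matching, optimization, remapping, and blending.
    import glob
    import subprocess

    shots = sorted(glob.glob("cam*.jpg"))

    # Find matching features and write a hugin project file
    subprocess.call(["autopano-sift-c", "pano.pto"] + shots)

    # Optimize positions and lens parameters, then remap and blend
    subprocess.call(["autooptimiser", "-a", "-l", "-s",
                     "-o", "opt.pto", "pano.pto"])
    subprocess.call(["nona", "-o", "remap", "opt.pto"])
    subprocess.call(["enblend", "-o", "panorama.tif"]
                    + sorted(glob.glob("remap*.tif")))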

On my underpowered computer, it took 15 minutes to stitch each panorama. That's a long time, but you can do the capture first and stitch later, or transmit the images to a more powerful server for remote processing.

With all this development work done, it was time to test the prototype. During a trip to the Boston area, I walked around the MIT campus holding the system above my head. Passersby didn't seem bothered; I guess students attached to weird contraptions are a common sight there. On Google Earth, I can retrace my route and see the surroundings in great detail.

JEEP CAM
Photo: Roy D. Ragsdale

JEEP CAM: Mounted on a moving vehicle, the camera array can capture images of streets and their surroundings, just as Google does to produce its Street View panoramas.

I also mounted the array on a Jeep and drove around West Point, capturing images at speeds of up to 100 kilometers per hour. I programmed it to take one set of images every 20 seconds. In an hour I had 300 megabytes of data from 180 sets of images. When the Jeep is moving, the images can't be clustered into panoramas. (Recall that it takes 8 seconds to grab a single set.) Still, the individual images are perfectly clear and on a par with those available on Google Street View.
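
The timer logic is simple. A minimal sketch, where capture_set() is a hypothetical wrapper around the eight-camera grab shown earlier:

    # Fire off one eight-image set every 20 seconds. The capture.py
    # script name is an assumption standing in for the capture sketch.
    import subprocess
    import time

    def capture_set(tag):
        subprocess.call(["python", "capture.py", tag])

    shot = 0
    while True:
        start = time.time()
        capture_set("set%04d" % shot)    # the grab itself takes ~8 seconds
        shot += 1
        # Sleep out the remainder of the 20-second period
        time.sleep(max(0.0, 20.0 - (time.time() - start)))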

I'm now working on some improvements. One idea is to replace the notebook with a smaller computer, such as one based on the Pico-ITX board, and shrink the camera system (the actual CCD, or charge-coupled device, and lens elements are no bigger than a fingernail). Eventually, you could build a camera system small enough to be integrated into a headband or hat.

The software could use some tweaks as well. I'm planning to write an Adobe Flash application to allow the user to see the panoramas as 360-degree images and be able to navigate from one panorama to another, just as in Google Street View.

The U.S. Army is currently evaluating my prototype. Eventually, a contractor could produce a field version for tests. Meanwhile, as this article goes to print, I'm preparing to travel far and wide. If I have space in my backpack, I'll have the camera capturing my journey, step by step.


Video: UCSD Robots Climb Stairs, Hop Like Pogo Stick

POSTED BY: Erico Guizzo // Thu, October 08, 2009

The robots developed by Thomas Bewley and his team at the Coordinated Robotics Lab at the University of California, San Diego, may look rather simple at first. But it turns out these 'bots are capable of impressive acrobatic maneuvers.

The Switchblade rover can balance on the tip of its treads and climb stairs by flipping itself end over end. iHop balances by using its wheels as gyros and can hop on its pogo stick of a leg.

Check out the video below by Spectrum's Josh Romero, showing how the UCSD engineers are giving their robots new ways to move.

Previously:

Boston Dynamics Demo Shows Robot Jumping Over Fence

Boston Dynamics to Develop Two-Legged Humanoid (And a New Hopping Robot in Their Spare Time)
