The Avatar movie poster... haha... if I could watch the 3D version, it would surely be great...
Assalamualaikum and greetings to anyone who has set foot, whether for the first time, the second, or many times over, on this blog of mine, named HIKAYAT NAWZA. To those who clicked through to this URL by accident via spam, I also thank you for allowing yourselves to be "spammed" all the way to this blog. All the notes and writings in this hikayat are my personal opinions and stories about me and ... If any copyrighted material appears, its source will be credited. Finally, happy blogging to all...
NEWS
Flexible Paper Speakers on the Way
Taiwanese research institute set on commercializing the technology, likely in car-audio systems first
Photo: Industrial Technology Research Institute
BY Yu-Tzu Chiu // September 2009
30 September 2009—Scientists in Taiwan say that industrial production of an ultrathin, flexible loudspeaker made mostly out of paper could begin by the end of 2010.
“Aside from use in family stereo or automobile hi-fi equipment, it can also be used in earphones or for industrial antinoise purposes,” says Johnsee Lee, president of Taiwan’s Industrial Technology Research Institute (ITRI), where the technology has been under development since 2006.
The device, named fleXpeaker, is basically a sandwich of paper and metal filled with an electroactive polymer that contracts and expands with an audio signal’s electric field.
“It’s soft [and can] easily fit in different curves,” says Ming-Daw Chen, division director of ITRI’s Electronics and Optoelectronics Research Laboratories. “Therefore, the product customization can be done in diverse fields, such as art for public facilities, interior design, ...costume accessories, and others.”
Chen says the flexible paper speaker consumes less electricity than a conventional speaker with the same sound performance. However, due to the thinness of the speaker, improving the performance of very low-pitched sounds with frequencies below 200 hertz remains a challenge.
The paper speakers have caught the attention of a number of firms outside Taiwan, according to Chen, but ITRI has decided to work with domestic car-audio system producers first. “We’ve been fixing the production process, and the commercialization is likely to be launched in late 2010,” he says, adding that a new audio amplifier module will also be developed for the thin-speaker system.
Meanwhile, ITRI is demonstrating the malleability of the high-fidelity speaker. Chen says a wall made up of fleXpeaker units will be displayed for the first time at the 2010 Taipei International Flora Expo.
Taiwan’s ITRI is not alone in inventing paper-thin flexible speakers. Last year, a research team led by Shoushan Fan and Kaili Jiang at Tsinghua University, in Beijing, announced the invention of a paper-thin speaker made from carbon nanotubes, or CNTs.
According to the Foxconn Enterprise Group, which backs the Tsinghua-Foxconn Nanotechnology Center, commercializing the nanotube speaker might take more than three years. “We’ve been working hard to design the production process with full confidence in the CNT speaker,” says Shaoming Fu, who’s in charge of Foxconn’s intellectual property management department. According to Fu, the team has made the needed nanotubes on 4-inch wafers, and each 4-inch wafer can produce 6 square meters of the thin film.
DEPARTMENTS
DIY Street-View Camera
Create Google Street View-like panoramas with cheap webcams and open-source software
Photo: Roy D. Ragsdale
OFF THE GRID: A do-it-yourself camera array lets you create your own street views of places where Google's cameras don't go.
BY Roy D. Ragsdale // October 2009
If you use Google Maps, you're probably familiar with its Street View feature, which shows actual ground-level photos of many cities around the world. Google creates the images by mounting special cameras on vehicles and driving them around.
Now wouldn't it be great if you could have your own Street View–like camera? You could hike a trail and later share the photos with friends. The photos would carry GPS tags, so you could display them on Google Earth and include annotations—good water here, poison ivy there. Realtors could display whole neighborhoods to potential clients. A country club could offer a virtual tour of its golf course. Architects could monitor progress at a construction site.
Last year, as part of a "disruptive technologies" course at the United States Military Academy, in West Point, N.Y., I set out to develop a prototype. I thought such a system would have many applications in the battlefield, for example, helping soldiers patrol dangerous routes. My system—I call it PhotoTrail—uses off-the-shelf components and open-source software. It consists of webcams, a GPS receiver, a notebook computer, and imaging software.
For the camera system, I chose the Microsoft LifeCam NX-6000, which is small and has UVC (USB video class) compatibility. It was also cheap (although it lists for US $79.95, I got it for $25 new). It has megapixel video resolution and shoots 8-megapixel still images.
The NX-6000 has a lens with a 71-degree field of view. In order to stitch images together for 360-degree panoramas, I bought eight units, for a total of 568 degrees of coverage, allowing a healthy image overlap. To connect all the cameras to the notebook, I used two D-Link USB hubs ($25 each), which ran unpowered.
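The coverage arithmetic above is easy to verify: eight 71-degree lenses give 568 degrees around a 360-degree circle, and the surplus is what gets shared out as overlap at the seams between adjacent cameras.

```python
# Quick check of the panorama coverage math: eight 71-degree cameras
# around a full circle, with the surplus split across the eight seams.
NUM_CAMERAS = 8
FOV_DEG = 71

total_coverage = NUM_CAMERAS * FOV_DEG        # 568 degrees of raw coverage
excess = total_coverage - 360                 # 208 degrees of surplus
overlap_per_seam = excess / NUM_CAMERAS       # 26 degrees per adjacent pair
```

Roughly 26 degrees of shared view at each seam is a comfortable margin for feature-based stitching tools to find matching points.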
For the GPS receiver, I chose the GlobalSat BU-353, a self-contained waterproof device with good signal reception and accuracy, which costs a mere $37. If you attach it to a USB port, the GPS coordinates will appear in a log file, using a standard GPS encoding scheme.
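The "standard GPS encoding scheme" in that log is NMEA 0183, where a fix arrives as a `$GPGGA` sentence with latitude in `ddmm.mmmm` and longitude in `dddmm.mmmm` form. As a sketch (not part of the author's PhotoTrail code), converting one sentence to decimal degrees looks like this:

```python
def parse_gpgga(sentence):
    """Extract (latitude, longitude) in signed decimal degrees from an
    NMEA $GPGGA sentence, as logged by receivers like the BU-353.

    NMEA packs latitude as ddmm.mmmm and longitude as dddmm.mmmm,
    so the minutes field must be divided by 60.
    """
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

# Example sentence (a commonly cited NMEA sample, not from the author's log):
lat, lon = parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```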
Construction was straightforward. On a flat octagonal heavy-cardboard base, I glued small posts for the cameras' clips to latch onto. I aligned each unit and then placed the USB hubs and the GPS receiver in the middle. I secured the cables with Velcro and sandwiched everything with another piece of cardboard. The whole thing's the size of a small pizza box, weighing less than 1 kilogram. Excluding the notebook (a 2-gigahertz machine with 512 megabytes of RAM running Ubuntu Linux), the hardware cost about $300.
To start capturing images, I installed a UVC driver and a device driver compatible with the camera array. For the capture itself, I used luvcview, a small open-source webcam program by Logitech. (Uvccapture, also by Logitech, lets you take still shots, but it was incompatible with this camera.)
I had set the camera array on video capture, so I needed to tweak luvcview's source code to get still images from the video feed. The tweaks call for the array to capture a few frames and then stabilize itself so that the images are in focus and have good light contrast. I wrote a Python script to capture the eight 1280-by-1024 JPEG files. That capture takes about 8 seconds. Images captured within that time frame can be considered a single cluster to be stitched together.
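The warm-up-then-keep pattern described above can be sketched in a few lines. Here `grab_frame` is a hypothetical stand-in for the tweaked luvcview capture call; the point is the stabilization logic, not any particular webcam API.

```python
# Minimal sketch of the capture logic: discard a few frames so
# auto-exposure and focus settle, then keep one still per camera.
WARMUP_FRAMES = 5  # frames thrown away while the image stabilizes

def capture_still(grab_frame, warmup=WARMUP_FRAMES):
    for _ in range(warmup):
        grab_frame()          # discard until exposure/focus settle
    return grab_frame()       # this frame becomes the saved still

def capture_cluster(cameras):
    """Grab one still from each camera; stills taken within the same
    ~8-second pass form a single cluster to be stitched together."""
    return [capture_still(cam) for cam in cameras]
```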
Digital cameras normally add data about the photograph, but because luvcview operates at the file level, these images have no such metadata. So I wrote a Python script to read the date and time the file was created. I then used Exiftool, a command-line image metadata editor, to put the date and time into the file.
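That step amounts to reading the file-system timestamp and formatting it the way EXIF expects before handing it to Exiftool. A small sketch, with the Exiftool invocation shown only as a comment:

```python
import os
import time

def exif_timestamp(path):
    """Return the file's modification time as an EXIF-style
    'YYYY:MM:DD HH:MM:SS' string, suitable for Exiftool."""
    return time.strftime("%Y:%m:%d %H:%M:%S",
                         time.localtime(os.path.getmtime(path)))

# Exiftool can then write the value into the JPEG, e.g.:
# stamp = exif_timestamp("img_0001.jpg")
# subprocess.run(["exiftool", f"-DateTimeOriginal={stamp}", "img_0001.jpg"])
```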
The images also need to be GPS-tagged. Gpicsync, an open-source tool, can automatically get the latitude and longitude data from a GPS receiver's log and add the coordinates to the image's metadata field. Gpicsync also lets you transform this image set into a single file that you can view using Google Earth.
I used two tools to generate panoramas. The first, autopano-sift, identifies common features in different images and aligns them along a horizon line. Another tool, hugin, uses those common elements to effectively stitch the images into a single panorama. I again used gpicsync to GPS-tag the panorama and generate a Google Earth file. To see the panorama as a 360-degree image and zoom in and pan about the scene, you can use PTViewer.
On my underpowered computer, it took 15 minutes to stitch each panorama. It's a long time. But you can do the capture first and the stitching later, or transmit the images to a more powerful server for remote processing.
With all this development work done, it was time to test the prototype. During a trip to the Boston area, I walked around the MIT campus holding the system above my head. Passersby didn't seem bothered. I guess students attached to weird contraptions are a common sight there. On Google Earth, I can retrace my route and see the surroundings with great detail [see photo, previous page].
JEEP CAM: Mounted on a moving vehicle, the camera array [detail] can capture images of streets and their surroundings—just as Google does to produce its Street View panoramas.
I also mounted the array on a Jeep [see photos above] and drove around West Point, capturing images while driving at up to 100 kilometers per hour. I programmed it to take one set of images every 20 seconds. In an hour I had 300 MB of data from 180 sets of images. When the Jeep is moving, the images can't be clustered into panoramas. (Recall that it takes 8 seconds to grab a single set.) Still, the individual images are perfectly clear and on a par with those available on Google Street View.
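The drive-capture numbers above check out with some quick arithmetic, which also shows why a moving vehicle defeats panorama clustering:

```python
# Sanity check of the vehicle-capture figures quoted above.
interval_s = 20
sets_per_hour = 3600 // interval_s           # 180 sets per hour
mb_per_set = 300 / sets_per_hour             # ~1.7 MB per 8-image set

# At 100 km/h, the vehicle covers over 200 m during the ~8 s needed
# to grab one full set, so the eight views no longer share a viewpoint.
speed_kmh = 100
meters_per_set = speed_kmh / 3.6 * 8
```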
I'm now working on some improvements. One idea is to replace the notebook with a smaller computer, such as one based on the Pico-ITX board, and shrink the camera system (the actual CCD, or charge-coupled device, and lens elements are no bigger than a fingernail). Eventually, you could build a camera system small enough to be integrated into a headband or hat.
The software could use some tweaks as well. I'm planning to write an Adobe Flash application to allow the user to see the panoramas as 360-degree images and be able to navigate from one panorama to another, just as in Google Street View.
The U.S. Army is currently evaluating my prototype. Eventually, a contractor could produce a field version for tests. Meanwhile, as this article goes to print, I'm preparing to travel far and wide. If I have space in my backpack, I'll have the camera capturing my journey, step by step.
Video: UCSD Robots Climb Stairs, Hop Like Pogo Stick
POSTED BY: Erico Guizzo // Thu, October 08, 2009
The robots developed by Thomas Bewley and his team at the Coordinated Robotics Lab at the University of California, San Diego, may look rather simple at first. But it turns out these 'bots are capable of impressive acrobatic maneuvers.
The Switchblade rover can balance on the tip of its treads and climb stairs by flipping itself end-over-end. iHop balances itself by using its wheels as gyros and it can hop on its pogo-stick of a leg.
Check out the video below by Spectrum's Josh Romero, showing how the UCSD engineers are giving their robots new ways to move.
Previously:
Boston Dynamics Demo Shows Robot Jumping Over Fence
Boston Dynamics to Develop Two-legged Humanoid (And a New Hopping Robot in Their Spare Time)
FEATURE
Augmented Reality in a Contact Lens
A new generation of contact lenses built with very small circuits and LEDs promises bionic eyesight
Image: Raygun Studio
September 2009
The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection.
But why stop there?
In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes.
These visions (if I may) might seem far-fetched, but a contact lens with simple built-in electronics is already within reach; in fact, my students and I are already producing such devices in small numbers in my laboratory at the University of Washington, in Seattle [see sidebar, "A Twinkle in the Eye"]. These lenses don’t give us the vision of an eagle or the benefit of running subtitles on our surroundings yet. But we have built a lens with one LED, which we’ve powered wirelessly with RF. What we’ve done so far barely hints at what will soon be possible with this technology.
Conventional contact lenses are polymers formed in specific shapes to correct faulty vision. To turn such a lens into a functional system, we integrate control circuits, communication circuits, and miniature antennas into the lens using custom-built optoelectronic components. Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented. In all likelihood, a separate, portable device will relay displayable information to the lens’s control circuit, which will operate the optoelectronics in the lens.
These lenses don’t need to be very complex to be useful. Even a lens with a single pixel could aid people with impaired hearing or be incorporated as an indicator into computer games. With more colors and resolution, the repertoire could be expanded to include displaying text, translating speech into captions in real time, or offering visual cues from a navigation system. With basic image processing and Internet access, a contact-lens display could unlock whole new worlds of visual information, unfettered by the constraints of a physical display.
Besides visual enhancement, noninvasive monitoring of the wearer’s biomarkers and health indicators could be a huge future market. We’ve built several simple sensors that can detect the concentration of a molecule, such as glucose. Sensors built onto lenses would let diabetic wearers keep tabs on blood-sugar levels without needing to prick a finger. The glucose detectors we’re evaluating now are a mere glimmer of what will be possible in the next 5 to 10 years. Contact lenses are worn daily by more than a hundred million people, and they are one of the only disposable, mass-market products that remain in contact, through fluids, with the interior of the body for an extended period of time. When you get a blood test, your doctor is probably measuring many of the same biomarkers that are found in the live cells on the surface of your eye—and in concentrations that correlate closely with the levels in your bloodstream. An appropriately configured contact lens could monitor cholesterol, sodium, and potassium levels, to name a few potential targets. Coupled with a wireless data transmitter, the lens could relay information to medics or nurses instantly, without needles or laboratory chemistry, and with a much lower chance of mix-ups.
Three fundamental challenges stand in the way of building a multipurpose contact lens. First, the processes for making many of the lens’s parts and subsystems are incompatible with one another and with the fragile polymer of the lens. To get around this problem, my colleagues and I make all our devices from scratch. To fabricate the components for silicon circuits and LEDs, we use high temperatures and corrosive chemicals, which means we can’t manufacture them directly onto a lens. That leads to the second challenge, which is that all the key components of the lens need to be miniaturized and integrated onto about 1.5 square centimeters of a flexible, transparent polymer. We haven’t fully solved that problem yet, but we have so far developed our own specialized assembly process, which enables us to integrate several different kinds of components onto a lens. Last but not least, the whole contraption needs to be completely safe for the eye. Take an LED, for example. Most red LEDs are made of aluminum gallium arsenide, which is toxic. So before an LED can go into the eye, it must be enveloped in a biocompatible substance.
So far, besides our glucose monitor, we’ve been able to batch-fabricate a few other nanoscale biosensors that respond to a target molecule with an electrical signal; we’ve also made several microscale components, including single-crystal silicon transistors, radio chips, antennas, diffusion resistors, LEDs, and silicon photodetectors. We’ve constructed all the micrometer-scale metal interconnects necessary to form a circuit on a contact lens. We’ve also shown that these microcomponents can be integrated through a self-assembly process onto other unconventional substrates, such as thin, flexible transparent plastics or glass. We’ve fabricated prototype lenses with an LED, a small radio chip, and an antenna, and we’ve transmitted energy to the lens wirelessly, lighting the LED. To demonstrate that the lenses can be safe, we encapsulated them in a biocompatible polymer and successfully tested them in trials with live rabbits.
Seeing the light—LED light—is a reasonable accomplishment. But seeing something useful through the lens is clearly the ultimate goal. Fortunately, the human eye is an extremely sensitive photodetector. At high noon on a cloudless day, lots of light streams through your pupil, and the world appears bright indeed. But the eye doesn’t need all that optical power—it can perceive images with only a few microwatts of optical power passing through its lens. An LCD computer screen is similarly wasteful. It sends out a lot of photons, but only a small fraction of them enter your eye and hit the retina to form an image. But when the display is directly over your cornea, every photon generated by the display helps form the image.
The beauty of this approach is obvious: With the light coming from a lens on your pupil rather than from an external source, you need much less power to form an image. But how to get light from a lens? We’ve considered two basic approaches. One option is to build into the lens a display based on an array of LED pixels; we call this an active display. An alternative is to use passive pixels that merely modulate incoming light rather than producing their own. Basically, they construct an image by changing their color and transparency in reaction to a light source. (They’re similar to LCDs, in which tiny liquid-crystal ”shutters” block or transmit white light through a red, green, or blue filter.) For passive pixels on a functional contact lens, the light source would be the environment. The colors wouldn’t be as precise as with a white-backlit LCD, but the images could be quite sharp and finely resolved.
We’ve mainly pursued the active approach and have produced lenses that can accommodate an 8-by-8 array of LEDs. For now, active pixels are easier to attach to lenses. But using passive pixels would significantly reduce the contact’s overall power needs—if we can figure out how to make the pixels smaller, higher in contrast, and capable of reacting quickly to external signals.
By now you’re probably wondering how a person wearing one of our contact lenses would be able to focus on an image generated on the surface of the eye. After all, a normal and healthy eye cannot focus on objects that are fewer than 10 centimeters from the corneal surface. The LEDs by themselves merely produce a fuzzy splotch of color in the wearer’s field of vision. Somehow the image must be pushed away from the cornea. One way to do that is to employ an array of even smaller lenses placed on the surface of the contact lens. Arrays of such microlenses have been used in the past to focus lasers and, in photolithography, to draw patterns of light on a photoresist. On a contact lens, each pixel or small group of pixels would be assigned to a microlens placed between the eye and the pixels. Spacing a pixel and a microlens 360 micrometers apart would be enough to push back the virtual image and let the eye focus on it easily. To the wearer, the image would seem to hang in space about half a meter away, depending on the microlens.
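Under a simple thin-lens model (an assumption for illustration, not the authors' exact optical design), the numbers in this paragraph hang together: a pixel placed just inside the microlens's focal length produces a magnified virtual image pushed out to about half a meter.

```python
# Thin-lens check: 1/f = 1/d_o + 1/d_i, with a virtual image taking
# a negative image distance in this sign convention.
d_o = 360e-6      # pixel-to-microlens spacing quoted in the text
d_i = -0.5        # virtual image ~0.5 m away, on the pixel's side

f = 1 / (1 / d_o + 1 / d_i)   # required microlens focal length
# f comes out just over 360 micrometers: the pixel sits barely inside
# the focal length, which is what throws the virtual image out to ~0.5 m.
```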
Another way to make sharp images is to use a scanning microlaser or an array of microlasers. Laser beams diverge much less than LED light does, so they would produce a sharper image. A kind of actuated mirror would scan the beams from a red, a green, and a blue laser to generate an image. The resolution of the image would be limited primarily by the narrowness of the beams, and the lasers would obviously have to be extremely small, which would be a substantial challenge. However, using lasers would ensure that the image is in focus at all times and eliminate the need for microlenses.
Whether we use LEDs or lasers for our display, the area available for optoelectronics on the surface of the contact is really small: roughly 1.2 millimeters in diameter. The display must also be semitransparent, so that wearers can still see their surroundings. Those are tough but not impossible requirements. The LED chips we’ve built so far are 300 µm in diameter, and the light-emitting zone on each chip is a 60-µm-wide ring with a radius of 112 µm. We’re trying to reduce that by an order of magnitude. Our goal is an array of 3600 10-µm-wide pixels spaced 10 µm apart.
One other difficulty in putting a display on the eye is keeping it from moving around relative to the pupil. Normal contact lenses that correct for astigmatism are weighted on the bottom to maintain a specific orientation, give or take a few degrees. I figure the same technique could keep a display from tilting (unless the wearer blinked too often!).
Like all mobile electronics, these lenses must be powered by suitable sources, but among the options, none are particularly attractive. The space constraints are acute. For example, batteries are hard to miniaturize to this extent, require recharging, and raise the specter of, say, lithium ions floating around in the eye after an accident. A better strategy is gathering power from the environment, by converting ambient vibrations into energy or by receiving solar or RF power. Most inertial power scavenging designs have unacceptably low power output, so we have focused on powering our lenses with solar or RF energy.
Let’s assume that 1 square centimeter of lens area is dedicated to power generation, and let’s say we devote the space to solar cells. Almost 300 microwatts of incoming power would be available indoors, with potentially much more available outdoors. At a conversion efficiency of 10 percent, that translates to 30 µW of electrical power to run all the subsystems of the contact lens indoors.
Collecting RF energy from a source in the user’s pocket would improve the numbers slightly. In this setup, the lens area would hold antennas rather than photovoltaic cells. The antennas’ output would be limited by the field strengths permitted at various frequencies. In the microwave bands between 1.5 gigahertz and 100 GHz, the exposure level considered safe for humans is 1 milliwatt per square centimeter. For our prototypes, we have fabricated the first generation of antennas that can transmit in the 900-megahertz to 6-GHz range, and we’re working on higher-efficiency versions. So from that one square centimeter of lens real estate, we should be able to extract at least 100 µW, depending on the efficiency of the antenna and the conversion circuit.
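The solar and RF power budgets quoted above are consistent with a roughly 10 percent end-to-end conversion efficiency in both cases, as a quick check shows:

```python
# Power-budget sanity check for 1 cm^2 of lens area.

# Solar: ~300 uW incident indoors at 10 percent conversion efficiency.
solar_in_uw = 300
solar_eff = 0.10
solar_out_uw = solar_in_uw * solar_eff        # ~30 uW electrical

# RF: safe exposure limit of 1 mW/cm^2 in the microwave bands; pulling
# at least 100 uW from that implies an antenna-plus-conversion chain
# of about 10 percent efficiency or better.
rf_limit_uw = 1000
rf_out_uw = 100
implied_rf_eff = rf_out_uw / rf_limit_uw
```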
Having made all these subsystems work, the final challenge is making them all fit on the same tiny polymer disc. Recall the pieces that we need to cram onto a lens: metal microstructures to form antennas; compound semiconductors to make optoelectronic devices; advanced complementary metal-oxide-semiconductor silicon circuits for low-power control and RF telecommunication; microelectromechanical system (MEMS) transducers and resonators to tune the frequencies of the RF communication; and surface sensors that are reactive with the biochemical environment.
The semiconductor fabrication processes we’d typically use to make most of these components won’t work because they are both thermally and chemically incompatible with the flexible polymer substrate of the contact lens. To get around this problem, we independently fabricate most of the microcomponents on silicon-on-insulator wafers, and we fabricate the LEDs and some of the biosensors on other substrates. Each part has metal interconnects and is etched into a unique shape. The end yield is a collection of powder-fine parts that we then embed in the lens.
We start by preparing the substrate that will hold the microcomponents, a 100-µm-thick slice of polyethylene terephthalate. The substrate has photolithographically defined metal interconnect lines and binding sites. These binding sites are tiny wells, about 10 µm deep, where electrical connections will be made between components and the template. At the bottom of each well is a minuscule pool of a low-melting-point alloy that will later join together two interconnects in what amounts to micrometer-scale soldering.
We then submerge the plastic lens substrate in a liquid medium and flow the collection of microcomponents over it. The binding sites are cut to match the geometries of the individual parts so that a triangular component finds a triangular well, a circular part falls into a circular well, and so on. When a piece falls into its complementary well, a small metal pad on the surface of the component comes in contact with the alloy at the bottom of the well, causing a capillary force that lodges the component in place. After all the parts have found their slots, we drop the temperature to solidify the alloy. This step locks in the mechanical and electrical contact between the components, the interconnects, and the substrate.
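As a toy illustration of the shape-keyed matching just described (symbolic shape names standing in for the real micromachined geometries, and a simple equality test standing in for capillary binding):

```python
import random

def assemble(parts, wells, seed=0):
    """Flow a shuffled pool of parts over the template; a part lodges in
    the first unfilled well whose shape matches it, mimicking the
    geometry-keyed capillary self-assembly."""
    rng = random.Random(seed)
    pool = list(parts)
    rng.shuffle(pool)
    filled = {}
    while pool and len(filled) < len(wells):
        part = pool.pop()
        for well_id, shape in wells.items():
            if well_id not in filled and shape == part:
                filled[well_id] = part   # alloy solidifies later, locking it in
                break
    return filled

# Hypothetical template: three wells cut to three distinct shapes.
wells = {"antenna": "triangle", "led": "circle", "radio": "square"}
parts = ["circle", "square", "triangle", "circle", "square"]
result = assemble(parts, wells)
```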
The next step is to ensure that all the potentially harmful components that we’ve just assembled are completely safe and comfortable to wear. The lenses we’ve been developing resemble existing gas-permeable contacts with small patches of a slightly less breathable material that wraps around the electronic components. We’ve been encapsulating the functional parts with poly(methyl methacrylate), the polymer used to make earlier generations of contact lenses. Then there’s the question of the interaction of heat and light with the eye. Not only must the system’s power consumption be very low for the sake of the energy budget, it must also avoid generating enough heat to damage the eye, so the temperature must remain below 45 °C. We have yet to investigate this concern fully, but our preliminary analyses suggest that heat shouldn’t be a big problem.
All the basic technologies needed to build functional contact lenses are in place. We’ve tested our first few prototypes on animals, proving that the platform can be safe. What we need to do now is show all the subsystems working together, shrink some of the components even more, and extend the RF power harvesting to higher efficiencies and to distances greater than the few centimeters we have now. We also need to build a companion device that would do all the necessary computing or image processing to truly prove that the system can form images on demand. We’re starting with a simple product, a contact lens with a single light source, and we aim to work up to more sophisticated lenses that can superimpose computer-generated high-resolution color graphics on a user’s real field of vision.
The true promise of this research is not just the actual system we end up making, whether it’s a display, a biosensor, or both. We already see a future in which the humble contact lens becomes a real platform, like the iPhone is today, with lots of developers contributing their ideas and inventions. As far as we’re concerned, the possibilities extend as far as the eye can see, and beyond.
The author would like to thank his past and present students and collaborators, especially Brian Otis, Desney Tan, and Tueng Shen, for their contributions to this research.
Who is the busiest of people?
The busiest person is the one who takes no care over his prayer times,
as though he possessed a kingdom like the kingdom of Prophet Sulaiman a.s.
Who has the sweetest smile?
The person with the sweetest smile is
the one who, when struck by misfortune, says "Inna lillahi wainna illaihi rajiuun,"
and then says, "Ya Rabbi, I am content with this decree of Yours," while wearing a smile.
Who is rich?
The rich person is the one who is grateful for what he has and
does not forget that the pleasures of this world are only temporary.
Who is poor?
The poor person is the one who is not content with the blessings he has and
forever piles up wealth.
Who is the loser?
The loser is the one who has reached middle age yet
still finds it hard to perform worship and good deeds.
Who is the most beautiful of people?
The most beautiful person is the one with good character.
Who has the most spacious house?
The person with the most spacious house is the one who dies bearing good deeds,
for his grave will be widened as far as the eye can see.
Who has the narrow, crushing house?
The person with the narrow house is the one who dies
bearing no good deeds, so his grave closes in upon him.
Who truly possesses a mind?
The person who possesses a mind is the one who will dwell in Paradise, for
he used his mind in this world to avoid the punishment of Hell... Wallahu a'lam.
Russia Reveals Vision for Manned Spaceflight
A new design for a crew vehicle, launcher, and spaceport
Photo: Anatoly Zak
BY Anatoly Zak // August 2009
20 August 2009—Russia unveiled an ambitious three-decade plan for a manned space program this week at the International Aviation and Space Salon, MAKS-2009, which opened Tuesday in the town of Zhukovsky, near Moscow. The Russian Federal Space Agency’s hope is that its plan will become the basis for a broad international effort to send humans to Mars and build a permanent base on the surface of the moon.
In contrast to NASA efforts, which would use the moon as a stepping-stone on the way to Mars, the latest Russian space doctrine aims for Mars first. To reach a Mars landing, RKK Energia, Russia’s premier developer of manned spacecraft, displayed a multitude of planned space vehicles, including a transport ship, a nuclear-powered space tug, and a planetary lander system. Together they would make up what the agency is calling the Interplanetary Expeditionary Complex.
“I believe that we should move [straight] to Mars…as the moon cannot be a goal by itself,” says Vitaly Lopota, the head of RKK Energia. “Nevertheless, all the infrastructure that we are proposing for the Interplanetary Expeditionary Complex could be used for operations in Earth orbit, but also for the lunar exploration, if such goals emerge,” Lopota told IEEE Spectrum.
Officials at the Russian space agency, Roskosmos, made no secret that these grand ambitions were not achievable within the current budget and capabilities of the Russian space program alone. Instead, they hoped to jump-start the idea of broader international cooperation, which could spread the cost of the manned space program.
“This proposal could serve as a basis for a large international venture, which could be even wider than the one we have had in the International Space Station, ISS, because the resources required are enormous,” says Alexei Krasnov, director of manned flight programs at Roskosmos.
“I hope, and this is my personal opinion, that in the context of the Augustine Commission work, and with the new NASA administration, we will ultimately come to the understanding that the development of international cooperation in space should move in this direction,” Krasnov says.
However, Krasnov stressed that partners should learn all lessons from the cooperation on the ISS: “We should admit that both good and bad came out of this project, and we should keep this in mind.” The ISS suffered numerous technical delays and cost overruns, which partners often blamed on one another, especially in the initial phase of the project.
The core of Russia’s latest space strategy rests on replacing its Soyuz transport ship with a larger next-generation vehicle and a brand-new rocket to launch it. Combined, they would be able to carry cosmonauts not only to Earth-orbit space stations but also to support missions to the moon and even expeditions to Mars. Roskosmos approved both projects for development in spring 2009, while preliminary studies into the program have been in progress since around 2006.
If it’s built, the new Russian manned transport will resemble NASA’s Orion capsule, whose development was officially launched in 2004. The American decision to replace the space shuttle with an expendable cone-shaped vehicle at least partially influenced the latest Russian concept. In 2006, Roskosmos rejected a proposal by RKK Energia to develop a minishuttle called Kliper.
Exhibit information at MAKS-2009 confirmed that the proposed Russian spacecraft would feature significant differences in its design and capabilities from the Orion. Unlike the Orion, which is designed to land under a parachute, the ship envisioned by the Russian engineers would use a brand-new (and still controversial) Buck Rogers–style rocket landing.
Lopota told IEEE Spectrum that after two years of study, RKK was still optimistic about rocket assistance as the primary method of landing. Also, a scale model presented at the show featured reusable thermal protection tiles, not unlike those used on NASA’s shuttle, instead of the ablative shields selected for Orion.
Along with the new spacecraft, TsSKB-Progress, the Samara, Russia–based developer of the Soyuz series of rockets, presented a new launcher for the manned space program, known as Rus-M. The company officially won the government contract for the development of the Rus-M in March.
Aleksandr Kirilin, the head of TsSKB-Progress, confirmed at MAKS-2009 that his company was on schedule to deliver a preliminary design of the vehicle to Roskosmos by August 2010. “On 10 July, we conducted the first Scientific and Technical Council [dedicated to the project], determined the members of the chief designer council [which would oversee the development], and formed working groups on various aspects of the work,” Kirilin says.
Kirilin reiterated that the new rocket was being built not only to carry the new spacecraft into orbit but also to create a basis for much more powerful versions of the launcher, which will have a maximum payload of 60 metric tons. Such a rocket would be enough to carry an unmanned lunar lander and an “escape” stage, sending it from Earth orbit toward the moon. There, the lander would link up with a manned transport vehicle launched by a second 60-metric-ton-class rocket. After the crew transfers into the lunar lander, it could land on the surface.
“We are still looking at the possibility of developing a 100-ton vehicle within the size constraints of the current design,” Kirilin says. Among the possible ways of upgrading the new rocket family, TsSKB-Progress studied integrating the powerful RD-0120 engine, inherited from the Soviet-era Energia rocket. Brand-new engines burning an exotic mix of three propellants, instead of the traditional two, are also on the table, Kirilin says.
The new Russian rocket would be much smaller than the Ares family of vehicles proposed by NASA for the new Constellation program. However, unlike the Ares, which is scheduled to fly from refurbished shuttle facilities at Cape Canaveral, the new Russian rocket fleet would need a whole new space center. In 2007, the Russian government decided to build a new launch site for its manned space program in the nation’s far east, not far from the Chinese border.
At the show, RKK Energia unveiled a proposed layout of the future launch facility, featuring a single launchpad and the support infrastructure for the manned space program, including cosmonaut housing and a training complex. If the new facility enters service as scheduled in 2018, it would end Russia’s manned operations in Baikonur, Kazakhstan, after almost six decades.
NEWS
Jordan's Radioactive Water Problem
A critical $1 billion engineering project in Jordan could be complicated by radium
Photo: Christoph Rosenberger/Getty Images
BY Nawza // August 2009
Jordan is in a tight spot. The virtually landlocked country is 80 percent desert, and the remaining 20 percent loses most of its rainfall to evaporation. The Dead Sea and the Jordan River, which feeds it, are drier than ever. With its population swelling with Iraqi migrants, water is Jordan’s foremost concern.
Most of the country receives water service once a week at best, and unexpected disruptions force the Ministry of Water and Irrigation to deliver water by truck. “When a country has its back against the wall, you take the least damaging solution,” says Munther Haddadin, the country’s former water minister.
The water ministry has decided that the best way to get water to the capital of Amman is to mine it, tapping into what the department says is some of the cleanest, purest water in the world. The water sits in the pores and holes of the Disi aquifer, an expanse of sandstone some 500 meters beneath the desert in southern Jordan and northwestern Saudi Arabia.
Having just secured the final US $200 million in loans needed from European development banks in May, the government will soon begin building a 325-kilometer pipeline across the country, from the heart of the desert to Amman. The plan is to pump 100 million cubic meters of water from 55 wells in Disi each year. The water will travel about 1300 meters uphill, requiring about 4 kilowatt-hours of energy to deliver each cubic meter, according to Othman Kurdi, the engineer in charge of the Disi Water Conveyance Project. At that rate, the power required to pump a year’s worth of water is equivalent to the output of a 45-megawatt power plant, or about 4 percent of the country’s electricity production.
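Those figures hang together, as a quick back-of-the-envelope check shows. The sketch below uses only the volume and energy intensity quoted above:

```python
# Back-of-the-envelope check of the article's pumping-energy figures.
# All input numbers are taken directly from the article.
annual_volume_m3 = 100e6        # 100 million cubic meters pumped per year
energy_per_m3_kwh = 4.0         # ~4 kWh to lift each cubic meter ~1300 m uphill

annual_energy_kwh = annual_volume_m3 * energy_per_m3_kwh   # 4e8 kWh per year
hours_per_year = 365 * 24                                  # 8760 hours

# Continuous power needed to deliver that much energy over a year:
average_power_mw = annual_energy_kwh / hours_per_year / 1000
print(f"average power: {average_power_mw:.0f} MW")
```

The result comes out to roughly 46 MW of continuous power, consistent with the 45-megawatt power plant cited in the article.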
“It’s not rocket science, but it’s a megaproject and a challenge in every sense,” Kurdi says. Indeed, the Disi Water Conveyance Project is riddled with complications. Recent research has revealed that the water may not be as pure as project planners had said, and that could make the scheme more complex and costly—and even take a toll on public health. Further, the pipeline project is just a stopgap measure that will leave Jordan permanently poorer in natural freshwater resources while the country pursues an even larger, costlier, and more energy-intensive solution that remains decades away.
By pumping the Disi aquifer, Jordan will be depleting its only strategic reserve of water, a move also being considered by other developing nations that are poor in both energy and water resources. Unlike rivers and lakes that refill with rainfall or melting snow, once this so-called fossil water is pumped, it leaves Jordan forever. Much of the water in the Disi aquifer essentially hasn’t moved since it began dripping into the ground during the Pleistocene epoch, some 30 000 years ago. “For developing countries in that region, they have no other choice—using this water is the only way to survive the water crisis,” says Avner Vengosh, a geochemistry professor at Duke University, in Durham, N.C.
Policymakers and water experts had been debating the merits of draining Disi through much of the project’s planning. But in February the debate suddenly shifted, when Vengosh published a report in the journal Environmental Science & Technology describing the Disi water as highly radioactive. He and his coauthors collected samples from 37 wells in the Disi area used mostly for agriculture and mining activities. They found that in all but one well, the concentrations of radium-226 and radium-228 isotopes exceeded the levels considered safe by the World Health Organization and even the more relaxed European Union and U.S. water standards. In some spots, the radiation levels were observed to be 30 times the WHO’s thresholds. Long-term exposure to radium is believed to increase the risk of developing bone cancer.
Illustration: Emily Cooper
JORDAN'S WATER CYCLE:
After the Disi pipeline is built, 100 million cubic meters of water will travel from the desert to two reservoirs each year. About 40 million cubic meters will go to the Abu Alanda reservoir (1) and mix with some surface water from Wala (2) and treated brackish water from the Zara Ma’en desalination plant (3). The other 60 million cubic meters will go to the Dabouq reservoir (4) and blend with Wala surface water and the output of the Zai Treatment Plant (5), which treats water from the King Abdullah Canal (6). Amman’s used water is sent to the As Samra Wastewater Treatment Plant (7) and later used for irrigation.
Vengosh theorizes that the isotopes entered the water from the surrounding sandstone through a physical process known as recoil. Thorium-232 and thorium-230, the parents of radium-228 and radium-226, respectively, exist naturally in the porous sandstone that holds the Disi water. When one of those thorium atoms radioactively decays to emit an alpha particle (two protons and two neutrons bound together), some energy is also released that causes the new atom to move in the opposite direction of the ejected particle. In some cases, the recoil can cause this new atom to get pushed out of its host material into a surrounding medium—in this case, from sandstone into water.
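The size of that recoil kick can be estimated from momentum conservation: the daughter radium atom recoils with kinetic energy equal to the alpha particle's energy scaled by the mass ratio. The sketch below assumes a typical alpha energy of about 4 MeV for thorium decay, a standard textbook value rather than a figure from the article:

```python
# Rough estimate of the recoil energy imparted to a radium atom when its
# thorium parent emits an alpha particle. Momentum conservation gives
#   E_recoil = E_alpha * (m_alpha / m_daughter).
# The ~4 MeV alpha energy is an assumed typical value for Th-232 decay,
# not a number reported in the article.
m_alpha = 4          # alpha-particle mass number
m_daughter = 228     # Ra-228 mass number (Th-232 minus the alpha)
e_alpha_mev = 4.0    # assumed kinetic energy of the emitted alpha, in MeV

e_recoil_kev = e_alpha_mev * (m_alpha / m_daughter) * 1000
print(f"recoil energy ~ {e_recoil_kev:.0f} keV")
```

The recoil works out to tens of keV, thousands of times the strength of a chemical bond (a few eV), which is why the newborn radium atom can be knocked clear of the sandstone grain into the surrounding water.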
The Ministry of Water and Irrigation contends that the radiation is not a problem. The Disi pipeline will send the water to two large reservoirs outside Amman, where the fossil water will be diluted with 105 million cubic meters of treated surface water, Amman’s current supply. According to Susan Kilani, a ministry official in charge of water quality, the quantity is “in excess of what we need for blending.”
Judging by Vengosh’s data, this doesn’t appear to be the case. Dilution would double the total volume of water, which means that the Disi water’s radiation can be no more than double the desired threshold in order to comply with international benchmarks. Very few of the wells tested by Vengosh and his colleagues met that criterion. Relying on blending would limit the amount of usable water in the aquifer and curtail the life span of the pipeline project—or expose the population of Amman to heightened levels of radium.
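The blending argument reduces to a simple mass balance. In the sketch below, the volumes are the article’s figures; the concentrations, expressed as multiples of the WHO radium limit, are illustrative placeholders, not measured values:

```python
# Mass balance behind the blending argument. Volumes are the article's
# figures; the concentrations (in multiples of the WHO radium limit) are
# illustrative placeholders, not measurements.
def blended_concentration(v1, c1, v2, c2):
    """Concentration after mixing volume v1 at concentration c1
    with volume v2 at concentration c2."""
    return (v1 * c1 + v2 * c2) / (v1 + v2)

v_disi, v_surface = 100e6, 105e6   # cubic meters per year, from the article

# If Disi water were exactly 2x the limit and the surface water carried no
# radium at all, the blend would land almost exactly at the limit:
print(blended_concentration(v_disi, 2.0, v_surface, 0.0))

# At 30x the limit (the worst wells Vengosh sampled), blending alone
# cannot bring the mix anywhere near compliance:
print(blended_concentration(v_disi, 30.0, v_surface, 0.0))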
Aqaba, Jordan’s small port city, has had an instructive experience with Disi water. The city has been pumping 15 million cubic meters of water from Disi each year to a collection reservoir outside Aqaba. Samples taken at the wells come out radioactive. But without any diluting or further treatment of the isotopes, the water is somehow pristine by the time it reaches the city, according to Aqaba’s utility.
Imad Zureikat, the Aqaba Water Co.’s general manager, maintains that Aqaba’s water is tested at several international laboratories and adheres to international standards. “My kids are drinking water from the tap,” Zureikat says. “We don’t play with people’s health. Everywhere you go in Jordan, you can drink from the tap.”
The locations and depth of the wells likely play a role in Aqaba’s good fortune [see “Jordan’s Red Sea Desalination Plan,” IEEE Spectrum, July 2009]. Near Aqaba, at the southern tip of Jordan, the aquifer lies closer to the surface. Vengosh’s samples from these shallower wells showed much more variability in their radium content. So cautiously choosing well sites may indeed bring the water ministry within sight of international standards for the Amman project.
Photo: Avner Vengosh
Elias Salameh, a professor at Jordan University, in Amman, has been monitoring the elevated alpha-particle activity in Disi water. His data is unpublished, because the presence of radium in groundwater is just not news, he says. In the United States, for example, New Jersey has relied on water from a radium-tainted aquifer for many years. He points out that should blending fail to neutralize the water, the ministry can treat it using reverse osmosis, by forcing the water through a membrane that prevents the passage of radium. Ion-exchange purification, in which the water is fed through columns of porous materials whose pores work as capture sites for the radium, is another option. “We can’t choose another land, another country, so we have to do our best,” Salameh says.
Of course, any purification treatment would come at a price. Haddadin, the former water minister, studied the cost of the Disi project, along with the collection and treatment of its wastewater, and concluded the cost of water service would reach 10 percent of the income of the average Amman resident. Adding a treatment facility, and the needed disposal of radioactive waste, would drive the cost up even more.
Whatever the price, Jordan will almost certainly find a way to use the Disi water. As Nizar Abu-Jaber, a geology professor at Yarmouk University, in Irbid, Jordan, sees it, the “availability of water takes precedence over radioactivity.” Water-stressed countries are frequently forced into difficult, or at least expensive, choices. In Libya, a massive project known as the Great Man-Made River delivers fossil water from the Sahara Desert and distributes it along entirely new waterways, while Saudi Arabia has plowed money into becoming the world’s largest producer of desalinated seawater.
Amman, for its part, is undergoing a population boom, spurred by an influx of an estimated million immigrants from Iraq. “The Disi project will not solve the problem of water in Jordan,” says Kurdi, the project’s leader. “It will just maintain the status quo.” With any luck, the project will at least enable the government of Jordan to increase each person’s share of the kingdom’s water, to meet what it considers to be the daily demand of 120 liters per person. (Abu Dhabi residents, by contrast, go through an average of 550 L per day.)
By early 2013, Kurdi estimates the water should be flowing to Amman. The aquifer will be pumped for about 25 years, he says, until a subsequent water project is in place. That project, known as the Red-Dead Canal, would send water north from the Red Sea, with half of the water replenishing the shrinking Dead Sea and the other half to be desalinated for consumption.
To get within sight of the ballyhooed canal, Jordan must first find a way to keep its population—and economy—humming along for the decades the country will need to build it. “We are going to implement this project, insha’Allah, God willing, as we say,” Kurdi says. “Because this is what we need.”