Tuesday, July 3, 2007

New Batteries Readied for GM's Electric Vehicle

The technologies behind the battery packs for the GM Volt are being tested and could be ready within a year.

This week, General Motors (GM) announced its selection of battery makers to develop and test battery packs for use in its proposed electric vehicles. The selected battery makers, Compact Power, based in Troy, MI, and Continental Automotive Systems, based in Germany, say that they've overcome the performance and cost limitations that have been an obstacle to electric vehicles in the past.

Over the next 12 months, researchers from Compact Power, Continental Automotive Systems, and GM will be testing the battery-pack designs in the lab and in vehicles to confirm that the packs can work for the life of the car--at least 10 years, says Denise Gray, director of hybrid energy storage devices at GM. Initial tests of individual battery cells, along with projections about the performance of battery packs that can contain hundreds of these cells, have Gray optimistic that her company will have proven packs by June 2008.

If the packs perform well, they are slated for use in the proposed Chevrolet Volt, an electric concept car announced by GM in January. The Volt marks a change in emphasis for GM, which previously focused on more distant plans to bring hydrogen-fuel-cell-powered cars to market. The Volt could be ready within a few years. Until now, however, it has been unclear who would develop its advanced batteries.

There are a number of design variations for the Volt, but they will all be propelled by electric motors. In one version, the battery pack, which can be recharged by plugging it in, will provide 40 miles of range. Then an onboard gasoline- or ethanol-powered generator will kick in to recharge the battery, providing an additional 600 miles of range. A proposed hydrogen-fuel-cell version would have a smaller battery pack and no onboard generator.

To make batteries that are up to GM's specifications, battery makers have had to redesign the chemistry of lithium-ion batteries, a type of battery widely used in mobile phones and laptops. While lithium-ion batteries are light and compact, the type of lithium-ion battery typically used in electronic devices relies heavily on cobalt, an expensive metal. The cobalt oxide used in one of the battery's electrodes isn't thermally stable, making the batteries prone to bursting into flame if damaged or poorly manufactured--a shortcoming that led to the massive recall of millions of laptop computer batteries last year. (See "Safer Lithium-Ion Batteries.") This could be a problem in vehicle battery packs, which would be much larger than those in portable electronics, so an accident could be more dangerous.

One alternative is to replace cobalt with manganese. Mohamed Alamgir, director of research at Compact Power, says that manganese-oxide electrodes are significantly more thermally stable than cobalt oxide, and less expensive. The battery maker has also developed a new material for keeping the electrodes separate: the material remains stable at higher temperatures than conventional materials, further guarding against the runaway heating that causes batteries to catch fire. What's more, the company makes the batteries in a flat shape rather than in the typical cylindrical design. Alamgir says this flat shape prevents heat from building up at the center of the cell, making it easier to keep the battery at an even, cool temperature.

See-Through Transistors

Transparent transistors made from nanowires could mean bright and clear OLED displays.

Organic light-emitting diode (OLED) displays are currently found on mobile phones and digital cameras. But in the future, manufacturers expect bigger, bendable, and completely transparent versions. They envision bright maps on visors and windshields, television screens built into eyeglasses, and roll-up, see-through computer screens. And although the OLEDs themselves can be transparent, to make a clear display, the transistors that control each display's OLED, or pixel, need to be transparent as well.

Researchers at Purdue University and Northwestern University have now made flexible, see-through transistors using zinc-oxide and indium-oxide nanowires. By contrast, the amorphous or polycrystalline silicon transistors used in existing displays are not transparent. The new transistors also perform better than their silicon counterparts and are easier to fabricate on flexible plastic.

The transistors could lead to brighter see-through OLED displays, says Purdue electrical- and computer-engineering professor David Janes, who led the work published in last week's Nature Nanotechnology. When conventional nontransparent transistor circuitry is placed around the OLED, it takes up space on the display that could otherwise be emitting light. But, says Janes, "you could put transparent transistors underneath or on top of the pixel," increasing the light-emitting area.

To make the transistors, Janes and his colleagues first deposit an indium-zinc-oxide gate electrode on glass or plastic. Then they put a nanowire solution on the surface. After finding a nanowire that is aligned appropriately, they deposit source and drain electrodes made from indium tin oxide on either side of the nanowire. Both indium zinc oxide and indium tin oxide are transparent materials.

The nanowire transistors have high electron mobility, which determines how fast the transistor can work and how much current it can carry. In fact, the mobility is a few hundred times better than it is for transistors made from amorphous silicon, which is widely used in the electronics for displays. Because of that, the transistors could be made smaller and faster, Janes says. More-compact transistors, he says, would mean an even larger pixel area. What's more, the nanowire transistors are much easier to make on plastic than silicon transistors are because they don't need high-temperature processing.
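The link between mobility and drive current can be illustrated with the textbook linear-region MOSFET model. In the sketch below, the capacitance, geometry, and voltage values are illustrative assumptions, not measurements from the Purdue-Northwestern devices:

```python
# Simple linear-region MOSFET model: drain current scales directly with
# electron mobility, so a higher-mobility transistor can be made smaller
# (lower W/L) while carrying the same current. All values are illustrative.

def drain_current(mobility, c_ox=1e-7, w_over_l=10.0, v_overdrive=5.0, v_ds=0.1):
    """I_D = mu * C_ox * (W/L) * (V_GS - V_T) * V_DS."""
    return mobility * c_ox * w_over_l * v_overdrive * v_ds

i_amorphous = drain_current(mobility=1.0)    # amorphous silicon: ~1 cm^2/Vs
i_nanowire = drain_current(mobility=300.0)   # "a few hundred times better"
print(i_nanowire / i_amorphous)              # ratio of drive currents: ~300
```

Because current scales linearly with mobility in this model, matching the drive current of an amorphous-silicon transistor takes a few hundred times less channel width, which is where the smaller-transistor, larger-pixel-area argument comes from.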

Research groups have recently made transparent transistors using thin films of zinc oxide or indium oxide, or using carbon nanotubes. (See "Cheap, Transparent, and Flexible Displays.") Each technology has its own issues. While the carbon-nanotube transistors have much better electron mobilities and are stronger than the new nanowire transistors, they aren't totally transparent because they need tiny metal contacts to connect the nanotubes to the electrodes. Thin-film transistors, on the other hand, are easier to fabricate on various surfaces but have much lower mobilities.

For the new transistors, "the performance in terms of the mobility, flexibility, and transparency is very impressive," says John Wager, an electrical-engineering and computer-science professor working in the area of transparent electronics at Oregon State University. Now the biggest remaining question, he says, is, "Can all of this be translated into real-world manufacturability?"

Right now, there is no method to control where nanowires get deposited on a surface or how they line up. "In experimental demonstrations, you throw down a couple thousand nanotubes and hope one aligns in the direction you want," Wager says. But randomly depositing nanowires on a surface will not work if one is to manufacture transistors for large displays.

Indeed, says Janes, "you have to have some way of putting the desired number of nanowires in the location you want." At this point, all three technologies to make transparent transistors--nanowires, thin films, and carbon nanotubes--have a fair shot at replacing silicon transistor technology for future transparent, flexible displays, Janes says.

According to John Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign, the ultimate commercial success of one of the three technologies will depend on how they measure up on many different factors: transparency, electrical performance, flexibility, and the ease and cost of manufacturing them. "It will be a good horse race to see which approach wins," Rogers says.

Nanotube Circuits Made Practical

Software can predict the best designs for fabricating logic gates from disorganized carbon nanotubes.

Many experts believe that carbon nanotubes could eventually replace silicon in microelectronics because of their potential for superior speed and reduced power consumption. And over the past several years, researchers have made transistors out of carbon nanotubes. However, it's still difficult to make reliable circuits out of them. One problem is that the nanotubes used for the transistors that make up the circuits tend to be fabricated in different directions, making it impossible to know which nanotubes form which transistor. And such a chaotic arrangement can lead to electrical malfunctions. But now researchers at Stanford University have written a program that finds a working circuit layout, no matter how disorganized or misaligned the nanotubes are.

"Just having a single [carbon nanotube] transistor isn't going to do it," says Subhasish Mitra, assistant professor of electrical engineering and computer science at Stanford. It may not be possible to synthesize precisely organized nanotubes anytime soon, so Mitra and his team, which includes Stanford electrical-engineering professor H.-S. Philip Wong, University of Southern California chemistry professor Chongwu Zhou, graduate student Nishant Patil, and Jie Deng of Stanford electrical engineering, have turned to algorithms to make sense out of the nanotube mess.

Circuits are made of logic gates that collectively generate instructions to, say, send a document to a printer or turn on a cell phone. Transistors are at the core of these logic gates, grouped in various ways to make up different types of gates. The researchers first approached the problem of making nanotube circuits by looking at a specific type of digital logic gate called a 2-input NAND gate, which contains two parallel transistors that can be messed up by misaligned nanotubes. If electrical current flows through one or both of the transistors, then the gate is "on." If electrical current flows through neither, then the gate is "off." But if the transistors are haphazardly arranged, these gates can't operate properly.
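The gate behavior described above, and the way a stray conducting path corrupts it, can be sketched as a toy model. The `stray_path` flag is an illustrative abstraction of a misaligned nanotube, not the Stanford group's actual circuit model:

```python
# Toy model of the 2-input NAND gate described above: the output is driven
# high ("on") when current flows through either of two parallel transistors,
# i.e. when at least one input is low. A misaligned nanotube is modeled as a
# stray conducting path that drives the output high regardless of the inputs.

def nand(a: bool, b: bool, stray_path: bool = False) -> bool:
    pull_up_conducts = (not a) or (not b)   # the two parallel transistors
    return pull_up_conducts or stray_path   # stray nanotube shorts output high

# Correct behavior: output is low only when both inputs are high.
for a in (False, True):
    for b in (False, True):
        print(a, b, nand(a, b))

# With a misaligned nanotube, the (True, True) case gives the wrong answer:
print(nand(True, True, stray_path=True))  # True, but a NAND should output False
```

In this picture, the algorithm's job is to guarantee by construction that no `stray_path` can exist in the fabricated layout.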

"Even if current flows through neither of the transistors, the gate may be turned on by misaligned nanotubes," says Mitra, "and if you try to complete a logic function, it could create a short or give a wrong response." What his algorithm does, he explains, is find a way to lay out a circuit that keeps these nanotubes from creating shorts or incorrect logic gates.

To do this, the algorithm uses a combination of the mathematics of complicated networks and Boolean algebra to define various regions on a batch of nanotubes. Depending on the desired function of the circuit, a design is proposed that lays out legal and illegal regions for the gates to exist. When nanotubes cross the boundary into a particular region, explains Mitra, they are chemically etched away so that they won't conduct an erroneous current to another transistor. The researchers have generalized the approach so that it works not just for NAND gates, but for any arbitrary logic gate as well.

It's important to explore approaches such as these, says Naresh Shanbhag, professor of electrical engineering at the University of Illinois, Urbana-Champaign. "Nanotubes are considered to be a promising post-silicon device," he says. "Nanotube circuits need such techniques ... so that a circuit exhibiting reliable behavior can be designed."

The Stanford algorithm does not solve all the problems with carbon-nanotube transistors, however. In a typical batch of nanotubes, as many as 30 percent act as metals, constantly conducting electricity (unlike semiconductors), which makes them useless as transistors. Another issue, says Mitra, is that it's difficult to control the density of carbon nanotubes from one batch to the next. And if there isn't a sufficient density, then there simply aren't enough nanotubes to make logic gates for circuits.

Progress is being made on all these fronts, says Tom Theis, director of physical sciences at IBM's TJ Watson Research Center, in Yorktown Heights, NY. Researchers at IBM, for example, are developing ways to selectively remove unwanted nanotubes--metal nanotubes where semiconductor nanotubes are wanted, for example--to help pave the way for reliable devices. While the Stanford work doesn't resolve all the challenges facing nanotube electronics, Theis says that it's "an interesting piece of the puzzle."

Plastics from Sugar

New catalysts convert glucose into a valuable chemical feedstock.

Researchers at Pacific Northwest National Laboratory (PNNL) have come up with an easy, inexpensive method to directly convert glucose into a chemical that can be used to make polyester and other plastics, industrial chemicals, and even fuels.

Petroleum is commonly used to make plastics and various chemical products, such as fertilizers and solvents. But researchers are trying to find a simple and affordable way to convert the sugars, including glucose and fructose, in plants into compounds that can replace petroleum feedstocks. If successful, such technology could use a chemical made from corn, potatoes, and even grass to substitute for ones derived from oil.

While previous studies have shown various ways to chemically convert fructose and glucose into plastic intermediates and even fuels, these conversion processes are complicated and costly, and are only efficient for converting fructose. Glucose is a much more common sugar because it can be derived directly from starch and cellulose, both plentiful in plant material. "The major bottleneck has been to utilize nature's most abundant building block, which is glucose," says Z. Conrad Zhang, a scientist at PNNL's Institute for Interfacial Catalysis, who led the work.

Zhang and his colleagues have developed a catalysis process to transform the sugars into an organic compound called hydroxymethylfurfural, or HMF, which can be converted into polyester and a diesel-like fuel. The technique, which the researchers describe in last week's Science, achieves HMF yields of almost 90 percent from fructose and 70 percent from glucose.

The yield from fructose is similar to that reported in the past by other research groups, Zhang says. But he claims that his process is simpler, involving fewer steps, which would make it more cost-effective. Previous methods use an acidic catalyst, and the chemical reactions take place in a water-based solution, producing high levels of impurities. Instead of an aqueous solution, the PNNL researchers use solvents known as ionic liquids, and they use metal chlorides as catalysts. The resulting chemical reaction gives nearly pure HMF, eliminating the cost of purification, Zhang says.

After trying various metal chlorides, the researchers found that chromium chloride is the best catalyst for glucose. It gets the most HMF from glucose and works at temperatures of 80 °C for fructose and 100 °C for glucose.

The ability to make HMF directly from glucose and in relatively high yields has caught the attention of some experts. The new technique is a step in the right direction, says Leo Manzer, president of Catalytic Insights, a consulting firm based in Wilmington, DE. "What folks have been looking for is a cheaper feedstock and a good way to make HMF," he says. "This is a very unique, remarkable system that [Zhang] has discovered."

The ultimate goal will be to build an economical reactor that can convert cellulosic biomass, such as grass and plant stalks, into HMF. Zhang says that his research team is already working on a method to utilize cellulose directly. However, he says, the first step will be to develop a commercial process for converting glucose into HMF, and that will take several years.

Storing Light

A new optical device could make high-speed computing and communications possible.

A microscopic device for storing light developed by researchers at Cornell University could help free up bottlenecks in optical communications and computing. This could potentially improve computer and communications speeds by an order of magnitude.

The new device relies on an optically controlled "gate" that can be opened and closed to trap and release light. Temporarily storing light pulses could make it possible to control the order in which bits of information are sent, as well as the timing, both of which are essential for routing communications via fiber optics. Today, such routing is done, for the most part, electronically, a slow and inefficient process that requires converting light pulses into electrons and back again. In computers, optical memory could also make possible optical communication between devices on computer chips.

Switching to optical routing has been a challenge because pulses of light, unlike electrons, are difficult to control. One way to slow down the pulses and control their movement would be to temporarily confine them to a small continuous loop. (See "Tiny Device Stores Light.") But the problem with this approach is getting the light in and out of such a trap, since any entry point will also serve as an exit that would allow light to escape. What's needed is a way to close the entryway once the light has entered, and to do so very quickly--in less time than it takes for the light to circle around the loop and escape. Later, when the light pulse is needed, the entryway could be opened again.

The Cornell researchers, led by Michal Lipson, a professor of electrical and computer engineering at the university, use a very fast, 1.5-picosecond pulse of light to open and close the entryway. The Cornell device includes two parallel silicon tracks, each 560 nanometers wide. Between these two tracks, and nearly touching them, are two silicon rings spaced a fraction of the width of a hair apart. To trap the light in these rings, the researchers turned to some of their earlier work, in which they found that the rings can be tuned to detour different colors by shining a brief pulse of light on them.
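A back-of-the-envelope calculation shows why the control pulse must act so quickly: the gate has to close in less time than light takes to circle the loop. The ring radius and group index below are assumed, illustrative values, not the Cornell device's actual dimensions:

```python
import math

# Round-trip time for light in a micrometer-scale silicon ring resonator.
# The 10-micrometer radius and the group index of 4 are assumed values for
# illustration; for rings of this scale the round trip is under a picosecond,
# comparable to the 1.5-picosecond control pulse.

C = 299_792_458.0    # speed of light in vacuum, m/s
radius_m = 10e-6     # assumed ring radius: 10 micrometers
group_index = 4.0    # assumed group index of a silicon waveguide

circumference = 2 * math.pi * radius_m
round_trip_s = circumference * group_index / C
print(f"round trip: {round_trip_s * 1e12:.2f} ps")
```

The sub-picosecond round trip is why electronic switching, which takes nanoseconds, is far too slow for this job and an optical control pulse is needed instead.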

Light of a certain color passes along the silicon track, takes a detour through one of the rings, and then rejoins the silicon track and continues on its way. However, if the rings are retuned to the same frequency the moment after a light pulse enters a ring, the light pulse will circulate between the rings in a continuous loop rather than rejoin the silicon track and escape. Tuning the rings to different frequencies again, such as by shining another pulse on one of the rings, allows the light to escape this circuit and continue on to its destination.

Work remains to be done before such a device will function in a commercial system. So far, the rings only capture part of a pulse of light. As a result, any information encoded in the shape of the overall pulse is lost. This can be solved by compressing the pulse and using a cascade of rings, says Mehmet Yanik, a professor of electrical engineering and computer science at MIT.

The other issue is that the length of time a light pulse can be stored is relatively short, Lipson says. If the light stays in the ring for too long, it will be too weak to use. Lipson says it might be possible to make up for light losses by amplifying the light signal after it leaves the rings to restore any lost power.

Other schemes for storing light have been demonstrated in the past, but these were impractical, requiring carefully controlled conditions, for example, or a large, complicated system. The new approach is an important step forward because it makes it possible to store light in ambient conditions and in a very small device, says Marin Soljacic, a professor of physics at MIT. Once you've done that, he says, "then it becomes interesting to industry."

Molten Mirrors

Liquid mirrors could enable more-powerful space telescopes.

Canadian researchers have developed a liquid mirror that could operate in a future telescope located on the moon, allowing researchers to peer back into the origins of the universe with extraordinary clarity. Telescopes relying on liquid mirrors can be hundreds of times more powerful than those with glass mirrors--for the same cost--and they should be easier to assemble in space.

A liquid-mirror telescope could reveal much fainter objects than the Hubble Telescope can, says Ermanno F. Borra, a physics professor at the Université Laval, in Quebec, who is leading the development of the new mirror. The power of a telescope is proportional to the surface area of its mirror. The James Webb telescope, which is scheduled to launch in 2013 and is far more powerful than the Hubble, has a diameter of about six meters. (See "Giant Mirror for a New Space Telescope.") A lunar liquid-mirror telescope could be as large as 20 to 100 meters, says Borra.
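Since a telescope's light-gathering power scales with the area of its mirror, a quick calculation shows what those diameters buy. Hubble's 2.4-meter primary mirror serves as the baseline here; the other diameters come from the article:

```python
# Light-gathering power scales with mirror area, i.e. with diameter squared.
# Comparing against Hubble's 2.4-meter primary mirror shows how a lunar
# liquid mirror could be "hundreds of times more powerful."

def power_ratio(d1_m: float, d2_m: float) -> float:
    """Ratio of the collecting areas of two circular mirrors."""
    return (d1_m / d2_m) ** 2

HUBBLE = 2.4
print(power_ratio(6.0, HUBBLE))    # James Webb vs. Hubble: ~6x
print(power_ratio(20.0, HUBBLE))   # 20 m lunar liquid mirror: ~69x
print(power_ratio(100.0, HUBBLE))  # 100 m lunar liquid mirror: ~1700x
```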

The liquid mirror, which was funded by NASA, consists of a pool of an ionic liquid coated with a film of silver. Such ionic liquids are carbon-containing salts that freeze only at very low temperatures and have very high viscosity. The salt used in the Laval mirror is liquid down to -150 °C and does not evaporate below room temperature, even in a vacuum--suggesting that it could withstand the harsh environment of the moon.

There are two limitations on cosmologists' observations of the early universe: "The objects you want to observe are incredibly distant and incredibly faint," says Borra. Telescopes in orbit like the Hubble, whose views are unobstructed by Earth's atmosphere, are limited in size and power; telescopes on Earth can be larger and more powerful but produce fuzzier images because of the atmosphere. Liquid mirrors couldn't go into orbit, but they could operate on the moon, which has no atmosphere.

Large, powerful liquid-mirror telescopes should be less complicated to take into space than their glass counterparts. "To put a glass mirror into a rocket, you have to break it into segments and then reassemble them," says Borra. "You can carry a liquid mirror in a jug." But no liquid mirror developed so far has been space-worthy. The University of British Columbia's Large Zenith Telescope uses a liquid mirror made of mercury to observe the early universe. Mercury solidifies at -40 °C--much warmer than the temperature on the moon.

Borra searched for a better liquid to make telescope mirrors and found that ionic liquids seemed promising. Unlike mercury, however, these molten salts are not reflective, and they require a metal coating to function as a mirror. "Depositing a layer of silver on liquid is like painting on air," says Borra. Laval graduate student Omar Seddiki adapted the technique used to coat glass mirrors: in a vacuum chamber, Borra and Seddiki run an electrical current between pieces of silver, which vaporize and form a thin coating over the liquid salt. The Laval researchers have so far made a small mirror, about two inches in diameter, to demonstrate the technology.

Making a large, perfectly smooth, concave optical surface out of glass is an involved and expensive process. Very tiny flaws in the glass can make a mirror unusable. The containers that hold liquid mirrors, says Borra, don't need precisely smooth surfaces and would be much cheaper to manufacture. Telescopes that rely on liquid mirrors would cost about 100 times less than glass-mirror telescopes of comparable size, says Borra.

"The forces of nature conspire to give the right shape," Borra says of liquid mirrors, which need only be rotated to form a flawless reflective surface. As the mirror spins, centrifugal force and gravity pull the liquid into a smooth parabola. Unlike with a glass mirror, if the liquid is perturbed, it can move right back into shape.
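This is standard physics: a liquid rotating at angular velocity ω settles into the paraboloid z = ω²r²/(2g), so the mirror's focal length, f = g/(2ω²), is set entirely by the spin rate. A short sketch, where the target focal length is an arbitrary illustrative choice:

```python
import math

# A liquid spinning at angular velocity w settles into the paraboloid
# z = w^2 r^2 / (2 g), which has focal length f = g / (2 w^2). So choosing
# a rotation rate chooses the focal length. The 10 m focal length below is
# an arbitrary illustrative target.

G = 9.81  # m/s^2, Earth surface gravity (on the moon it is about 1.62)

def focal_length(omega_rad_s: float, g: float = G) -> float:
    return g / (2 * omega_rad_s ** 2)

def rotation_period_for(f_m: float, g: float = G) -> float:
    """Rotation period (seconds) that gives the mirror focal length f_m."""
    omega = math.sqrt(g / (2 * f_m))
    return 2 * math.pi / omega

period = rotation_period_for(10.0)
print(f"one revolution every {period:.1f} s")
```

Because the shape depends only on gravity and rotation, no grinding or polishing is needed, which is the source of the cost advantage Borra describes.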

Borra expects that a liquid-mirror telescope would be assembled on the moon robotically. "A container holding the liquid will be sent to the moon and opened up like an umbrella," he says of an imagined future system. A liquid-mirror telescope could not be put into orbit because gravity is necessary to form the optical surface--and because it would spill.

"There's a tremendous amount of research to be done" to fine-tune the mirror, cautions Robin D. Rogers, a chemistry professor at the University of Alabama. He points out that there are hundreds of other ionic liquids that might have a better set of properties than those used in the Laval mirror.

"It may take 20 years before it's built," Borra says of his telescope. If it does come about, however, such a system could help cosmologists observe faint signals from when the universe was only a billion years old, "at that time when matter first assembled into stars, stars into galaxies," says Borra.

New Nano Weapon against Cancer

A new class of specially engineered nanoparticles that can target, image, and kill tumor cells could be a potent weapon against cancer. The new nanoengineered system, designed by physician and researcher James Baker and his colleagues at the University of Michigan, contains gold nanoparticles with branching polymers called dendrimers that sprout off the nanoparticle's surface.

The particles could be used to launch a multipronged attack against tumors. The dendrimer arms can carry a number of different molecules, including molecules that target cancer cells, fluorescent imaging agents, and drugs that slow down or kill the cells. Once enough of the nanoparticles have gathered inside cancer cells, researchers could kill the tumors by using lasers or infrared light to heat up the gold nestled inside the dendrimers. The nanoparticles could thus kill tumors "by combining chemical therapy and physical therapy," says University of Michigan researcher Xiangyang Shi, who was involved in the work.

In a paper published in the July issue of Small, the researchers demonstrated targeting and imaging cancer cells in a laboratory dish with the new gold-dendrimer hybrid nanoparticles. They hooked four or five folic-acid and fluorescent-dye molecules to each of the dendrimer branches. Then they processed the particles to remove any extra surface charge, which can make the otherwise safe polymers toxic.

Cancer cells have many more folic-acid receptors on their surface than healthy cells do. The folic acid-laden nanoparticles attached to human cancer cells, and the cells swallowed them, along with the folic acid. The particles, which are only three nanometers wide, easily passed through the cell membrane.

Using a microscope, the researchers could see the particles that had accumulated inside the cells because of the dye molecules. The gold in the particle enhanced the contrast enough for the researchers to see that the particles gathered inside the cells in tiny spherical structures called lysosomes. The goal, Baker says, is to make particles that target cancer genes inside cells. "You would bind this material to, let's say, an oncogene in a cell and knock out the oncogene without harming anything else," he says.

But first, the researchers will have to show that their material works inside animals. Many other research groups have developed multifunctional nanoparticles to seek out cancer cells and deliver imaging molecules and drugs. Hundreds of different materials are now undergoing tests--gold nanoparticles, silica nanoparticles, polymer shells, and gold-coated glass beads, to name a few. To work in humans, any cancer nanotherapy has to pass three major challenges: the nanoparticles should target only cancer cells; any nanoparticles that do not accumulate inside cells should get eliminated from the body; and the particles should not trigger the body's immune response.

The first goal--targeting tumors--has not been easy. "Specificity in drug delivery has been historically a very elusive goal," says Mauro Ferrari, chair of the biomedical-engineering department at the University of Texas Health Science Center, in Houston. Because the new particles have dendrimers on which the researchers can attach different targeting molecules, the technique might work. But the real test will be doing that inside the body. "Targeting cancer cells can be done in a million different ways in the lab," Ferrari says. "But translating the technique into animals and humans has proven to be very difficult."

Baker believes that the polymer dendrimers should do the trick. In a 2005 study, his research team showed that dendrimer molecules--without gold inside--that were loaded with folic acid and a cancer drug specifically targeted human tumors in mice, and slowed or killed the tumors more efficiently than the drug alone. The researchers are now testing the new gold-dendrimer hybrid particles in mice and expect the particles to be just as effective as the plain dendrimers.

The small size of the new particles should ensure that they get eliminated from the body. The particles are smaller than most other nanoparticle systems designed for cancer therapy, according to Baker, so they shouldn't accumulate in vital organs such as the kidney, liver, or lungs. But their small size might raise other safety issues. Inside animals or humans, the nanoparticles could get into other cells, says Raoul Kopelman, a professor at the University of Michigan's Center for Biological Nanotechnology, who was not involved in the new work. "If you deal with animals or humans, there are many kinds of cells," Kopelman says. "Will they get into other cells like immune cells? It needs to be tested."

The most important problem to solve, says Ferrari, is how to make nanoparticles that can stealthily avoid the body's natural defense mechanisms and get to tumors. "The body has so many booby traps that keep drugs and nanoparticles and everything that is foreign [from getting] into anything of significance in the body," he says. "If you can build on top of the dendrimer platform the ability to make it across biological barriers with great efficiency, then we have a great breakthrough."

Holographic Video for Your Home

In a dark room down the hall from Michael Bove's office at MIT's Media Lab is an apparatus with a white screen the size of a CD jewel box. When Bove sits in a chair opposite the machine and flips a switch, an image of a human rib cage seems to leap out a few inches beyond the screen. The image is produced by the Mark II, a 14-year-old holographic-video system that takes up most of the room. But its vividness is one of the inspirations for Bove's own project: to bring 3-D video displays to consumer and medical markets.

Bove's new system, which is called Mark III, is scheduled to be completed by the end of the summer. It can run on a standard PC with a graphics card and will be small enough to fit on top of a desk. (In contrast, the earlier Mark II required whole racks of computers.) Although Bove doesn't yet have any manufacturing partners, he predicts that a product based on Mark III's design would cost just a couple of hundred dollars to manufacture and could become standard in doctors' offices as a way to view magnetic resonance images and computed tomography scans in 3-D detail. It would also be within the price range of gamers and technology enthusiasts.

The development of holographic video at MIT dates back to the late 1980s, when researchers put together Mark I, a proof-of-concept system with a low-resolution display. But Mark I and Mark II were destined never to leave the lab. They were, Bove says, "loud, finicky, and a general pain in the neck to work with." And while numerous researchers in the United States, Japan, Korea, and the United Kingdom have invested time and money in holographic video, no one has yet found a way to build a system that is compact, inexpensive, and easy to use.

In 2004, Bove, who is the head of the Consumer Electronics Lab at MIT, started exploring the possibility of making holographic video practical for consumers. Thanks to ever-more-powerful PCs, small, ultrabright lasers, and other compact optoelectronic devices, he says, a consumer-friendly system is now within reach. And, he says, "there's more and more 3-D information that's kicking around" and could easily be projected holographically. Many video games, for example, are now based on sophisticated 3-D models of the virtual world--models that have to be flattened out for the 2-D screens of PCs or game machines. Similarly, the 3-D data in hospitals' large stores of magnetic resonance images and computed tomography scans has to be rendered as 2-D cross sections in order for doctors and patients to interpret it.

The Media Lab's video holograms appear to float above a piece of frosted glass. An electronic device behind the glass, called a light modulator, reproduces interference patterns that encode information about the pictured object. Laser light striking the modulator scatters just as it would if it were reflecting off the object at different angles.

A holographic video begins with a computed 3-D model of some moving object or scene. This model "can be thought of as having a whole lot of points on its surface at different depths that change over time," Bove says. To make that model holographic, a computer needs to figure out the intensity of the light that would be reflected from each point on the object to the point where the viewer's eyes will be. "You need to create a diffraction pattern that reconstructs all the different intensities for all the different angles," Bove says. He found that graphics chips in today's PCs are adept at doing this sort of 3-D rendering, computing the diffraction patterns, and combining them into a single video output.
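The per-point computation Bove describes can be sketched numerically. The following Python snippet is an illustrative scalar-wave toy model, not the lab's actual GPU code; the wavelength, geometry, and point positions are all made-up assumptions. It sums a spherical wave from each 3-D point and interferes it with a reference beam to get an intensity pattern of the kind a light modulator would display:

```python
import numpy as np

# Toy sketch of computing a hologram's interference pattern from a
# 3-D point cloud. Illustrative only: the real system computes these
# diffraction patterns on a GPU; all values here are arbitrary.

wavelength = 633e-9            # a red laser line, in meters (assumed)
k = 2 * np.pi / wavelength     # wavenumber

# Hologram plane: a small grid of sample positions at z = 0.
n = 256
xs = np.linspace(-1e-3, 1e-3, n)
X, Y = np.meshgrid(xs, xs)

# Object: a few surface points at different depths behind the plane.
points = [(-2e-4, 1e-4, 5e-3), (3e-4, -1e-4, 7e-3), (0.0, 0.0, 6e-3)]

field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r   # spherical wave from each point

reference = 1.0                           # plane reference wave, normal incidence
pattern = np.abs(field + reference) ** 2  # recorded interference intensity

print(pattern.shape)   # (256, 256)
```

Each object point contributes fringes to every sample of the pattern, which is why the per-frame workload maps so naturally onto a graphics chip's parallel pipelines.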

After the computation is complete, the output is fed to the light modulator. The introduction of a novel modulator, says Bove, is a primary reason he and his team have been able to shrink the holographic setup. The modulator is an inexpensive device adapted from use in telecommunications; Daniel Smalley, a graduate student in Bove's lab, modified its components, optimizing them to convert electrical signals into holographic patterns. Previous holographic systems used up to 18 separate modulators that were made of expensive materials and took up a lot of space. The new device, says Bove, is about half the size of a postage stamp. It's fast and can accommodate a large amount of data, allowing for high-resolution holograms.

When light from a laser or set of lasers enters the modulator, it's converted into a series of diffraction patterns that are shaped and focused by a number of lenses and mirrors before they reach the screen. One of the advantages of the new modulator, explains Bove, is that it allows the researchers to avoid using a bulky rotating mirror that previous setups required to keep a holographic scene from drifting horizontally. That mirror was "the bane of the two early generations of [holographic] video display," he says. Now that it's been eliminated, Quinn Smithwick, a postdoc in the lab, has figured out how to shorten and fold the system's optical path so that the necessary components fit into a space about half a meter long.

Currently, Mark III uses a gas laser housed in a foot-long tube. But in its final version, it will use a semiconductor laser as small as a postage stamp. Bove says the system will project a monochromatic video scene, about the size of a snapshot photo, that will have the resolution of a standard television image.

Aware that this sort of display wouldn't cut it in consumer applications, Bove and his team have laid out plans for the next generation of the system, Mark IV. Mark IV will use a set of powerful red, blue, and green semiconductor lasers to shine full-color videos onto a screen the size of a computer monitor. A prototype could be ready within the next couple of years.

The market, of course, will dictate how quickly, if ever, holographic video makes its way into living rooms or doctors' offices. If all goes well, however, it could give doctors a better window into the body, let scientists visualize data more accurately, and help gamers immerse themselves more deeply in virtual worlds.

The Big Picture, Made Easy

New software should make it even easier to annotate, manage, and share pictures online. The program, called Skitch, allows users to capture screen shots, sketch an idea, or muck around with photos with just a couple of clicks.

"It's a really fast way of showing someone something on your screen," says Keith Lang, a musician based in Melbourne, Australia, who is also the cofounder and interaction designer of Plasq, the company that developed the tool.

Some standard software tools, such as Microsoft's OneNote, already support these kinds of features, but a user has to know what she's doing, says Beryl Plimmer, a computer scientist who researches the usability of design tools at the University of Auckland, in New Zealand. "Joining annotations and pictures together to post into an e-mail or picture for the Web is probably beyond the average user's capability," she says.

"If they see something cool and want to put an image into their blog, it can be a long and time-consuming process," says Lang. And depending on what tools a person has, it might even take two or three applications to produce the desired result, he says.

For example, a user might have to enlist a program like Photoshop to crop an image, and a separate file transfer program to send it.

But with Skitch, it's much easier because the program combines all the tools a user needs in a single window interface, says Lang. "[Skitch] can do it in literally 10 seconds."

Skitch provides everything in a single window frame, so what you see is what you get. Cropping or resizing images is carried out just by dragging the corner of the image. Changing the file format is handled by a single on-screen button.

And once you have finished playing with your image, you can drag it wherever you want without having to save or format the file, because the file is created automatically. Clicking a single button immediately posts the image online on your MySkitch page, and from there you can copy it into a forum, a blog, or a MySpace page--again, with just a couple of clicks.

According to Lang, Skitch was originally built to meet the needs of Plasq--a distributed organization. With members as far afield as Norway, Australia, France, Switzerland, and the United States, he found that discussing ideas was proving to be a problem. "Necessity is the mother of invention, and no one needed it as much as us," says Lang.

"You've always been able to take snapshots," he says. "It's not that we are enabling people to do anything they couldn't do before. But being able to show someone what you're seeing and quickly annotate the image is a lot easier than having to type out a lengthy description."

Plimmer agrees. "Images and annotations are very powerful and natural ways to communicate," she says. "Making this functionality simple for the average user is a move forward."

The program was made available as a beta version last week after being tested by Plasq members and their friends over the past year.

The company isn't sure when it will launch the software commercially, and it's still sorting out pricing. "We're trying to make this as available as possible," says Lang.

One of Plasq's previous programs, Comic Life (which allows users to turn their photo albums into comic strips), is prebundled on Mac computers.

Skitch is likely to appeal to the increasing number of people using tablet-style, pen-based interfaces and touch screens, says Plimmer. Windows Vista now natively supports this kind of "inking" function, she says. But so far, Skitch runs only on the Mac. Lang says that the company is currently working on a PC version.

CE 6.0 MSDN Online Chat - May 30th 2007, 9am Pacific.

Do you have tough technical questions regarding Windows Embedded CE 6.0 for which you're seeking answers? Do you want to tap into the deep knowledge of the talented Microsoft Embedded Devices Group members? If so, please join us for a live chat and bring on the questions! Windows Embedded CE is the operating system that is powering the next generation of 32-bit, small-footprint and mobile devices. This chat will cover the tools and technologies used to develop devices using the Windows Embedded CE operating system.

Kiosk Mode for CE 6.0

Last August I put up a blog post that showed how to create a Kiosk-type device using CE 5.0, and I've had a number of requests to update the post to cover CE 6.0 - so here's the CE 6.0 Kiosk mode post.

First thing to cover: what is Kiosk Mode? Kiosk mode can be thought of as an operating system image that boots into a custom application that only gives the user the ability to do one specific task - examples are an ATM, an airport check-in terminal, that kind of thing.

The CE 6.0 operating system has a number of shells that ship with the development tools, including:

* 'Standard Shell' (start button, task bar, icons on the desktop)
* Command Shell (boots directly to a command prompt)
* Thin Client Shell (for Remote Desktop/Thin Client devices)

Here are the shell components exposed in the CE 6.0 catalog.


You may have noticed that the design templates in CE 6.0 add one of the shells, typically the Standard Shell. For our Kiosk mode device we will need to do a number of things:

1. Create an operating system that doesn't have a shell
2. Create our shell application
3. Create a 'launcher' for the shell application
4. Integrate everything into the final operating system image.


Home control manufacturer uses .NET Micro Framework to create product quickly

Windows Embedded Blog: "Home control manufacturer uses .NET Micro Framework to create product quickly

Dave Baker and John Holdstock from the Embedded Developer and Platform Evangelism (DPE) team have published an article on Embedded2Enterprise that shows how Leviton (a company that's been around since 1906!) uses the .NET Micro Framework to bring a solution to the Home Automation space."

Webinar: Windows Embedded Opportunities in Enterprise Organizations (June 28th 2007)

Devices based on the Windows Embedded platforms - Windows Embedded CE, Windows XP Embedded and Windows Embedded for Point of Service - are increasingly being adopted by enterprise organizations to help improve efficiencies while driving down costs. Learn how Windows Embedded can help enterprise organizations decrease costs and increase efficiencies by using devices such as hand-held inventory scanners, ordering terminals, point-of-sale systems, ATMs, and others. You will also learn about how Windows Embedded is a key driver for enterprise adoption of Microsoft server, services, and application products.

Windows Embedded Blog

Windows Embedded Blog: ".NET Micro Framework - book, video, webcast!

Here's a couple of updates on the .NET Micro Framework - interesting stuff!

Here’s a video interview from TechEd 2007 with Donald Thompson and Rob Miles, authors of the upcoming “Embedded Programming with the Microsoft® .NET Micro Framework” by Microsoft Press.

And tomorrow (June 7th) at noon (Pacific), Roger Wolff and Rob Miles are doing a webcast on developing with the .NET Micro Framework"

Update Rollup 1.0 for Windows XP Embedded (was Feature Pack 2008)

Update Rollup 1.0 for Windows XP Embedded enables OEM customers to use some of the latest Windows Vista components on the Windows XP Embedded platform, including Internet Explorer 7, Windows Media Player 11, User Mode Driver Framework, Remote Desktop Connection 6, and .NET Framework 3.0. It also includes hotfixes for Windows XP Embedded SP2 Feature Pack 2007. Update Rollup 1.0 for Windows XP Embedded is not a stand-alone product; it requires the developer to have the Windows® Embedded Studio tool suite installed.

Here are the two installer locations.

* Evaluation Product
* Full Product

The Update Rollup 1.0 contains the bits that you may have known as 'Windows XP Embedded Service Pack 2 Feature Pack 2008'.

CE4All Blog up and running...

Some of the developers at Intrinsyc have started a blog to provide hints and tips to developers building embedded systems based on Windows CE - the blog authors have extensive knowledge of Windows CE, having worked on devices based on the early releases of Windows CE through to the very latest editions.

Recent posts include information about Windows Mobile Device Center (WMDC) 6.1 [an update that supports embedded and mobile devices], Bluetooth, and OS builds.

Imagine Cup, Embedded Invitational: Finalists Announced!

The results are in: 15 teams are moving forward to the finals of the Imagine Cup Embedded Invitational - the list of finalists (and their reports) can be found here.

The world finals of the Imagine Cup are being held in South Korea in August (not too long to wait!) - I will of course update the blog with more information from the competition and the finals.

Mobile Development Handbook

Andy Wigley, Daniel Moth, and Peter Foot have been hard at work developing an MS Press book called Microsoft Mobile Development Handbook (link to Amazon) - the book (all 688 pages of it!) covers the following topics.

1. .NET Compact Framework—a Platform on the Move
2. Building a Microsoft Windows Forms GUI
3. Using SQL Server 2005 Compact Edition and Other Data Stores
4. Catching Errors, Testing, and Debugging
5. Understanding and Optimizing .NET Compact Framework Performance
6. Completing the Application: Packaging and Deployment
7. Exchanging Data with Backend Servers
8. Networking
9. Getting Connected
10. Security Programming for Mobile Applications
11. Threading
12. Graphics Programming
13. Direct3D Mobile
14. Interoperating with the Platform
15. Building Custom Controls
16. Internationalization
17. Developing with Windows Mobile
18. Introducing .NET Compact Framework Version 3.5 and Visual Studio 'Orcas'

If you are developing applications for Windows Mobile devices then you should consider getting a copy of the book."

Directly querying the Mantis DB for file resource ownership

A request came in to me the other day from Dave Baker on behalf of a customer, and I wanted to share the answer with everyone. The ask is to directly query the SQL DB for the owning component of a file resource instead of using Target Designer's Filter Manager.

The query below will return all components in the DB that own a given file; there's no reason why you can't pass in a list of files or use the query in an ASP page or VBScript. You'll need to query the MantisSQLDB; you can copy and paste this query from the web page directly into Query Analyzer.

SELECT ExtendedProperties.StringValue, ComponentObjects.DisplayName
FROM ExtendedProperties INNER JOIN ComponentObjects ON ComponentObjects.ComponentID = ExtendedProperties.OwnerID
WHERE (ExtendedProperties.StringValue LIKE 'iexplore.exe') AND (ExtendedProperties.ResourceTypeID = 1)
ORDER BY ExtendedProperties.StringValue

That query returns a list of components in my database; as you can see, it found all versions of components that own the iexplore.exe file resource. The 'Primitive: iexplorer.exe' component is from the Feature Pack 2007 refactoring work we did to reduce footprint.

Internet Explorer
Internet Explorer - Hotfix Q319182
Internet Explorer
Internet Explorer - Hotfix Q321232
Internet Explorer - Hotfix Q313675
Internet Explorer - Hotfix Q316059
Internet Explorer

You could do some more interesting things with the query, such as returning the version number of each component and ordering by version so you know which is the winning component to look for in Target Designer. Or, to find all file resources containing a string like 'wmi', change 'iexplore.exe' to '%wmi%' in the query above.
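As a sketch of driving the query from a script rather than Query Analyzer, here it is parameterized in Python against a throwaway in-memory SQLite database. The schema and rows are mocked up purely for illustration; the real MantisSQLDB is SQL Server and would be reached via ADO or ODBC instead:

```python
import sqlite3

# Illustrative sketch only: a mocked-up in-memory schema standing in
# for the real Mantis SQL Server database, to show how the same join
# can be parameterized from a script.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ComponentObjects (ComponentID INTEGER, DisplayName TEXT);
CREATE TABLE ExtendedProperties (OwnerID INTEGER, ResourceTypeID INTEGER,
                                 StringValue TEXT);
INSERT INTO ComponentObjects VALUES (1, 'Internet Explorer');
INSERT INTO ComponentObjects VALUES (2, 'Windows Media Player');
INSERT INTO ExtendedProperties VALUES (1, 1, 'iexplore.exe');
INSERT INTO ExtendedProperties VALUES (2, 1, 'wmplayer.exe');
""")

def owners(pattern):
    # Same join as above; the LIKE pattern is a parameter, so a caller
    # can pass 'iexplore.exe', '%wmi%', or loop over a list of files.
    return con.execute(
        "SELECT ExtendedProperties.StringValue, ComponentObjects.DisplayName "
        "FROM ExtendedProperties INNER JOIN ComponentObjects "
        "ON ComponentObjects.ComponentID = ExtendedProperties.OwnerID "
        "WHERE ExtendedProperties.StringValue LIKE ? "
        "AND ExtendedProperties.ResourceTypeID = 1 "
        "ORDER BY ExtendedProperties.StringValue", (pattern,)).fetchall()

print(owners('iexplore.exe'))   # [('iexplore.exe', 'Internet Explorer')]
```

Binding the file name as a parameter also avoids string-concatenation mistakes when the list of files comes from user input.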

Later we'll modify the query to return the owning components for registry keys.

Note: This method of information retrieval is not for everybody. My recommendation is to pick up a copy of Sean's XPe component developer tools; I use them as well.

First look at .NET Framework 3.0

Hello Friends

I am writing this article to give you almost all the information you need about WinFX (.NET Framework 3.0). I was inspired to write it because the information on the internet is scattered everywhere, and it was quite tough for me to collect the information and tools needed to get started with this new era of software development. I am also sure that moving from .NET 2.0 to WinFX won't be as easy as moving from 1.0 to 1.1 to 2.0 was. I have tried to write this article in my own words so that it is easy to understand and points developers in the proper direction. I am sure that once you have gone through this document, you will be much more comfortable learning and starting up with .NET 3.0.

Before you go through this article, I would like to ask you some of the questions that arose in my mind when I was exploring WinFX. Please keep these questions in mind as you proceed through the article.

1. The first thing I would like to ask is: how often have you felt that Windows application development and web development are totally different when it comes to creating the UI?

2. Have you ever thought that a Windows application could have the same look and feel we can achieve with HTML on websites? How much flexibility does your designer have to prepare the GUI for a Windows Form in Photoshop?

3. How many communication methods do you know, and how many of them have you implemented after analyzing the requirements of your software? You might know XML web services, Remoting, MSMQ, and so on, yet have hardly tried to implement all of them, or analyzed the requirements to choose the right one.

4. How many times have you had to deal with very complex business logic in your application, and how many times have you had to debug your code to ensure that the data it manipulates is correct? This could be the case when you are implementing an ERP system. Whenever you have implemented that kind of complex code, how does it look, and how many of your team members can understand it without your help?

5. How do you present your code to your customer, your team members, and your project leader with documents that are 99% identical to it, so all of them can easily understand whatever you have done? I am sure that has hardly ever been possible.

6. How many types of software application do you know? You might know standard Windows applications, websites, smart client applications, and Windows CE. Is there any new type of application that customers could, or already do, demand?

Once you find the answers to all these questions, you will find nothing but the new technology from Microsoft named WinFX, which is the code name of .NET Framework 3.0.

OK, so now we have a vision of why we need an evolution toward a more intelligent and automated development environment. Let me explain the WinFX technology very briefly, just as an introduction. I have given links where you can find detailed information on each particular topic.

You can find an interesting white paper on WinFX from


There are many basic needs that inspired Microsoft to release new .NET Frameworks. But let me point in one direction so that I can better explain Framework 3.0.

The .NET Framework runs as an intermediate layer between applications developed using .NET and the operating system. Code running on it is therefore considered managed code, which reduces the risk of an application crashing; the CLR takes responsibility for managing your code and the resources your application uses. And if we look to the future, .NET Framework 3.0 is waiting for us.
In brief, .NET 3.0 (WinFX) is a wrapper on top of .NET 2.0 with some unbelievable features that made me jump at my desk. The wrapper is divided into four pillars.

1. WPF (Windows Presentation Foundation). A new era for Windows desktop application UI design. The following are some of the main features:

· From now on, there are no WinForms controls as we used them up to .NET 2.0. Everything is XML: whatever you write for the design is XML. Microsoft's official name for it is XAML.

· Because of this XML, everything you can do in HTML pages is now possible in a Windows form design as well.

· Cascading styles, as on the web, are possible in Windows forms as well.

· Designers can draw the screen in Photoshop, just as they did for web pages, and then follow the same steps they used to convert that layout into HTML.

· Provides a new type of application known as a "WPF browser-based application", which can run in Internet Explorer just as ActiveX controls and applets do.

· 3D modeling.

· Animation, similar to an animated GIF file, is possible for a Windows form.

WPF architectures and functionality
WPF for beginners
Find the features in detail for WPF
Introduction Videos
First Hands on lab
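To give a flavor of what that markup looks like, here is a minimal hand-written XAML fragment (the class name and property values are illustrative, not from any real project) declaring a window and its controls entirely in XML:

```xml
<!-- Minimal illustrative XAML: the window layout is declared entirely
     in markup, with no WinForms-style layout code. -->
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        x:Class="Demo.MainWindow" Title="Hello WPF" Width="300" Height="200">
  <StackPanel>
    <TextBlock FontSize="20" Text="Everything here is XML" />
    <Button Content="Click me" Margin="8" />
  </StackPanel>
</Window>
```

Because the UI is just XML, a designer tool can generate or restyle it without touching the application code behind it.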

2. WCF (Windows Communication Foundation). A new era of communication methods for distributed application development and grid computing, with flexibility we never had before. This is what they are calling Service-Oriented Architecture.

· Web services, WSE, Remoting, MSMQ, and so on are now mapped into a single API.

· Easy to deploy and integrate with applications.

· XML web service access speed will no longer be an issue, with the help of WCF.

Learning WCF (Architecture and Foundation)
Introduction of Developing WCF

3. WF (Windows Workflow Foundation). This is a really cool feature that Microsoft ships with .NET 3.0. With the help of this foundation you can implement business logic of almost any complexity; a few weeks of work can be implemented in just a few hours. The features are:

· Draw the flow chart and the code is largely written for you; of course, you still have to supply some lines of code.

· Represent your logic in a document that is actually the code you implement.

· Far less syntax and commenting to maintain, because your logic is in the form of flow charts.

· Maintain the state of objects (application objects) with the help of a state chart. Draw the state diagram for your object and it will reach exactly the state you have drawn within the IDE.

· I could not believe my eyes when I saw the demonstration of parallel computing and multithreaded application development. Just draw the flow chart with some parallel flows in the sequence - that's all there is to it. You don't have to think about thread or multi-process management.

Understanding Components in WF

4. Windows CardSpace (formerly "InfoCard"): I didn't understand much about this one yet. If any of you understand it, please let me know.

HOW TO START WITH WinFX (Tools and Resources)
I researched and found the proper tools so that you don't have to research again to start with .NET 3.0. It was hard for me to organize all this information in the proper direction.

1. Visual Studio 2005 is the tool for WinFX development, so there is no new tool to worry about.

2. Download and install the .NET 3.0 framework. But with this installation alone, the Visual Studio 2005 IDE will not yet allow you to create WinFX applications.

3. For WPF development (Windows Form design based on WinFX), install the "Microsoft Visual Studio Extensions for WPF". Once you install this you can create WPF forms, WPF Windows applications, WPF pages, WPF controls, and WPF browser applications (this is not a web app). These extensions also allow you to create WCF services.

4. For workflow development, install the "Microsoft Visual Studio Extensions for Workflow Foundation". Once you install this you can create sequence workflows, sequence workflow services, and state diagrams.

That's all there is to do, and now you can become an expert in .NET 3.0.

Now, there are many tools available for XAML editing and WPF design, just as we have many tools for HTML. Although we can do everything for HTML in the VS IDE, designers still prefer Dreamweaver for complex HTML design. In the same way, even though the VS IDE provides everything for creating and editing XAML for Windows forms, the Microsoft IDE is not the proper tool for complex WinForm design such as 3D modeling. Because of that, Microsoft has released two tools.

Microsoft Expression Blend:
Using this tool you can create better XAML documents without writing any XAML syntax yourself, and create style sheets for Windows forms in the form of a resource dictionary. This is exactly what you can do in Dreamweaver for HTML documents and CSS files.

Microsoft Expression Design:
Using this tool you can create much more complex XAML documents, including 3D modeling. It can be compared to Adobe Photoshop, where you create a layout and export it as an HTML document.

There are also some third-party tools available for this design work, but I have only played with these two tools, and they really are amazing - tools we could not even have imagined before for web applications.

Some of the features in Microsoft Expression Design and Blend are only possible in a Windows Vista environment, such as glass windows and 3D animation of window controls. Microsoft is working to produce more and more controls for WinFX, and will mostly release them each year. A number of Microsoft partner companies have started to build third-party controls based on WinFX, but none of them has released even a beta version.
OK, so now I think you have a better start on Framework 3.0. So what are you waiting for?

Everything I have written here is what I have understood and learnt. I might be wrong in some cases, so if you find further or more correct information, please add your reply to this topic.

So, let's begin our new journey toward the future that we may have to face, especially those of us who work on Microsoft technology.

Learning Resource (contains videos, sample applications, code, and tricks for WPF, WCF, WF, and CardSpace) (a complete guide to WinFX, with almost everything you need to start with .NET 3.0)