Friday, January 15, 2010

From a cost/benefit perspective, the self-assembly described below requires polyethylene terephthalate (PET), a nasty plastic, and, I imagine, a fair amount of petroleum for both materials and manufacturing. The dilemma of renewable energy technology! Traditional ways alone will not transform the giant landfill we've created, but virtually every technology contributes to that landfill.

Steve

Self-assembling solar panels a step closer

January 14th, 2010 in Chemistry / Materials Science

The self assembly process made a device of 64,000 parts in 3 minutes. Image: Heiko O. Jacobs

(PhysOrg.com) -- Scientists Robert J. Knuesel and Heiko O. Jacobs of the University of Minnesota have developed a way to make tiny solar cells self-assemble.

The researchers had previously been unsuccessful in their attempts to make self-assembling electronic components. In large systems gravity can be used to drive self-assembly, and in nanoscale systems chemical processes can be used, but between the two scales, in the micrometer range, it is much more difficult.

To overcome the difficulties, Knuesel and Jacobs designed a flexible substrate of a thin layer of copper covered with polyethylene terephthalate (PET). Regular depressions the same size as the "chiplets" were etched into the PET layer, and then the sheet was dipped into a bath of molten solder, which coated the exposed copper in the etched depressions. Each chiplet consisted of a 20-60 µm silicon cube with one gold face. The silicon sides had a coating of hydrophobic (water-repelling) molecules, while the gold side had a hydrophilic (water-attracting) coating.

When the elements were placed in a container containing oil and water, they neatly arranged themselves in a sheet at the boundary between the liquids, with the gold side pointed down to the water layer. The substrate was then pulled slowly up through the boundary like a conveyor belt, and the elements neatly dropped in place in the depressions as the solder attracted the gold side. Accuracy was 98%. The assembly was covered with epoxy to keep the chiplets in place, and then a conducting electrode layer was added.

The device was able to assemble 62,000 elements, each of them thinner than a human hair, in only three minutes. The elimination of a dependency on gravity and sedimentation meant the chiplets could be reduced to below 100 micrometers in size. It was important to limit the assembly time to avoid oxidation of the surfaces, which would reduce surface energies and interfere with self-assembly. The water layer had to be acidic, at pH 2.0, and the temperature had to be kept at 95 °C to keep the solder molten.
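For a sense of scale, the article's own figures imply a throughput and defect count that are easy to check. A quick back-of-the-envelope sketch (the function name is mine, not from the paper):

```python
# Rough throughput and yield arithmetic for the self-assembly run
# described above: 62,000 chiplets placed in 3 minutes at 98% accuracy.

def assembly_stats(n_elements, minutes, accuracy):
    """Return (elements placed per second, expected misplaced elements)."""
    rate = n_elements / (minutes * 60)
    misplaced = n_elements * (1 - accuracy)
    return rate, misplaced

rate, misplaced = assembly_stats(62_000, 3, 0.98)
print(f"{rate:.0f} elements/s")       # ~344 elements per second
print(f"~{misplaced:.0f} misplaced")  # ~1,240 defects at 98% accuracy
```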

The self assembly process. Elements align at the oil/water boundary. The "blank" solar cell has pre-cut places for the elements and is dipped through the boundary. As it is slowly drawn upwards, the elements pop into place.

The researchers think they can adapt their method to smaller components and larger assembled devices, and it could be used to cheaply and quickly assemble all kinds of high-quality electronic components on a wide range of flexible or inflexible substrates including plastics, semiconductors and metals. The assemblages could find uses in numerous applications such as solar cells, video displays and tiny semiconductors.
The use of this method in solar cell production would reduce the cost considerably since less silicon is needed, and it should also be possible to assemble solar chiplets into transparent, flexible materials, which would extend their range of uses.

The paper is published in the Proceedings of the National Academy of Sciences (PNAS).

More information: Self-assembly of microscopic chiplets at a liquid-liquid-solid interface forming a flexible segmented monocrystalline solar cell, Robert J. Knuesel and Heiko O. Jacobs, PNAS, DOI:10.1073/pnas.0909482107

© 2010 PhysOrg.com

Thursday, December 24, 2009

Another example of what continues to attract research funding and investment dollars. As others have pointed out, after seeing how well top-down planning has solved so many transportation issues, I will continue to choose walking and biking as a bottom-up alternative to intelligent vehicles.

The holistic perspective, however, tends to undermine the legitimacy of the techno-fix approach with the fact that the most intelligent and healthy vehicles are self-powered!

Steve

How Intelligent Vehicles Will Increase the Capacity of Our Roads

As the percentage of computer-controlled cars on the road increases, traffic should flow smoothly for longer, says a new study.

Adaptive cruise control systems work by monitoring the road ahead with a radar- or laser-based device and then using the accelerator and brake to maintain a set distance from the vehicle ahead. (Some variants can also bring the car to a halt to avoid a potential collision.)

These devices have been available on upmarket cars for ten years or more and are now becoming increasingly common. If you drive regularly on freeways, the chances are you often encounter vehicles being driven by these devices, especially in Europe and Japan (there, the density of traffic means that ordinary cruise control has never caught on in the way it has in the U.S.).

So how does the presence of computer-controlled vehicles affect traffic dynamics? Today, Arne Kesting and pals at the Technical University of Dresden in Germany provide an answer of sorts using a model of traffic flow in which both human and computer-driven cars share the road.

They say that the presence of computer-driven cars increases the amount of traffic that can flow on a road before jamming occurs. And the more of these cars, the greater the capacity becomes. "1 percent more [computer-controlled] vehicles will lead to an increase of the capacities by about 0.3 percent," they say.

That's interesting but there have been other studies suggesting that computer-controlled cars can lead to greater congestion and it's not at all clear why Kesting and company's analysis is superior.

Either way, the argument is probably moot. Computer-controlled cars are just the first step in what many expect to be a revolution in car travel. The big increases in traffic capacity are likely to come when cars are able to communicate with each other. This should allow entire platoons of vehicles to travel as one unit, with gaps of just a few centimetres between cars and the vehicle in front communicating its intentions to all the others. Platooning should improve fuel efficiency, too.
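The study builds on the Intelligent Driver Model (IDM), a standard car-following rule. Here is a minimal sketch of the plain IDM, not the authors' enhanced variant, with typical illustrative parameter values rather than the ones used in the paper:

```python
import math

# Standard Intelligent Driver Model (IDM) acceleration rule.
# Parameter values are common illustrative choices, not the paper's.
V0 = 33.3   # desired speed (m/s), ~120 km/h
T  = 1.5    # desired time headway (s)
A  = 1.0    # maximum acceleration (m/s^2)
B  = 2.0    # comfortable deceleration (m/s^2)
S0 = 2.0    # minimum standstill gap (m)

def idm_acceleration(v, gap, dv):
    """IDM acceleration for speed v, gap to the leader, closing speed dv."""
    s_star = S0 + v * T + v * dv / (2 * math.sqrt(A * B))
    return A * (1 - (v / V0) ** 4 - (s_star / gap) ** 2)

# A car at 25 m/s only 30 m behind a leader at the same speed (dv = 0)
# is closer than its desired headway, so the model brakes:
a = idm_acceleration(25.0, 30.0, 0.0)
print(f"{a:.2f} m/s^2")
```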

Of course, that won't be possible until there is a critical mass of computer-controlled cars on the roads. Even then, an even bigger hurdle remains: creating the legal framework in which all this can happen. Imagine the insurance claims if one of these platoons were to crash.

The biggest challenge for the makers of cars that drive themselves is no longer technical but legal.

Ref: arxiv.org/abs/0912.3613: Enhanced Intelligent Driver Model to Access the Impact of Driving Strategies on Traffic Capacity

A sign [article] of what's trending in California. Reusing landfill makes sense [you may even be able to capture the landfill's methane]; the conversion of ag land into power-generation land is already creating battle lines as 'we' struggle to maintain our "way of life". In the words of Stewart Brand, "The climate change problem is principally an energy problem". Steve

Stanislaus County Board of Supervisors Watch (12/23/09)

last updated: December 22, 2009 10:53:17 PM
By unanimous vote, Stanislaus County supervisors Tuesday:
• Awarded a one-year exclusive negotiating period to Sol Orchard, which wants to build a solar energy farm at the former Greer Road Landfill. Sol's response beat those submitted by two other companies, including JKB Energy, which recently won a one-year negotiating period for a much larger solar farm next to the county's Fink Road landfill near Crows Landing. Sol has a pending partnership with Solar Power Partners, the nation's third-largest solar energy developer. Preliminary plans include a mounting system that doesn't penetrate soil. Sol will have a year to do environmental studies, make a power-selling deal and negotiate a lease with the county.
-- Garth Stapley

Monday, November 16, 2009

Vulcan Logic and the Missing Sink

By: David Richardson

The daily carbon dioxide emissions report probably doesn't come up very often at America's dining room tables, but Kevin Gurney and researchers from the Vulcan Project hope to soon see that change.

Gurney leads Purdue University's Vulcan Project, which has produced the nation's first county-by-county, hour-by-hour snapshot of CO2 emissions. With Vulcan — named after the Roman god of fire and funded by the federal government through the North American Carbon Program — it is possible to peer into your own backyard (or your neighbor's) to see how your local area is contributing to the emissions that ultimately lead to global warming.

A little goes a long way
The gases nitrogen and oxygen account for about 99 percent of the Earth's atmosphere. As one of the alphabet soup of trace gases naturally present in the air, CO2 accounts for a mere 0.038 percent of the atmosphere by volume; however, it has a disproportionate impact on warming.

As Gurney, the associate director of the Climate Change Research Center at Purdue University, explained, "It comes down to how we look at the atmosphere. If we look at it in terms of mass — just the weight of things — then CO2 is very small, but if we think about it in terms of the amount of infrared radiation it would trap, then CO2 would look huge."

Although the oceans remove one-third to one-half of the carbon dioxide produced each year, "the bad news is, their ability to do so appears to be diminishing," Gurney said. Meanwhile, the thousand-year atmospheric lifespan of CO2 means levels would remain elevated for centuries even if emissions ended immediately.

Nevertheless, given what is known about carbon dioxide sources and sinks (locales or processes that remove CO2 from the atmosphere), scientists say CO2 has not accumulated as fast as one might expect, leading to the ironic question, "Why is the atmosphere not more polluted?"

Gurney and Daniel Mendoza, a graduate student with the Vulcan Project, suspect the answer may lie in an unrecognized carbon sink somewhere on the planet, and in light of the diminishing CO2 capacity of the seas, they believe this "missing sink" will play an important role in the carbon policy equation.
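The "missing sink" is essentially a residual in the global carbon budget: whatever the measured atmospheric increase and the ocean uptake don't account for must be absorbed somewhere else. A sketch with illustrative round numbers (mine, not figures from the article):

```python
# Back-of-the-envelope carbon budget, the bookkeeping behind the
# "missing sink". All numbers are illustrative round values in
# gigatonnes of carbon per year (GtC/yr), not figures from the article.
fossil_emissions   = 8.0   # fossil fuel burning + cement production
land_use_emissions = 1.5   # deforestation and other land-use change
atmospheric_growth = 4.0   # measured CO2 increase in the air
ocean_uptake       = 2.5   # roughly a third of emissions, per Gurney

# Whatever the known terms do not account for must be going somewhere:
residual_sink = (fossil_emissions + land_use_emissions
                 - atmospheric_growth - ocean_uptake)
print(f"residual land sink: ~{residual_sink:.1f} GtC/yr")
```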

Though Gurney said CO2 concentration varies by no more than a few parts per million from pole to pole, he believes that mapping these barely perceptible peaks and valleys in CO2 levels could lead the way to the missing carbon sinks.

"In effect the weather map is a great analogy," he said. "Most people know that the air has temperature everywhere, but we know that in some places it's a bit higher and in some places it's a bit lower just like a weather map shows. CO2 is kind of like that. It's everywhere but it definitely has little bumps and valleys depending upon what's happening at the surface.

"The absence of the gas, where it would otherwise be expected, would indicate the possible location of the missing sinks."

To find the hidden sink, Gurney explained, research must first pin down where the observed carbon dioxide in the atmosphere originates. But that is not easy. CO2 mixes thoroughly with air as it travels over the planet. "You can find CO2 at the South Pole that was generated by industrial activity in the Northern Hemisphere."

Nonetheless, Gurney said, "We've known at the national level how much is coming from the U.S. as a whole," but prior to the Vulcan Project, the best estimates of emissions at the state level were based on fuel sales figures and shipping records.

Drilling down to the local level the math gets even fuzzier. Gurney says "the gold standard" of CO2 emissions estimates merely apportioned total discharges among jurisdictions on the basis of population density. Noting the obvious flaw in that approach, he points out that major CO2 emitters such as interstate highways and power plants, which alone account for 40 percent of U.S. emissions, are often a considerable distance from the population centers they serve.
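The flaw Gurney describes can be made concrete with a toy calculation; the counties and numbers below are hypothetical:

```python
# Toy illustration of the flaw in apportioning a known emissions total
# by population: point sources like power plants sited far from the
# people they serve get attributed to the wrong place.
total_emissions = 100.0  # arbitrary units
population = {"Metro County": 900_000, "Rural County": 100_000}
# A large power plant in the rural county emits 40% of the total:
point_sources = {"Metro County": 0.0, "Rural County": 40.0}

total_pop = sum(population.values())
by_population = {c: total_emissions * p / total_pop
                 for c, p in population.items()}
# Attribute point sources where they actually sit, spread the rest:
remainder = total_emissions - sum(point_sources.values())
by_location = {c: point_sources[c] + remainder * population[c] / total_pop
               for c in population}

print(by_population["Rural County"])  # 10.0 -- population method
print(by_location["Rural County"])    # 46.0 -- with the power plant
```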

To get a more accurate view of emissions sources, Gurney turned to the U.S. Environmental Protection Agency and its network of local air pollution monitors. "Emission monitoring has gone on for 40 years — since the 1960s — which is an amazing legacy of information and infrastructure built and perfected over four decades," he explained. "In fact it is so good that we almost take it for granted now."

Reverse engineering
Although the EPA monitoring system was never intended to record CO2 levels, Gurney said their reports can be reverse-engineered to quantify the CO2 component of the exhaust that produced the pollution, "provided you know the type of device and fuel."

Mendoza is using data recycled from transportation studies picked up by "the thin wires that you can see that run across the road."

"They're the Federal Highway Administration's weigh-in-motion sensors," he said. "They classify vehicles as either light duty or heavier duty" as they pass over the counter. Mendoza says from this and similar data, Vulcan can generate CO2 emissions figures along major roads.

Although Vulcan collects no original field data, Mendoza says an incredible amount of detail can be coaxed from archival sources. Looking back to the year 2002, he says, Vulcan provides a sector-by-sector breakdown of CO2 emissions across the power plant, residential, commercial and cement sectors. "We can bring it down to a 10-kilometer-by-10-kilometer grid and provide a temporal pattern for most sectors," he said.
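A gridded, sector-resolved, hourly inventory like the one Mendoza describes might be organized along these lines; the sector list, grid size and units here are assumptions for illustration, not Vulcan's actual data format:

```python
# Hypothetical shape of a Vulcan-style gridded inventory: hourly
# emissions per sector on a regular grid. Sector names, grid size,
# and units are illustrative assumptions only.
SECTORS = ("power", "residential", "commercial", "cement")
HOURS, ROWS, COLS = 24, 50, 50

# grid[sector][hour][row][col] -> emissions that hour (arbitrary units)
grid = {s: [[[0.0] * COLS for _ in range(ROWS)] for _ in range(HOURS)]
        for s in SECTORS}

# e.g. a power plant in one cell emitting steadily around the clock:
for h in range(HOURS):
    grid["power"][h][10][20] = 3.0

daily_total = sum(grid["power"][h][10][20] for h in range(HOURS))
print(daily_total)  # 72.0
```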

"We were a little bit floored by how much emissions actually come out of very unpopulated areas — a lot of that is due to electricity generation, which is such a great part of the economy here," Mendoza noted.

He said Vulcan delivered an additional surprise, revealing large cities to be "much more" CO2 efficient than smaller communities. Aside from the efficiency of urban mass transit, Mendoza believes lifestyle plays a role in the disparity. "Here in rural Indiana, we have large houses out in the middle of two acres, so the heating costs are much larger, but in the city you have the urban heat island effect keeping costs down."

The Vulcan Project Web page offers animated displays that show local CO2 emissions ramping up and down in response to heating and cooling needs, traffic patterns or other cycles of daily American life for the year 2002.

International appeal
Gurney said U.S. government officials asked him if Vulcan can be used for "verification purposes" for an eventual climate treaty. It's an idea he finds appealing: "It would certainly be better to have an independent scientific body perform that function than a government."

He believes the average citizen can benefit from a dialogue with Vulcan as well. (You can take a look immediately using Google Earth.)

"I was pretty amazed at how interested people were when we released the maps and the movies," he said. "It's always been difficult to communicate the essence of this problem because it's very abstract in a certain way. One of the things that Vulcan has done is start to make this problem a bit more real and at least make it recognizable to people's lives.

"It brings the discussion down to the human scale, to the scale people live, in their state, their county, in some cases, their city. It brings it into their living rooms."

Mendoza said his next tasks will include adding data from Mexico and Canada so the Vulcan grid covers the entire continent.

Gurney, who predicts Vulcan eventually will produce emissions forecasts three months in advance, said he has received funding for a global version of the project and is currently exploring partnership opportunities with candidates in Europe, Asia and South America.

Despite the detour down the public awareness road, Gurney says he has not wavered from his initial quest to discover the missing sink. However, with a better grasp on where the CO2 originates, he says it will take direct observation, on a global scale, to determine where it might be going, and that job he said, is best performed from space.


Monday, November 2, 2009

Intelligent Lighting Controls Deliver ROI in 3 Years

October 16, 2009

More building owners and managers are considering intelligent lighting systems to cut energy use because lighting, on average, accounts for approximately one-quarter of a building’s overall electricity use, rivaled only by HVAC and office equipment, according to Gary Meshberg, LEED AP, director of sales for Encelium Technologies.

State-of-the-art lighting systems reduce costs, demonstrate an overall commitment to being environmentally friendly, and contribute to higher building values, higher tenant retention rates and overall end-user satisfaction, said Meshberg.

As an example, Encelium Technologies’ Energy Control System (ECS) uses addressable networking technology in combination with advanced control hardware and software, which can be integrated with HVAC, security and irrigation systems.

ECS uses a universal I/O (input/output) module to connect to standard lighting components such as low-voltage non-dimming ballasts, and occupancy sensors or photo sensors for digital control capabilities. The system allows each person to control his or her own workspace light levels from their desktop computer, and provides facility managers with energy management capabilities.

Installation of an intelligent lighting system also provides a significant return on investment (ROI), said Meshberg. He cites the following example. ECS installations, which cost between $3.00 and $3.50 per square foot for existing space, are designed to reduce lighting-related energy costs by 50 to 75 percent, so the projected savings of 75 cents to $1.25 per square foot per year means that the installation cost is amortized in less than three years.
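Meshberg's payback arithmetic is easy to reproduce from the figures quoted above; note that "less than three years" holds at the favorable end of both ranges:

```python
# Payback period from the installed cost and annual savings figures
# quoted in the article ($/sq ft and $/sq ft/yr).
def payback_years(cost_per_sqft, savings_per_sqft_yr):
    return cost_per_sqft / savings_per_sqft_yr

best  = payback_years(3.00, 1.25)   # cheapest install, biggest savings
worst = payback_years(3.50, 0.75)   # priciest install, smallest savings
print(f"payback range: {best:.1f} to {worst:.1f} years")
```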

As an example, the Rogers Centre sports and entertainment complex in Toronto, with approximately 7,000 light fixtures, cut its energy use by 77 percent, or 3,731,000 kWh annually, with an ECS installation, according to Meshberg. The complex also achieved a 39 percent reduction in energy demand and a 76 percent reduction in energy costs. This translates into a cost savings of about $300,000 per year for the complex.

Meshberg also said ECS installations ease the way for buildings to earn the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification, contributing up to 18 points needed for certification, as well as facilitate a building’s compliance with ASHRAE 90.1, EPAct, Title 24 of the California Code of Regulations and various utility rebate programs.

Tuesday, October 6, 2009

Spinning flywheels said to make greener energy

Sep 21, 2009
Associated Press Online
By JAY LINDSAY

TYNGSBOROUGH, Mass., Sep. 21, 2009 (AP Online delivered by Newstex) -- Spinning flywheels have been used for centuries for jobs from making pottery to running steam engines. Now the ancient tool has been given a new job by a Massachusetts company: smooth out the electricity flow, and do it fast and clean.
Beacon Power's flywheels -- each weighing one ton, levitating in a sealed chamber and spinning up to 16,000 times per minute -- will make the electric grid more efficient and green, the company says. It's being given a chance to prove it: the U.S. Department of Energy has granted Beacon a $43 million conditional loan guarantee to construct a 20-megawatt flywheel plant in upstate New York.
"We are very excited about this technology and this company," said Matt Rogers, a senior adviser to the Secretary of Energy. "It's a lower (carbon dioxide) impact, much faster response for a growing market need, and so we get pretty excited about that."
Beacon's flywheel plant will act as a short-term energy storage system for New York's electrical distribution system, sucking excess energy off the grid when supply is high, storing it in the flywheels' spinning cores, then returning it when demand surges. The buffer protects against swings in electrical power frequency, which, in the worst cases, cause blackouts.
Such frequency regulation makes up just 1 percent of the total U.S. electricity market, but that's equal to more than $1 billion annually in revenues. The job is done now mainly by fossil-fuel powered generators that Rogers said are one-tenth the speed of flywheels and create double the carbon emissions.
Beacon said the carbon emissions saved over the 20-year life of a single 20-megawatt flywheel plant are equal to the carbon reduction achieved by planting 660,000 trees.
Flywheels also figure into the emerging renewable energy market, where intermittent energy sources such as wind and solar provide power at wildly varying intensities, depending on how long the breeze blows and sun shines. That increases the need for the faster frequency buffering, Rogers said.
Dan Rastler of the Electric Power Research Institute, an industry research group, added that if a carbon tax is passed by Congress, flywheels start looking a lot better than fossil-fuel powered alternatives.
Beacon's flywheels, massive carbon-fiber and fiberglass cylinders, have already been tested on a small scale in New York, California and the company's Tyngsborough offices. Chief executive officer Bill Capp hopes the Stephentown, N.Y., plant will be up and running by the end of 2010.
Flywheels are rotating discs or cylinders that store energy as motion, like the bicycle wheel that keeps rotating long after a pedal's been turned. That energy can be drawn off smoothly depending on the needs of the user, such as when the speed of a potter's wheel is adjusted to shape the clay as desired.
The basics of Beacon's flywheels seem simple enough as they spin silently in their chambers in a small facility outside Beacon's Tyngsborough plant. But the technological challenges to create them were immense and have cost Beacon $180 million, so far.
For instance, the one-ton flywheel had to be durable enough to spin smoothly at exceptionally high speeds. To avoid losing stored energy to friction, the flywheel levitates between magnets in a vacuum chamber.
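The energy one of these flywheels stores follows from basic rotational mechanics. The article gives the mass and top speed, but the radius below is an assumed value, so this is only an order-of-magnitude estimate:

```python
import math

# Order-of-magnitude energy stored in a Beacon-sized flywheel, treated
# as a solid cylinder. Mass and top speed come from the article; the
# radius is ASSUMED, so the result is only a rough estimate.
mass_kg = 1000.0   # "each weighing one ton" (taken as a metric tonne)
rpm = 16_000       # top spin rate from the article
radius_m = 0.5     # assumed radius, not given in the article

omega = rpm * 2 * math.pi / 60            # angular speed, rad/s
inertia = 0.5 * mass_kg * radius_m ** 2   # solid cylinder: I = m*r^2/2
energy_j = 0.5 * inertia * omega ** 2     # rotational KE = I*omega^2/2

print(f"~{energy_j / 3.6e6:.0f} kWh")     # joules -> kilowatt-hours
```

Tens of kilowatt-hours per unit is consistent with flywheels serving short bursts of frequency regulation rather than long-term storage, as Makarov notes below.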
"We've pretty much demonstrated that it works, it's just a question of scaling," Capp said. "The more we run, the more people get comfortable with us."
Beacon's flywheels are powered by the excess energy they take off the grid. When demand for electricity surges, the flywheels even things out and return the energy to the grid by slowing down.
Flywheels have some clear benefits in energy storage, including the durability to store and release power hundreds of thousands of times over a long, 20-year life, said Yuri Makarov, chief scientist in power systems at Pacific Northwest National Laboratory, which tested Beacon's system for the DOE. Chemical batteries being developed for the same job wear out after a couple thousand charge-and-discharge cycles.
Flywheels use less energy than fossil-fuel powered generators because they adjust more quickly to the ever-shifting demands of the electric grid by simply slowing down or spinning faster, Makarov said. Fossil-fuel generators are slower and less efficient as they constantly fire up and down.
The disadvantage of flywheels, Makarov said, is that they can only store a limited amount of energy for a limited amount of time. That can shut them out of numerous other services the grid demands -- and that other storage technologies can perform -- such as long-term power storage.
Regulations in many markets are also lagging. Beacon will bid against other power generators to provide frequency regulation, but in some markets, the bidding system doesn't even exist yet for energy storage.
Beacon's reward for taking on the technology is that it's the first flywheel company in the nation ready to provide utility-scale frequency regulation in the electric grid. Rogers said the New York project will help show whether the flywheels can do the job:
"If they're successful in New York, we'd expect this kind of technology to be picked up in many other markets around the country," he said.

Wednesday, September 16, 2009

Superefficient Solar from Nanotubes

Tuesday, September 15, 2009

Carbon nanotube photovoltaics can wring twice the charge from light.
By Katherine Bourzac

Today's solar cells lose much of the energy in light to heat. Now researchers at Cornell University have made a photovoltaic cell out of a single carbon nanotube that can take advantage of more of the energy in light than conventional photovoltaics. The tiny carbon tubes might eventually be used to make more-efficient next-generation solar cells.

"The main limiting factor in a solar cell is that when you absorb a high-energy photon, you lose energy to heat, and there's no way to recover it," says Matthew Beard, a senior scientist at the National Renewable Energy Laboratory in Golden, CO. Loss of energy to heat limits the efficiency of the best solar cells to about 33 percent. "The material that can convert at a much higher efficiency will be a game-changer," says Beard.

Researchers led by Paul McEuen, professor of physics at Cornell, began by putting a single nanotube in a circuit and giving it three electrical contacts called gates, one at each end and one underneath. They used the gates to apply a voltage across the nanotube, then illuminated it with light. When a photon hits the nanotube, it transfers some of its energy to an electron, which can then flow through the circuit off the nanotube. This one-photon, one-electron process is what normally happens in a solar cell. What's unusual about the nanotube cell, says McEuen, is what happens when you put in what he calls "a big photon" -- a photon whose energy is twice as big as the energy normally required to get an electron off the cell. In conventional cells, this is the energy that's lost as heat. In the nanotube device, it kicks a second electron into the circuit. The work was described last week in the journal Science.

There's evidence that another class of nanomaterials called quantum dots can also convert the energy of one photon into more than one electron. However, making operational quantum-dot cells that can do this has proved a major hurdle, says Beard, whose lab, led by Arthur Nozik, is working on the problem. One of the challenges with quantum-dot solar is that it's very difficult to get the freed electrons to leave the quantum dot and enter an external circuit. "The system is teasing you; you can't get those charge carriers out, so what's the point?" says Ji Ung Lee, professor of nanoscale engineering at the State University of New York in Albany. "McEuen's group has shown this in a system where you can get the extra carriers out."

McEuen cautions that his work on carbon nanotube photovoltaics is fundamental. "We've made the world's smallest solar cell, and that's not necessarily a good thing," he says. To take advantage of the nanotubes' superefficiency, researchers will first have to develop methods for making large arrays of the diodes. "We're not at a point where we can scale up carbon nanotubes, but that should be the ultimate goal," says Lee, who developed the first nanotube diodes while a researcher at General Electric.

It's not clear why the nanotube photovoltaic cell offers this two-for-one energy conversion. "It's mysterious to us," says McEuen. However, the most likely reason is that while conventional solar materials have only one energy level for electrons to move through, carbon nanotubes have several. And two of them just happen to be very well matched: one of the energy levels, or bandgaps, is twice as high as the other. "We may have gotten lucky, and it has very little to do with the fact that it's a carbon nanotube," says McEuen. This means, McEuen hopes, that even if it proves too challenging to make arrays of nanotube solar cells, materials scientists can look for pairs of materials that have these kinds of matched bandgaps, and layer them to make solar cells that do with two materials what the single nanotube cells can do. "Maybe the answer won't be in nanotubes, but in another pair of materials," McEuen says.
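The "two-for-one" conversion can be captured in a toy model: a photon frees one electron, or two once its energy reaches twice the bandgap. The 0.5 eV gap below is an assumed illustrative value, not the nanotube's measured bandgap:

```python
# Toy model of the carrier multiplication described above. Energies
# are in electron-volts; the 0.5 eV gap is an illustrative assumption.
def electrons_per_photon(photon_ev, gap_ev):
    if photon_ev < gap_ev:
        return 0          # below the gap: no absorption
    if photon_ev >= 2 * gap_ev:
        return 2          # "big photon": second electron instead of heat
    return 1              # ordinary one-photon, one-electron conversion

gap = 0.5
print(electrons_per_photon(0.3, gap))  # 0
print(electrons_per_photon(0.7, gap))  # 1
print(electrons_per_photon(1.2, gap))  # 2
```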

Copyright Technology Review 2009.