Global Warming in Bulgaria on Christmas Day

Global warming is making itself felt. The previous Christmas Day record in Veliko Tarnovo was set in 1956, long before the more recent effects of global warming started to kick in, and it was only 16.6 degrees Celsius.

There is good reason to reflect for a moment on global warming. Weather forecasts predict that as of Monday, all of Bulgaria will experience much lower temperatures, with typical winter weather setting in by the end of next week.

Instead of the traditional snowy Christmas, several Bulgarian cities set temperature records on December 25, 2010, Christmas Day. The highest air temperature in Bulgaria on Saturday was recorded in the northern city of Veliko Tarnovo – 20.1 degrees Celsius.

New Zealand's last survivor of the dinosaur age may become extinct due to global warming

It has survived ice ages, volcanic eruptions and the intrusion of humans on its South Pacific island home, but New Zealand's last survivor of the dinosaur age may become extinct due to global warming.

Armed with spiny scales from head to tail and covered in rough, grey skin that disguises it among the trees, the tuatara is one of the world's oldest living creatures.

But the lizard-like reptile faces an increasing risk of extinction from global warming because the surrounding temperature determines the sex of its unborn young while they are still in their eggs.

"They've certainly survived the climate changes in the past but most of them (past climate changes) have been at a more slower rate," said Jennifer Moore, a Victoria University researcher investigating the tuatara's sexual behaviour.

"So you wouldn't expect these guys to be able to adapt to a climate that's changing so rapidly."

The sex of a tuatara depends on the temperature of the soil where the eggs are laid. A cooler soil temperature produces females, while a warmer soil temperature results in male offspring.

Human activities are warming the planet at a dangerous rate

By Juliet Eilperin

An international panel of climate scientists said yesterday that there is an overwhelming probability that human activities are warming the planet at a dangerous rate, with consequences that could soon take decades or centuries to reverse.

The Intergovernmental Panel on Climate Change, made up of hundreds of scientists from 113 countries, said that based on new research over the last six years, it is 90 percent certain that human-generated greenhouse gases account for most of the global rise in temperatures over the past half-century.

Declaring that "warming of the climate system is unequivocal," the authors said in their "Summary for Policymakers" that even in the best-case scenario, temperatures are on track to cross a threshold to an unsustainable level. A rise of more than 3.6 degrees Fahrenheit above pre-industrial levels would cause global effects -- such as massive species extinctions and melting of ice sheets -- that could be irreversible within a human lifetime. Under the most conservative IPCC scenario, the increase will be 4.5 degrees by 2100.

Richard Somerville, a distinguished professor at the Scripps Institution of Oceanography and one of the lead authors, said the world would have to undertake "a really massive reduction in emissions," on the scale of 70 to 80 percent, to avert severe global warming.

The scientists wrote that it is "very likely" that hot days, heat waves and heavy precipitation will become more frequent in the years to come, and "likely" that future tropical hurricanes and typhoons will become more intense. Arctic sea ice will disappear "almost entirely" by the end of the century, they said, and snow cover will contract worldwide.

read more www.washingtonpost.com

Global warming and evolution

The battle over the science of global warming has long been a street fight between mainstream researchers and skeptics. But never have the scientists received such a deep wound as when, in late November, a large trove of e-mails and documents stolen from the Climatic Research Unit at Britain's University of East Anglia was released onto the Web.

In the ensuing "Climategate" scandal, scientists were accused of withholding information, suppressing dissent, manipulating data and more. But while the controversy has receded, it may have done lasting damage to science's reputation: Last month, a Washington Post-ABC News poll found that 40 percent of Americans distrust what scientists say about the environment, a considerable increase from April 2007. Meanwhile, public belief in the science of global warming is in decline.

The central lesson of Climategate is not that climate science is corrupt. The leaked e-mails do nothing to disprove the scientific consensus on global warming. Instead, the controversy highlights that in a world of blogs, cable news and talk radio, scientists are poorly equipped to communicate their knowledge and, especially, to respond when science comes under attack.

A few scientists answered the Climategate charges almost instantly. Michael Mann of Pennsylvania State University, whose e-mails were among those made public, made a number of television and radio appearances. A blog to which Mann contributes, RealClimate.org, also launched a quick response showing that the e-mails had been taken out of context. But they were largely alone. "I haven't had all that many other scientists helping in that effort," Mann told me recently.

This isn't a new problem. As far back as the late 1990s, before the news cycle hit such a frenetic pace, some science officials were lamenting that scientists had never been trained in how to talk to the public and were therefore hesitant to face the media.

"For 45 years or so, we didn't suggest that it was very important," Neal Lane, a former Clinton administration science adviser and Rice University physicist, told the authors of a landmark 1997 report on the gap between scientists and journalists. ". . . In fact, we said quite the other thing."

The scientist's job description had long been to conduct research and to teach, Lane noted; conveying findings to the public was largely left to science journalists. Unfortunately, despite a few innovations, that broad reality hasn't changed much in the past decade.

read more www.washingtonpost.com

Global warming in a snowstorm

The dead of winter – especially this winter with its massive snow storms in the eastern United States – is not the easiest time to make the case for global warming. Short-term weather events and long-range climate change are not the same thing, of course, but it’s hard to separate them in the public’s mind.

But it’s even harder these days to convincingly argue that climate change is a reality.

“Gloomy unemployment numbers, public frustration with Washington, attacks on climate science, and mobilized opposition to national climate legislation represent a ‘perfect storm’ of events that have lowered public concerns about global warming even among the alarmed,” says Anthony Leiserowitz, director of the Yale Project on Climate Change.

Yale and George Mason University recently polled on the question. Since 2008, the number of people who don't believe global warming is happening has more than doubled, to 16 percent. At the same time, the share of those "alarmed" at the prospect of climate change has dropped from 18 percent to just 10 percent, and the share of those who say they're "concerned" has dropped from 33 percent to 29 percent.

As often happens, shifting attitudes change the political dynamic.

At the environment web site Grist, Amanda Little writes, “Sen. James Inhofe (R) of Oklahoma, one of the world’s most vociferous climate skeptics, is practically giddy these days.”

In the wake of recent scandals and heightened criticism of climate scientists, Inhofe is leading the charge against the Intergovernmental Panel on Climate Change. The IPCC shared the Nobel Peace Prize in 2007 with former Vice President Al Gore.

“There is a crisis of confidence in the IPCC,” Inhofe said in a Senate speech earlier this month. “The challenges to the integrity and credibility of the IPCC merit a closer examination by the US Congress.”

By Brad Knickerbocker

Read more www.csmonitor.com

The science of climate change is now well established

The science of climate change is now well established. This is the result of painstaking work of over two decades carried out by thousands of scientists drawn from across the globe to assess every aspect of climate change for the benefit of humanity. The Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) was produced in the year 2007, and highlighted, on the basis of careful observations extending over a long period of time, that “warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice and rising global average sea level.” It was also stated clearly that most of the “observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations. It is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent (except Antarctica).”

It is important to remember that changes in climate are not limited merely to an increase in temperature, but in fact involve several impacts such as an increase in the intensity and frequency of floods, droughts, heat waves and extreme precipitation events. These changes therefore have serious implications for the availability of water in several parts of the world and could have negative impacts on the yields of several crops.

read more beta.thehindu.com

Climate chief was told of false glacier claims

Ben Webster, Environment Editor

The chairman of the leading climate change watchdog was informed that claims about melting Himalayan glaciers were false before the Copenhagen summit, The Times has learnt.

Rajendra Pachauri was told that the Intergovernmental Panel on Climate Change assessment that the glaciers would disappear by 2035 was wrong, but he waited two months to correct it. He failed to act despite learning that the claim had been refuted by several leading glaciologists.

The IPCC’s report underpinned the proposals at Copenhagen for drastic cuts in global emissions.

Dr Pachauri, who played a leading role at the summit, corrected the error last week after coming under media pressure. He told The Times on January 22 that he had only known about the error for a few days. He said: “I became aware of this when it was reported in the media about ten days ago. Before that, it was really not made known. Nobody brought it to my attention. There were statements, but we never looked at this 2035 number.”

Asked whether he had deliberately kept silent about the error to avoid embarrassment at Copenhagen, he said: “That’s ridiculous. It never came to my attention before the Copenhagen summit. It wasn’t in the public sphere.”

However, a prominent science journalist said that he had asked Dr Pachauri about the 2035 error last November. Pallava Bagla, who writes for the journal Science, said that Dr Pachauri had replied: “I don't have anything to add on glaciers.”

The Himalayan glaciers are so thick and at such high altitude that most glaciologists believe they would take several hundred years to melt at the present rate. Some are growing and many show little sign of change.

Dr Pachauri had previously dismissed a report by the Indian Government which said that glaciers might not be melting as much as had been feared. He described the report, which did not mention the 2035 error, as “voodoo science”.

Mr Bagla said he had informed Dr Pachauri that Graham Cogley, a professor at Trent University in Ontario and a leading glaciologist, had dismissed the 2035 date as being wrong by at least 300 years. Professor Cogley believed the IPCC had misread the date in a 1996 report which said the glaciers could melt significantly by 2350.

Mr Bagla interviewed Dr Pachauri again this week for Science and asked him why he had decided to overlook the error before the Copenhagen summit. In the taped interview, Mr Bagla asked: “I pointed it out [the error] to you in several e-mails, several discussions, yet you decided to overlook it. Was that so that you did not want to destabilise what was happening in Copenhagen?”

Dr Pachauri replied: “Not at all, not at all. As it happens, we were all terribly preoccupied with a lot of events. We were working round the clock with several things that had to be done in Copenhagen. It was only when the story broke, I think in December, we decided to, well, early this month — as a matter of fact, I can give you the exact dates — early in January that we decided to go into it and we moved very fast.

“And within three or four days, we were able to come up with a clear and a very honest and objective assessment of what had happened. So I think this presumption on your part or on the part of any others is totally wrong. We are certainly never — and I can say this categorically — ever going to do anything other than what is truthful and what upholds the veracity of science.”

Dr Pachauri has also been accused of using the error to win grants worth hundreds of thousands of pounds.

source www.timesonline.co.uk

UN wrongly linked global warming to natural disasters

By Jonathan Leake, Science and Environment Editor

THE United Nations climate science panel faces new controversy for wrongly linking global warming to an increase in the number and severity of natural disasters such as hurricanes and floods.

It based the claims on an unpublished report that had not been subjected to routine scientific scrutiny — and ignored warnings from scientific advisers that the evidence supporting the link was too weak. The report's own authors later withdrew the claim because they felt the evidence was not strong enough.

The claim by the Intergovernmental Panel on Climate Change (IPCC), that global warming is already affecting the severity and frequency of global disasters, has since become embedded in political and public debate. It was central to discussions at last month's Copenhagen climate summit, including a demand by developing countries for compensation of $100 billion (£62 billion) from the rich nations blamed for creating the most emissions.

Ed Miliband, the energy and climate change minister, has suggested British and overseas floods — such as those in Bangladesh in 2007 — could be linked to global warming. Barack Obama, the US president, said last autumn: "More powerful storms and floods threaten every continent."

Last month Gordon Brown, the prime minister, told the Commons that the financial agreement at Copenhagen "must address the great injustice that . . . those hit first and hardest by climate change are those that have done least harm".

The latest criticism of the IPCC comes a week after reports in The Sunday Times forced it to retract claims in its benchmark 2007 report that the Himalayan glaciers would be largely melted by 2035. It turned out that the bogus claim had been lifted from a news report published in 1999 by New Scientist magazine.

The new controversy also goes back to the IPCC's 2007 report in which a separate section warned that the world had "suffered rapidly rising costs due to extreme weather-related events since the 1970s".

It suggested a part of this increase was due to global warming and cited the unpublished report, saying: "One study has found that while the dominant signal remains that of the significant increases in the values of exposure at risk, once losses are normalised for exposure, there still remains an underlying rising trend."

The Sunday Times has since found that the scientific paper on which the IPCC based its claim had not been peer reviewed, nor published, at the time the climate body issued its report.

When the paper was eventually published, in 2008, it had a new caveat. It said: "We find insufficient evidence to claim a statistical relationship between global temperature increase and catastrophe losses."

Despite this change the IPCC did not issue a clarification ahead of the Copenhagen climate summit last month. It has also emerged that at least two scientific reviewers who checked drafts of the IPCC report urged greater caution in proposing a link between climate change and disaster impacts — but were ignored.

The claim will now be re-examined and could be withdrawn. Professor Jean-Pascal van Ypersele, a climatologist at the Universite Catholique de Louvain in Belgium, who is vice-chair of the IPCC, said: "We are reassessing the evidence and will publish a report on natural disasters and extreme weather with the latest findings. Despite recent events the IPCC process is still very rigorous and scientific."

The academic paper at the centre of the latest questions was written in 2006 by Robert Muir-Wood, head of research at Risk Management Solutions, a London consultancy, who later became a contributing author to the section of the IPCC's 2007 report dealing with climate change impacts. He is widely respected as an expert on disaster impacts.

Muir-Wood wanted to find out if the 8% year-on-year increase in global losses caused by weather-related disasters since the 1960s was larger than could be explained by the impact of social changes like growth in population and infrastructure.

Such an increase, coinciding with rising temperatures, might suggest that global warming was to blame. If proven this would be highly significant, both politically and scientifically, because it would confirm the many predictions that global warming will increase the frequency and severity of natural hazards.

In the research Muir-Wood looked at a wide range of hazards, including tropical cyclones, thunder and hail storms, and wildfires as well as floods and hurricanes.

He found that from 1950 to 2005 there was no increase in the impact of disasters once growth was accounted for. For 1970-2005, however, he found a 2% annual increase which "corresponded with a period of rising global temperatures".

Muir-Wood was, however, careful to point out that almost all this increase could be accounted for by the exceptionally strong hurricane seasons in 2004 and 2005. There were also other more technical factors that could cause bias, such as exchange rates which meant that disasters hitting the US would appear to cost proportionately more in insurance payouts.

Despite such caveats, the IPCC report used the study in its section on disasters and hazards, but cited only the 1970-2005 results.

The IPCC report said: "Once the data were normalised, a small statistically significant trend was found for an increase in annual catastrophe loss since 1970 of 2% a year." It added: "Once losses are normalised for exposure, there still remains an underlying rising trend."

Muir-Wood's paper was originally commissioned by Roger Pielke, professor of environmental studies at the University of Colorado and also an expert on disaster impacts, for a workshop on disaster losses in 2006. The researchers who attended that workshop published a statement agreeing that so far there was no evidence to link global warming with any increase in the severity or frequency of disasters. Pielke has also told the IPCC that citing one section of Muir-Wood's paper in preference to the rest of his work, and all the other peer-reviewed literature, was wrong.

He said: "All the literature published before and since the IPCC report shows that rising disaster losses can be explained entirely by social change. People have looked hard for evidence that global warming plays a part but can't find it. Muir-Wood's study actually confirmed that."

Mike Hulme, professor of climate change at the Tyndall Centre, which advises the UK government on global warming, said there was no real evidence that natural disasters were already being made worse by climate change. He said: “A proper analysis shows that these claims are usually superficial.”

Such warnings may prove uncomfortable for Miliband whose recent speeches have often linked climate change with disasters such as the floods that recently hit Bangladesh and Cumbria. Last month he said: “We must not let the sceptics pass off political opinion as scientific fact. Events in Cumbria give a foretaste of the kind of weather runaway climate change could bring. Abroad, the melting of the Himalayan glaciers that feed the great rivers of South Asia could put hundreds of millions of people at risk of drought. Our security is at stake.”

Muir-Wood himself is more cautious. He said: "The idea that catastrophes are rising in cost partly because of climate change is completely misleading. We could not tell if it was just an association or cause and effect. Also, our study included 2004 and 2005 which was when there were some major hurricanes. If you took those years away then the significance of climate change vanished."

Some researchers have argued that it is unfair to attack the IPCC too strongly, pointing out that some errors are inevitable in a report as long and technical as the IPCC's round-up of climate science. "Part of the problem could simply be that expectations are too high," said one researcher. "We have been seen as a scientific gold standard and that's hard to live up to."

Professor Christopher Field, director of the Department of Global Ecology at the Carnegie Institution in California, who is the new co-chairman of the IPCC working group overseeing the climate impacts report, said the 2007 report had been broadly accurate at the time it was written.

He said the 2007 report should be seen as "a snapshot of what was known then. Science is progressive. If something turns out to be wrong we can fix it next time around." However, he confirmed he would be introducing rigorous new review procedures for future reports to ensure errors were kept to a minimum.

source www.timesonline.co.uk

Center For American Progress: Snowstorm Doesn't Disprove Global Warming!

by Chris Good -- staff editor at TheAtlantic.com

Irked by the suggestion of climate change doubters that the East Coast snowstorm is proof that global warming doesn't exist, the liberal Center for American Progress pulled together a conference call today to tell reporters not only that that's not the case, but, basically, to stop being a mouthpiece for people who doubt the science.

Jeff Masters, director of meteorology at Weather Underground, got on the call to remind everyone that snowfall does not equal a drop in temperature--that as long as it's cold enough for snow, precipitation means a snowstorm.

More precipitation--including heavy snowstorms--is a sign of global warming, he said, as atmospheric moisture levels have increased with warmer temperatures, meaning more storms with heavy snow or rain.


"We still will have snowstorms, and the signs of record snowstorms being evidence against global warming is just not true," Masters said. "In the future we shouldn't be surprised to find heavier precipitation events."

Center for American Progress climate change fellow Joseph Romm criticized John M. Broder's New York Times story, in which Romm was quoted, which noted the uptick of global warming debate during the snowstorm. A chorus of Republicans have mocked Al Gore since the snowstorm hit. The family of Sen. James Inhofe (R-OK), the leading climate change skeptic in the nation (and, by extension, perhaps the world) built an igloo on Capitol Hill bearing a sign that read "Al Gore's New Home." This is not new; cold weather often sparks criticisms of climate change, and of Gore.

"We're not in a deep freeze," Romm said of the NY Times headline, "Climate-Change Debate Is Heating Up in Deep Freeze."

"This actualy is, according to the satellite record, it's the warmest winter on record," Romm said. "The scientific literature predicts that you will see more intense winter storms because of global warming."

Romm pointed to warm temperatures (in the upper 40s) and likely rain during the Winter Olympics in Vancouver.

"I realize that this can be a complicated matter to report," Romm said. "The challenge for the media is gonna be how do you report about this statistical increase in extreme weather and not let those who don't understand the science obfuscate it."

Study: Water vapor may help 'flatten global warming trend'

By Doyle Rice, USA TODAY
www.usatoday.com

Why the Earth's surface temperature hasn't warmed as expected over the past decade continues to be a puzzle for scientists. One study out earlier this month theorized that the Earth's climate may be less sensitive to greenhouse gases than currently assumed.

Another surprising factor could be the amount of water vapor way up in the stratosphere, according to a new study out Thursday in the journal Science.

Water vapor, a potent, natural greenhouse gas that absorbs sunlight and re-emits heat, is "a wild card" of global warming, says the paper's lead author, senior scientist Susan Solomon of the National Oceanic and Atmospheric Administration in Boulder, Colo. Solomon was also a co-chair of one of the groups within the Intergovernmental Panel on Climate Change that put out the definitive forecast of global warming in 2007.

In the Science paper, Solomon and her colleagues found that a drop in the concentration of water vapor in the stratosphere "very likely made substantial contributions to the flattening of the global warming trend since about 2000."

While climate warming is continuing — the decade of 2000 to 2009 was the hottest on record worldwide — the increase in temperatures was not as rapid as in the 1990s.

The stratosphere is the layer of the atmosphere just above the troposphere, which is the layer of air here at the planet's surface. (The troposphere goes from the surface up to about 8 miles, and the stratosphere is from about 8 to 30 miles above the surface.)

The decline in water vapor in the stratosphere slowed the rate of surface warming by about 25%, compared to that which would have occurred due to carbon dioxide and other greenhouse gases, notes the study. Specifically, the planet should have warmed 0.25 degree F during the 2000s, but because of the influence of the water vapor, it rose just 0.18 degree F.

"We call this the 10/10/10 paper," says Solomon. "10 miles above your head, there is 10% less water vapor than there was 10 years ago."

Why did the water vapor decrease? "We really don't know," says Solomon, "We don't have enough information yet."

The findings are "surprising," says Bill Randel, an atmospheric chemist at the National Center for Atmospheric Research, who was not part of the study. He said it was surprising how big an effect such a very little change in stratospheric water vapor has had on the surface climate.

These fluctuations in water vapor could be part of a feedback loop. Although it's known that water vapor in the troposphere increases as the climate warms — and is a major climate feedback that is well simulated in global climate models — in sharp contrast, models do a poor job of simulating water vapor in the stratosphere, according to the paper.

But Solomon points out this isn't an indication that predictions on global warming are overstated: "This doesn't mean there isn't global warming," notes Solomon. "There's no significant debate that it is warmer now than it was 100 years ago, due to anthropogenic (man-made) greenhouse gases."

And how will this water vapor affect future global warming? "We really don't know the answer to this," says Solomon. "If the water changes are due to the specific way the sea-surface temperature pattern looks right now, then it may well not be linked to the overall warming. It could just be a source of variability from one decade to another as the ocean pattern slowly changes. Or it could be linked to the overall warming of the tropics, in which case it could continue to 'put the brakes on.' Only time will tell, and more data."

Cyclonic spray scrubber

Cyclonic spray scrubbers are an air pollution control technology. They use the features of both the dry cyclone and the spray chamber to remove pollutants from gas streams.

Generally, the inlet gas enters the chamber tangentially, swirls through the chamber in a corkscrew motion, and exits. At the same time, liquid is sprayed inside the chamber. As the gas swirls around the chamber, pollutants are removed when they impact on liquid droplets, are thrown to the walls, and washed back down and out.

Cyclonic scrubbers are generally low- to medium-energy devices, with pressure drops of 4 to 25 cm (1.5 to 10 in) of water. Commercially available designs include the irrigated cyclone scrubber and the cyclonic spray scrubber.

In the irrigated cyclone (Figure 1), the inlet gas enters near the top of the scrubber into the water sprays. The gas is forced to swirl downward, then change directions, and return upward in a tighter spiral. The liquid droplets produced capture the pollutants, are eventually thrown to the side walls, and carried out of the collector. The "cleaned" gas leaves through the top of the chamber.

The cyclonic spray scrubber (Figure 2) forces the inlet gas up through the chamber from a bottom tangential entry. Liquid sprayed from nozzles on a center post (manifold) is directed toward the chamber walls and through the swirling gas. As in the irrigated cyclone, liquid captures the pollutant, is forced to the walls, and washes out. The "cleaned" gas continues upward, exiting through the straightening vanes at the top of the chamber.

This type of technology is a part of the group of air pollution controls collectively referred to as wet scrubbers.

Particulate collection

Cyclonic spray scrubbers are more efficient than spray towers, but not as efficient as venturi scrubbers, in removing particulate from the inlet gas stream. Particulates larger than 5 µm are generally collected by impaction with 90% efficiency. In a simple spray tower, the velocity of the particulates in the gas stream is low: 0.6 to 1.5 m/s (2 to 5 ft/s).

By introducing the inlet gas tangentially into the spray chamber, the cyclonic scrubber increases gas velocities (thus, particulate velocities) to approximately 60 to 180 m/s (200 to 600 ft/s). The velocity of the liquid spray is approximately the same in both devices. This higher particulate-to-liquid relative velocity increases particulate collection efficiency for this device over that of the spray chamber. Gas velocities of 60 to 180 m/s are equivalent to those encountered in a venturi scrubber.
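
A rough way to see why this matters is the inertial impaction parameter (a Stokes-number expression from standard wet-scrubber theory, not something given in this article). The short Python sketch below, using assumed particle, droplet and gas properties, shows the parameter rising roughly a hundredfold with the hundredfold increase in relative velocity:

def impaction_parameter(d_particle, rho_particle, rel_velocity, mu_gas, d_droplet):
    """Dimensionless inertial impaction (Stokes-type) parameter.
    Larger values mean particles leave the gas streamlines and hit the droplets."""
    return (d_particle ** 2) * rho_particle * rel_velocity / (18.0 * mu_gas * d_droplet)

# Assumed illustrative values: 5 um particles, 2000 kg/m3, air at ~20 C, 500 um droplets.
common = dict(d_particle=5e-6, rho_particle=2000.0, mu_gas=1.8e-5, d_droplet=500e-6)
spray_tower = impaction_parameter(rel_velocity=1.0, **common)    # ~1 m/s relative velocity
cyclonic = impaction_parameter(rel_velocity=100.0, **common)     # ~100 m/s relative velocity
print(f"spray tower: {spray_tower:.2f}  cyclonic scrubber: {cyclonic:.1f}")

Higher values of this parameter translate into higher single-droplet capture efficiency, which is the effect described in the paragraph above.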

However, cyclonic spray scrubbers are not as efficient as venturi scrubbers because they are not capable of producing the same degree of useful turbulence.

Gas collection

High gas velocities through these devices reduce the gas-liquid contact time, thus reducing absorption efficiency. Cyclonic spray scrubbers are capable of effectively removing some gases; however, they are rarely chosen when gaseous pollutant removal is the only concern.

Maintenance problems

The main maintenance problems with cyclonic scrubbers are nozzle plugging and corrosion or erosion of the side walls of the cyclone body. Nozzles have a tendency to plug from particulates that are in the recycled liquid and/or particulates that are in the gas stream. The best solution is to install the nozzles so that they are easily accessible for cleaning or removal.

Due to high gas velocities, erosion of the side walls of the cyclone can also be a problem. Abrasion-resistant materials may be used to protect the cyclone body, especially at the inlet.

From http://en.wikipedia.org/

Cyclonic separation

Cyclonic separation is a method of removing particulates from an air, gas or water stream, without the use of filters, through vortex separation. Rotational effects and gravity are used to separate mixtures of solids and fluids.

A high speed rotating (air)flow is established within a cylindrical or conical container called a cyclone. Air flows in a spiral pattern, beginning at the top (wide end) of the cyclone and ending at the bottom (narrow) end before exiting the cyclone in a straight stream through the center of the cyclone and out the top. Larger (denser) particles in the rotating stream have too much inertia to follow the tight curve of the stream and strike the outside wall, falling then to the bottom of the cyclone where they can be removed. In a conical system, as the rotating flow moves towards the narrow end of the cyclone the rotational radius of the stream is reduced, separating smaller and smaller particles. The cyclone geometry, together with flow rate, defines the cut point of the cyclone. This is the size of particle that will be removed from the stream with a 50% efficiency. Particles larger than the cut point will be removed with a greater efficiency, and smaller particles with a lower efficiency.
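
The cut point mentioned above can be estimated with the classic Lapple model (a textbook approximation, not taken from this article). A minimal Python sketch with hypothetical cyclone dimensions:

import math

def lapple_cut_diameter(mu, inlet_width, n_turns, inlet_velocity, rho_particle, rho_gas):
    """Lapple estimate of the cut-point diameter d50 (m): the size removed with 50% efficiency."""
    return math.sqrt(9.0 * mu * inlet_width /
                     (2.0 * math.pi * n_turns * inlet_velocity * (rho_particle - rho_gas)))

def lapple_efficiency(d_particle, d_cut):
    """Fractional collection efficiency for a particle of diameter d_particle (Lapple form)."""
    return 1.0 / (1.0 + (d_cut / d_particle) ** 2)

# Assumed cyclone: air at ~20 C, 0.2 m inlet width, 5 effective turns, 15 m/s inlet velocity,
# 2000 kg/m3 mineral dust.
d50 = lapple_cut_diameter(mu=1.8e-5, inlet_width=0.2, n_turns=5,
                          inlet_velocity=15.0, rho_particle=2000.0, rho_gas=1.2)
print(f"cut point ~ {d50 * 1e6:.1f} um")
print(f"efficiency at 10 um ~ {lapple_efficiency(10e-6, d50):.0%}")

Particles larger than the computed d50 are collected with higher efficiency and smaller ones with lower efficiency, matching the behaviour described above.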

An alternative cyclone design uses a secondary air flow within the cyclone to keep the collected particles from striking the walls to protect them from abrasion. The primary air containing the particulate enters from the bottom of the cyclone and is forced into spiral rotation by a stationary spinner. The secondary air flow enters from the top of the cyclone and moves downward toward the bottom, intercepting the particulate from the primary air. The secondary air flow also allows the collector to be mounted horizontally because it pushes the particulate toward the collection area.

Large scale cyclones are used in sawmills to remove sawdust from extracted air. Cyclones are also used in oil refineries to separate oils and gases, and in the cement industry as components of kiln preheaters. Smaller cyclones are used to separate airborne particles for analysis. Some are small enough to be worn clipped to clothing and are used to separate respirable particles for later analysis.

Analogous devices for separating particles or solids from liquids are called hydrocyclones or hydroclones. These may be used to separate solid waste from water in wastewater and sewage treatment.

From http://en.wikipedia.org/

Biofilter

Biofiltration is a pollution control technique using living material to capture and biologically degrade process pollutants. Common uses include processing waste water, capturing harmful chemicals or silt from surface runoff, and microbiotic oxidation of contaminants in air.

Examples of biofiltration include:

* Bioswales, Biostrips, Biobags, Bioscrubbers, and Trickling filters
* Constructed wetlands and Natural wetlands
* Slow sand filters
* Treatment ponds
* Green belts
* Living walls
* Riparian zones, Riparian forests, Bosques

Control of air pollution

When applied to air filtration and purification, biofilters use microorganisms to remove air pollution. The air flows through a packed bed and the pollutant transfers into a thin biofilm on the surface of the packing material. Microorganisms, including bacteria and fungi, are immobilized in the biofilm and degrade the pollutant. Trickling filters and bioscrubbers rely on a biofilm and the bacterial action in their recirculating waters.

The technology finds greatest application in treating malodorous compounds and water-soluble volatile organic compounds (VOCs). Industries employing the technology include food and animal products, off-gas from wastewater treatment facilities, pharmaceuticals, wood products manufacturing, paint and coatings application and manufacturing and resin manufacturing and application, etc. Compounds treated are typically mixed VOCs and various sulfur compounds, including hydrogen sulfide. Very large airflows may be treated and although a large area (footprint) has typically been required -- a large biofilter (>200,000 acfm) may occupy as much or more land than a football field -- this has been one of the principal drawbacks of the technology. Engineered biofilters, designed and built since the early 1990s, have provided significant footprint reductions over the conventional flat-bed, organic media type.
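
As a rough illustration of why footprint has been a drawback, the bed volume of an open-bed biofilter scales with airflow and empty-bed residence time (EBRT). The figures in this Python sketch are assumed for illustration, not taken from the article:

def biofilter_footprint(airflow_m3_per_h, ebrt_s, bed_depth_m):
    """Return (bed volume in m3, footprint in m2) for a single-layer open-bed biofilter."""
    bed_volume = airflow_m3_per_h / 3600.0 * ebrt_s   # V = Q * EBRT
    return bed_volume, bed_volume / bed_depth_m

# Assumed case: about 200,000 acfm (~340,000 m3/h), 45 s EBRT, 1 m media depth.
volume, area = biofilter_footprint(340_000, ebrt_s=45, bed_depth_m=1.0)
print(f"bed volume ~ {volume:.0f} m3, footprint ~ {area:.0f} m2")

With these assumed values the footprint works out to several thousand square metres, the football-field scale mentioned above; engineered designs shrink it mainly through deeper or stacked beds and shorter residence times.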

One of the main challenges to optimum biofilter operation is maintaining proper moisture throughout the system. The air is normally humidified before it enters the bed with a watering (spray) system, humidification chamber, bioscrubber, or biotrickling filter. Properly maintained, a natural, organic packing medium like peat, vegetable mulch, bark or wood chips may last for several years, but engineered packing materials combining natural organic and synthetic components will generally last much longer, up to 10 years. A number of companies offer these types of proprietary packing materials and multi-year guarantees, not usually provided with a conventional compost or wood chip bed biofilter.

Although biofilters are widely employed, the scientific community is still unsure of the physical phenomena underpinning their operation, and information about the microorganisms involved continues to be developed. A biofilter/bio-oxidation system is a fairly simple device to construct and operate and offers a cost-effective solution provided the pollutant is biodegradable within a moderate time frame (increasing residence time = increased size and capital costs), at reasonable concentrations (and lb/hr loading rates), and that the airstream is at an organism-viable temperature. For large volumes of air, a biofilter may be the only cost-effective solution. There is no secondary pollution (unlike the case of incineration, where additional CO2 and NOx are produced from burning fuels), and degradation products form additional biomass, carbon dioxide and water. Media irrigation water, although many systems recycle part of it to reduce operating costs, has a moderately high biochemical oxygen demand (BOD) and may require treatment before disposal. However, this "blowdown water", necessary for proper maintenance of any bio-oxidation system, is generally accepted by municipal POTWs without any pretreatment.

Biofilters are being utilized in Columbia Falls, Montana, at Plum Creek Timber Company's fiberboard plant. The biofilters decrease the pollution emitted by the manufacturing process, and the exhaust emitted is 98% clean. The newest, and largest, biofilter addition to Plum Creek cost $9.5 million; even though this new technology is expensive, in the long run it will cost less than the alternative exhaust-cleaning incinerators fueled by natural gas (which are not as environmentally friendly). The biofilters use trillions of microscopic bacteria that cleanse the air being released from the plant.

Water treatment

Trickling filters have been used to filter water for various end uses for almost two centuries. Biological treatment has been used in Europe to filter surface water for drinking purposes since the early 1900s and is now receiving more interest worldwide. Biological treatment methods are also common in wastewater treatment, aquaculture and greywater recycling as a way to minimize water replacement while increasing water quality.

For drinking water, biological water treatment involves the use of naturally occurring micro-organisms in the surface water to improve water quality. Under optimum conditions, including relatively low turbidity and high oxygen content, the organisms break down material in the water and thus improve water quality. Slow sand filters or carbon filters are used to provide a place on which these micro-organisms grow. These biological treatment systems effectively reduce water-borne diseases, dissolved organic carbon, turbidity and colour in surface water, improving overall water quality.

Use in aquaculture

Biofilters are commonly used in closed aquaculture systems, such as recirculating aquaculture systems (RAS). Many designs are used, with different benefits and drawbacks; however, the function is the same: reducing water exchanges by converting ammonia to nitrate. Ammonia (NH4+ and NH3) originates from the branchial excretion from the gills of aquatic animals and from the decomposition of organic matter. As ammonia-N is highly toxic, it is converted first to the less toxic nitrite (by Nitrosomonas sp.) and then to the even less toxic nitrate (by Nitrobacter sp.). This "nitrification" process requires oxygen (aerobic conditions), without which the biofilter can crash. Furthermore, as this nitrification cycle produces H+, the pH can decrease, which necessitates the use of buffers such as lime.
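
The oxygen and buffering requirements of that nitrification step can be estimated with standard wastewater-engineering rules of thumb (roughly 4.57 g O2 and 7.14 g alkalinity as CaCO3 consumed per g of NH4+-N fully oxidised to nitrate); these figures and the example system are assumptions, not from the article. A minimal Python sketch:

O2_PER_G_N = 4.57    # g O2 per g NH4+-N fully nitrified (rule of thumb)
ALK_PER_G_N = 7.14   # g alkalinity as CaCO3 per g NH4+-N (rule of thumb)

def nitrification_demand(ammonia_n_mg_per_l, flow_m3_per_day):
    """Daily oxygen and alkalinity demand (kg/day) for complete nitrification."""
    n_load_kg_per_day = ammonia_n_mg_per_l * flow_m3_per_day / 1000.0  # g/m3 * m3/d -> kg/d
    return n_load_kg_per_day * O2_PER_G_N, n_load_kg_per_day * ALK_PER_G_N

# Assumed recirculating aquaculture system: 2 mg/L NH4+-N treated at 500 m3/day.
o2, alk = nitrification_demand(2.0, 500.0)
print(f"O2 demand ~ {o2:.1f} kg/day, alkalinity demand ~ {alk:.1f} kg/day as CaCO3")

This shows directly why aeration must be maintained (the process is strictly aerobic) and why lime or another buffer is needed to hold the pH.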

From http://en.wikipedia.org/

Best available technology

Best available technology (or just BAT) is a term applied in regulations on limiting pollutant discharges with regard to the abatement strategy. Similar terms are best available techniques, best practicable means or best practicable environmental option. The term constitutes a moving target on practices, since developing societal values and advancing techniques may change what is currently regarded as "reasonably achievable", "best practicable" and "best available".

A literal understanding would connect it with a "spare no expense" doctrine which prescribes the acquisition of the best state-of-the-art technology available, without regard for traditional cost-benefit analysis. In practical use, however, the cost aspect is also taken into account.

Best practicable means was used for the first time in UK national primary legislation in section 5 of the Salmon Fishery Act 1861; another early use was in the Alkali Act Amendment Act 1874, and the phrase had appeared even earlier in the Leeds Act of 1848.

The BAT concept was first used in the 1992 OSPAR Convention for the protection of the marine environment of the North-East Atlantic, where it applies to all types of industrial installations.

Some legal doctrine deems it to have already acquired the status of customary law.

In the United States, BAT or similar terminology is used in the Clean Air Act and Clean Water Act.

European Union directives

Best available techniques not entailing excessive costs (BATNEEC), sometimes referred to as best available technology, was introduced with the 1984 Air Framework Directive (AFD) and applies to air pollution emissions from large industrial installations.

In 1996 the AFD was superseded by the Integrated pollution prevention and control directive (IPPC), 96/61/EC, which applies the framework concept of Best Available Techniques (BAT) to the integrated control of pollution to the three media air, water and soil.

In the European Union directive 96/61/EC, emission limit values were to be based on the best available techniques, as described in item #17: "Whereas emission limit values, parameters or equivalent technical measures should be based on the best available techniques, without prescribing the use of one specific technique or technology and taking into consideration the technical characteristics of the installation concerned, its geographical location and local environmental conditions; whereas in all cases the authorization conditions will lay down provisions on minimizing long-distance or transfrontier pollution and ensure a high level of protection for the environment as a whole."

The directive includes a definition of best available techniques in article 2.11:

"best available techniques" shall mean the most effective and advanced stage in the development of activities and their methods of operation which indicate the practical suitability of particular techniques for providing in principle the basis for emission limit values designed to prevent and, where that is not practicable, generally to reduce emissions and the impact on the environment as a whole:

- "techniques" shall include both the technology used and the way in which the installation is designed, built, maintained, operated and decommissioned,
- "available" techniques shall mean those developed on a scale which allows implementation in the relevant industrial sector, under economically and technically viable conditions, taking into consideration the costs and advantages, whether or not the techniques are used or produced inside the Member State in question, as long as they are reasonably accessible to the operator,
- "best" shall mean most effective in achieving a high general level of protection of the environment as a whole.

United States environmental law

The Clean Air Act requires that certain facilities employ Best Available Control Technology to control emissions.

...an emission limitation based on the maximum degree of reduction of each pollutant subject to regulation under this Act emitted from or which results from any major emitting facility, which the permitting authority, on a case-by-case basis, taking into account energy, environmental, and economic impacts and other costs, determines is achievable for such facility through application of production processes and available methods, systems, and techniques, including fuel cleaning, clean fuels, or treatment or innovative fuel combustion techniques for control of each such pollutant.

The Clean Water Act (CWA) requires issuance of national industrial wastewater discharge regulations (called "effluent guidelines"), which are based on BAT and several related standards.

...effluent limitations for categories and classes of point sources,... which (i) shall require application of the best available technology economically achievable for such category or class, which will result in reasonable further progress toward the national goal of eliminating the discharge of all pollutants. ...Factors relating to the assessment of best available technology shall take into account the age of equipment and facilities involved, the process employed, the engineering aspects of the application of various types of control techniques, process changes, the cost of achieving such effluent reduction, non-water quality environmental impact (including energy requirements), and such other factors as the Administrator deems appropriate.

A related CWA provision for cooling water intake structures requires standards based on "best technology available."

...the location, design, construction, and capacity of cooling water intake structures reflect the best technology available for minimizing adverse environmental impact.

From http://en.wikipedia.org/

Best Available Control Technology

Best Available Control Technology (BACT) is a pollution control standard mandated by the United States Clean Air Act. The U.S. Environmental Protection Agency (EPA) determines what air pollution control technology will be used to control a specific pollutant to a specified limit. When a BACT is determined, factors such as energy consumption, total source emission, regional environmental impact, and economic costs are taken into account. It is the current EPA standard for all polluting sources that fall under the New Source Review guidelines and is determined on a case-by-case basis.

The BACT standard is significantly more stringent than the Reasonably Available Control Technology (RACT) standard but much less stringent than the Lowest Achievable Emission Rate (LAER) standard.

From http://en.wikipedia.org/

Baffle spray scrubber

Baffle spray scrubbers are a technology for air pollution control. They are very similar to spray towers in design and operation. However, in addition to using the energy provided by the spray nozzles, baffles are added to allow the gas stream to atomize some liquid as it passes over them.

A simple baffle scrubber system is shown in Figure 1. Liquid sprays capture pollutants and also remove collected particles from the baffles. Adding baffles slightly increases the pressure drop of the system.

This type of technology is a part of the group of air pollution controls collectively referred to as wet scrubbers.

A number of wet-scrubber designs use energy from both the gas stream and liquid stream to collect pollutants. Many of these combination devices are available commercially.

A seemingly unending number of scrubber designs have been developed by changing system geometry and incorporating vanes, nozzles, and baffles.

Particle collection

These devices are used in much the same way as spray towers - to preclean or remove particles larger than 10 μm in diameter. However, they will tend to plug or corrode if the particle concentration of the exhaust gas stream is high.

Gas collection

Even though these devices are not specifically used for gas collection, they are capable of a small amount of gas absorption because of their large wetted surface.

From http://en.wikipedia.org/

Aerobic granulation

The biological treatment of wastewater in wastewater treatment plants is often accomplished by means of conventional activated sludge systems. These systems generally require large surface areas for installation of the treatment and biomass separation units due to the usually poor settling properties of the sludge. In recent years, new technologies have been developed to improve this system. The use of aerobic granular sludge is one of them.

Aerobic granular biomass

A definition to distinguish between an aerobic granule and a simple floc with relatively good settling properties came out of the discussions at the “1st IWA-Workshop Aerobic Granular Sludge” in Munich (2004), which stated that:

“Granules making up aerobic granular activated sludge are to be understood as aggregates of microbial origin, which do not coagulate under reduced hydrodynamic shear, and which settle significantly faster than activated sludge flocs” (de Kreuk et al. 2005)

Formation of aerobic granules

Granular sludge biomass is developed in Sequencing Batch Reactors (SBRs) without carrier materials. These systems fulfil most of the requirements for granule formation:

Feast-famine regime: short feeding periods must be selected to create feast and famine periods (Beun et al. 1999), characterized by the presence or absence of organic matter in the liquid medium, respectively. With this feeding strategy, the selection of the appropriate micro-organisms to form granules is achieved. When the substrate concentration in the bulk liquid is high, the granule-forming organisms can store the organic matter in the form of poly-β-hydroxybutyrate to be consumed during the famine period, giving them an advantage over filamentous organisms.

Short settling time: this hydraulic selection pressure on the microbial community allows granular biomass to be retained inside the reactor while flocculent biomass is washed out (Qin et al. 2004).

Hydrodynamic shear force: evidence shows that the application of high shear forces favours the formation of aerobic granules and the physical integrity of the granules. It was found that aerobic granules could be formed only above a threshold shear force, corresponding to a superficial upflow air velocity above 1.2 cm/s in a column SBR, and that more regular, rounder, and more compact aerobic granules developed at high hydrodynamic shear forces (Tay et al., 2001).
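
Whether a given lab column meets that shear threshold can be checked from the aeration rate and the column cross-section. The reactor dimensions in this Python sketch are assumed for illustration and are not taken from the cited studies:

import math

def superficial_air_velocity(air_flow_l_per_min, column_diameter_m):
    """Superficial upflow air velocity (cm/s) for a cylindrical column SBR."""
    area_m2 = math.pi * (column_diameter_m / 2.0) ** 2
    flow_m3_per_s = air_flow_l_per_min / 1000.0 / 60.0
    return flow_m3_per_s / area_m2 * 100.0  # m/s -> cm/s

# Assumed lab column: 0.10 m internal diameter aerated at 6 L/min.
v = superficial_air_velocity(6.0, 0.10)
print(f"superficial velocity ~ {v:.2f} cm/s ({'above' if v > 1.2 else 'below'} the 1.2 cm/s threshold)")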

Advantages

The development of biomass in the form of aerobic granules has recently been under study for its application to the removal of organic matter, nitrogen and phosphorus compounds from wastewater. Aerobic granules in aerobic SBRs present several advantages compared to the conventional activated sludge process, such as:

Stability and flexibility: the SBR system can be adapted to fluctuating conditions with the ability to withstand shock and toxic loadings

Excellent settling properties: a smaller secondary settler will be necessary, which means a lower surface requirement for the construction of the plant.

Good biomass retention: higher biomass concentrations inside the reactor can be achieved, and higher substrate loading rates can be treated.

Presence of aerobic and anoxic zones inside the granules to perform different biological processes simultaneously in the same system (Beun et al. 1999).

The cost of running a wastewater treatment plant working with aerobic granular sludge can be reduced by at least 20%, and space requirements can be reduced by as much as 75% (de Kreuk et al., 2004).

Treatment of industrial wastewater

Synthetic wastewater was used in most of the work carried out with aerobic granules. This work was mainly focussed on the study of granule formation, stability and nutrient removal efficiencies under different operational conditions, and on the granules' potential use to remove toxic compounds. The potential of this technology to treat industrial wastewater is under study; some of the results are summarised below:

* Arrojo et al. (2004) operated two reactors that were fed with industrial wastewater produced in a laboratory for analysis of dairy products (Total COD : 1500-3000 mg/L; soluble COD: 300-1500 mg/L; total nitrogen: 50-200 mg/L). These authors applied organic and nitrogen loading rates up to 7 g COD/(L·d) and 0.7 g N/(L·d) obtaining removal efficiencies of 80%.

* Schwarzenbeck et al. (2004) treated malting wastewater which had a high content of particulate organic matter (0.9 g TSS/L). They found that particles with average diameters lower than 25-50 µm were removed at 80% efficiency, whereas particles bigger than 50 µm were only removed at 40% efficiency. These authors observed that the ability of aerobic granular sludge to remove particulate organic matter from the wastewaters was due to both incorporation into the biofilm matrix and metabolic activity of protozoa population covering the surface of the granules.

* Cassidy and Belia (2005) obtained removal efficiencies for COD and P of 98% and for N and VSS over 97% operating a granular reactor fed with slaughterhouse wastewater (Total COD: 7685 mg/L; soluble COD: 5163 mg/L; TKN: 1057 mg/L and VSS: 1520 mg/L). To obtain these high removal percentages, they operated the reactor at a DO saturation level of 40%, which is the optimal value predicted by Beun et al. (2001) for N removal, and with an anaerobic feeding period which helped to maintain the stability of the granules when the DO concentration was limited.

* Inizan et al. (2005) treated industrial wastewaters from the pharmaceutical industry and observed that the suspended solids in the inlet wastewater were not removed in the reactor.

* Tsuneda et al. (2006), when treating wastewater from a metal-refinery process (1.0-1.5 g NH4+-N/L and up to 22 g/L of sodium sulphate), removed a nitrogen loading rate of 1.0 kg N/(m3·d) with an efficiency of 95% in a system containing autotrophic granules.

* Usmani et al. (2008) found that a high superficial air velocity, a relatively short settling time of 5-30 min, a high height-to-diameter ratio of the reactor (H/D = 20) and an optimum organic load facilitate the cultivation of regular, compact and circular granules.

* Figueroa et al. (2008) treated wastewater from a fish canning industry. Applied OLRs were up to 1.72 kg COD/(m3·d) with full organic matter depletion. Ammonia nitrogen was removed via nitrification-denitrification up to 40% when nitrogen loading rates were 0.18 kg N/(m3·d). The formation of mature aerobic granules occurred after 75 days of operation, with a diameter of 3.4 mm, an SVI of 30 mL/g VSS and a density of around 60 g VSS/L of granule.

* Farooqi et al. (2008) noted that wastewaters from fossil fuel refining, pharmaceuticals and pesticides are the main sources of phenolic compounds, and that those with more complex structures are often more toxic than simple phenol. Their study assessed the efficacy of granular sludge in UASB and SBR reactors for the treatment of mixtures of phenolic compounds. The results indicate that anaerobic treatment by UASB and aerobic treatment by SBR can be successfully used for a phenol/cresol mixture, representative of the major substrates in chemical and petrochemical wastewater, and show that a proper acclimatization period is essential for the degradation of m-cresol and phenol. Moreover, the SBR was found to be a better alternative than the UASB reactor, as it is more efficient and higher concentrations of m-cresol can be successfully degraded.

Pilot research in aerobic granular sludge

Aerobic granulation technology for application in wastewater treatment is well developed at laboratory scale. Large-scale experience is still limited, but different institutions are making efforts to improve this technology:

* Since 1999 DHV Water, Delft University of Technology (TUD), STW (Dutch Foundation for Applied Technology) and STOWA (Dutch Foundation for Applied Water Research) have been cooperating closely on the development of the aerobic granular sludge technology (Nereda). Based on the results obtained, a pilot plant was started up in September 2003 in Ede (Netherlands). The heart of the installation consists of two parallel biological reactors, each with a height of 6 m, a diameter of 0.6 m and a volume of 1.5 m3.

* Building on aerobic granular sludge but using a retention system for the granules, a sequencing batch biofilter granular reactor (SBBGR) with a volume of 3.1 m3 was developed by IRSA (Istituto di Ricerca Sulle Acque, Italy). Different studies were carried out in this plant treating sewage at an Italian wastewater treatment plant.

* The use of aerobic granules prepared in the laboratory as a starter culture, before addition to the main system, is the basis of the ARGUS technology (Aerobic Granules Upgrade System) developed by EcoEngineering Ltd. The granules are cultivated on-site in small bioreactors called propagators and fill only 2 to 3% of the capacity of the main bioreactor or fermenter (digester). This system is being used in a 2.7 m3 pilot plant located at a Hungarian pharmaceutical plant.

* The Group of Environmental Engineering and Bioprocesses from the University of Santiago de Compostela is currently operating a 100 L pilot plant reactor.

The feasibility study showed that the aerobic granular sludge technology seems very promising (de Bruin et al., 2004). Based on total annual costs, a GSBR (Granular Sludge Sequencing Batch Reactor) with pre-treatment and a GSBR with post-treatment prove to be more attractive than the reference activated sludge alternatives (by 6-16%). A sensitivity analysis shows that the GSBR technology is less sensitive to land price and more sensitive to rain water flow. Because of the high allowable volumetric load, the footprint of the GSBR variants is only 25% of that of the references. However, the GSBR with only primary treatment cannot meet the present effluent standards for municipal wastewater, mainly because the suspended solids effluent standard is exceeded owing to washout of poorly settleable biomass.

From http://en.wikipedia.org/

AdBlue

AdBlue is the registered trademark for AUS32 (Aqueous Urea Solution 32.5%) and is used in a process called selective catalytic reduction (SCR) to reduce emissions of oxides of nitrogen from the exhaust of diesel-engined motor vehicles. As the name AUS32 suggests, it is a 32.5% solution of high-purity urea in demineralised water that is clear, non-toxic and safe to handle. However, it can be corrosive to some metals and must be stored and transported using the correct materials. The AdBlue trademark is currently held by the German Association of the Automobile Industry (VDA), which ensures that quality standards are maintained in accordance with the ISO 22241 specifications.

AdBlue is carried onboard SCR-equipped vehicles in specially designed tanks, and is dosed into the SCR system at a rate equivalent to 3–5% of diesel consumption. This low dosing rate ensures long refill periods and minimises the tank's impact on chassis space. On-highway SCR systems are currently in use throughout Europe, in Japan, Australia, Hong Kong, Taiwan, Korea, New Zealand and Singapore. The United States Environmental Protection Agency‎'s (US EPA) 2010 legislation will limit NOx to levels that will require North American trucks to be equipped with SCR post-2010. The current generic name in North America for AUS32 is diesel exhaust fluid (DEF). Some trucking industry OEMs have already developed branded SCR solutions, such as Daimler's BlueTec.
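
The dosing rate quoted above lends itself to a quick back-of-the-envelope calculation. The sketch below (Python) is only illustrative: the function names and example figures are hypothetical, and it simply assumes dosing is a fixed fraction within the 3-5% range.

```python
# Illustrative estimate based on the 3-5% dosing figure quoted above.
# Function names and example figures are hypothetical.

def adblue_usage(diesel_l_per_100km: float, dosing_fraction: float = 0.04) -> float:
    """AdBlue consumption in litres per 100 km, assuming dosing is a fixed
    fraction (here 4%) of diesel consumption."""
    return diesel_l_per_100km * dosing_fraction

def refill_interval_km(tank_litres: float, diesel_l_per_100km: float,
                       dosing_fraction: float = 0.04) -> float:
    """Approximate distance a given AdBlue tank lasts before refilling."""
    return tank_litres / adblue_usage(diesel_l_per_100km, dosing_fraction) * 100.0

# A long-haul truck burning about 30 L/100 km with a 75 L AdBlue tank:
print(adblue_usage(30.0))              # ~1.2 L of AdBlue per 100 km
print(refill_interval_km(75.0, 30.0))  # ~6250 km between refills
```

Numbers of this order are why the low dosing rate translates into long refill intervals and a small onboard tank.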

All European truck manufacturers currently offer SCR-equipped models, and the future Euro 6 emission standard is set to reinforce the demand for this technology. SCR systems are sensitive to potential chemical impurities in the urea solution; it is therefore essential to maintain high standards of AdBlue quality according to the ISO 22241 standard.

The use of SCR technology in Europe made it necessary to develop an AdBlue supply infrastructure. AdBlue is available from thousands of service stations, where it can also be purchased in canisters of 5 or 10 litres (1.1 or 2.2 imp gal; 1.3 or 2.6 US gal). Larger quantities of AdBlue can be delivered in, for example, 208 litre (46 imp gal; 55 US gal) drums, 1,000 litre (220 imp gal; 260 US gal) Intermediate Bulk Containers (IBCs), and in bulk.

From http://en.wikipedia.org/

Vehicle inspection in the United States

In the United States, vehicle safety inspection is governed by each state individually. 18 states have a periodic (annual or biennial) safety inspection program, while Maryland requires an inspection only prior to registration or transfer of ownership.

Under the Clean Air Act (1990), states are required to implement vehicle emission inspection programs in metropolitan areas whose air quality does not meet federal standards. The specifics of those programs vary from state to state. Some states, including Kentucky and Minnesota, have discontinued their testing programs in recent years with approval from the federal government.

In most states, such inspections are done at state-operated garages, usually near the local DMV office. Pennsylvania is a notable exception, instead opting to have privately owned garages perform inspections with approval from PennDOT. The downside is that some independently run garages will do what is commonly known in Pennsylvania as a "lick-'em-and-stick-'em", in which the customer simply pays the inspection fee and has the sticker replaced without the vehicle actually being checked. This is illegal in Pennsylvania and, among other penalties, can lead to a fine for the garage and revocation of its inspection privileges. Other independently run garages, as well as chains like Pep Boys, Midas, and car dealerships, are more stringent and follow PennDOT guidelines for inspections.

States and Federal Districts with periodic (e.g., annual) vehicle safety inspections

* Delaware (every year or every two years; brand new cars are exempt for the first four years provided the car remains with the same owner. Older cars registered as antiques do not require emissions testing.)
* District of Columbia (every two years; the requirement for safety inspection for private cars will end October 1, 2009)
* Hawaii (every year; brand new vehicles receive an inspection valid for two years; emergency vehicles, school vehicles, rental cars, vehicles used in public transportation and certain others are inspected every six months)
* Louisiana (every year; emission test in the Baton Rouge metropolitan area parishes of Ascension, East Baton Rouge, Iberville, Livingston and West Baton Rouge)
* Maine (every year; emission test in Cumberland County)
* Massachusetts (safety inspection and emissions testing annually). In 2008 the tailpipe test for 1995 model year and older vehicles was discontinued; vehicles without OBD-II systems receive a visual check of exhaust components.
* Mississippi (safety inspection every year)
* Missouri (odd-numbered model years renew in odd-numbered years and even-numbered model years in even-numbered years, except new vehicles not previously titled, which are exempt during the model year and the year following, and vehicles displaying historical plates, which are completely exempt; emissions testing in St. Louis city, St. Louis County, St. Charles County, Franklin County, and Jefferson County)
* New Hampshire (annually; emissions testing for model year 1996 and newer vehicles)
* New Jersey (safety and emissions testing every two years; brand new cars are exempt for the first four years; effective 2010, the new-car four-year exemption will transfer to the next owner if the car is sold before the end of the four years; older cars registered as antiques do not require emissions testing; diesel cars under 10,000 lb are also exempt)
* New York (annual safety and emissions). Model year 1996 and newer vehicles are subject to an OBD-II emissions inspection while older cars receive a visual check of exhaust components. Vehicles registered in the five boroughs of New York City as well as Long Island, Westchester County and Rockland County require a tailpipe smog-test if they are not OBD II equipped. All OBD II vehicles in those areas (1996 model year or newer) require only the OBD II test. And any vehicle 26 model years old or more does not require an emissions check of any sort. Newly registered vehicles from another state with a current inspection sticker are exempt until the out-of-state sticker expires or for one year, whichever is sooner.
* North Carolina (every year; emissions inspections in 48 of 100 counties for 1996 and newer vehicles, except new cars, exempting diesels and cars 35 years or older. Starting November 1, 2008, an inspection decal is no longer issued upon passing.)
* Pennsylvania (every year for most vehicles; every six months for tractor-trailers, school vehicles (including school buses and school vans), motor coaches, mass transit buses, ambulances, fire department trucks, etc.; emissions inspections every year in 25 of 67 counties, stricter in the Pittsburgh and Philadelphia metro areas; no emission inspection for diesel vehicles). Annual inspection, emission, and semi-annual inspection stickers are color-coded to indicate the month of the year in which they expire, which makes it easier for police to spot expired stickers.
* Rhode Island (safety and emission inspection every two years)
* Texas (every year; emission test in the largest urban areas - Houston Metro, Dallas Metroplex, Austin, San Antonio, and El Paso)
* Utah (every two years for the first eight years, then every year)
* Vermont (every year)
* Virginia (every year; emission inspection every two years in urban and suburban jurisdictions in Northern Virginia)
* West Virginia (every year - safety)

States with safety inspection only required prior to sale or transfer

* Alabama
* Maryland (emission inspection required every two years, though not in every county; the VEIP testing network consists of 18 centralized inspection stations located in 13 counties and Baltimore City)

States which only require federally mandated emissions inspections

* Arizona (Phoenix and Tucson metro areas only; annually, depending on age and type of vehicle)
* California (for most ZIP codes, every two years for all vehicles made after 1975 which are more than six years old)
* Colorado (in some localities, every year or two, depending on age and type of vehicle)
* Connecticut (every two years)
* Georgia (metropolitan Atlanta area only, every year; the three most recent model years are exempt)
* Illinois (every two years once the vehicle is four years old; Chicago metropolitan area and the Illinois suburbs east of St. Louis, Missouri)
* Indiana (Lake and Porter counties only, every two years)
* New Mexico (Albuquerque metro area)
* Nevada (Clark County and Washoe County areas)
* Ohio (Cuyahoga, Geauga, Lake, Lorain, Medina, Portage, and Summit counties only). Cars that are four years old or less do not have to be tested; after that they must be tested, on an odd-even year system: a car bought in 2000 won't be tested until 2010, while a car purchased in 2003 will need to be tested in 2009. Franklin County (Columbus) and Hamilton County (Cincinnati) will also be under emission testing effective in 2010. Ohio does not charge a fee for emission testing, due to Ohio's tobacco settlement.
* Oregon (Portland and Medford metro areas only)
* Tennessee (in conjunction with annual registration renewal; Davidson, Hamilton, Rutherford, Sumner, Williamson and Wilson counties and the city of Memphis only)
* Washington (urban areas of Clark, King, Pierce, Snohomish and Spokane counties)
* Wisconsin (Kenosha, Milwaukee, Ozaukee, Racine, Sheboygan, Washington and Waukesha; every two years)

States requiring an inspection only when bringing a vehicle from another State or jurisdiction

* Nebraska (all vehicles, ATVs, minibikes and trailers brought into Nebraska from Out-of-State)

States without safety or emissions inspections

* Alaska
* Arkansas
* Florida
* Idaho (Ada County has a county level program that requires testing)
* Iowa
* Kansas
* Kentucky
* Michigan
* Minnesota
* Montana
* North Dakota
* Oklahoma
* South Carolina
* South Dakota
* Wyoming

From http://en.wikipedia.org/

Indonesia Earthhour

Change the world in one hour: switch off the lights on Saturday, 27 March 2010, from 20:30 to 21:30. All information about Earth Hour Indonesia is available at www.earthhour.wwf.or.id

From http://en.wikipedia.org/

Vehicle inspection

Vehicle inspection is a procedure mandated by national or subnational governments in many countries, in which a vehicle is inspected to ensure that it conforms to regulations governing safety, emissions, or both. Inspection can be required at various times, e.g., periodically or on transfer of title to a vehicle. If required periodically, it is often termed periodic motor vehicle inspection; typical intervals are every two years and every year.

In some jurisdictions, proof of inspection is required before a vehicle licence or license plate can be issued or renewed. In others, once a vehicle passes inspection, a decal is attached to the windshield, and police can enforce the inspection law by seeing whether the vehicle displays an up-to-date decal. In the case of a vehicle lacking a windshield (e.g., a trailer or motorcycle), the decal is typically attached to the vehicle body or license plate.

With regard to safety inspection, there is some controversy over whether it is a cost-effective way to improve road-traffic safety.

From http://en.wikipedia.org/

Supplementarity

The supplementarity principle, also referred to as the supplementary principle, is one of the main principles of the Kyoto Protocol. The concept is that internal abatement of emissions should take precedence over external participation in flexible mechanisms. These mechanisms include emissions trading, the Clean Development Mechanism (CDM), and Joint Implementation (JI).

Emissions trading basically refers to the trading of emissions allowances (carbon credits) between a regulated entity and a less-polluting entity. This trading of permits results in a marginal economic disincentive to the buyer and a marginal economic incentive to the abater.

CDM and JI are flexible mechanisms based on the concept of a carbon project. These projects reduce GHG emissions voluntarily (outside the capped sectors), and the resulting credits can therefore be imported into the capped sector to aid in compliance.

The supplementarity principle is found in three articles of the Kyoto Protocol: articles 6 and 17 with regard to trading, and article 12 with regard to the Clean Development Mechanism.

Article 6.1 states that "The acquisition of emission reduction units shall be supplemental to domestic actions for the purposes of meeting commitments under Article 3". Article 17 states that "Any such trading shall be supplemental to domestic actions for the purpose of meeting quantified emission limitation and reduction commitments under that article". Article 12.3.b states that "Parties included in Annex I may use the certified emission reductions accruing from such project activities to contribute to compliance with part of their quantified emission limitation and reduction commitments under Article 3".

The actual meaning of the principle has been heavily argued since the signing of the Kyoto Protocol in 1997. The COP/MOP, the body that represents the signers and ratifiers of the protocol, has not been able to agree on a specific definition of the limit on the use of flexible mechanisms. The original text has been interpreted to mean that anywhere from 3% to 50% of emissions could be offset by trading mechanisms. The only determination made so far is that the actual value of supplementarity should be decided at the country level.

In the United States, RGGI (the Regional Greenhouse Gas Initiative) has set a precedent in that it will initially allow only up to 3.3% of compliance to be met by means of offset projects (carbon projects). This value can increase to 5% and ultimately 10% if certain price thresholds are exceeded in the region.
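
For concreteness, the tiered limit just described can be expressed as a simple calculation. The sketch below is only illustrative: the 3.3%, 5% and 10% tiers come from the text, while the function name and the idea of passing in flags for the price-threshold triggers are a simplification of ours.

```python
# Illustrative sketch of the tiered RGGI offset limit described above.
# Only the 3.3% / 5% / 10% tiers come from the text; the rest is a simplification.

def max_offset_tons(compliance_obligation_tons: float,
                    first_price_threshold_exceeded: bool = False,
                    second_price_threshold_exceeded: bool = False) -> float:
    """Maximum share of a compliance obligation that may be met with offsets."""
    if second_price_threshold_exceeded:
        cap = 0.10
    elif first_price_threshold_exceeded:
        cap = 0.05
    else:
        cap = 0.033
    return compliance_obligation_tons * cap

# An entity with a 1,000,000-ton obligation may initially cover at most
# 33,000 tons with offset credits:
print(max_offset_tons(1_000_000))  # 33000.0
```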

From http://en.wikipedia.org/

Sandbag (non-profit organisation)

Sandbag is a non-profit campaign group designed to increase public awareness of emissions trading. The organisation was announced in 2008 by Bryony Worthington and was the first (and founding) member of The Guardian's Environment Network.

The Sandbag website centres on the European Union's Emission Trading Scheme and allows its members to campaign to reduce the number of permits in circulation and to purchase permits and cancel them. Large corporations (such as vehicle manufacturers) must obtain these permits from the EU if they need to emit greenhouse gases during production. The purchase of these permits by the public prevents their use by corporations. Worthington described her organisation as "a bit like burning money in front of someone so they can't spend it on something bad."

Worthington gave the first public talk on Sandbag (as well as emissions trading in general) at a geeKyoto meeting in London during May 2008.

From http://en.wikipedia.org/

Portable Emissions Measurement System

A Portable Emissions Measurement System (PEMS) is essentially a lightweight ‘laboratory’ that is used to test and/or assess mobile-source emissions (i.e. cars, trucks, buses, construction equipment, generators, trains, cranes, etc.) for the purposes of compliance, regulation, or decision-making. Governmental entities such as the United States Environmental Protection Agency (USEPA) and the European Union, as well as various states and private-sector entities, have begun to utilize PEMS in order to reduce both the costs and the time of mobile emissions decisions. Various state, federal, and international agencies began referring to this shorthand term in early 2000, and the nickname became part of industry parlance.

Background

Since the mid-1800s, dynamometers (or "dynos" for short) have been used to measure torque and rotational speed (rpm), from which the power produced by an engine, motor or other rotating prime mover can be calculated. A chassis dynamometer measures power from the engine through the wheels: the vehicle is parked on rollers, which the car then turns, and the output is measured. These dynos can be fixed or portable. Because of frictional and mechanical losses in the various drivetrain components, the measured horsepower is generally 15-20 percent less than the brake horsepower measured at the crankshaft or flywheel on an engine dynamometer. Historically, though, dynamometer emission tests are very expensive and have usually involved removing fleet vehicles from service for a long period of time. Also, the data derived from such testing are not representative of "real world" driving conditions and cannot be deemed quantifiable, especially given the relatively low number of repeatable tests at such a facility.
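
The 15-20 percent drivetrain-loss figure above translates into a simple correction when comparing chassis-dyno and engine-dyno numbers. The following sketch is purely illustrative; the function name and example figure are ours.

```python
# Rough conversion from wheel horsepower (chassis dyno) to brake horsepower
# (engine dyno), using the 15-20% drivetrain-loss range mentioned above.

def brake_hp_range(wheel_hp: float,
                   loss_low: float = 0.15,
                   loss_high: float = 0.20) -> tuple:
    """Estimated flywheel horsepower range for a given wheel horsepower."""
    return wheel_hp / (1.0 - loss_low), wheel_hp / (1.0 - loss_high)

low, high = brake_hp_range(400.0)
print(f"~{low:.0f} to {high:.0f} hp at the flywheel")  # ~471 to 500 hp
```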

Introduction of PEMS

Portable systems began to be developed in the late 1990s in order to better identify the actual in-use performance of vehicles. PEMS are designed to measure emissions during the actual use of an internal-combustion engine vehicle or piece of equipment in its regular daily operation, in a manner similar to operation on a chassis dynamometer. This methodology and approach have been recognized by the USEPA.

Many governmental entities (such as the USEPA and the United Nations Framework Convention on Climate Change, or UNFCCC) have identified target mobile-source pollutants in various mobile standards, such as CO2, NOx, particulate matter (PM), carbon monoxide (CO) and hydrocarbons (HC), to ensure that emissions standards are being met. Further, these governing bodies have begun adopting in-use testing programs for non-road diesel engines, as well as other types of internal combustion engines, and are requiring the use of PEMS testing. It is important to delineate the various classifications of the latest ‘transferable’ emissions testing equipment from PEMS equipment in order to best understand the desirability of portability in field testing of emissions.

Defining Portability

An important step in the evaluation of a “Portable Emissions Measurement System” (PEMS) device is to define what a PEMS device is as well as to understand various classifications of ‘transferable vehicle testing equipment’:

Definition of the Term “Portable”

The word portable typically conveys an object that is “Carried or moved with ease, such as a light or small typewriter.”

Definition of the Term “Mobile”

The definition of mobile is essentially “…capable of moving or of being moved readily from place to place: a mobile organism; a mobile missile system.”

Definition of the Term “Instrumented”

Instrumented means fitted with "a device for recording, measuring, or controlling, especially such a device functioning as part of a control system."

Therefore, the subtle difference between ‘portable’ and ‘mobile’ is that a portable system is a lightweight device able to be carried, whereas a mobile system can be readily moved, and ‘Instrumented’ means that the testing equipment has been incorporated into the host system. These distinctions are critical, especially considering additional guidelines from various US and International standards.

Definition determined by the National Institute for Occupational Safety and Health (NIOSH)

The National Institute for Occupational Safety and Health (NIOSH) defines these terms based on an equation known as the "NIOSH Lifting Equation" (http://www.cdc.gov/niosh/docs/94-110/pdfs/94-110-b.pdf) and the "NIOSH Procedures for Analyzing Lifting Jobs" (http://www.cdc.gov/niosh/docs/94-110/pdfs/94-110-c.pdf). These clearly outline safety procedures and equipment (also specified in the Occupational Safety and Health Act, 29 CFR parts 1903, 1904, and 1910).

Safety Guidelines and Standards (the NIOSH Lifting Equation)

It is imperative to refer to existing federal standards and guidelines when determining a proper ergonomically safe and correct procedure. Not only is this important to ensure the safety of the worker(s), but also to ensure the reduction in potential future liability. Therefore, the revised NIOSH Lifting Equation is an excellent source of information to determine what a single worker should or shouldn’t perform.

Based upon the NIOSH lifting equation, and assuming that a typical lift is analogous to lifting a PEMS into the cab of a heavy-duty truck, the total weight of a PEMS device typically should not exceed 45 lb (20 kg) in order to be congruent with national and international safety standards. This not only allows much safer maneuverability and ease of use, but also reduces the number of workers required to safely perform such tasks.
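
To make the connection between the lifting equation and the 45 lb (20 kg) threshold concrete, here is a minimal sketch of the revised NIOSH equation in its metric form, RWL = LC × HM × VM × DM × AM × FM × CM. The multiplier formulas follow the published NIOSH applications manual; the frequency and coupling multipliers come from lookup tables and are therefore supplied by the caller, and the example lift geometry below is hypothetical.

```python
# A minimal sketch of the revised NIOSH lifting equation (metric form):
#   RWL = LC * HM * VM * DM * AM * FM * CM
# FM (frequency) and CM (coupling) come from lookup tables, so they are
# passed in directly. The example geometry below is hypothetical.

def recommended_weight_limit(h_cm: float, v_cm: float, d_cm: float,
                             asymmetry_deg: float, fm: float, cm: float) -> float:
    """Recommended weight limit in kg for a single two-handed lift."""
    LC = 23.0                               # load constant, kg
    HM = min(1.0, 25.0 / h_cm)              # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)     # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / d_cm)        # distance multiplier
    AM = 1.0 - 0.0032 * asymmetry_deg       # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

# Lifting a PEMS unit from the ground into a truck cab (hypothetical geometry):
rwl = recommended_weight_limit(h_cm=40, v_cm=30, d_cm=100,
                               asymmetry_deg=30, fm=0.94, cm=0.95)
print(f"RWL = {rwl:.1f} kg")  # roughly 9 kg for this awkward, asymmetric lift
```

The point of the sketch is simply that an awkward lift into a truck cab yields a recommended limit well below the 20 kg device threshold, which is why keeping PEMS weight low and lifts ergonomic matters.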

Economic Advantage of PEMS Equipment

Because a PEMS unit can be carried easily by one person from jobsite to jobsite, and can be used without ‘team lifting’, the required emissions testing projects become economically viable. Simply put, more testing can be done more quickly by fewer workers, dramatically increasing the amount of testing done in a given period. This in turn significantly reduces the ‘cost per test’ while increasing the overall accuracy attainable in a ‘real-world’ environment. Because the law of large numbers causes results to converge, repeatability, predictability and accuracy are enhanced while the overall cost of testing is reduced.

On-road Emissions Patterns Identified by PEMS

Nearly all modern engines, when tested new and according to the accepted testing protocols in a laboratory, produce relatively low emissions well within the set standards. As all individual engines of the same series are supposed to be identical, only one or several engines of each series get tested. The tests have shown that:

1. The bulk of the total emissions can come from relatively short high-emissions episodes
2. Emissions characteristics can be different even among otherwise identical engines
3. Emissions outside the bounds of the laboratory test procedures are often higher than under operating and ambient conditions comparable to those of laboratory testing
4. Emissions deteriorate significantly over the useful life of the vehicles
5. There are large variances among the deterioration rates, with the high emissions rates often attributable to various mechanical malfunctions

These findings are consistent with the published literature and with data from a myriad of subsequent studies. They are more applicable to spark-ignition engines and considerably less so to diesels, but with the regulation-driven advances in diesel engine technology (comparable to the advances in spark-ignition engines since the 1970s) these findings are likely to apply to the new generation of diesel engines as well. Since 2000, multiple entities have utilized PEMS to measure in-use, on-road emissions on hundreds of diesel engines installed in school buses, transit buses, delivery trucks, plow trucks, over-the-road trucks, pickups, vans, forklifts, excavators, generators, loaders, compressors, locomotives, passenger ferries, and other on-road, off-road and non-road applications. All of the previously listed findings were demonstrated; in addition, it was noticed that extended idling of engines can have a significant impact on emissions during subsequent operation.

Also, PEMS testing identified several engine “anomalies” where fuel-specific NOx emissions were two to three times higher than expected during some modes of operation, suggesting deliberate alterations of the engine control unit (ECU) settings. Such a data set can readily be used for developing emissions inventories, as well as to evaluate various improvements in engines, fuels, exhaust after-treatment and other areas. (Data collected on “conventional” fleets then serves as “baseline” data against which various improvements are compared.) This data set can also be examined for compliance with not-to-exceed (NTE) and in-use emissions standards, which are US-based emission standards that require on-road testing.

PEMS Accuracy, Measurement Errors

The question often arises as to the target accuracy of PEMS. As PEMS are typically limited in size, weight and power consumption, it is often difficult for PEMS to offer the same accuracy and variety of species measured as is possible with top of the line laboratory instrumentation. For this reason, objections were raised against using PEMS for compliance verification.

On the other hand, fleet emissions deduced from laboratory measurements can be subject to significant inaccuracies if the selected engines and operating conditions were not representative of the fleet, or if deliberate anomalies (i.e., dual mapping of the ECU) were not demonstrated during laboratory testing.

The question of how accurate a monitoring system needs to be therefore cannot be objectively answered, nor can a monitoring system be easily designed, without first considering the intended application of the system and the errors associated with different approaches.

It is expected that a variety of on-board systems will be designed, ranging from suitcase-sized PEMS to instrumented trailers towed behind the tested truck. The benefits of each approach need to be considered in light of other sources of errors associated with emissions monitoring, notably vehicle-to-vehicle differences, and the emissions variability within the vehicle itself. In other words, one needs to consider the total of:

1. The difference between what is measured and what is actually emitted during a test
2. The difference between what is emitted during the test and what the vehicle emits during its everyday duties
3. The difference between the emissions characteristics of the tested vehicle and the overall emissions levels of the entire fleet.

For example, when evaluating a benefit of cleaner fuels on a fleet of city buses, one needs to compare taking a bus out of service, installing a laboratory-grade monitoring system, loading it with sandbags and driving it on a simulated route against testing several buses on their regular routes, with passengers on board, using a simpler (and possibly less accurate) monitoring system.
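
One rough way to reason about the combined effect of the three differences listed above is to treat them as independent relative uncertainties and combine them in quadrature. That independence assumption, and the example percentages below, are ours rather than anything prescribed for PEMS; the sketch is only meant to illustrate why instrument accuracy is often not the dominant term.

```python
# Illustrative only: combine the three error sources listed above as if they
# were independent relative uncertainties. The independence assumption and
# the example percentages are ours, not from any PEMS standard.

import math

def combined_relative_error(measurement_err: float,
                            test_vs_duty_cycle_err: float,
                            vehicle_vs_fleet_err: float) -> float:
    """Root-sum-square of independent relative error components."""
    return math.sqrt(measurement_err ** 2 +
                     test_vs_duty_cycle_err ** 2 +
                     vehicle_vs_fleet_err ** 2)

# An instrument good to 5%, a test route that differs from everyday duty by
# 15%, and a tested vehicle that differs from the fleet average by 25%:
print(f"{combined_relative_error(0.05, 0.15, 0.25):.0%}")  # ~30% overall
```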

Additional PEMS Criteria

Another important aspect that needs to be evaluated is the safety of using PEMS on public roadways. Extensions of the tailpipe, lines and cables extending far beyond the vehicle sides, lead-acid batteries located in the passenger compartment of a bus, sharp objects, hot components accessible to bystanders, equipment blocking emergency exits or interfering with the driver, loose components likely to be caught on moving parts, and other potential hazards need to be examined. Also, any modifications to or disassembly of the tested vehicle (e.g., drilling into the exhaust, removing the intake air system) need to be examined for their acceptance by both fleet managers and drivers, especially on passenger-carrying vehicles. The source of power for PEMS is a concern, as only a limited amount of power can be extracted from the vehicle electrical system. Sealed lead-acid batteries, fuel cells and generators have been used as external power sources, each with a potentially significant hazard when driven on the road.

A PEMS also has to be practical. Installation time and the expertise level required to perform installation and to operate the PEMS have a significant impact on the cost of the test and on the number of vehicles tested. Versatility (the ability to test different vehicles) may be important if testing dissimilar engines or vehicles. The total size, weight and transportability of the PEMS need to be considered when testing at different locations, including any consumables such as calibration gases. Any restrictions on the transport of hazardous materials (e.g., flame ionization detector (FID) fuel or calibration gases) need to be taken into account. The ability of the test crew to repair a PEMS in the field using locally available resources can also be essential when testing away from the base.

Thus, the PEMS evaluation protocol should be expanded. In addition to laboratory comparison testing, which is a measure of how accurately a PEMS measures when operated in a laboratory, the accuracy and repeatability of a PEMS should also be examined on the road, possibly while driving along a well-defined, repeatable route, or while driving chassis dynamometer cycles on a test track.

PEMS Suitability to Application

Ultimately, it should be demonstrated that a PEMS is suitable for the desired application. If the ultimate goal is to verify compliance with in-use emissions requirements, a fleet of vehicles with known characteristics – including engines with dual mapping and otherwise non-compliant engines – should be made available for testing. It should then be up to the PEMS manufacturers to demonstrate in practice how these non-compliant vehicles can be identified using their system.

Testing Volume and Safe Repeatability

In order to achieve the required amount of ‘testing volume’ needed to validate real-world testing, three points must be considered:

1. System accuracy
2. Federal and/or state health and safety guidelines and/or standards
3. Economic viability based on the first two points.

Once a particular portable emissions system has been identified and pronounced as accurate, the next step is to ensure that the worker(s) are properly protected from work hazards associated with the task(s) being performed in the use of the testing equipment. For example, typical functions for a worker may be to transport the equipment to the jobsite (i.e. car, truck, train, or plane), carry the equipment to the jobsite, and lift the equipment into position.

Advantages of PEMS

On-road vehicle emissions testing is very different from laboratory testing, bringing both considerable benefits and challenges. As the testing can take place during the regular operation of the tested vehicles, a large number of vehicles can be tested within a relatively short period of time and at relatively low cost. Engines that cannot easily be tested otherwise (e.g., ferry boat propulsion engines) can be tested, and true real-world emissions data can be obtained. On the other hand, the instruments have to be small and lightweight, withstand a difficult environment, and must not pose a safety hazard. Emissions data are subject to considerable variances, as real-world conditions are often neither well defined nor repeatable, and significant variances in emissions can exist even among otherwise identical engines. On-road emissions testing therefore requires a different mindset than the traditional approach of testing in the laboratory and using models to predict real-world performance. In the absence of established methods, the use of PEMS requires a careful, thoughtful and broad approach. This should be considered when designing, evaluating and selecting PEMS for the desired application.

From http://en.wikipedia.org/

Onboard refueling vapor recovery

Onboard Refueling Vapor Recovery (ORVR) is a vehicle emission control system that captures fuel vapors from the vehicle gas tank during refueling. The gas tank and fill pipe are designed so that when refueling the vehicle, fuel vapors in the gas tank travel to an activated carbon packed canister, which adsorbs the vapor. When the engine is in operation, it draws the gasoline vapors into the engine intake manifold to be used as fuel. ORVR has been mandated on all passenger cars in the United States since 2000 by the United States Environmental Protection Agency‎. The use of onboard vapor recovery is intended to make vapor recovery at gas stations obsolete.

From http://en.wikipedia.org/

Not-To-Exceed (NTE)

The Not-To-Exceed (NTE) standard recently promulgated by the United States Environmental Protection Agency (EPA) ensures that heavy-duty engine emissions are controlled over the full range of speed and load combinations commonly experienced in use. NTE establishes an area (the “NTE zone”) under the torque curve of an engine where emissions must not exceed a specified value for any of the regulated pollutants. The NTE test procedure does not involve a specific driving cycle of any specific length (mileage or time). Rather it involves driving of any type that could occur within the bounds of the NTE control area, including operation under steady-state or transient conditions and under varying ambient conditions. Emissions are averaged over a minimum time of thirty seconds and then compared to the applicable NTE emission limits.

Creation of NTE

NTE standards were created by the EPA as a result of a consent decree between the EPA and several major diesel engine manufacturers. These manufacturers included Caterpillar, Cummins, Detroit Diesel, Mack, Mack's parent company Renault Vehicles Industriels, and Volvo Truck Corp. These manufacturers were accused of violating the Clean Air Act by installing devices that defeat emission controls. As part of the resulting consent decree settlement with the EPA, these manufacturers were assessed heavy fines and were subjected to new emissions standards which included NTE.

Current requirements to achieve engine operation in the "NTE Zone"

When all of the following conditions are simultaneously met for at least 30 seconds, an engine is considered to be operating in the NTE zone (a simplified check of the first few conditions is sketched after the list).

1. Engine speed must be greater than 15% above idle speed
2. Engine torque must be greater than or equal to 30% of maximum torque.
3. Engine power must be greater than or equal to 30% of maximum power.
4. Vehicle altitude must be less than or equal to 5,500 feet (1,700 m).
5. Ambient temperature must be less than or equal to 100 °F (38 °C) at sea level to 86°F at 5,500 feet (1,700 m).
6. Brake specific fuel consumption (BSFC) must be less than or equal to 105% of the minimum BSFC if an engine is not coupled to a multi-speed manual or automatic transmission.
7. Engine operation must be outside of any manufacturer petitioned exclusion zone.
8. Engine operation must be outside of any NTE region in which a manufacturer states that less than 5% of in-use time will be spent.
9. For Exhaust gas recirculation (EGR) equipped engines, the intake manifold temperature must be greater than or equal to 86-100 degrees Fahrenheit, depending upon intake manifold pressure.
10. For EGR-equipped engines, the engine coolant temperature must be greater than or equal to 125-140 degrees Fahrenheit, depending on intake manifold pressure.
11. Engine aftertreatment systems' temperature must be greater than or equal to 250 degrees Celsius.
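
As referenced above, the sketch below turns the first five conditions, plus the 30-second dwell requirement, into a simple check. It is illustrative only: conditions 6-11 need engine-specific data (BSFC maps, EGR temperatures, petitioned exclusion zones) and are omitted, and reading condition 1 as engine speed above 1.15 times idle is our interpretation of "15% above idle speed".

```python
# Simplified, illustrative check of NTE-zone conditions 1-5 plus the
# 30-second dwell requirement. Conditions 6-11 require engine-specific data
# and are omitted. "15% above idle" is read here as speed > 1.15 * idle.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class OperatingPoint:
    speed_rpm: float
    torque_pct_of_max: float     # percent of maximum torque
    power_pct_of_max: float      # percent of maximum power
    altitude_ft: float
    ambient_temp_f: float

def in_nte_zone(p: OperatingPoint, idle_rpm: float) -> bool:
    # Ambient limit tapers from 100 F at sea level to 86 F at 5,500 ft.
    temp_limit_f = 100.0 - (100.0 - 86.0) * min(p.altitude_ft, 5500.0) / 5500.0
    return (p.speed_rpm > 1.15 * idle_rpm and
            p.torque_pct_of_max >= 30.0 and
            p.power_pct_of_max >= 30.0 and
            p.altitude_ft <= 5500.0 and
            p.ambient_temp_f <= temp_limit_f)

def is_nte_event(samples_1hz: Iterable[OperatingPoint], idle_rpm: float,
                 min_seconds: int = 30) -> bool:
    """True if a window of 1 Hz samples stays in the zone for >= 30 s."""
    samples = list(samples_1hz)
    return len(samples) >= min_seconds and all(
        in_nte_zone(s, idle_rpm) for s in samples)
```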

Visual representations of NTE Zone

Example NTE Control Area for Heavy-Duty Diesel Engine With 100% Operational Engine Speed Less Than 2400 rpm

Example NTE Control Area for Heavy-Duty Diesel Engine With 100% Operational Engine Speed Greater Than 2400 rpm

Description

The NTE test, as defined in CFR 86.1370-2007, establishes an area (the NTE control area) under the torque curve of an engine where emissions must not exceed a specified emission cap for a given pollutant. The NTE cap is set at 1.25 times the FTP emission limit. For 2005 model year heavy-duty engines, the NTE emission cap for NMHC plus NOx is 1.25 times 2.5 grams per brake horsepower-hour, or 3.125 grams per brake horsepower-hour. The NTE control area for diesel engines is bounded on the engine's torque and speed map as follows. The first, upper boundary is represented by the engine's maximum torque at a given speed. The second boundary is 30 percent of maximum torque; only operation above this boundary is included in the NTE control area. The third boundary is determined from the lowest engine speed at 50 percent of maximum power and the highest engine speed at 70 percent of maximum power; this engine speed is considered the "15 percent operational engine speed". The fourth boundary is 30 percent of maximum power.
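
The cap arithmetic in the paragraph above is simple enough to restate directly; the snippet below just encodes the 1.25 multiplier and reproduces the 2005 model-year example.

```python
# The NTE cap relation described above: cap = 1.25 * FTP limit.

def nte_cap(ftp_limit_g_per_bhp_hr: float, multiplier: float = 1.25) -> float:
    """NTE emission cap for a pollutant, in g/bhp-hr."""
    return multiplier * ftp_limit_g_per_bhp_hr

# 2005 model-year NMHC + NOx example from the text:
print(nte_cap(2.5))  # 3.125 g/bhp-hr
```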

Controversy and deficiency regarding NTE standards

Controversy

A controversial issue is the applicability of the NTE limits to the real-world driving. In order for NTE standards to apply, the engine needs to remain within the NTE zone (limits include operation at a minimum of 30% of rated power) for at least 30 seconds. Concerns arose that performing this action could prove to be difficult, as each time the driver removes the foot from the accelerator pedal, or shifts gears on vehicles with manual transmission, the engine leaves the NTE zone.

In urban or suburban driving, this happens relatively often, to the point that NTE standards are applicable only a very small portion of the operation or, in some cases, not at all. The probability of the engine remaining within the NTE zone for over 30 seconds also decreases with the advent of high-power engines. For example, if the power required to maintain a motorcoach or an over-the-road truck at highway cruising speed is somewhere around 150 hp (110 kW), the probability that a 475 hp (354 kW) engine will consistently operate at loads above 30%, without “dips” to lower power levels, can be relatively small.

These concerns were confirmed by studies carried out by West Virginia University (WVU) under the Consent Decrees. WVU found that “remaining for 30 seconds within the NTE zone can be quite difficult. The resulting low NTE availability poses a problem as many measurements within the NTE area have to be rejected along with those from outside the NTE area. The question arises if in this way all real-life emissions are sufficiently ‘well reflected’ in the NTE test results”.

A second issue of concern in the same vein is the case where an engine is compliant within the NTE zone but exhibits elevated NOx at power levels just outside the NTE zone, or at idle. For reasons such as this, the Working Group On Off Cycle Emissions is studying whether an extension of the NTE zone is warranted, asking whether there are spots on the engine map (outside of the NTE zone) that contribute significantly to real-life emissions. Their preliminary findings echo those of WVU: the time of engine operation in the NTE zone is rather low.

EPA admitted deficiencies

According to the US EPA, there are technical limitations of the NTE under limited operating conditions, which have caused the EPA to “carve out” (see the figures above) certain portions of the NTE zone to allow for these deficiencies. Excerpts follow:

“NTE zone was defined by a desire to have a homogeneous emissions limit. Carve-outs within that zone exclude certain areas of operation from NTE consideration or limit how much emissions from that operation can contribute to an NTE result, deficiencies allow temporary exceedences of the NTE standards due to technical limitations under limited operating conditions. The idea is not to hold the manufacturer responsible for NTE compliance during modes where the engine is not capable of operating or where it is not technically feasible to meet the NTE standards.”

Regarding the particulate matter “carve-out”

"PM-specific region is “carved out” of the NTE control area. The PM specific area of exclusion is generally in the area under the torque curve, where engine speeds are high and engine torque is low, and can vary in shape depending upon several speed-related criteria and calculations detailed in the regulations. Controlling PM in this range of operation presents fundamental technical challenges which we believe can not be overcome in the 2004 time frame. Specifically, the cylinder pressures created under these high speed and low load conditions are often insufficient to prevent lube oil from being ingested into the combustion chamber. High levels of PM emissions are the result. Furthermore, we do not believe that these engines spend a significant portion of their operating time in this limited speed and torque range"

Lawsuits and settlement

Lawsuits

In 2001, five separate lawsuits were filed against the US EPA by the Engine Manufacturers Association (EMA) and several individual trucking industry entities (such as International Truck and Engine Corporation). Each of those lawsuits challenged the legality and technological feasibility of certain engine emission control standards in EPA regulations now referred to as NTE requirements. In their challenge, EMA stated that to determine whether an engine meets a primary emission standard, engines are tested and assessed using a standardized 20-minute emissions laboratory test known as the Federal Test Procedure. The NTE, by contrast, has no specified test procedure and potentially could apply over an almost infinite number of test conditions. This, in the manufacturers’ view, made it virtually impossible to ensure total compliance with the NTE—since there is no real or practical way to test an engine under all conceivable conditions—and so made the NTE both unlawful (the CAA authorizes EPA to adopt engine standards AND accompanying test procedures) and technically infeasible.

Settlement

On June 3, 2003, the parties finalized a settlement of their disputes pertaining to the NTE standards. The parties agreed upon a detailed outline for a future regulation that would require a manufacturer-run heavy-duty in-use NTE testing (“HDIUT”) program for diesel-fueled engines and vehicles. One section of the outline stated:

“The NTE Threshold will be the NTE standard, including the margins built into the existing regulations, plus additional margin to account for in-use measurement accuracy. This additional margin shall be determined by the measurement processes and methodologies to be developed and approved by EPA/CARB/EMA. This margin will be structured to encourage instrument manufacturers to develop more and more accurate instruments in the future.”

HDIUT and Portable Emissions Measurement Systems (PEMS)

The ultimate objective of the new HDIUT program is to allow a significant streamlining of engine certification if a truly robust in-use emissions testing program proves feasible and cost-effective. Time-consuming and expensive laboratory assessments of engines could then give way to real-world, real-time emissions assessments that efficiently provide more relevant data.

Basically, the HDIUT is an industry-agreed, manufacturer-run, in-use, on-road testing program. It builds upon the original NTE standard. It is designed to focus on compliance in the real world and relies on emissions testing utilizing Portable Emissions Measurement Systems (PEMS), with NOx, HC, CO and PM being the pollutants to be measured. Measurement accuracy margins are being established to account for the emissions measurement variability associated with PEMS in use.

From http://en.wikipedia.org/