
Monday, December 17, 2012

Enhance Landfill Gas to Bio-LNG

A new concept known as "hydraulic fracturing" to enhance the recovery of landfill gas from new and existing landfill sites has been tested jointly by a Dutch and a Canadian company. They claim it is now possible to recover such gas economically and liquefy it into Bio-LNG for use as a vehicle fuel and for power generation.

Most biofuels around the world are currently made from energy crops such as wheat, maize, palm oil and rapeseed oil, and only a minor share is made from waste. Such a practice is not sustainable in the long run, considering the anticipated food shortages due to climate change. The EU wants to ban biofuels that use too much agricultural land and to encourage biofuels made not from food crops but from waste materials. There is therefore a need to collect the methane emitted by landfill sites more efficiently and economically, so that it can compete with fossil fuels.

There are approximately 150,000 landfills in Europe, containing approximately 3-5 trillion cubic meters of waste (Haskoning 2011). All landfills emit landfill gas; the contribution of methane emissions from landfills is estimated at between 30 and 70 million tons each year. In the USA, landfills contributed an estimated 450 to 650 billion cubic feet of methane per year (in 2000). One can either flare landfill gas or use it to generate electricity, but it is more prudent to produce the cleanest and cheapest liquid biofuel, namely "Bio-LNG".

Landfill gas generation: how do these bugs do their work? Researchers had a hard time figuring out why landfills do not start out as a friendly environment for the organisms that produce methane. New research from North Carolina State University points to one species of microbe that paves the way for the other methane producers. The starting bug has been found, which opens the door to engineering better landfills with better production management. One can imagine a landfill with real economic prospects beyond getting the trash out of sight. The NCSU researchers found that an anaerobic bacterium called Methanosarcina barkeri appears to be the key microbe.

The formation of landfill gas proceeds in four phases:
Phase 1: oxygen disappears and nitrogen declines.
Phase 2: hydrogen is produced and CO2 production increases rapidly.
Phase 3: methane production rises and CO2 production decreases.
Phase 4: methane content can rise to 60%.
Phases 1-3 typically last 5-7 years. Phase 4 can continue for decades, with the rate of decline depending on the waste content.

Installation of a landfill gas collection system: A number of wells are drilled and interconnected with a pipeline system. Gas is guided from the wells to a facility where it is flared or burnt to generate electricity. A biogas engine has an efficiency of 30-40%, but landfills often lack access to the grid and there is usually no use for the heat. The alternative: make bio-LNG instead and transport it for use in heavy-duty vehicles and ships, or in applications where all the electricity and heat can be used.

Bio-LNG: what is it? Bio-LNG is liquid bio-methane (also called LBM). It is made from biogas, which is produced by anaerobic digestion: all organic waste can rot and produce biogas, and the bacteria do the work. Biogas is therefore the cheapest and cleanest biofuel, and it can be generated without competing with food production or land use. For the first time there is a biofuel, bio-LNG, of better quality than the fossil fuel it replaces.
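To make the phased gas generation described above more concrete, the sketch below implements a simplified first-order decay estimate of the kind used in common landfill gas tools (for example, the US EPA's LandGEM model). It is an illustration only, not the method used by the companies mentioned in this post; the decay constant k, the methane generation potential L0 and the waste acceptance history are assumed, typical values.

```python
# Illustrative first-order decay estimate of landfill methane generation.
# Generic sketch (similar in spirit to the US EPA LandGEM approach), not the
# method described in the article; k and L0 are assumed typical values.
import math

def methane_generation(waste_by_year, k=0.05, L0=170.0, year=2012):
    """Estimate methane generation (m3/year) in a given year.

    waste_by_year: dict {year_accepted: tonnes of waste accepted that year}
    k:  first-order decay constant (1/yr), assumed typical value
    L0: potential methane generation capacity (m3 CH4 per tonne), assumed
    """
    q = 0.0
    for yr, tonnes in waste_by_year.items():
        age = year - yr
        if age >= 0:
            q += k * L0 * tonnes * math.exp(-k * age)
    return q

# Hypothetical landfill: 100,000 t/yr of waste accepted from 1990 to 2005
waste = {yr: 100_000 for yr in range(1990, 2006)}
q_m3_per_year = methane_generation(waste, year=2012)
print(f"Estimated methane generation: {q_m3_per_year:,.0f} m3/yr "
      f"(~{q_m3_per_year/8760:,.0f} m3/h)")
```

With these illustrative inputs the result is on the order of several hundred cubic meters of methane per hour, consistent with the "hundreds of Nm3/hour" flows mentioned below.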
The bio-LNG production process: Landfill gas is produced by anaerobic fermentation in the landfill. The aim is to produce a constant flow of biogas with a high methane content. The biogas must be upgraded, i.e. H2S, CO2 and trace components must be removed; in landfill gas this also includes siloxanes, nitrogen and Cl/F compounds. The bio-methane must then be purified (maximum 25-50 ppm CO2, no water) to prepare it for liquefaction. A cold box liquefies the pure biomethane into bio-LNG.

Small-scale bio-LNG production using smarter methods:
• Use upgrading modules that do not cost much energy.
• Membranes that can upgrade to 98-99.5% methane are suitable.
• Use a method for advanced upgrading that is low on energy demand.
• Use a fluid/solid that is allowed to be dumped at the site.
• Use cold boxes that are easy to install and low on power demand.
• Use LNG tank trucks as storage and distribution units.
• See if co-produced CO2 can be sold and used in greenhouses or elsewhere.
• Look carefully at the history and present status of the landfill.

What was holding back more projects? Most flows of landfill gas are small (hundreds of Nm3/hour), so the economy of scale is generally not favorable. Technology for upgrading and liquefaction has evolved, but on such small flows the investments cannot be paid back, even over decades. Now there is a solution: enhanced gas recovery by hydraulic fracturing. Holland Innovation Team and Fracrite Environmental Ltd. (Canada) have developed a method to increase gas extraction from landfills 3-5 times. Hydraulic fracturing increases landfill gas yield and therefore the economy of scale for bio-LNG production.

The method consists of a set of drillings from which, at certain depths, the landfill is hydraulically fractured. This means a set of circular horizontal fractures is created from the well at preferred depths. Sand or other materials are injected into the fractures. Gas gathers from below in the created interlayers and flows into the drilled well. In this way a "guiding" circuit for landfill gas is created. With a 3-5 fold quantity of gas, the economy of scale for bio-LNG production is reached rapidly. Considering the multitude of landfills worldwide, this hydraulic fracturing method, in combination with containerized upgrading and liquefaction units, offers huge potential. The method is cost effective, especially at virgin landfills, but also at landfills with decreasing amounts of landfill gas.

Landfill gas fracturing pilot (2009):
• Landfill operational from 1961-2005.
• 3 gas turbines, only 1 or 2 in operation at any time due to low gas extraction rates.
• Only 12 of 60 landfill gas extraction wells still producing methane.
• Objective of the pilot was to assess whether fracturing would enhance methane extraction rates.

Field program and preliminary results: Two new wells were drilled into municipal wastes and fractured (FW60, FW61). Sand fractures were placed at 6, 8, 10 and 12 m depth in the wastes, with a fracture radius of 6 m. The balance gases are believed to be due to oxygenation effects during leachate and groundwater pumping. Note: this is entirely different from the deep fracking used for shale gas!

Conceptual bioreactor design: There are anaerobic conditions below the groundwater table, but permeability decreases because of compaction of the waste. Permeability increases after fracking, and so do the quantities of landfill gas and leachate.
Using the leachate by injecting it above the groundwater table introduces anaerobic conditions in an area where, up till then, oxygen prevailed and prevented landfill gas formation. This can also be done systematically, so that all extracted leachate is disposed of in shallow surrounding wells above the groundwater table: one well below the groundwater table is fracked, and the leachate is injected at the corners of a square around the deeper well. Sewage sludge and bacteria can be added to increase the yield further.

Improving the business case further: A 3-5 fold increase in biogas flow will improve the business case through a larger economy of scale. The method will also improve landfill quality and prepare the landfill for other uses. When the landfill gas stream dries up after five years or so, the next landfill can be served by relocating the containerized modules (cold boxes and upgrading modules). The company is upgrading with a new method developed in-house, and improving landfill gas yield by fracking with smart materials. The EC recommendation to count landfill gas quadrupled towards renewable fuel targets, and the superior carbon footprint of bio-LNG production from landfills, are beneficial for immediate start-ups.
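To see why a 3-5 fold increase in gas flow changes the economics, here is a rough, hypothetical yield estimate. The flow rate, methane content, recovery factor and operating hours are all assumptions chosen for illustration, not figures from the pilot described above.

```python
# Rough bio-LNG yield estimate for a small landfill gas flow, before and after
# a hypothetical 3-5x enhancement. All figures are illustrative assumptions,
# not data from the pilot described in this post.

def bio_lng_tonnes_per_year(raw_gas_nm3_per_h, ch4_fraction=0.5,
                            recovery=0.9, hours_per_year=8000):
    """Convert a raw landfill gas flow into an approximate bio-LNG output."""
    ch4_density = 0.72  # kg per Nm3 of methane (approx., at 0 degC, 1 atm)
    ch4_nm3_per_h = raw_gas_nm3_per_h * ch4_fraction * recovery
    return ch4_nm3_per_h * ch4_density * hours_per_year / 1000.0  # tonnes/yr

base_flow = 300  # Nm3/h of raw landfill gas (hypothetical small site)
for factor in (1, 3, 5):
    t = bio_lng_tonnes_per_year(base_flow * factor)
    print(f"{factor}x flow ({base_flow * factor} Nm3/h): ~{t:,.0f} t bio-LNG/yr")
```

The point is simply that multiplying the same site's flow by 3-5 moves annual output from hundreds to thousands of tonnes, which changes what size of containerized plant can be justified.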
Conclusions and recommendations: Landfills emit landfill gas, and landfill gas is a good source for the production of bio-LNG. Upgrading and liquefaction techniques are developing fast and decreasing in price. Hydraulic fracturing can improve landfill gas yield such that economy of scale is reached sooner. Hydraulic fracturing can also introduce anaerobic conditions by injecting leachate, sewage sludge and bacteria above the groundwater table. The concept is optimized to extract most of the landfill gas within a period of five years and to upgrade and liquefy it to bio-LNG in containerized modules. Holland Innovation Team and Fracrite aim at a production price of less than €0.40 per kilo (€400/ton) of bio-LNG, which is currently equivalent to fossil LNG prices in Europe and considerably lower than LNG prices in Asia, with a payback time of only a few years. (Source: Holland Innovation Team)
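As a rough cross-check, the €0.40 per kilo (€400/ton) target can be expressed per unit of energy. The LNG heating value used below is an approximate textbook figure, not a number from the source.

```python
# Putting the EUR 0.40/kg (EUR 400/t) bio-LNG target in energy terms.
# The heating value is an approximate textbook figure, not from the source.

price_eur_per_kg = 0.40
lhv_mj_per_kg = 50.0  # approx. lower heating value of LNG/methane (assumed)

price_eur_per_gj = price_eur_per_kg / (lhv_mj_per_kg / 1000.0)
price_eur_per_kwh = price_eur_per_kg / (lhv_mj_per_kg / 3.6)

print(f"~EUR {price_eur_per_gj:.1f}/GJ, or ~EUR {price_eur_per_kwh:.3f}/kWh (thermal)")
```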

Friday, December 7, 2012

Innovative desalination technology

Seawater desalination is a technology that provides drinking water for millions of people around the world. With increasing industrialization, rising water usage and little recycling or reuse, the demand for fresh water is growing faster than ever. Industries such as power plants use the bulk of their water for cooling, and chemical industries use water in their processes. Agriculture is also a major user of water, and countries like India exploit groundwater for this purpose. To supplement fresh water, governments and industries in many parts of the world are now turning to desalinated seawater as a potential source. However, desalination of seawater is an expensive option because of its large energy usage. Even so, with frequent failures of the monsoon rains and the uncertainties of a changing weather pattern due to global warming, seawater desalination is becoming a potential source of fresh water despite its cost and environmental issues.

Seawater desalination technology has not undergone any major changes during the past three decades. Reverse osmosis is currently the most sought-after desalination technology, thanks to increasing membrane efficiencies and energy-saving devices. In spite of all these improvements, the biggest problem with desalination technologies is still the rate of recovery of fresh water. The best recovery in SWRO plants is about 50% of the input water. Higher recoveries create additional problems such as scaling, higher energy requirements and O&M issues, and many suppliers prefer to restrict recovery to 35%, especially when they have to guarantee the life of the membranes and the plant.

Seawater is essentially fresh water with large quantities of dissolved salts; the concentration of total dissolved salts in seawater is about 35,000 mg/L. Chemical industries such as caustic soda and soda ash plants use salt as their basic raw material. Salt is the backbone of the chemical industry, and a number of downstream chemicals are manufactured from it. Seawater is the major source of salt, and most of these chemical industries make their own salt by solar evaporation of seawater using traditional salt pans. A large area of land is required for this purpose, solar evaporation is a slow process that takes many months to convert seawater into salt, and the work is labor intensive under harsh conditions.

The author of this article has developed an innovative technology to generate fresh water as well as salt brine suitable for caustic soda and soda ash production. Using this novel process, one is able to recover almost 70% fresh water, against only 40% fresh water recovered in a conventional SWRO process, and also to recover about 7-9% saturated brine simultaneously. Chemical industries currently producing salt by solar evaporation are unable to meet their demand or expand their production due to a lack of salt. The price of salt is steadily increasing due to the supply-demand gap and also due to uncertainties in weather patterns caused by global warming. This results in an increased cost of production, and many small and medium producers of these chemicals are unable to compete with large industries. Moreover, countries like Australia, with vast arid land, can produce large quantities of salt competitively using mechanized processes; Australia currently exports salt to countries like Japan, while countries like India and China are unable to compete in the international market with their age-old salt pans worked by manual labor.
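The recovery limit discussed above follows from a simple mass balance: if nearly all the salt is rejected into the concentrate, the concentrate strength scales as 1/(1 - recovery). The sketch below illustrates this for typical seawater salinity; it describes conventional SWRO, not the patented process introduced in this post.

```python
# Why higher recovery causes scaling trouble in conventional SWRO: the reject
# brine concentration rises as 1/(1 - recovery). Assumes ideal salt rejection.

feed_tds_g_per_l = 35.0  # typical seawater total dissolved solids

def brine_tds(recovery):
    """Concentrate TDS (g/L), assuming all salt reports to the reject stream."""
    return feed_tds_g_per_l / (1.0 - recovery)

for r in (0.35, 0.40, 0.50, 0.70):
    print(f"recovery {r:.0%}: reject brine ~{brine_tds(r):.0f} g/L TDS")
```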
In solar evaporation the water is simply lost to evaporation. The solar salt these chemical industries currently use contains a number of impurities and requires an elaborate purification process. Moreover, the salt can be used as a raw material only in the form of saturated brine free of impurities; any impurity is detrimental to the electrolytic process in which the salt brine is converted into caustic soda and soda ash. Chemical industries use deionized water to dissolve solar salt into saturated brine and then purify it with a number of chemicals before it can be used as a raw material for the production of caustic soda or soda ash. Such purified brine costs many times more than the raw salt, which in turn increases the cost of the chemicals produced.

In this new process, seawater is pumped into the system, where it is separated into 70% fresh water meeting WHO specifications for drinking purposes and 7-10% saturated pure brine suitable for the production of caustic soda and soda ash. These chemical industries also use large quantities of process water for various purposes, and they can use the above 70% stream in their processes. Only 15-20% of the unutilized seawater is discharged back into the sea, compared with the roughly 65% of toxic discharge from conventional desalination plants. This new technology is efficient and environmentally friendly and generates value-added brine as a by-product. It is a win-win situation for industry and the environment. The technology has recently been patented and is available for licensing on a non-exclusive or exclusive basis.

The advantage of this technology is that any caustic soda or soda ash plant located near the seashore can produce its salt brine directly from seawater, without stockpiling solar salt for months, transporting it over long distances or importing it from overseas. Governments and industries can join together to set up such plants, where governments buy the water for distribution and industries use the salt brine as raw material for their chemical production.

Setting up a desalination plant solely to supply drinking water to the public is not a smart way to reduce the cost of drinking water. For example, the Victorian Government in Australia has set up a large desalination plant to supply drinking water. The plant was built by a foreign company on a BOOT (build, own, operate, transfer) basis, and water is sold to the Government on a 'take or pay' basis. Currently the water storage level in the catchment area is nearly 80% of capacity, and the Government is unlikely to use desalinated water for some years to come. However, the Government is legally bound by contract to buy the water or pay the contracted value, even if it does not require the water. Such contracts can be avoided in the future by governments joining with industries that require salt brine 24x7 throughout the year, thus mitigating the risks involved in expensive legal contracts.
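As a back-of-envelope check on the product split described above (roughly 70% fresh water, 7-10% saturated brine and 15-20% reject), the sketch below estimates how much saturated brine is needed to carry the NaCl contained in the feed. The NaCl share of seawater salts and the saturation concentration are approximate literature values; the actual process chemistry is not disclosed in the source.

```python
# Back-of-envelope salt balance for the claimed product split. The NaCl share
# of seawater salts and the saturation concentration are approximate literature
# values, not figures from the patented process.

seawater_tds = 0.035      # kg salt per kg seawater (about 35 g/L)
nacl_share = 0.78         # approx. fraction of seawater salts that is NaCl
saturated_brine = 0.26    # approx. NaCl mass fraction of saturated brine

basis = 100.0                                   # kg of seawater feed
nacl_in_feed = basis * seawater_tds * nacl_share
brine_needed = nacl_in_feed / saturated_brine   # kg of saturated brine
fresh_water = 70.0                              # claimed fresh water recovery
reject = basis - fresh_water - brine_needed

print(f"Saturated brine needed to carry the NaCl: ~{brine_needed:.0f} kg "
      f"({brine_needed/basis:.0%} of feed)")
print(f"Remaining reject stream: ~{reject:.0f} kg ({reject/basis:.0%} of feed)")
```

The result (roughly 10% brine and 20% reject) is consistent with the 7-10% and 15-20% figures quoted above.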

Monday, December 3, 2012

Which is the best storage technology for Renewable energy?

The share of renewable energy is steadily increasing around the world, but storing such an intermittent energy source and using it when needed has been a challenge. In fact, energy storage constitutes a significant portion of the cost of any renewable energy project. Many storage technologies are available in the commercial market, but choosing the right type of technology has always been difficult. In this article we consider four types of storage technology.

The California Energy Commission conducted economic and environmental analyses of four energy storage options for a wind energy project: (1) lead acid batteries, (2) zinc bromine (flow) batteries, (3) a hydrogen electrolyzer and fuel cell storage system, and (4) a hydrogen storage option where the hydrogen was used for fueling hydrogen-powered vehicles. Their conclusions were:

"Analysis with NREL's (National Renewable Energy Laboratory) HOMER model showed that, in most cases, energy storage systems were not well utilized until higher levels of wind penetration were modeled (i.e., 18% penetration in Southern California in 2020). In our scenarios, hydrogen storage became more cost-effective than battery storage at higher levels of wind power production, and using the hydrogen to refuel vehicles was more economically attractive than reconverting the hydrogen to electricity. The overall value proposition for energy storage used in conjunction with intermittent renewable power sources depends on multiple factors. Our initial qualitative assessment found the various energy storage systems to be environmentally benign, except for emissions from the manufacture of some battery materials. However, energy storage entails varying economic costs and environmental impacts depending on the specific location and type of generation involved, the energy storage technology used, and the other potential benefits that energy storage systems can provide (e.g., helping to optimize transmission and distribution systems, local power quality support, potential provision of spinning reserves and grid frequency regulation, etc.)."

Key Assumptions
Key assumptions guiding this analysis include the following:
• Wind power will expand in California under the statewide RPS program to a level of approximately 10% of total energy provided in 2010 and 20% by 2020, with most of this expansion in Southern California.
• Costs of flow battery systems are assumed to decline somewhat through 2020, and costs of hydrogen technologies (electrolyzers, fuel cell systems, and storage systems) are assumed to decline significantly through 2020.
• In the case where hydrogen is produced, stored, and then reconverted to electricity using fuel cell systems, we assume that the hydrogen can be safely stored in modified wind turbine towers at relatively low pressure, at lower cost than more conventional, higher-pressure storage.
• In the case where hydrogen is produced and sold into transportation markets, we assume that there is demand for hydrogen for vehicles in 2010 and 2020, and that the hydrogen is produced at the refueling station using the electricity produced from wind farms (in other words, we assume that transmission capacity is available for this when needed).
Key Project Findings
Key findings from the HOMER model projections and analysis include the following:
• Energy storage systems deployed in the context of greater wind power development were not particularly well utilized (based on the availability of "excess" off-peak electricity from wind power), especially in the 2010 time frame (which assumed 10% wind penetration statewide), but were better utilized – up to 1,600 hours of operation per year in some cases – at the greater (20%) wind penetration levels assumed for 2020.
• The levelized costs of electricity from these energy storage systems ranged from a low of $0.41 per kWh – near the marginal cost of generation during peak demand times – to many dollars per kWh in cases where the storage was not well utilized (a simple illustration of such a levelized-cost calculation follows this list). This suggests that, for these systems to be economically attractive, it may be necessary to optimize their output to coincide with peak demand periods and to identify additional value streams from their use (e.g., transmission and distribution system optimization, provision of power quality and grid ancillary services, etc.).
• At low levels of wind penetration (1%-2%), the electrolyzer/fuel cell system was either inoperable or uneconomical (i.e., either no electricity was supplied by the energy storage system or the electricity provided carried a high cost per MWh).
• In the 2010 scenarios, the flow battery system delivered the lowest cost per unit of energy stored and delivered.
• At higher levels of wind penetration, the hydrogen storage systems became more economical, such that at the wind penetration levels in 2020 (18% in Southern California) the hydrogen systems delivered the least costly energy storage.
• Projected decreases in capital costs and maintenance requirements, along with a more durable fuel cell, allowed the electrolyzer/fuel cell system to gain a significant cost advantage over the battery systems in 2020.
• Sizing the electrolyzer/fuel cell system to match the flow battery system's relatively high instantaneous power output increased the competitiveness of this system in low energy storage scenarios (2010, and Northern California in 2020), but in scenarios with higher levels of energy storage (Southern California in 2020) the electrolyzer/fuel cell system sized to match the flow battery output became less competitive.
• In our scenarios, the hydrogen production case was more economical than the electrolyzer/fuel cell case with the same amount of electricity consumed (i.e., hydrogen production delivered greater revenue from hydrogen sales than the electrolyzer/fuel cell avoided in electricity costs, once the process efficiencies are considered).
• Furthermore, the hydrogen production system with a higher-capacity power converter and electrolyzer (sized to match the flow battery converter) was more cost-effective than the lower-capacity system sized to match the output of the solid-state battery, because economies of scale were found to produce lower-cost hydrogen in all cases.
• In general, the energy storage systems themselves are fairly benign from an environmental perspective, with the exception of emissions from the manufacture of certain components (such as nickel, lead, cadmium, and vanadium for batteries). This is particularly true outside of the U.S., where battery plant emissions are less tightly controlled and potential contamination from improper disposal of these and other materials is more likely.
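The levelized-cost figures quoted above depend strongly on how many hours per year the storage is actually used. The sketch below is a minimal, generic levelized-cost calculation with hypothetical cost and performance numbers (not values from the CEC/HOMER study); it simply shows how poor utilization drives up the cost per delivered kWh.

```python
# Minimal levelized-cost-of-stored-energy sketch. All inputs are hypothetical
# placeholders, not values from the CEC/HOMER study quoted above.

def levelized_cost(capex, annual_om, lifetime_yr, discount_rate,
                   power_kw, hours_per_year, round_trip_eff,
                   charging_cost_per_kwh):
    """Cost per kWh delivered from storage."""
    # Capital recovery factor: spreads capex over the lifetime at the discount rate.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr) / \
          ((1 + discount_rate) ** lifetime_yr - 1)
    kwh_out = power_kw * hours_per_year
    kwh_in = kwh_out / round_trip_eff          # energy bought to charge
    annual_cost = capex * crf + annual_om + kwh_in * charging_cost_per_kwh
    return annual_cost / kwh_out

for hours in (200, 800, 1600):
    c = levelized_cost(capex=1_500_000, annual_om=30_000, lifetime_yr=15,
                       discount_rate=0.08, power_kw=1000,
                       hours_per_year=hours, round_trip_eff=0.70,
                       charging_cost_per_kwh=0.03)
    print(f"{hours} h/yr utilization: ~${c:.2f}/kWh delivered")
```

With these placeholder numbers the cost per delivered kWh falls several-fold as utilization rises from 200 to 1,600 hours per year, which is the pattern the study describes.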
The overall value proposition for energy storage systems used in conjunction with intermittent renewable energy systems depends on diverse factors:
• the interaction of generation and storage system characteristics with grid and energy resource conditions at a particular location;
• the potential use of energy storage for multiple purposes in addition to improving the dependability of intermittent renewables (e.g., peak/off-peak power price arbitrage, helping to optimize the transmission and distribution infrastructure, load-leveling the grid in general, helping to mitigate power quality issues, etc.);
• the degree of future progress in improving forecasting techniques and reducing prediction errors for intermittent renewable energy systems;
• electricity market design and rules for compensating renewable energy systems for their output.

Conclusions: "This study was intended to compare the characteristics of several technologies for providing energy storage for utility grids – in a general sense and also specifically for battery and hydrogen storage systems – in the context of greater wind power development in California. While more detailed site-specific studies will be required to draw firm conclusions, we believe these energy storage systems have relatively limited application potential at present but may become of greater interest over the next several years, particularly for California and other areas that are experiencing significant growth in wind power and other intermittent renewables. Based on this study and others in the technical literature, we see a larger potential need for energy storage system services in the 2015-2020 time frame, when growth in renewably produced electricity is expected to reach levels of 20%-30% of electrical energy supplied. Depending on the success of improved wind forecasting techniques and electricity market designs, the role for energy storage in the modern electricity grids of the future may be significant. We suggest further and more comprehensive assessments of multiple energy storage technologies for comparison purposes, and additional site- and technology-specific project assessments, to gain a better sense of the actual value propositions for these technologies in the California energy system.

This project has helped to meet program objectives and to benefit California in the following ways:
• Providing environmentally sound electricity. Energy storage systems have the potential to make environmentally attractive renewable energy systems more competitive by improving their performance and mitigating some of the technical issues associated with renewable energy/utility grid integration. This project has identified the potential costs associated with the use of various energy storage technologies as a step toward understanding the overall value proposition for energy storage as a means to help enable further development of wind power (and potentially other intermittent renewable resources as well).
• Providing reliable electricity. The integration of energy storage with renewable energy resources can help to maintain grid stability and adequate reserve margins, thereby contributing to the overall reliability of the electricity grid. This study identified the potential costs of integrating various types of energy storage with wind power, against which the value of greater reliability can be assessed along with other potential benefits.
• Providing affordable electricity.
Upward pressure on natural gas prices, partly as a function of increased demand, has significantly contributed to higher electricity prices in California and other states. Diversification of electricity supplies with relatively low-cost sources, such as wind power, can provide a hedge against further natural gas price increases. Higher penetration of these other (non-natural-gas-based) electricity sources, potentially enabled by the use of energy storage, can reduce the risks of future electricity price increases." (Source: California Energy Commission; prepared by the University of California, Berkeley).

Thursday, November 29, 2012

Shrinking Arctic and disappearing sea

The Arctic ice cover has been steadily shrinking over time, opening new polar shipping routes. Recently an LNG tanker carried a cargo from Norway to Japan along Russia's northern coast, marking the beginning of a new polar shipping route. A short documentary film recently showed the disappearance of the entire Aral Sea from the map, caused by evaporation after Soviet authorities diverted the rivers that flow into it. These dramatic events are happening right in front of our eyes. Yet many governments and people around the world still question whether global warming is real and whether it is man-made. People do not accept the science of global warming because it causes them a lot of inconvenience and embarrasses their governments. They do not want to face the reality, preferring to postpone it for another day. This is what is happening with the superpowers and industrialized countries of the world. But how long can they sustain such skepticism and postpone the urgent actions needed to safeguard future generations?

• Arctic sea ice is projected to decline dramatically over the 21st century, with little late-summer sea ice remaining by the year 2100.
• The simulated 21st-century Arctic sea ice decline is not smooth, but contains periods of large and small changes.
• The Arctic region responds sensitively to past and future global climate forcings, such as changes in atmospheric greenhouse gas levels. Its surface air temperature is projected to warm at a rate about twice as fast as the global average.

The attached figure shows sea ice concentrations simulated by GFDL's CM2.1 global coupled climate model, averaged over August, September and October (the months when Arctic sea ice concentrations are generally at a minimum). Three years (1885, 1985 and 2085) are shown to illustrate the model-simulated trend. A dramatic reduction of summertime sea ice is projected, with the rate of decrease being greatest during the 21st-century portion. The colors range from dark blue (ice-free) to white (100% sea ice cover).

"Satellite observations show that Arctic sea ice extent has declined over the past three decades [e.g., NOAA magazine, 2006]. Global climate model experiments, such as those conducted at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), project this downward trend to continue and perhaps accelerate during the 21st century. The Arctic is a region that is projected to warm at about twice the rate of the global average [Winton, 2006a], a phenomenon sometimes referred to as "Arctic amplification". As Arctic temperatures rise, sea ice melts, a change that in turn affects other aspects of global climate. While beyond the scope of GFDL's climate model simulations, other research suggests that Arctic sea ice changes can affect a broad range of factors, from altering key elements of the Arctic biosphere (plants and animals, marine and terrestrial, including polar bears and fish), to opening polar shipping routes, to shifting commercial fishing patterns.

An ice-free Arctic in summer? The three panels attached are snapshots of how late-summer Northern Hemisphere sea ice concentrations vary in time in a GFDL CM2.1 climate model simulation. The figures depict sea ice concentration, a measure of how much of the ocean area is covered by sea ice, and the climate model variable most similar to what a satellite observes. By the late 21st century, the GFDL computer model experiments project that the Arctic becomes almost ice free during the late summer.
But during the long Arctic winters (not shown) the sea ice grows back, though thinner than is simulated for the 20th century. The rate at which the modeled 21st-century Arctic warming and sea ice melting occurs is rapid compared to that seen in historical observations. Abrupt Arctic changes are of particular concern for human and ecosystem adaptation and are a subject of much current research [Winton, 2006b]. The modeled summertime Arctic sea ice extent (the size of the area covered by sea ice) does not vary smoothly in time, as there is a good deal of year-to-year variability superimposed on the downward trend. This can be seen in the graph to the right and also in animations found at www.gfdl.noaa.gov/research/climate/highlights. By the end of the 21st century, the modeled summer sea ice extent is usually less than 20% of that simulated for 1981 to 2000. The Arctic sea ice results shown here are not unique to the GFDL climate model; generally similar results are produced by computer models developed at several other international climate modeling centers. Though some uncertainties in model projections of future climate remain, results such as these, taken together with observations documenting late-20th-century Arctic sea ice shrinkage, make the Arctic a region that will continue to be studied and watched closely as atmospheric greenhouse gas levels increase.

Climate implications of shrinking summer sea ice: Melting sea ice can influence the climate through a process known as the ice-albedo feedback. Much of the sunlight reflected by sea ice returns to space and is unavailable to heat the climate system. As the sea ice melts, the surface darkens and absorbs more of this energy, which in turn can lead to greater melting. This is referred to as a "positive feedback loop" because an initial change (sea ice melting) triggers other responses in the system that eventually act to enhance the original change (inducing more sea ice melting). At GFDL, research has focused on the role of the ice-albedo feedback in enhancing the simulated Arctic warming and on the potential for this positive feedback loop to lead to abrupt changes [Winton, 2006a]. A somewhat complex picture has emerged that shows the ice-albedo feedback as a contributor, but not necessarily the dominant factor, in determining why modeled Arctic surface air temperatures warm roughly twice as fast as the global average. It has also been found that, for the range of temperature increases likely to occur in the 21st century, the Arctic ice-albedo feedback adjusts smoothly as the model's ice declines, by reducing the ice cover at progressively earlier times in the sunlit season. This smooth adjustment maintains a fairly constant amplification of Arctic temperature change relative to global average warming. The details of how Arctic feedback processes act in climate models at various modeling centers differ, and so analysis and computer model development work continues in order to better understand and to reduce uncertainties in Arctic climate change simulations."

While many scientists are alarmed by the widening expanse of open water in the Arctic, blaming it on global warming, shippers see a new international route.
Business Times, Singapore report (6 September 2010): The MV Nordic Barents is lugging 40,000 tonnes of iron ore from Norway to China on an Arctic Ocean shortcut through melting ice, and is making a little history in the process. Steaming east along Russia's desolate northern coast, the ship departed on Saturday as the first non-Russian commercial vessel to attempt a non-stop crossing of a route that skirts the receding Arctic ice cap. 'We're pretty much going over the top,' said John Sanderson, the Australian CEO of the Norwegian mine where the iron ore comes from. By using the northern route from Europe to Asia, the Nordic Barents could save eight days and 5,000 nautical miles of travel, thought to be worth hundreds of thousands of dollars to the owners of its cargo.

While many scientists are alarmed by the widening expanse of open water that the ship will traverse, blaming it on global warming, shippers see a new international route. Sanderson's ASX-listed Northern Iron Ltd has sent 15 ships to China since it began mining in the northern Norwegian town of Kirkenes last October. All steamed south, then east through the Suez Canal or around the Cape of Good Hope. To reach Chinese steel mills hungry for ore, they had to brave pirates in the Indian Ocean.

The Arctic route is no picnic either. On Saturday the polar ice sheet remained almost as big as the US mainland. But over the summer it has shrunk about as far from the Russian coast as it did during the biggest Arctic melt on record, in 2007, according to the Nansen Environmental and Remote Sensing Center. And the Russians are waking up to the business potential of a route that was mostly reserved for domestic commercial vessels in the past. 'Suddenly there is an opening that gives this part of the world an advantage,' said Felix H Tschudi, whose shipping company is Northern Iron's largest shareholder. Willy Oestreng, chairman of research group Ocean Futures, called the trip of the Nordic Barents 'historic'. 'The western world is starting to show an interest and a capability to use that route,' he said.

Two days after Russia and Norway agreed last April to settle a 40-year-old dispute over economic zones in the Barents Sea, government and business leaders of the two countries met in Kirkenes to sweep away hurdles to international shipping. Russian law still requires icebreaker escort even where ice danger is small, due to a lack of onshore mechanical or medical support. But fees and rules are starting to loosen. 'Russian companies and Russian authorities are now ready to assist,' said Mikhail Belkin, assistant general manager of the state-owned Rosatomflot icebreaking fleet. Lots of Russian vessels have plied the passage, and two German ships traversed it last year with small cargos delivered to Russian ports. But the Nordic Barents, an ice-class Danish bulk carrier chartered by Tschudi, is the first non-Russian ship with permission to pass without stopping. Rosatomflot has assigned two 75,000-horsepower icebreakers to the vessel for about 10 days of the three-week voyage. Tschudi won't say how much Rosatomflot is charging but praised it as 'cooperative, service-minded and pragmatic.' 'Today the route is basically competitive with the Suez Canal, and we can subtract the piracy risk,' he said. Excluding icebreaking fees, a bulk ship that takes the Arctic route from Hamburg to Yokohama can save more than US$200,000 in fuel and canal expenses, Mr Oestreng said. — Reuters

Disappearance of the Aral Sea from the map:
“In the 1960s, the Soviet Union undertook a major water diversion project on the arid plains of Kazakhstan, Uzbekistan, and Turkmenistan. The region’s two major rivers, fed by snowmelt and precipitation in faraway mountains, were used to transform the desert into farms for cotton and other crops. Before the project, the Syr Darya and the Amu Darya rivers flowed down from the mountains, cut northwest through the Kyzylkum Desert, and finally pooled together in the lowest part of the basin. The lake they made, the Aral Sea, was once the fourth largest in the world. Although irrigation made the desert bloom, it devastated the Aral Sea. This series of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite documents the changes. At the start of the series in 2000, the lake was already a fraction of its 1960 extent (black line). The Northern Aral Sea (sometimes called the Small Aral Sea) had separated from the Southern (Large) Aral Sea. The Southern Aral Sea had split into eastern and western lobes that remained tenuously connected at both ends. By 2001, the southern connection had been severed, and the shallower eastern part retreated rapidly over the next several years. Especially large retreats in the eastern lobe of the Southern Sea appear to have occurred between 2005 and 2009, when drought limited and then cut off the flow of the Amu Darya. Water levels then fluctuated annually between 2009 and 2012 in alternately dry and wet years. As the lake dried up, fisheries and the communities that depended on them collapsed. The increasingly salty water became polluted with fertilizer and pesticides. The blowing dust from the exposed lakebed, contaminated with agricultural chemicals, became a public health hazard. The salty dust blew off the lakebed and settled onto fields, degrading the soil. Croplands had to be flushed with larger and larger volumes of river water. The loss of the moderating influence of such a large body of water made winters colder and summers hotter and drier. In a last-ditch effort to save some of the lake, Kazakhstan built a dam between the northern and southern parts of the Aral Sea. Completed in 2005, the dam was basically a death sentence for the southern Aral Sea, which was judged to be beyond saving. All of the water flowing into the desert basin from the Syr Darya now stays in the Northern Aral Sea. Between 2005 and 2006, the water levels in that part of the lake rebounded significantly and very small increases are visible throughout the rest of the time period. The differences in water color are due to changes in sediment.”

Sunday, October 21, 2012

Energy independent America

The recent debates between the presidential nominees in the US election have revealed their respective positions on policies for an energy-independent America. Each of them has articulated how they will increase oil and gas production to make America energy independent, which will also, incidentally, create a number of jobs in an ailing economy. Each of them will first spend a billion dollars driving their message home to the voting public. Once elected, they plan to exploit oil and gas aggressively to make America energy independent, while also exploring solar and wind energy potential to bridge any shortfall. Their policies seem unconcerned with global warming and the impact of GHG emissions; rather, they aggressively pursue an energy-independent America built on unabated GHG emissions in the future. Does this mean an 'energy independent America' will spell doom for the world, including the US?

The best option for America to become energy independent would be to focus on the energy efficiency of existing technologies and systems, combining a renewable-fossil fuel energy mix, base-load renewable power and storage technologies, and substituting gasoline with hydrogen produced from renewable energy sources. Future investment should be based on sustainable renewable energy sources rather than fossil fuels. But the current financial and unemployment situation in the US will push the new president to increase conventional and unconventional oil and gas production rather than renewable energy production, which, though initially expensive with long payback periods, would eventually meet energy requirements in a sustainable way. The net result of these policies will be enhanced GHG emissions and an acceleration of global warming. Yet the projections in the U.S. Energy Information Administration's (EIA's) Annual Energy Outlook 2012 (AEO2012) show reduced GHG emissions. According to the Annual Energy Outlook 2012 report:

"The projections in the U.S. Energy Information Administration's (EIA's) Annual Energy Outlook 2012 (AEO2012) focus on the factors that shape the U.S. energy system over the long term. Under the assumption that current laws and regulations remain unchanged throughout the projections, the AEO2012 Reference case provides the basis for examination and discussion of energy production, consumption, technology, and market trends and the direction they may take in the future. It also serves as a starting point for analysis of potential changes in energy policies. But AEO2012 is not limited to the Reference case. It also includes 29 alternative cases, which explore important areas of uncertainty for markets, technologies, and policies in the U.S. energy economy. Many of the implications of the alternative cases are discussed in the "Issues in focus" section of this report.

Key results highlighted in AEO2012 include continued modest growth in demand for energy over the next 25 years and increased domestic crude oil and natural gas production, largely driven by rising production from tight oil and shale resources. As a result, U.S. reliance on imported oil is reduced; domestic production of natural gas exceeds consumption, allowing for net exports; a growing share of U.S. electric power generation is met with natural gas and renewables; and energy-related carbon dioxide emissions remain below their 2005 level from 2010 to 2035, even in the absence of new Federal policies designed to mitigate greenhouse gas (GHG) emissions.
The rate of growth in energy use slows over the projection period, reflecting moderate population growth, an extended economic recovery, and increasing energy efficiency in end-use applications. Overall U.S. energy consumption grows at an average annual rate of 0.3 percent from 2010 through 2035 in the AEO2012 Reference case. The U.S. does not return to the levels of energy demand growth experienced in the 20 years prior to the 2008-2009 recession, because of more moderate projected economic and population growth, coupled with increasing levels of energy efficiency. For some end uses, current Federal and State energy requirements and incentives play a continuing role in requiring more efficient technologies. Projected energy demand for transportation grows at an annual rate of 0.1 percent from 2010 through 2035 in the Reference case, and electricity demand grows by 0.7 percent per year, primarily as a result of rising energy consumption in the buildings sector. Energy consumption per capita declines by an average of 0.6 percent per year from 2010 to 2035 (Figure 1). The energy intensity of the U.S. economy, measured as primary energy use in British thermal units (Btu) per dollar of gross domestic product (GDP) in 2005 dollars, declines by an average of 2.1 percent per year from 2010 to 2035. New Federal and State policies could lead to further reductions in energy consumption. The potential impact of technology change and the proposed vehicle fuel efficiency standards on energy consumption are discussed in "Issues in focus."

Domestic crude oil production increases: Domestic crude oil production has increased over the past few years, reversing a decline that began in 1986. U.S. crude oil production increased from 5.0 million barrels per day in 2008 to 5.5 million barrels per day in 2010. Over the next 10 years, continued development of tight oil, in combination with the ongoing development of offshore resources in the Gulf of Mexico, pushes domestic crude oil production higher. Because the technology advances that have provided for recent increases in supply are still in the early stages of development, future U.S. crude oil production could vary significantly, depending on the outcomes of key uncertainties related to well placement and recovery rates. Those uncertainties are highlighted in this Annual Energy Outlook's "Issues in focus" section, which includes an article examining the impacts of uncertainty about current estimates of crude oil and natural gas resources. The AEO2012 projections considering variations in these variables show total U.S. crude oil production in 2035 ranging from 5.5 million barrels per day to 7.8 million barrels per day, and projections for U.S. tight oil production from eight selected plays in 2035 ranging from 0.7 million barrels per day to 2.8 million barrels per day (Figure 2).

With modest economic growth, increased efficiency, growing domestic production, and continued adoption of nonpetroleum liquids, net imports of petroleum and other liquids make up a smaller share of total U.S. energy consumption.
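The annual growth and decline rates quoted above are small, but they compound over the 25-year projection period. The sketch below, added here for illustration, computes the cumulative changes they imply; the rates are the report's, the arithmetic is not part of AEO2012.

```python
# Cumulative effect of the AEO2012 annual growth/decline rates over 2010-2035.
# The rates are taken from the quoted report; the compounding is added here.

rates = {
    "total energy consumption": +0.003,
    "transportation energy demand": +0.001,
    "electricity demand": +0.007,
    "energy use per capita": -0.006,
    "energy intensity (Btu per $ GDP)": -0.021,
}

years = 2035 - 2010
for name, r in rates.items():
    change = (1 + r) ** years - 1
    print(f"{name}: {change:+.1%} over {years} years")
```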
U.S. dependence on imported petroleum and other liquids declines in the AEO2012 Reference case, primarily as a result of rising energy prices; growth in domestic crude oil production to more than 1 million barrels per day above 2010 levels in 2020; an increase of 1.2 million barrels per day of crude oil equivalent from 2010 to 2035 in the use of biofuels, much of which is produced domestically; and slower growth of energy consumption in the transportation sector as a result of existing corporate average fuel economy standards. Proposed fuel economy standards covering vehicle model years (MY) 2017 through 2025, which are not included in the Reference case, would further reduce the projected need for liquid imports.

Although U.S. consumption of petroleum and other liquid fuels continues to grow through 2035 in the Reference case, reliance on imports of petroleum and other liquids as a share of total consumption declines. Total U.S. consumption of petroleum and other liquids, including both fossil fuels and biofuels, rises from 19.2 million barrels per day in 2010 to 19.9 million barrels per day in 2035 in the Reference case. The net import share of domestic consumption, which reached 60 percent in 2005 and 2006 before falling to 49 percent in 2010, continues falling in the Reference case to 36 percent in 2035 (Figure 3). Proposed light-duty vehicle (LDV) fuel economy standards covering vehicle MY 2017 through 2025, which are not included in the Reference case, could further reduce demand for petroleum and other liquids and the need for imports, and increased supplies from U.S. tight oil deposits could also significantly decrease the need for imports, as discussed in more detail in "Issues in focus."

Natural gas production increases throughout the projection period, allowing the United States to transition from a net importer to a net exporter of natural gas: Much of the growth in natural gas production in the AEO2012 Reference case results from the application of recent technological advances and continued drilling in shale plays with high concentrations of natural gas liquids and crude oil, which have a higher value than dry natural gas in energy-equivalent terms. Shale gas production increases in the Reference case from 5.0 trillion cubic feet per year in 2010 (23 percent of total U.S. dry gas production) to 13.6 trillion cubic feet per year in 2035 (49 percent of total U.S. dry gas production). As with tight oil, when looking forward to 2035 there are unresolved uncertainties surrounding the technological advances that have made shale gas production a reality. The potential impact of those uncertainties results in a range of outcomes for U.S. shale gas production from 9.7 to 20.5 trillion cubic feet per year in 2035. As a result of the projected growth in production, U.S. natural gas production exceeds consumption early in the next decade in the Reference case (Figure 4). The outlook reflects increased use of liquefied natural gas in markets outside North America, strong growth in domestic natural gas production, reduced pipeline imports and increased pipeline exports, and relatively low natural gas prices in the United States.

Power generation from renewables and natural gas continues to increase: In the Reference case, the natural gas share of electric power generation increases from 24 percent in 2010 to 28 percent in 2035, while the renewable share grows from 10 percent to 15 percent. In contrast, the share of generation from coal-fired power plants declines.
The historical reliance on coal-fired power plants in the U.S. electric power sector has begun to wane in recent years. Over the next 25 years, the share of electricity generation from coal falls to 38 percent, well below the 48-percent share seen as recently as 2008, due to slow growth in electricity demand, increased competition from natural gas and renewable generation, and the need to comply with new environmental regulations. Although the current trend toward increased use of natural gas and renewables appears fairly robust, there is uncertainty about the factors influencing the fuel mix for electricity generation. AEO2012 includes several cases examining the impacts on coal-fired plant generation and retirements resulting from different paths for electricity demand growth, coal and natural gas prices, and compliance with upcoming environmental rules. While the Reference case projects 49 gigawatts of coal-fired generation retirements over the 2011 to 2035 period, nearly all of which occurs over the next 10 years, the range for cumulative retirements of coal-fired power plants over the projection period varies considerably across the alternative cases (Figure 5), from a low of 34 gigawatts (11 percent of the coal-fired generator fleet) to a high of 70 gigawatts (22 percent of the fleet). The high end of the range is based on much lower natural gas prices than those assumed in the Reference case; the lower end of the range is based on stronger economic growth, leading to stronger growth in electricity demand and higher natural gas prices. Other alternative cases, with varying assumptions about coal prices and the length of the period over which environmental compliance costs will be recovered, but no assumption of new policies to limit GHG emissions from existing plants, also yield cumulative retirements within a range of 34 to 70 gigawatts. Retirements of coal-fired capacity exceed the high end of the range (70 gigawatts) when a significant GHG policy is assumed (for further description of the cases and results, see "Issues in focus").

Total energy-related emissions of carbon dioxide in the United States remain below their 2005 level through 2035: Energy-related carbon dioxide (CO2) emissions grow slowly in the AEO2012 Reference case, due to a combination of modest economic growth, growing use of renewable technologies and fuels, efficiency improvements, slow growth in electricity demand, and increased use of natural gas, which is less carbon-intensive than other fossil fuels. In the Reference case, which assumes no explicit Federal regulations to limit GHG emissions beyond vehicle GHG standards (although State programs and renewable portfolio standards are included), energy-related CO2 emissions grow by just over 2 percent from 2010 to 2035, to a total of 5,758 million metric tons in 2035 (Figure 6). CO2 emissions in 2020 in the Reference case are more than 9 percent below the 2005 level of 5,996 million metric tons, and they are still below the 2005 level at the end of the projection period. Emissions per capita fall by an average of 1.0 percent per year from 2005 to 2035. Projections for CO2 emissions are sensitive to economic and regulatory factors due to the pervasiveness of fossil fuel use in the economy. These linkages result in a range of potential GHG emissions scenarios.
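For context, a little arithmetic on the quoted emission figures gives the implied absolute levels. This calculation is not part of the report, and the results are approximate because the percentages in the text are rounded.

```python
# Implied 2010 and 2020 CO2 levels from the AEO2012 figures quoted above.
# Arithmetic added for illustration only; inputs are the report's numbers.

co2_2035 = 5758.0            # million metric tons, Reference case, 2035
growth_2010_to_2035 = 0.02   # "just over 2 percent" growth from 2010
co2_2005 = 5996.0            # million metric tons in 2005

co2_2010 = co2_2035 / (1 + growth_2010_to_2035)
co2_2020_ceiling = co2_2005 * (1 - 0.09)   # "more than 9 percent below 2005"

print(f"Implied 2010 emissions: ~{co2_2010:,.0f} Mt")
print(f"2020 emissions: below ~{co2_2020_ceiling:,.0f} Mt")
```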
In the AEO2012 Low and High Economic Growth cases, projections for total primary energy consumption in 2035 are, respectively, 100.0 quadrillion Btu (6.4 percent below the Reference case) and 114.4 quadrillion Btu (7.0 percent above the Reference case), and projections for energy-related CO2 emissions in 2035 are 5,356 million metric tons (7.0 percent below the Reference case) and 6,117 million metric tons (6.2 percent above the Reference case)." (Ref: U.S. Energy Information Administration).