Why are heat and high temperatures hindering solar photovoltaic potential in the MENA region?

By Carlo Maragliano, Ph.D., Head of R&D.

The Middle East and North Africa (MENA) region has become, in the last few years, the favorite target for solar renewable energy companies all over the world. The reason is straightforward: the region has the highest access to sunlight throughout the year of any region around the globe. To understand this point and its implications, let’s make a simple comparison between one of the sunniest cities in Europe, Madrid, and the capital of the United Arab Emirates, Abu Dhabi. If we look at the annual sunshine hours, which measure the number of hours that the sun shines over a year, Madrid has approximately 2750 sun-hours, while Abu Dhabi has over 4100. Imagine now that you have a solar panel and are asked where to install it to obtain the maximum power output. Where would you put it? In Abu Dhabi, of course! In the capital of the UAE, the same module could potentially produce up to 150% of the power that it would generate in Madrid. This should be enough encouragement to move all solar power generation businesses to the MENA region, right? However good these numbers look, solar power growth in the MENA region has not yet lived up to global expectations.
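The 150% figure follows directly from the ratio of the two sunshine-hour totals; a quick sanity check:

```python
# Approximate annual sunshine hours quoted above
madrid_sun_hours = 2750
abu_dhabi_sun_hours = 4100

ratio = abu_dhabi_sun_hours / madrid_sun_hours
print(f"Abu Dhabi gets {ratio:.0%} of Madrid's annual sunshine hours")
```

The ratio comes out to roughly 1.5, which is where the "up to 150%" estimate comes from (assuming output scales linearly with sunshine hours, a deliberate simplification).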


So, what is hindering the great potential of solar PV in this region? One factor is the heat. The MENA region is characterized by very high temperatures, which during the summer can reach 50 °C. Why is that a problem for solar PV? Solar panels lose a percentage of their efficiency as their operating temperature rises above 25 °C, the standard temperature at which they are tested to determine their “nominal” efficiency. Standard Si modules (which represent over 90% of the market) lose on average 0.6% in power conversion efficiency for every degree Celsius increase in their working temperature. This figure is widely known in the PV community as the temperature coefficient of the solar module. To put this in an example, a solar module with a proven efficiency of 20% at 25 °C has, at a working temperature of 60 °C, an efficiency of only 15.8%, meaning that it loses more than 20% of its original capability to produce electricity. It is also important to note that the outside temperature is not the only factor that determines the operating temperature of a solar module. Under standard operation, a solar module, besides producing electricity, generates a lot of heat: in a 20%-efficient solar module, almost 60% of the incoming energy is turned into heat. As this heat is not converted into electricity, it raises the temperature of the solar panel, further reducing its capability to produce electrical power. It is not unusual, then, for solar panels in the MENA region to reach temperatures above 75 °C during the summertime. Under these conditions, solar modules can lose over 30% of their original power conversion efficiency.
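The arithmetic behind these numbers can be sketched in a few lines of Python, using the linear derating model that the temperature coefficient implies (a standard first-order approximation):

```python
def derated_efficiency(eta_stc, temp_c, temp_coeff=0.6, t_ref=25.0):
    """Module efficiency (%) at a given operating temperature.

    eta_stc    -- nominal efficiency in % at standard test conditions (25 C)
    temp_coeff -- relative efficiency loss per degree C above t_ref, in %/C
    """
    return eta_stc * (1 - (temp_coeff / 100.0) * (temp_c - t_ref))

# The worked example from the text: a 20%-efficient module at 60 C
print(round(derated_efficiency(20.0, 60.0), 1))   # 15.8

# A module at 75 C on a MENA summer day: a 30% relative loss
print(round(derated_efficiency(20.0, 75.0), 1))   # 14.0
```

Note the coefficient is applied as a *relative* loss (0.6% of the module's own efficiency per degree), which is how the 20% → 15.8% example in the text works out.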

Can we work around this problem? The PV community has embarked on different paths to reduce heat-related losses. The first consists in improving material quality with the aim of bringing down the temperature coefficient of the solar module. This has produced Si solar modules with temperature coefficients as low as 0.3-0.35 %/°C, with the caveat that they are more expensive due to higher manufacturing costs. The second path consists in using alternative materials with lower temperature coefficients. Thin-film materials (like CIGS and CdTe) exhibit temperature coefficients as low as 0.2 %/°C, considerably lower than those of Si modules. Although thin-film modules are also competitively priced, their low efficiencies and reduced durability remain strong limiting factors that ultimately discourage their use.
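For a sense of scale, the same 35 °C rise above standard test conditions translates into very different relative losses across the coefficients quoted above (illustrative arithmetic only):

```python
# Relative power loss at a 60 C operating temperature, i.e. 35 C above
# the 25 C standard test conditions, for each quoted coefficient.
delta_t = 60 - 25
coefficients = {
    "standard Si": 0.6,            # %/C
    "improved Si": 0.3,            # %/C
    "thin film (CIGS/CdTe)": 0.2,  # %/C
}
losses = {name: tc * delta_t for name, tc in coefficients.items()}
for name, loss in losses.items():
    print(f"{name}: {loss:.1f}% of nominal output lost")
```

A standard Si module gives up about a fifth of its nominal output under these conditions, while a thin-film module loses only around 7%, which is why the temperature coefficient matters so much in hot climates.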

Are we, then, out of options? No! We at Solar Bankers took a different path, focusing on the source of the problem, the light, rather than on solar cell materials. Sunlight is made of different colors, also called wavelengths, which together compose the solar spectrum. Among these wavelengths there are visible colors and invisible, infrared (IR) ones. Silicon solar modules can convert the visible spectrum and only a small portion of the IR light. The remaining wavelengths pass undisturbed through the silicon but are turned into heat at the electrodes that cover the back surface of the solar cells. As a result, not only are these wavelengths not converted into useful electricity, but they also reduce the performance of the module by increasing its operating temperature. Solar Bankers developed and patented a high-tech solution to get rid of these unwanted, efficiency-lowering wavelengths, thus achieving better performance than standard modules even in high-temperature environments. The product consists of a nanostructured optical film that efficiently selects the light colors suitable for photovoltaic conversion and bends away those that only produce heat. Field tests have confirmed that Solar Bankers’ film can reduce the difference between the module’s operating temperature and the ambient temperature by almost 35%, leading to the recovery of up to 30% of the previously lost power output, depending on solar cell quality.

Now you might be thinking, “Great news, but I have already installed many MW and I am still paying for them!” Our patented technology can be added to existing solar panels, making it easy for you to upgrade your solar farm. Read more about how our technology can help you at www.solarbankers.com/technology.html or contact us! Stay tuned for more tips on how to increase the performance of your solar panels! And don’t forget to follow us on Twitter!

Nano Technology Outperforming Panel

“Low cost solar energy for all”, Mr. Ban Ki-Moon, Secretary General of the UN

Solar Bankers’ new generation of solar modules employs a nano-structured polymer foil on the cover glass which refracts and concentrates specific wavelengths of light to improve module performance. The polymer device’s refractive abilities allow it to separate absorbable, or “desirable”, wavelengths of solar radiation from efficiency-lowering wavelengths, such as infrared and other long-wave light. The foil then also acts as a lens and concentrates the separated spectra of light onto different areas of the module.


Long wavelengths of light, like infrared, are usually dissipated on the solar cell in the form of heat energy, which significantly reduces cell efficiency. Our nano-structured foil’s “light-splitting” effect allows efficiency-lowering radiation to be concentrated away from the actual cell, so that cell efficiency remains unaffected by incoming heat energy.

In parallel, the foil is able to concentrate “desirable” wavelengths directly onto the cell. This concentration – regardless of the efficiency of the cell used – increases the amount of absorbable solar radiation received by the cell per unit area by up to 40%.

Hence our foil significantly improves cell/module performance with the double effect of A) protecting cells from efficiency-lowering light and B) increasing the amount of convertible solar energy arriving at the cell per unit area.

Our second-generation module can even use the heat energy refracted away from the cell to also produce electricity, further improving module efficiency.


This means modules using the foil can reduce the size of the cell – and the amount of silicon! – they employ by up to 90% while producing the same output as before. Given silicon is a module’s most expensive component, the described effect can reduce module unit production costs to an unprecedented degree.


Renewable energy investment in MENA: Algeria has become a target market for Solar Bankers’ game-changing PV module

Algeria’s experience in reforming its energy policy to emphasise feed-in tariffs, as well as its strong macroeconomic position, are likely to create a profitable environment for RE investments. Its technology being both more efficient and considerably more cost-effective than regular silicon-based PV, Solar Bankers is negotiating with local partners and regulators in countries like Algeria to take part in the region’s vigorous PV development.

It hardly is a secret that North Africa has a unique suitability and hence a unique potential for the development of solar energy. Throughout the region governments are reforming their energy policies, launching procurement initiatives to diversify their energy mix using foreign know-how and technology. Achieving this requires in turn trade policies tailored to attracting international developers of renewable energies based on a system of financial incentives, usually feed-in tariffs, and backed by a strong regulatory framework. And it is the latter aspect which is of crucial importance to developers assessing the investment risk associated with North Africa’s emerging RE markets. Profitability and true investment security can only be guaranteed to developers if an accountable set of institutions manages the costs of subsidising the novel technology in a transparent and consistent manner. Indeed, over the past decades, local institutional and political barriers have been the greatest obstacle to the promotion of a more sustainable energy mix in North African countries. Algeria is a useful example of this.

Until recently, the Algerian government’s dominant political need to provide not only stable but affordable electricity conflicted with calls for the integration of more sustainable yet less reliable energy sources. The political unpopularity of offloading the cost of RE subsidies onto final consumers coincided with the reluctance of state grid operators to adjust their feed-in patterns and possibly face redundancy costs. Meanwhile, heavy subsidies to dirty energies continued to drive down electricity prices. The result was the completion of half-hearted PPAs lacking rigorous legislative backing and commitment from key state players, eroding investment security for developers. Recent reforms have produced more favourable circumstances.

Tracing the evolution of Algeria’s feed-in tariff schemes will elucidate the frequently political origins of investment risk in North Africa’s PV markets.

Algeria is probably North Africa’s FiT pioneer: it became the first country in the region to introduce FiTs in March 2004, later updating the program’s aims with the “Renewable Energy and Energy Efficiency Program” in March 2011. Despite setting ambitious targets, the 2004 and 2011 programs constituted mere experimental efforts. As indicated above, regulatory mechanisms were far from managing well the essential trade-off between consumer protection and investment security.

(“Evaluation of feed-in tariff-schemes in African countries”, J. Energy South. Afr. vol. 24 n. 1, Cape Town, Jan 2013)

According to an article by M. Meyer-Renschhausen in the Journal of Energy in Southern Africa, “the Algerian energy policy [of 2004], characterised by insufficient FITs and regulatory obstacles, obviously attempts to promote renewable energies without increasing power prices and without endangering the financial stability of the national grid company” (J. Energy South. Afr. vol. 24 n. 1, Cape Town, Jan 2013). The 2004 FiT scheme’s contradictory design demonstrates this. Instead of a fixed tariff associated with a particular capacity, RE developers were paid a bonus on the current, general market price of electricity depending on the type of technology used. Consequently, revenue streams for RE developers and investors became uncertain, as they were made to fluctuate with the prevailing electricity market price. Meanwhile, the Algerian government continued to subsidise natural gas power stations on a large scale, causing “a decrease of retail prices for power and thus [reducing] the profitability of renewable energy technologies.” The seeming lack of communication between government, hydrocarbon producers, and regulatory authorities severely hurt the country’s RE investment climate. In 2005, the government eventually committed to fixing electricity prices at 0.03 USD/kWh, which produced circumstances that were not unprofitable for PV developers receiving a 300% bonus, by comparison with international tariff levels at the time. However, more immediate institutional factors neutralised any profitability presented on paper.

Algeria’s early FiT scheme did not obligate the state grid operator, Sonelgaz, to give the promoted green energy priority in feeding the grid. According to Meyer-Renschhausen, “authorization of new power plants and lacking provisions to avoid bottlenecks in the grid is protecting the incumbent power generator against stranded cost and the grid company against rising costs.” The scheme’s general cost-allocation regime was unclear and unaccountable; state utilities’ resistance against engaging at all in purchase contracts with RE developers was both tolerated and facilitated. Due to the scheme’s legislative deficiencies with respect to priority rules and cost distribution, the FiT’s details were essentially “left to negotiations” between developers and Sonelgaz, negotiations that the state company effectively controlled and tailored to its own interests. Investors were offered a highly unfavourable environment.

Algeria’s new feed-in tariff scheme, introduced in February 2015, is likely to rectify many of the institutional and regulatory deficiencies described above. The remodelled scheme combines a more effective cost-distribution regime with a tighter regulatory framework to provide investors and developers with greater security. The RE development aims associated with the scheme involve an increase of the total targeted installed capacity from the 12 GW set out in 2004 to 22 GW now. Additionally, the current 13.5 GW target for photovoltaic development constitutes a 400% increase over the installed capacity initially envisaged by the program in 2011.

PV and other RE projects now receive a guaranteed fixed tariff, relative to their capacity, over the 20-year term of their PPA with one of the four subsidiaries of the state-owned grid operator. Improvements in accountability have also been achieved: the details of the PPAs are now legally enforced and largely standardised, no longer at the discretion of Sonelgaz.

In its financial set-up, the scheme is not only generous by regional comparison, it also adjusts to the effective productivity of projects over time. For the first five years of the 20-year subsidy period, solar projects with capacities between 1 MW and 5 MW receive a fixed tariff of DZD 15.94/kWh. For projects larger than 5 MW, tariffs stand at DZD 12.75/kWh. After the initial five-year period, tariffs are individually revised for each project according to its effective operating hours: tariffs for projects with low productivity are increased by up to 15%, while rates received by more productive plants are decreased in the same manner. This adjustment measure may require a bureaucratic effort, but it renders the cost-allocation regime more flexible.
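The tariff structure just described can be sketched in a few lines. Note that the official operating-hours thresholds for the post-year-five revision are not given here, so the adjustment factor is left as an input that a reader would derive from the published rules:

```python
def base_tariff_dzd_per_kwh(capacity_mw):
    """Fixed tariff for the first five years of the 20-year PPA."""
    if 1 <= capacity_mw <= 5:
        return 15.94
    if capacity_mw > 5:
        return 12.75
    raise ValueError("scheme covers projects of 1 MW and above")

def revised_tariff(base, adjustment):
    """Post-year-5 tariff; adjustment is in [-0.15, +0.15] per the +/-15% rule.

    How a project's effective operating hours map to an adjustment value is
    set by the regulator and not reproduced here.
    """
    if not -0.15 <= adjustment <= 0.15:
        raise ValueError("revision is capped at +/-15%")
    return base * (1 + adjustment)

print(base_tariff_dzd_per_kwh(3))               # 3 MW project: 15.94 DZD/kWh
print(round(revised_tariff(12.75, 0.15), 4))    # low-productivity large plant, +15%
```

This is a simplified sketch of the scheme's logic as summarised above, not a reproduction of the decree.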

Although PV magazine has raised the issue that the FiT applies only for a fixed number of hours every year, in excess of which the electricity is apparently sold at the unsubsidised price, the current tariff structure shows a significant improvement in investment security compared to the bonus system of the past. Other institutional reforms confirm this picture. The state grid company, Sonelgaz, is now also legally bound to give RE priority grid access, giving developers further security concerning the consistent deployment of their energy. Furthermore, the distribution of the FiT scheme’s cost has been clearly set out by the Algerian government. According to law firm Jones Day, “The subsidized feed-in tariffs will be financed through a National Fund for Renewable Energies and Cogeneration (Fonds National pour les Energies Renouvelables et la Cogénération), established by a 1 percent tax levy on the state’s oil revenues, and through other resources or contributions, including a premium paid by end-users.” One could go on to question the long-term viability of linking the funding of RE development to the health of Algeria’s oil trade. Growing domestic demand could turn Algeria into a net importer of oil, or unconventional extraction methods in North America could hold down oil prices and, therefore, the country’s oil revenues.

But, besides the fact that Algeria controls substantial alternative sources of finance (having larger foreign exchange reserves than France, Germany, or Britain), the improvement in cost allocation is of a more fundamental, structural nature. It is not the lack of finance, as in many sub-Saharan African countries, that has obstructed RE promotion in the past, but the administration thereof. Current reforms will ensure that the conflicts produced by the government’s insistence on protecting the grid operator’s financial health are resolved. Similarly, with the introduction of consumers into the cost management of RE subsidies, consumer protection is diminishing as a significant political barrier to a fair and effective FiT scheme.

Projects are run on an IPP basis, and foreign developers will usually have to partner with a local, state-owned power company, such as a branch of Sonelgaz’s power generation subsidiary. Land ownership for development by foreign investors requires state authorisation; the majority of projects have to settle on state-owned land under a concession regime.

One vital aspect of the developer’s investment appraisal remains outstanding: the exchange rate risk associated with Algeria. The Algerian government does not appear to be assuming exchange rate risk directly, as implied by the fact that tariffs are listed in the national currency, the dinar. Egypt’s FiT scheme, by comparison, also pays developers in its national currency, but explicitly assumes at least part of the exchange risk by allowing investors to convert a portion of the Egyptian pounds they receive with every invoice into US dollars at a fixed rate. Despite Egypt’s very tightly managed floating exchange policy, the pound’s relative volatility makes the government’s assumption of exchange rate risk in its FiT scheme an essential security offered as part of the deal. Yet from what can be surmised from commentators, Algeria’s government is not making similar explicit provisions.

Closer analysis reveals an ambivalent picture. On one hand, according to the IMF, the Algerian dinar has a composite soft peg, largely guided by the US dollar (mainly due to Algeria’s specialisation in hydrocarbon exports). Thus Algeria’s exchange rate policy is designed to ensure that at regular intervals the dinar returns to a specific benchmark exchange rate with a variety of international currencies. More importantly, Algeria has comparatively large foreign exchange reserves (the 12th largest in the world), giving the government flexibility in the control of exchange rates and in responding to unexpected inflationary or deflationary events. And it is probably these reserves, which grant greater control over the value and volatility of its currency than other North African countries enjoy, that allow Algeria to provide at least a certain degree of exchange rate security without directly and formally assuming currency risk through legislation.

Yet the Algerian dinar’s inherent volatility must be acknowledged. With oil and natural gas exports forming the country’s economic base, its currency remains very sensitive to commodity markets and vulnerable to domestic inflationary pressures. Oil prices usually determine the strength of the dinar: the recent fall in oil markets caused its value to depreciate from 78.6 DZD per USD in spring 2014 to 98.8 DZD per USD in April 2015. BMI Research expects the currency to stabilise over 2015, but even with the deployment of its extensive forex reserves, the Algerian government will only be able to return the currency’s value to around 95 DZD per USD. A poor harvest season in 2014-15, entailing large-scale imports of grain due to Algeria’s limited production base, contributed to this trend and pushed inflation to over 5.7% in early 2015.

Yet, in any case, a comparatively high exchange rate risk is an inherent feature of most investment in emerging economies. Our thesis has been that it is institutional and political issues, rather, that have been the primary obstacle to the advance of FiTs and sustainable energy production. As North Africa’s strongest economy, Algeria is leading the way in overcoming these structural problems.

Comparing performance in the economics of energy production: seeing through theory

In the economics of energy production, the eventual value of a unit of electricity – the figure most interesting to consumers and governments – is influenced by a complex array of interconnected factors, ranging from the generating unit’s ability to produce at peak demand to how much of its capacity it can effectively use. To take all these factors into account, and thus make the economic performance of different energy sources comparable, economists predominantly use the measurement of “levelised costs”. This quantity is defined as the net present value of all capital and operating costs of a generating unit over its lifetime, divided by the amount of electricity (in megawatt-hours) it is expected to produce. It is intended to provide an indication of the economic effort required to produce a unit of electricity. Levelised costs have thus become the dominant parameter in comparing economic efficiencies, influencing energy policy.
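The definition can be sketched in a few lines; in the standard formulation the generation stream is discounted to present value alongside the costs. The numbers in the example are purely illustrative:

```python
def levelised_cost(costs, energy_mwh, discount_rate):
    """Levelised cost of electricity over a unit's lifetime.

    costs      -- per-year cash outflows (capital + operating), year 0 first
    energy_mwh -- per-year expected generation in MWh, year 0 first
    Both streams are discounted to present value before dividing.
    """
    npv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    npv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy_mwh))
    return npv_costs / npv_energy

# Illustrative (made-up) numbers: 1000 up front, 50/year to run,
# 400 MWh/year over 3 operating years, 5% discount rate
lcoe = levelised_cost([1000, 50, 50, 50], [0, 400, 400, 400], 0.05)
print(round(lcoe, 2))   # cost per MWh in whatever currency the inputs use
```

The result is a single per-MWh figure that lets very different technologies be ranked on one axis, which is exactly the convenience (and, as argued below, the weakness) of the measure.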

Yet as early as 2011, economist Paul Joskow of MIT noted in a paper that this method of standardization misrepresents the value of electricity. Levelised costs, it turns out, are less useful for ranking and comparing alternative technologies than previously thought. And this especially applies to sources of renewable energy.

To re-evaluate the standard economic performance of renewable technologies, Joskow analyzes their interaction with the standard electricity system. The economic side-effects of this interaction, so his argument goes, are not adequately represented by levelised costs.

For instance, most non-carbon energy units (take solar and wind technologies) can produce at only a comparatively small fraction of their capacity and have highly variable performance. The fluctuations in their generating capabilities may not concur with daily variations in electricity demand. For countries increasingly emphasizing these technologies in their electricity systems, this demands an awkward compromise. For stable grid output to be maintained, conventional electricity from fuel-based generation must be injected into the system as a supplement to the renewable energies. Since the fluctuations in non-carbon performance are unpredictable, this supplement must often occur through a process of redundancy production from the traditional energy infrastructure – which implies that conventional power plants are not only kept on stand-by, but are effectively kept running to be ready to supply injections. The costs associated with balancing the electricity system in this way when renewables go offline (what Joskow calls the cost of intermittency) are, among other factors, not taken into account by standard levelised cost calculations. Therefore, levelised costs tend to understate the cost of electricity derived from renewable energy sources.

In a paper published in May 2014, Charles Frank of the Brookings Institution presents a more appropriate approach to ranking alternative technologies. He extends the spectrum of phenomena and side-effects taken into account by basing his parameter on a cost-benefit analysis. As formulated by Frank: “rather than using levelised costs to compare alternative technologies, one should compute the annual costs and benefits of each project and then rank those projects by net benefits delivered per megawatt (MW) of new electrical capacity”. Therein, “the benefits of a new electricity project are its avoided carbon dioxide emissions, avoided energy costs and avoided capacity costs [or the value of the fuel that would have been used if a fuel-based plant had produced the same amount of energy]”. The costs include, among others, the unit’s “own carbon dioxide emissions, its own energy cost, and its own capacity cost” as well as the cost of intermittency (which itself encompasses the costs associated with operating the supplement generating units).

It must also be noted that in itself the connection of renewable technology to the system is often an elaborate and expensive exercise, incurring broader costs for the grid. The most suitable sites for large-scale harvesting are often remote from regions of highest demand (urban areas, etc.). 

The Economist published a telling chart on the issue:


(from “Sun, wind and drain” by The Economist, Free Exchange column, July 26th 2014 issue)

For instance, when comparing the costs and benefits of different non-carbon sources, it must be assumed that renewable technologies do not avoid carbon emissions or capacity costs when they are not running. The magnitude of their benefits in this respect consequently depends on their ability to run at a large percentage of their capacity (essentially, to run for the longest time). Within this parameter, nuclear power plants, which on average run at 90% of capacity, show the best economic performance of the zero-carbon technologies, avoiding almost six times as many carbon emissions per unit capacity as solar power plants. Taking intermittency cost into account, furthermore, a 1 MW solar power plant running at 16% of capacity could replace only roughly 0.15 MW of a coal-fired plant running at 90%. A nuclear power plant running at 90% of capacity could effectively replace the entire capacity of fuel-based energy.
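As a sanity check on that 0.15 MW figure: on average-output grounds alone, the replaceable conventional capacity is just the ratio of capacity factors. The simple calculation below gives roughly 0.18 MW; Frank's lower figure of about 0.15 MW reflects the additional intermittency cost described above.

```python
# Naive effective-capacity comparison, ignoring intermittency cost.
# A plant's average output is its nameplate capacity times its capacity factor.
def replaced_capacity(nameplate_mw, cf_renewable, cf_conventional):
    """Conventional capacity whose average output the renewable unit matches."""
    return nameplate_mw * cf_renewable / cf_conventional

# 1 MW of solar at a 16% capacity factor vs. coal running at 90%
print(round(replaced_capacity(1.0, 0.16, 0.90), 2))   # ~0.18 MW
```

The gap between the naive 0.18 MW and Frank's 0.15 MW is, per his analysis, the price of keeping dispatchable backup ready for when the solar plant is offline.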

Nonetheless, with respect to other costs and benefits, nuclear power plants have comparatively high capital and operating costs (taking into account nuclear waste handling and other associated hazard management), as well as being effectively uninsurable. Yet due to their high use of capacity, capital and operating costs are only 75% greater per MW for nuclear power plants than for solar plants.

Thus, solar and wind power appear very uncompetitive when compared to nuclear energy and conventional fuel-based production methods. And it must be noted that the avoided energy/capacity costs in Mr Frank’s analysis assume a carbon price of 50 USD per tonne. The economic inefficiency and general expensiveness of wind and solar energy, already shown by Mr Frank to be worse than previously assumed, would be even more pronounced if actual carbon prices (below 10 USD per tonne in Europe) were incorporated into the calculations. Carbon prices would have to surpass 185 USD per tonne for solar energy to show a net benefit at its current rate of emission avoidance.

According to Mr Frank’s analysis, on balance the economically most efficient non-carbon source is nuclear power – the least efficient sources are wind and solar power. This not only implies that the cost of solar and wind generation for the economy is larger than previously assumed, but also that these types of generation constitute the most expensive ways of reducing carbon emissions. Yet governments are spilling billions in subsidies onto solar and wind industries with the justification of helping battle carbon emissions.

The implications of Mr Joskow’s and Mr Frank’s insights are diverse. Artificially building out renewable energies that are both economically inefficient and, as of now, highly variable in their performance is the most expensive and least effective method of reducing carbon emissions. The subsidization and promotion of cost-inefficiency within the energy mixes of developed countries may continue to raise electricity prices. And if the reliable connection of already highly expensive renewable generating units requires conventional carbon-based plants to be kept running “just in case”, then it is questionable whether significant increases in energy spending (and electricity prices) are worth the marginal reduction in carbon emissions they may effect. As The Economist summarizes: “governments should target emissions reductions from any source rather than focus on boosting certain kinds of renewable energy.”

Yet the implications of the insights described above are not satisfying to the solar enthusiast, or to the solar industry in general. In a way, they appear to destroy any firm incentive to continue investment in solar technologies. But when departing from the theoretical world of economics and political criticism, the pessimism implied above is more illusory than practically appropriate. The derivation of recommendations from the economists’ statistical game has hitherto ignored public opinion with respect to the promotion of renewable energy, as well as the innovation potential of certain technologies.

Furthermore, all the data above shows is that, in political settings, solar technologies are ineffectively instrumentalized in an artificial and overly expensive battle against carbon emissions.

Criticism towards this current mistreatment of renewable energy does not mean a serious, longer-term belief in solar’s contribution to solving the world’s energy problems is in any way unjustified. Moreover, we maintain that a solution to the above described problems requires investment in solar.

Needless to say, the incentive to invest in solar remains as strong as ever. Firstly, nuclear power remains too unpopular (and too hazardous on a large scale) to constitute an ultimate solution. To avoid any potential hazards, after all, the renewable energy industry has developed towards the pursuit of replacing not only hydrocarbon-based power, but nuclear power as well.

The aim remains to achieve stable grid output based only on renewable energy sources. In order for solar to become competitive in this respect, and thus make a powerful contribution to solving the problems described above, it must be able to neutralize the fluctuations in its performance. This will benefit the technology’s economic efficiency by eliminating intermittency costs and, in consequence, its ability to avoid significant quantities of carbon emissions. The development of storage technologies and pv-thermal combinations currently shows the greatest potential for maximizing solar’s capacity utilization. At the heart of this endeavor is a tight cooperation between materials research and product development on one hand, and proper entrepreneurial deployment on the other, in order to get economic efficiencies under control.


The Net Benefits of Low and No-carbon Electricity Technologies, by Charles Frank, Brookings Institution, May 2014

Comparing the Costs of Intermittent and Dispatchable Electricity-Generating Technologies, by Paul Joskow, Massachusetts Institute of Technology, September 2011

“Sun, wind, and drain”, by The Economist newspaper, Free Exchange column, July 26th 2014 issue.

“Why is renewable energy so expensive?”, The Economist newspaper, The Economist Explains column, January 5th 2014, http://www.economist.com/blogs/economist-explains/2014/01/economist-explains-0.

Making silicon obsolete: thoughts on developments in perovskite research

“I don’t know any group that works on photovoltaics that isn’t looking at perovskites,” stated Dr Henry Snaith, leader of the perovskite research team of the Oxford University Department of Physics. With developments in perovskite research appearing to commence a materials revolution in the pv industry, Solar Bankers hopes to take a step back and humbly offer its opinion.

It seems the main obstacle hampering pv technology’s advancement to the forefront of energy production is the difficulty of simultaneously optimizing its cost and performance. The past tendency has been one of high efficiency necessitating increased costs, and of reduced costs being coupled with reduced performance. Very efficient commercial solar cells are based on thick silicon sheets produced in costly processes involving high temperatures. Conversely, thin-film modules using copper indium gallium selenide (CIGS) save material costs but have poor efficiency. And cells based on gallium arsenide currently produce the highest efficiencies among solar cells but are too expensive to attract mass investment.

This trade-off may in the future be resolved by developments in materials research. The perovskite class of materials could replace silicon as the main pv semiconductor by combining inexpensive production methods with competitive power-conversion efficiencies. Dr Snaith has stated that this could cut the cost of a watt of solar generating-capacity by three-quarters.

Perovskite is, strictly, a mineral composed of calcium titanium oxide; the name also describes the broader class of materials that share its crystal structure. The intricacies of the material’s physical and chemical properties are not of interest here, but this distinction is important because it means that many different elemental combinations can adopt the same semiconducting properties and behavior as the calcium titanate base. Manipulating the proportions and quantities of the substances involved allows the semiconductor’s behavior and properties to be calibrated, such as the frequencies of electromagnetic radiation it absorbs best.

For instance, Oxford University’s research team has produced a rather sophisticated organic-inorganic combination (what they call an organometal). The organic component of this perovskite cell acts as a dye that increases the number of light frequencies the crystal is sensitive to, optimizing its absorption capacity. The inorganic component then serves to conduct the additional electrons (current) thus released.

Perovskites are themselves naturally occurring materials. They consist of cheap bulk minerals and metals, often recovered from discarded electrical items, blended and purified at room temperature in low-cost and comparatively simple processes. Laboratory prototypes have emerged with a per-watt cost of 40 dollar-cents (half the standard commercial value for silicon-based cells), and Dr Snaith predicts that this may halve again at industrial scale. With respect to performance, perovskites have a comparatively high conversion potential because of their ability to absorb light across a very broad spectrum. Researchers at UCLA reported in 2014 the development of a cell with an efficiency of 19.3%, and the scientific community predicts efficiencies will exceed 20% by the end of the year.

Nonetheless, several concerns about the reliability of perovskite cells have arisen. And since the wonder of the perovskite phenomenon, with all its positive facets and revolutionary prospects, has been exhaustively discussed in a great variety of media (where we could only present ourselves as regurgitators of common wisdom), playing the devil’s advocate and practicing skepticism should prove a far more productive and interesting mental exercise.

Environmental concerns in particular have hitherto deterred investors from taking a closer look at perovskite. The perovskite compositions which have so far produced the highest efficiencies were predominantly lead-based. Yet perovskite compounds are also quite unstable (they are salt-like minerals that dissolve easily in water) and are likely to release lead if exposed to humid conditions for longer periods of time. Large amounts of lead being carried into the environment by natural processes at utility scale would rather defeat renewable energy’s purpose of eco-friendliness, and is unattractive for commercialization. Furthermore, the toxic by-products and wastes associated with lead mining and production could pose a significant environmental hazard, especially if lead production expands to meet increasing demand from a growing perovskite industry.

Perovskite enthusiasts and researchers advance a number of arguments against this concern. According to an article in IEEE Spectrum, Dr Snaith argues that annual lead emissions from coal combustion are ten times the amount of lead that would be needed for a terawatt of perovskite cells. The article’s author may have misrepresented Snaith’s statement, for as phrased the argument compares incommensurable quantities: lead actually emitted by burning coal on one side, lead merely contained in cells on the other. The argument would be stronger if phrased to compare the lead released by coal and perovskite generating capacities of equal size. Put that way, Dr Snaith would quite rightly show that the concerns are exaggerated: one terawatt of coal capacity emits far more lead than one terawatt of perovskite capacity could ever release.

Concern among investors is nevertheless understandable, considering the difficulty of marketing an apparently green technology that could carry toxic substances into the environment, even if only in negligible amounts. Dr Snaith, the current leader in the field, and his UCLA counterpart Yang Yang (the current efficiency record holder) have both admitted that an ideal solution should dispense with lead.

Perovskite optimists have argued that concerns about lead emissions are misplaced because, as in the case of lead glass, proper chemical binding can prevent lead emission altogether. A valid point, but it neglects the sensitivity of the perovskite composition, and thus its behavior as a semiconductor, to the chemical additions it interacts with. An entirely new branch of perovskite research has therefore emerged, aiming to replace the cells’ lead elements without significant losses in performance. This branch has developed rapidly: lead foundations have been rendered nearly obsolete, though significant losses in performance had to be conceded (a drop from the record 19.3% to around 3% in initial trials). A skeptic might argue that while resolving the difficult trade-off between performance and cost, perovskite has merely produced a new trade-off between performance and environmental friendliness or reliability. Whether this will remain the case is unclear: in 2013-14 several labs independently developed tin-based cells with efficiencies of around 7%, and researchers predict progress will accelerate. Yang Yang of UCLA has stated that tin might even contribute more favorably to the material’s conductive behavior than lead, raising the material’s efficiency potential.

(Another issue surrounding these developments is the accuracy of the reported efficiency values. The scientific community has noticed that perovskite solar cells exhibit rather pronounced hysteretic behavior, in which the value of a physical quantity lags behind the effect causing it. This has raised concern over whether claimed power-conversion efficiencies are accurate. Nature Materials reckons the community should focus more on seeking confirmation from independent certification laboratories.)

Yet the replacement of its toxic elements does not resolve the more fundamental weakness indicated by the material’s tendency to release lead. We reckon the main issue remains proving the durability of the perovskite-based cell. A perovskite cell would be required to maintain the high performance it promises throughout an effective lifetime of up to 30 years (roughly the standard for PV products), and natural forces like weather and erosion must be taken into account. Perovskite has to prove its durability especially in a European setting, facing long-term exposure to the humidity it is so vulnerable to: instability in humid conditions does not only mean that any toxic substances contained in a cell could be emitted, but more importantly that the general durability and reliability of the generating medium remains unproven. Many regions in Africa and the Middle East as well as India, considered ideal locations for large-scale yet non-intrusive solar harvesting, would demand a great deal from cells: there, high temperatures combine with unusually strong air-particle corrosion due to the abundance of sand and strong winds. Humidity happens to be the particular weakness of perovskite cells, but the broader point stands: it is now durability and reliability, rather than cost efficiency and generating power, that determine whether perovskite-based technology is fit for utility-scale and mass-market deployment. As a corollary, it is these concerns about utility fitness, rather than efficiency potential, that for the time being slow the industrial deployment of perovskite cells and keep large-scale investors reluctant.

Japan: The World’s Hottest Solar Energy Market in 2013

High-efficiency photovoltaic technology is well suited to Japan, where much generation must come from smaller homes and rooftops. This is also a great opportunity for the patented, game-changing high-efficiency solar module of Alfred Jost, CEO of Solar Bankers, to penetrate the market, using less silicon and less area to produce energy efficiently.

Solar energy is decidedly the focus of attention and investment in Japan in the wake of the costly, ongoing nuclear disaster in Fukushima. Japanese energy suppliers know that while conventional electric plants are not so costly, they create too much pollution and emit too much CO2 into the atmosphere. Many in the green, renewable energy field remain daunted by the high costs of doing business in Japan, especially given the small size of the country vis-à-vis China, Brazil, India, and others. What they miss is that the higher cost of doing business in Japan implies higher energy prices — and thus higher profits to be made — per unit of energy. Japan is many times smaller than, say, China, but its energy cost per unit is many times higher, and each Japanese person uses far more electric power than the average Chinese person.

Goldman Sachs is well aware of this Japanese advantage, and has been planning to invest in Japan’s green renewable energy market at the optimal time, as reported by Shigeru Sato in Bloomberg Magazine on May 20, 2013:

“Goldman Sachs to Invest $486 Million in Japan Renewable Energy

Goldman Sachs Group Inc. plans to invest as much as 50 billion yen (US$487 million) in renewable energy projects in Japan in the next five years, tapping demand for electricity produced from solar and wind-power generators. The Wall Street firm also plans to take as much as 250 billion yen of bank loans and project-financing over the same period to move ahead with projects that would cost a total of 300 billion yen, Hiroko Matsumoto, a Tokyo-based spokeswoman for Goldman, said by telephone. The Nikkei newspaper reported the plan earlier today. Japan began offering incentives in July through feed-in tariffs to encourage renewables after the Fukushima nuclear-plant crisis stemming from the March 11, 2011 earthquake and tsunami. Japan has been forced to slash its reliance on atomic power generation since Fukushima. Goldman Sachs formed the Japan Renewable Energy Co. unit in August to plan, design and operate power plants run on sun, wind, fuel cells and biomass fuels, it said on its website

Investor Attraction

Renewable energy has attracted interest from investors ranging from billionaire Masayoshi Son’s Softbank Corp. and financial-services company Orix Corp. to the country’s biggest banks led by Mizuho Financial Group Inc….Japan will probably become the largest solar (& wind with VAWT) market in the world after  China this year, according to data compiled by Bloomberg….

Despite the rosy outlook for solar and other renewable energies in Japan at present, some investors worry whether demand and institutional support will remain strong in Japan for the foreseeable future, and whether Japanese producers and suppliers will move to take market share from the foreign concerns now entering the market.

Prior to the disaster that struck the Fukushima nuclear power plant in March 2011, Japan had 50 nuclear reactors with a production capacity of 50 GW. To this day, just two of the 50 reactors have been restarted. Japan will gradually power its nuclear reactors back up as they pass safety checks and as the protests and concerns of anti-nuclear campaigners are overcome.

With 25 percent of base-load power lost to the nuclear shutdown, Japan imports fossil fuels to satisfy demand. This is not only environmentally unfriendly but also raises utility rates for Japanese consumers. With solar a candidate to fill the energy shortage, Japan must prepare for significant market growth. The strong first-quarter performance, with 1.7 GW shipped, marks a remarkable start for the suddenly growing Japanese PV market. If subsequent growth holds steady, the market will reach 5.3 GW in 2013, with the possibility of reaching 6.1 GW.

The potential is so great that it places Japan in a position to surpass Germany and Italy and come in second behind China, the current solar market leader; this would also make Japan number one in the global market in dollar terms, while China remains number one in wattage. The escalation is encouraged by the high 38-cent-per-kWh feed-in tariff (FIT), with a ten-year term for small rooftops and twenty years for larger installations.

The following data is taken from Eric Wesoff’s article on greentechmedia.com, “Japan: the World’s Hottest Solar Market in 2013,” July 22nd, 2013:

  • Residential installs 1Q13: 356 megawatts
  • Commercial installs 2Q12 to Feb. 2013 (10 kilowatts to 1 megawatt): 309 megawatts
  • Large-scale installs 2Q12 to Feb. 2013 (>1 megawatt): 110 megawatts
  • Commercial applications 2Q12 to Feb. 2013: 4,575 megawatts
  • Large-scale applications 2Q12 to Feb. 2013: 6,436 megawatts
  • Residential shipments 1Q13: 562 megawatts — up 70 percent over 1Q12.
  • Commercial shipments 1Q13: 407 megawatts — up 89 percent over 4Q12 and 5,187 percent over 1Q12.
  • Large-scale shipments 1Q13: 762 megawatts — up 146 percent over 4Q12 and 1,378 percent over 1Q12.
  • Total shipments 1Q13: 1,734 megawatts

If foreign solar investors and suppliers stay focused and disciplined, they will succeed in providing solar and other renewable energies to Japan in a steady, sustainable way that will promise solid growth and profits for decades to come. Solar Bankers is poised to lead the field with its new low-cost high-efficiency technology.


POWER AFRICA: Rural Electrification in Kenya – Different Energy Sources in Comparison

Solar Bankers is investigating the business opportunities in providing rural Kenyan households with a reliable and sustainable source of electricity. As already explained in previous discussions, rural electrification is pivotal to Kenya’s future economic development because of its benefits to social communication, living conditions, and industrial efficiency.

For this brief exercise we shall attempt to create an accessible case study, reflective of Kenyan energy demand, to compare the feasibility and efficiency of different energy sources for rural electrification. We shall consider a rural Kenyan community of 10,000 people, and project onto this fictional scenario the data required to make broad comparisons between energy sources. Individual energy solutions will primarily be compared in terms of their financial costs, to give an idea of their economic efficiency. All calculations and outcomes are educated estimates, intended not to reflect definitive or absolutely precise realities but to give an impression of the proportions and magnitudes involved.

The first and most fundamental aspect of the case study to be elucidated is the likely electricity demand of a rural Kenyan village of 10,000 people. We assume that when a rural household currently not connected to the grid is provided with access to electricity, its consumption will be similar to that of a household which is already fully connected. Such households are currently concentrated mainly in the urban centers of Nairobi and Mombasa. To calculate the likely energy consumption of a rural community only recently provided with electricity, a few pieces of data are necessary. For instance:

The total population estimate for Kenya in 2013 is 43,500,000.

Around 15% of Kenyans have access to grid-electricity.

The total annual electricity consumption in Kenya is 6.6 TWh, or 6,600,000 MWh.

This means that the number of Kenyans with access to electricity is 6,525,000.

One may then calculate the annual per capita consumption of those Kenyans with access to electricity:

(energy consumption) / (number of consumers) = 6,600,000 MWh / 6,525,000 people ≈ 1.01 MWh/year/person = 1,010 kWh/year/person

Daily per capita consumption, hence, is:

1,010 / 365 ≈ 2.78 kWh/day/person

Then, the total daily electricity consumption of a community of 10,000 people is around 27,800 kWh.

The total installed capacity to supply the community may then be deduced by dividing daily consumption by the amount of hours energy is supplied every day. We assume energy is produced 24 hours per day, which implies that:

Total installed capacity required = 27,800 / 24 ≈ 1,158 kW, or around 1.2 MW

With a 20% capacity backstop, the total installed capacity required to supply a Kenyan community of 10,000 with electricity comes to around 1.4 MW, which we round up to 1.5 MW.
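The demand and capacity estimate above can be sketched as a short calculation. This is a minimal back-of-the-envelope check using only the figures given in the text; the variable names are our own and all values remain rough estimates:

```python
# Sizing the supply for a rural community of 10,000 people,
# following the inputs used in the case study above.
population_kenya = 43_500_000        # 2013 population estimate
grid_access_share = 0.15             # ~15% of Kenyans have grid access
annual_consumption_mwh = 6_600_000   # national consumption, MWh/year

consumers = population_kenya * grid_access_share            # ~6,525,000 people
per_capita_kwh_year = annual_consumption_mwh * 1000 / consumers  # ~1,010 kWh
per_capita_kwh_day = per_capita_kwh_year / 365              # ~2.78 kWh

community_size = 10_000
community_kwh_day = per_capita_kwh_day * community_size     # ~27,800 kWh/day

hours_supplied = 24                  # assume round-the-clock supply
capacity_kw = community_kwh_day / hours_supplied            # ~1,160 kW
capacity_backstop_kw = capacity_kw * 1.2                    # ~1,390 kW -> ~1.5 MW

print(round(per_capita_kwh_year), round(community_kwh_day), round(capacity_backstop_kw))
```

Running the numbers without intermediate rounding gives roughly 1,390 kW, which the text rounds up to 1.5 MW of installed capacity.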

When considering the solution of solar energy, a sustainable and reliable energy supply is necessarily based on an energy mix. This is because average daily solar exploitation in Kenya peaks at around 6 hours, while our calculation assumes a daily exploitation of 24 hours. Geothermal energy – efficient, relatively inexpensive, highly available, and very suited for Kenya’s geological profile – should cover the hours of solar inactivity.

The availability rates of modern geothermal plants range, on average, between 80% and 90%. If we assume a geothermal availability rate of 80%, the community of our case study could theoretically be supplied with geothermal energy for 19.2 hours every day. One must, of course, note that an availability rate does not describe patterns of daily inactivity and activity. It usually describes the annual, or longer-term, availability of an energy source, taking into account broad and major periods of likely inactivity. But for the purposes of our calculation, we shall levelize the availability factor to describe more specific, daily periods of inactivity. This is not very reflective of reality, but creates a more accessible and workable theoretical scenario.

So, the community is supplied with geothermal energy 19.2 hours of the day, regardless of whether the plant continues to produce and store electricity, or shuts down. The solar component of our theoretical system has an assumed availability factor of 20%, and accounts for 4.8 hours of daily electricity production. This is deliberately less than the maximum availability mentioned earlier, ensuring the solar system does not have to consistently work at maximum productivity.

The total installed energy mix is, hence, comprised of 1.5 MW of solar, and 1.5 MW of geothermal energy. The cost of this system is then deduced on the basis of estimates by the US Department of Energy for the average levelized cost for energy plants. Costs are given in USD per MWh produced by the particular energy source. The calculations for system-costs are as follows:

A. Geothermal:

Total system levelized cost = 99.6 USD/MWh ≈ 0.10 USD/kWh

Costs for the geothermal component of our proposed system amount to around 10 dollar-cents per kWh produced.

B. Solar:

Total system levelized cost = 156.9 USD/MWh ≈ 0.16 USD/kWh

Total costs for the solar component are around 16 dollar-cents per kWh produced. Since geothermal supplies 19.2 of the 24 daily hours (80% of the energy delivered) and solar the remaining 4.8 hours (20%), the blended cost of the entire system is around 0.8 × 10 + 0.2 × 16 ≈ 11 dollar-cents per kWh produced. This rough estimate may be compared with the cost of producing the same amount of energy from coal or fuel oil.
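Since the two components deliver different shares of the daily energy, their per-kWh costs should be weighted by energy share rather than simply added. A minimal sketch of that weighting, using the levelized costs and supply hours from the scenario above:

```python
# Blended levelized cost of the geothermal + solar mix, weighting each
# component's per-kWh cost by its share of daily energy delivered.
geo_cost_usd_kwh = 99.6 / 1000    # 99.6 USD/MWh  -> ~0.10 USD/kWh
pv_cost_usd_kwh = 156.9 / 1000    # 156.9 USD/MWh -> ~0.16 USD/kWh

geo_share = 19.2 / 24             # geothermal covers 19.2 h/day (80%)
pv_share = 4.8 / 24               # solar covers 4.8 h/day (20%)

blended = geo_share * geo_cost_usd_kwh + pv_share * pv_cost_usd_kwh
print(f"{blended:.3f} USD/kWh")   # ~0.11 USD/kWh, i.e. ~11 dollar-cents
```

The energy-weighted figure is what a consumer effectively pays per kWh drawn from the mixed system.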

In the case of oil, current prices render a fuel-based energy supply of a rural Kenyan community of 10,000 people economically very inefficient. The cost of oil by itself is already considerable:

C. Fossil fuel

The amount of residual fuel oil used to generate one kWh of electricity is estimated to be around 0.002 barrels. We have hypothetically calculated the daily electricity consumption of a Kenyan community of 10,000 people to be around 27,800 kWh. Multiplying the two quantities gives the community’s daily residual fuel consumption for electricity production: around 56 barrels. At an oil price of around 106 USD per barrel, the daily cost of fuel used for electricity production is around 5,900 USD. Converted, this is equivalent to around 21 dollar-cents per kWh of electricity produced. But this calculation considers solely the cost of the fuel used for production. To deduce the total system costs, significant O&M and capital costs for the fuel-based power plant would have to be taken into account. Volatile oil prices, moreover, render the costs of a fuel-dependent energy supply uncontrollable.
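The oil arithmetic can be checked in a few lines, using only the inputs stated in the text (fuel intensity, daily demand, and oil price):

```python
# Fuel-only cost of oil-fired supply for the same community.
# Note this ignores O&M and capital costs of the plant itself.
fuel_oil_bbl_per_kwh = 0.002      # barrels of residual fuel oil per kWh
community_kwh_day = 27_800        # daily demand from the earlier estimate
oil_price_usd_bbl = 106

barrels_per_day = community_kwh_day * fuel_oil_bbl_per_kwh   # ~56 barrels
fuel_cost_per_day = barrels_per_day * oil_price_usd_bbl      # ~5,900 USD
fuel_cost_per_kwh = fuel_cost_per_day / community_kwh_day    # ~0.21 USD/kWh

print(round(barrels_per_day, 1), round(fuel_cost_per_day), round(fuel_cost_per_kwh, 3))
```

Even before plant costs, the fuel alone approaches twice the blended per-kWh cost of the renewable mix.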

Coal, on the other hand, is slightly cheaper than the renewable energy-mix in levelized cost, but of course bears several disadvantages.

D. Coal

Around 1.07 pounds of coal produce one kWh of electricity. On this assumption, the newly grid-connected community of our scenario consumes around 29,750 pounds of coal every day for electricity production, equivalent to a daily consumption of around 13.5 metric tons. If one assumes the current price of coal is around 60.40 USD per metric ton, the daily cost of coal amounts to around 815 USD. Compared with daily electricity consumption, this is equivalent to around 3 dollar-cents per kWh produced. But, again, this takes into account only the cost of the coal itself; in fact, the purchase of coal for electricity production constitutes only around a third of the total costs of an entire coal-based system. Indeed, the estimated average levelized cost of coal-based electricity production is around 100 USD/MWh, or 10 dollar-cents per kWh produced.
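The coal arithmetic follows the same pattern; a quick sketch from the stated fuel intensity and coal price (the pounds-per-metric-ton conversion factor is the standard 2,204.62):

```python
# Fuel-only cost of coal-fired supply for the same community.
coal_lb_per_kwh = 1.07            # pounds of coal per kWh generated
community_kwh_day = 27_800        # daily demand from the earlier estimate
coal_price_usd_ton = 60.40
lb_per_metric_ton = 2204.62

coal_lb_day = community_kwh_day * coal_lb_per_kwh    # ~29,750 lb/day
coal_tons_day = coal_lb_day / lb_per_metric_ton      # ~13.5 metric tons/day
coal_cost_day = coal_tons_day * coal_price_usd_ton   # ~815 USD/day
coal_cost_kwh = coal_cost_day / community_kwh_day    # ~0.03 USD/kWh

# Fuel is only about a third of total system cost; the full levelized
# cost of coal generation is ~0.10 USD/kWh.
print(round(coal_tons_day, 1), round(coal_cost_day), round(coal_cost_kwh, 3))
```

The fuel-only figure of around 3 cents/kWh thus scales up to roughly 10 cents/kWh for the complete coal-based system.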

Coal, however, similar to fuel, makes energy production too dependent on the uncontrollable developments of international commodity markets. With ever-scarcer supply, the price of coal will tend to rise and Kenya, as an emerging economy, will have difficulty extending its influence on international markets to ensure a reliable supply of coal. So, coal-based electricity production, in the short run, appears to be economical – but in the long run it is neither reliable nor sustainable.