Featured

Power Systems Analysis Tool Adequacy Survey

This is the result of a survey I conducted for my MBA. Most of the people who took the survey are employees and managers from European TSOs and North American ISOs who are involved in power systems planning or simulation. I know this because I contacted them individually, but I did not record their data in the survey itself. Other participants are power system professionals and academics from my network. At the time of writing, 53 people have completed the survey.

The next question asks whether the tools we use are ready for one of the greatest challenges in the industry: adopting renewable energy to decarbonize the electric sector. So, are the tools we use ready for that? The answer options I allowed are a bit tricky, because I did not want to simply offer Yes or No for such an important question. Instead, I put “Yes, we need to adapt them” and “No, we are working on it“. If you think about it, these two are the same answer: “No, the tools are not ready and that forces us to take action“. That accounts for 80% of the answers.

Hence, the next question: Do you need to develop software because the tools are not ready? The answer is yes. To some degree, we are forced to develop tools internally because the commercial ones do not fulfill our needs.

What about CIM, the so-called standard format? It looks like people are in the process of adopting it. In my experience, CIM adoption goes very slowly because it is a nightmare. It is remarkable, though, that 25% of the people don’t know what CIM is.

From my experience, this is the key question of the survey. It translates to: how much of a bottleneck are the tools in your process? The answer is “very much”, since the results lean towards a “very labor intensive” collaboration.

The next question is about web and cloud-based tools. The distribution of the answers is not very surprising; most are doing something in the cloud, a significant portion cannot have anything in the cloud due to internal policy, and a minority will migrate everything to the cloud. In general, regardless of where the “cloud” is, server-based technologies enable collaboration.

The next question asks whether people want anything other than Windows. 30% of the participants want their programs to run in an environment other than Windows, 45% are fine with Windows as their only option, and 25% are indifferent. It should be noted that if a program is Windows-only, it is effectively ruled out for server environments, where Linux reigns with a market share of 96%. So in practice, Windows-only means no cloud.

We use many different tools to get our results, but how well integrated are they with each other? This question asks the bottleneck question from a different angle: it asks about the amount of work required to go from tool A to tool B to get things done. People consider that things are not so bad, but still labor-intensive.

This is a big one, and I admit I was biased. This question asks about the potential use of open source software in electrical companies. I was expecting a lot more “No, it cannot be trusted”, but instead I got the complete opposite. People are in favor of using open source programs that have been properly tested.

The next question asks about the integration of the tools with GIS systems, or simply with tools like Google Maps. Apparently, the tools are not integrated with maps very much. This needs to change, since power systems, and especially power systems planning, require maps to get a sense of the impact of your work and to better visualize and understand the results.

The next one asks about the usefulness of several methods when learning how to use a new program. It is remarkable that people find vendor training to be the least useful method, even behind learning on your own. On the other hand, video tutorials and examples are the most appreciated. We certainly need more of those.

Are you going to be involved in a software development? The majority of the participants think it is likely that they will be. That makes sense: since the vendor solutions are not addressing our issues with their products, software has to be developed to deal with that. Therefore, we have to be involved through requirements gathering, testing, or programming directly.

Here I sprinkled in some topics that are common challenges. All of them require some attention, but people deem that optimization, flexibility and better visualization definitely require more of it.

Finally, the last question: are you happy with your current set of tools? This is the one I’m having a hard time understanding. People seem to be OK or happy with their tools. My problem with this is: if you have a very labor-intensive collaboration process and you need to adapt the tools to make them useful for your work, how can you be happy? Could it be that we accept the current situation? Could it be that there is no awareness that the amount of labor needed to collaborate on a common model could be zero? Or that the map integration could be complete and effortless? Or that the tools could be fully integrated?

Then, eight people felt like writing some extra remarks that I find very much on point.

Any other comment? (these are verbatim comments from the survey)

  • Fully integrated analysis software with GIS software would be wonderful. Additionally, it should have a troubleshooting module related to mistakes in GIS.
  • Looking for tools to address specific challenges in offshore sectors.
  • Thank you for the survey. As a developer of open-source software, I think one of the most important issues is really to ensure compliance between different data models and to have reliable data interfaces.
  • Data manipulation and conversion (e.g. from SCADA or fault recorders) for input into models is still a big challenge for us, particularly trying to integrate multiple data sources.
  • The current tools used by TOs and ESOs will not be sufficient to operate a renewable energy grid. They need to pivot to measurement-based models.
  • Modeling wind/solar, and batteries/load responses/P2X require changes in tools – more data and more chronological assessment for flexibility needs and sources, also shifting loads by hours and days
  • We are pushing for open source co-development through the LF Energy initiative, which we launched with the Linux Foundation
  • Some degree of standardization in tools would be nice (this would allow to share expertise among TSO community for instance)

The hidden value of reinventing the wheel

How many times have you heard sentences like “if it is not broken, don’t fix it” or “do not reinvent the wheel”? Many times, for sure. The underlying message is that, if there is something out there that already works, you should not waste your time making it yourself. This article advocates exactly the opposite.

We live in a time when we delegate most of our activities to proven technology that we do not question or understand. For instance, the average person does not know how a car works in detail, or how it was made. Only when a person needs to build a car will they encounter all sorts of barriers; that is why almost no one starts a car company. We assume that building a car ourselves is going to be more expensive than buying one, so we accept it and move on.

But what if we decided to build the car anyway?

Shu-Ha-Ri is a concept from the Japanese martial arts that describes the stages of achieving mastery of a subject:

  • Shu: Copy and learn the traditional knowledge, techniques, heuristics and proverbs.
  • Ha: Break with the tradition, start to innovate discarding or modifying the forms.
  • Ri: Completely depart from the forms and open the door to creative technique, and arrive in a place where we act in accordance with what our heart/mind desires, unhindered while not overstepping laws.

When we accept and do not question the world around us with sentences like “do not reinvent the wheel”, we remain in a perpetual state of ignorance (not even at the Shu stage), where we deny ourselves the possibility of becoming masters of something.

By using a technology, we accept all the conditions and design decisions that were involved in its development. That means we cannot modify that technology to serve us; rather, we need to adapt ourselves to the conditions it imposes. If we refuse to accept those conditions, we have to go through the Shu-Ha-Ri process in order to first reproduce the technology and then redesign it for our own benefit. The key is to be able to assess whether the long and hard process of becoming a master is worth the cost of breaking with the convenience of the available technology.

Sometimes it is.

I am an electrical engineer in power systems, and it is safe to say that the field is one of the most conservative among the engineering disciplines. Yet it is facing tremendous challenges that demand innovation. During my first years of practice, I observed that everyone dealt with the calculations in a very dogmatic manner, relying on the available calculation software as if it were something handed down by the gods, something to embrace and never change. But I saw myself caged by those very old-fashioned programs, so I decided to change that.

I started developing my own software to serve my needs. Can you imagine yourself programming a new word processor just because you don’t like Microsoft Word? That is exactly what I did. It took me a huge amount of time to learn, and often re-invent, how certain parts of the program should work, because those steps are not documented anywhere: not in books, not in papers, and not even in other software. I developed my own power systems solver, which I now use for my work. I am the blacksmith who built his own forge. It is this way because I refused to adapt to the ill-fitting conditions imposed by the available technology and decided to impose my own design decisions instead.

Because those design decisions are better adapted to the needs of the power systems of the future, I can work on those needs with far less effort than with the traditional software, which was designed for a world with almost no renewable power and where time-varying planning did not exist. This has saved me hundreds of hours of adapting my work to the software, because I had my own software that I could easily adapt to my work.

The benefit of reinventing the wheel is not only the capricious advantage of having my own technology; the real benefit is mastering the basic principles and being able to incorporate groundbreaking innovation into the software I make. I am at the Ri stage. That is the true value of reinventing the wheel: the innovation, the long-term cost savings, and the journey itself.


A few words on the word “model”

The Oxford dictionary describes the word model in several ways, but the one that I want to talk about is this:

A simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.

e.g.: ‘a statistical model used for predicting the survival rates of endangered species’

A model is a static thing. We make a model and we use it. We may even update it!

The effects

The word model has an intriguing effect; it turns something complex into something simple in the mind of the interlocutor. Those of us who make technical software to deal with sophisticated modelling of reality face the effects of the word “model”.

Imagine a computer program that uses hundreds of physical and mathematical laws working together to simulate and predict the prices of the electrical markets in the long term (…) Hmm, it sounds hard.

Now, replace that thought with “a market prediction model”. A market prediction model may very well be an Excel sheet with a linear regression. We’ve just made our life so much simpler!

Right there is where the devastating effect of the word model resides. The word “model” simplifies the role of the modeller to something within the reach of the interlocutor: we can all model, so what you do should not be that difficult.

People say model when they actually mean software

The word “model” is often used to describe software. I would say that the word software is a generalisation of the word model in the sense we are describing. However, a model does not run on the customer’s premises. A model does not handle the thousands of required inputs. A model does not cooperate with other models, automatically and in parallel, to provide a faster and more accurate answer. Software does, and software is hard to make.

Models are good. Models in software are great

We can all model, we know that. But if you can put a model into a computer program and deliver it, you have a product. Products are what make money; models do not, and management needs to recognize this fact. Models are fine, but the revenue comes through software, because that is what you can sell and maintain. A model in software gives you credibility with the client. Trying to sell a bare model to a client shows that you are an amateur. Unfortunately, many clients cannot recognize that either.

If you can make models and put them into a computer program, you are a pro. Remember these words about the word “model” when you, the modeller who makes software, negotiate your next salary.


Software in consultancy

Consultancy is a business based on the sale of expert labour. As such, consultants are expected to be busy for as much time as possible at a commercial rate; i.e., if the commercial rate is 100 €/hour and the consultant works 1,800 hours/year, the company expects the consultant to produce a figure as close to 180,000 €/year as possible. I will call this the consultant’s nominal production.

When a consultancy company recruits technical professionals, some of them understand that the system can be tricked. The trick is called software. If we automate a part of our work, we have more time to slack while “producing” close to our nominal production. A sufficiently motivated consultant may even produce above the nominal rate by working all the time and automating the overtime that is so often “required by the job”.

Many smart individuals automate their work in order to work less while keeping production on schedule. Sometimes it is the management that fosters this automation in order to raise productivity. In this regard, the production of software is a natural consequence of the competitiveness of the consultancy business.

Now comes the horror: most people understand the effects of automation; very few understand the consequences. It turns out that there is an entire discipline that has evolved around the idea of task automation and its consequences. It is called software engineering. This fact is often neglected, either out of ignorance or by deliberate choice: “I’m no programmer, I’m an engineer, physicist, mathematician, etc…”

Fear the excel engineers

The first level of horror is made up of the people who believe that Microsoft Excel is the best program ever produced and hence ideal for everything they do, including software. These people might be very knowledgeable about a certain topic, and they feel they can put some of that knowledge into an Excel workbook in order to produce faster.

The problem lies in the practical impossibility of fact-checking a “program” produced in Excel. Hundreds of formulas in dozens of sheets might be correct at some point in time. However, the chance that an Excel workbook remains correct as it evolves and changes hands is close to zero. That is why I tremble whenever someone claims that they have an “Excel sheet for that”.

Yeah, VLOOKUP doesn’t do what you think it does…

Fear the Matlab engineers even more

Deep down, the Excel engineers know that what I just said is true, but they cannot do better because they lack formal programming skills. Some people did receive programming training at university. Those are the ones who use the proprietary programming languages they learnt there, mostly Matlab, Mathcad, Mathematica, GAMS, etc. Each of those programs requires a license, and none of them provides modern programming paradigms. In fact, most of them foster the all-in-one-file mentality despite having some sort of modular capabilities.

The problem here is more subtle.

The people using these proprietary languages are very productive. Usually they have convinced the management of the countless benefits of their software, up to the point that the management believes they have a product. These Matlab (insert your proprietary language here) products never start intentionally; they start as the automation of a tedious calculation. But because they prove the concept of the calculation they were programmed for, they secure internal funding to continue. Because those languages make better software structures difficult (object orientation, perhaps functional programming, …) and because the people who produce them think that writing 3000-line functions is good practice (everything is in one place, you know…), those programs become unmaintainable piles of spaghetti code.

Subtle but destructive.

Eventually these people will realize that what they have built is an accumulation of terrible practices. Hence, they will start trying to convince the management to fund a project to rewrite the software in C++, Java or Python. Don’t get me wrong: if you need Matlab, Mathcad, Mathematica, GAMS, etc., and you can afford the license, go for it. But do not pretend to be building any serious software with it. Those languages are fine for producing a proof of concept, which is the typical purpose they serve at university.

Software done right saves money, the opposite is also true

The cost of the Excel engineers is the cost of having done everything wrong and being unable to realize it. Horror story #1, Horror story #2. Mind the dates of those stories.

Usually the cost of the Matlab engineers is not incorrectness but the cost of rewriting everything so that it makes sense.

What to do?

For the managers:

  • The first thing to do is always to document the process. Make the technical people write clear documentation for yourself and for future generations of users. Some people in the organization will refuse to do it because they think they would be revealing their secret sauce. Try to convince them to come around; otherwise fire them or reduce the team’s dependency on them, they are not that good.
  • The next thing to do: whenever you see that some internal tool is gaining traction, make everyone stop and think about where it is going. If it is going to become more than a script, make everyone involved produce a design and, if the budget allows it, bring good software engineers into the process and give them control.
  • The cost of not doing this is usually much higher than the cost of doing it right. Bear in mind that a bad design will make people unhappy for years, and if people are miserable at their jobs, they quit. Design is paramount.

For the consultants:

  • Document everything. If you use a real programming language, tools like Sphinx make everything easier (see the docstring sketch after this list).
  • Learn software engineering principles: design patterns, best practices, etc. This includes acknowledging your limitations.
  • Pack everything into well-documented libraries to be customized for each project.
  • Beautiful and easy to read is better than “efficient”. Write code as if Donald Knuth were going to have a look at it.
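
To make the Sphinx point concrete, here is a minimal sketch of the kind of function and reStructuredText docstring that Sphinx can pick up automatically; the function and its parameters are hypothetical, just for illustration.

```python
def line_loading(s_calc_mva: float, s_rated_mva: float) -> float:
    """Compute the loading of a transmission line.

    :param s_calc_mva: apparent power flowing through the line, in MVA.
    :param s_rated_mva: thermal rating of the line, in MVA.
    :return: loading as a fraction of the rating (1.0 means fully loaded).
    :raises ValueError: if the rating is not a positive number.
    """
    if s_rated_mva <= 0:
        raise ValueError("The line rating must be a positive number of MVA.")
    return s_calc_mva / s_rated_mva
```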

I hope this helps. It has certainly helped me realize my limitations, go back and learn the things I don’t know instead of coming up with suboptimal solutions, foresee problems, and avoid the rewriting trap.

Making consultancy scalable

After working in consultancy at a variety of companies, I have witnessed the following process over and over:

  • Consultancy project “A” is sold to a customer.
  • Software “A” is made to meet the customer requirements.
  • Because of resource restrictions, software “A” is built as a monolithic block. It is “integrated”.
  • Software “A” works and earns the management’s recognition. The bespoke software “A” becomes part of the company catalogue for other customers.

some months pass


  • The management proposes to offer software “A” for the project “B”. This seems logical because “A” and “B” are in the same domain of expertise.
  • Engineers agree to modify software “A” to do “B”.
  • After a month of work, the engineers find out that adapting “A” to “B” is impossible, and after convincing the management of this, they produce software “B”, again as a monolithic block matching the requirements of project “B”, due to the resource constraints. Best practices? There’s no time for that.
  • Software “B” works. Now there are two bespoke software pieces to be offered in the future.
  • The cycle continues, while the management claims that they need to improve revenue, cut down costs, etc.

Any clue?

This is the usual modus operandi of a software-as-a-product company. In such organizations, the software is seen as a commodity, with little to no added value other than the person-hours used to produce it. Because the software is seen as a commodity, no value is placed on good design or on best practices that would allow that knowledge to be recycled (that is, to scale). Yet the management asks for actions to scale, which in business newspeak translates into more hours for the same salary per employee.

Software that scales

The first intuitive feature of a computer program is that it automates something: it does something faster or better than a human being. This is the low-hanging fruit of producing software, and it is almost always achieved. Even the most horrific piece of software will be faster than a human being at calculating.

The second, and most important, feature of software is that it can be incremental. This one is harder to achieve because it is not obvious. It implies that adding each new feature takes less and less human work as the software evolves, making the potential marginal revenue higher. This is explained in the book Clean Architecture by Robert C. Martin.

If the time comes when adding a new feature is more complicated than rewriting the whole application, that is empirical proof that the software is not scalable.

Practical keys

So, how does one make scalable software in a consultancy-like business?

Make libraries, never monolithic “integrated” blocks. Once you have a working library that is tested, you can add communications, a GUI, etc. When another project comes, the library will be the core of the new project, and you have the chance to discard the customized layers added on top.
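
A minimal sketch of the split, with deliberately tiny, hypothetical functions: the reusable calculation lives in the core library, and everything customer-specific stays in a thin, disposable layer on top.

```python
# --- core library (in reality a separate, tested, documented package) ---
def total_losses_mw(currents_ka, resistances_ohm):
    """Three-phase ohmic losses, in MW, for a set of branches."""
    return sum(3.0 * r * i ** 2 for i, r in zip(currents_ka, resistances_ohm))


# --- project-specific layer (thin and disposable, one per customer) ---
def customer_b_report(branch_rows):
    """Adapt customer B's data layout to the core call; nothing else lives here."""
    currents = [row["I_kA"] for row in branch_rows]
    resistances = [row["R_ohm"] for row in branch_rows]
    return {"losses_MW": total_losses_mw(currents, resistances)}


print(customer_b_report([{"I_kA": 0.4, "R_ohm": 1.2}, {"I_kA": 0.3, "R_ohm": 0.8}]))
```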

Write technical documentation of the libraries. This separates the wheat from the chaff, and shows that the code does actually have a scientific background. At least an informed reader can discern if the science behind the code is good enough for the new purpose.

Make libraries and technical documentation available in the organization. If a department (or even an individual) is being scalable, there is no good reason to keep that isolated. Everyone in the company should be aware of the company’s state of the art, and have the chance to benefit from it.

By implementing these simple rules, project revenues explode. I have seen examples where months of development were reduced to hours for an equivalent outcome.

Conclusion

  • Make libraries (packed knowledge)
  • Document the knowledge.
  • Make the knowledge available.

I spoke about software because the examples are perhaps easier to see; however, this applies to all human activities, including regular consultancy. If from every project you can extract its core into a document of some sort, you are already scaling up for the next time by having ready-to-consume knowledge. And that is the key.


GridCal: Open source ethics

GridCal is about to hit version 2.0 after three years of development, so I am taking the opportunity to announce it. After all, it is good and it is free.

GridCal is open source power systems calculation software that I started because there was nothing like it. There are other capable open source programs, but none has a workflow like the commercial ones: a graphical user interface, easy to use, powerful, etc. I believe that GridCal surpasses many commercial packages in that respect. I made GridCal to be both a user-friendly program and a calculation library that people can use as they wish.

[Figure: GridCal]

Developing an open source project is quite a journey. A journey that has served a double purpose for me: to learn how electrical models really work and to create an invaluable tool to work with. It takes commitment and a certain work ethic to develop something that others can rely on and extend without your explicit involvement. The commitment part is especially true in my case, because I do it in my spare time after working 10 hours a day at a consultancy company.

I have been contacted by Germans, Koreans, Americans, Russians… people taking university courses and more seasoned engineers alike. I really enjoy knowing how people use the software and whether they find bugs or difficulties that I can solve.

Electrical models

I have had a particularly hard time finding models that actually work in a computer implementation. I have spent a fortune on electricity books, and very few provide the essential details needed to build an efficient computer implementation. Some of the missing topics are how to compute line loading, graph algorithms to check whether there are islands, or state-of-the-art short-circuit calculations, among others. The electric sector is not as good as the computer science world at sharing knowledge. Many people develop models and keep them to themselves, publish a paper or two with incomplete information, and it all stays there.
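
As an example of the kind of recipe that is hard to find in books, here is a sketch of island detection using plain graph connectivity (a hypothetical helper, not GridCal’s actual code), built on SciPy’s connected_components.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components


def find_islands(n_bus, branches):
    """Group buses into electrical islands from the in-service branch list.

    branches: iterable of (from_bus, to_bus) index pairs.
    Returns (number_of_islands, island_index_per_bus).
    """
    f = np.array([br[0] for br in branches], dtype=int)
    t = np.array([br[1] for br in branches], dtype=int)
    adjacency = csr_matrix((np.ones(len(f)), (f, t)), shape=(n_bus, n_bus))
    return connected_components(adjacency, directed=False)


# Buses 0-1-2 are connected, bus 3 is isolated -> 2 islands
print(find_islands(4, [(0, 1), (1, 2)]))   # (2, array([0, 0, 0, 1]))
```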

Luckily, I found that the models of another open source project (MatPower) were quite comprehensible and implemented with speed in mind. The way R. D. Zimmerman and C. E. Murillo formulated the circuit equations makes the implementation much simpler and more efficient, and this formulation cannot be found in books. That is because the people who write the books do not write software themselves; hence their explanations are academic (simplistic, if you will) and hard to make work in real life.
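
For illustration, here is a stripped-down sketch of that vectorised style of formulation: the bus admittance matrix assembled from branch-bus incidence matrices, in the spirit of MatPower, but ignoring transformer taps and phase shifts to keep it short.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags


def build_ybus(n_bus, branches, y_shunt_bus):
    """Assemble Ybus from branches given as (f, t, r_pu, x_pu, b_pu) tuples.

    Simplified: no tap ratios or phase shifters, unlike the full formulation.
    """
    nl = len(branches)
    f = np.array([br[0] for br in branches], dtype=int)
    t = np.array([br[1] for br in branches], dtype=int)
    ys = 1.0 / np.array([br[2] + 1j * br[3] for br in branches])  # series admittances
    bc = np.array([br[4] for br in branches])                     # total line charging

    yff = ys + 1j * bc / 2.0      # pi-model terms seen from each branch end
    ytt = yff.copy()
    yft = -ys
    ytf = -ys

    ones = np.ones(nl)
    Cf = csr_matrix((ones, (np.arange(nl), f)), shape=(nl, n_bus))  # branch-"from" incidence
    Ct = csr_matrix((ones, (np.arange(nl), t)), shape=(nl, n_bus))  # branch-"to" incidence

    Yf = diags(yff) @ Cf + diags(yft) @ Ct
    Yt = diags(ytf) @ Cf + diags(ytt) @ Ct
    return Cf.T @ Yf + Ct.T @ Yt + diags(np.asarray(y_shunt_bus, dtype=complex))
```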

Open knowledge

Open source software in the electric field should be used in each and every university. Students would then be able to see the “guts” of the program and learn how it is done in real life. They would see methods that work for real-size grids (hundreds or thousands of nodes), instead of a simplified method that will lead them nowhere if they try to implement it themselves to learn. I faced that myself.

I was once a researcher, and I got into trouble by relying on a commercial program whose limitations blocked my research. That led me to start programming open source simulators in my own time so that I could do research at work with them. Crazy, I know, but now I have GridCal, I use it at work for real projects, and my co-workers do too.

Conclusion

Regardless of whether you are a student or a professional, give my program a try. And if you feel like giving some feedback or contributing, I will appreciate it very much. The program can be obtained here.

Enjoy.

Academia and the new science

When we go to school we are taught the scientific method: think of a premise, test it, and if the results are not satisfactory, change the premise and test it again. This has led us to an unprecedented level of innovation over the last two centuries.

Science is about reproducible evidence. If someone with comparable means and knowledge cannot reproduce a published experiment, that experiment is probably wrong, and its publication should be dismissed unless the authors make the underlying results public and disclose all the possible (and unpublished) trickery. Otherwise, a scientific article is nothing but paper for wrapping fish.

Academia should be the paladin of this procedure, a temple of truth, right?

WRONG.

The science of hype

Once in a while I like to explore the state of the art of a subject within electrical engineering called “power flow”. Last Christmas I did so.

Lately there has been a breakthrough in the methods used to solve the problem. The new method is not yet mature and cannot be compared to the traditional algorithms, but it has great potential. So once in a while I check whether the rough corners have been polished. This new method is a high-profile subject and it is not yet well understood, so if you submit a paper about it, you are probably going to get it published.

Ding, ding, ding, ding! I found two winners! Two articles published in 2017 by the most prestigious electrical engineering institution (IEEE) had incorrect formulations of the problem, leading to useless implementations and a complete waste of my time.

I do read the papers on the power flow subject, and yes, I actually program the methods and publish the source code (I have the means and the knowledge to do it). As a good scientist, I wrote to the authors asking for clarification. One of them promised me some proof of the method (I never heard about it again). The other asked me for my references; after providing them, along with his method corrected and the source code, I received no further reply.

Both papers made very bold claims and had really nice-looking charts. Both failed to provide any improvement over the existing methods while claiming otherwise. Both came from research groups at reputable universities. Both were published in a prestigious journal. Both were a fraud.

Unfortunately, this is not the first time I have spent hours verifying a published method only to find some inconsistency, or simply that the method performs poorly when the paper announces otherwise. The truth does not sell journals, apparently.

The incentives turn scientists into hustlers

Universities measure a researcher’s success in terms of how many papers he or she gets published in so-called “high impact journals” over a period of time. This means that a researcher is in a position to make up an article from beginning to end, with no evidence whatsoever, and still get it published if it looks good enough (the right wording, plenty of famous references, familiar yet “new”, etc.). If you get caught you might lose your reputation and so on, but who actually tests the stuff published in papers? Well, I do.

I was once told by a chairman that to copy from one person is to plagiarize, while to copy from many is research. I was outraged by this comment. It perfectly reflects the mentality that prevails in academia. Yet somehow it contains some truth, even though the words seem so wrong to me.

In academia the game is to publish, and to publish a lie if necessary. That will get you past the bureaucratic requirements so you can remain a researcher and thrive.

Another sore subject is authorship. Many times the poor PhD student (the legitimate author) appears in second or third position in the author list, just because being the lead author brings more points to the so-called researcher (usually the department chairman). The students accept it out of fear of possible retaliation. This creates a vassalage relationship that endures over time and infiltrates the academic culture.

Sometimes the results are legitimate, although not great. Then the facts must be exaggerated and oversold in order to convince the journal reviewers (also in the game) that the article is worthy of publication.

Hustlers in academia sell hype and bad science.

Separating the wheat from the chaff

Many authors hide facts and publish halfway results, or simply wrong statements, in order to attract attention and obtain the validation that comes with a published article. This goes against the very purpose of publishing: to lead others towards a research path or to spare them the suffering of going down an unfruitful one. Only optimistic articles seem to be publishable.

‘Novel, amazing, innovative’: positive words on the rise in science papers

My former boss in R&D used the following parable: the seed makers in the Netherlands do not keep their secrets to themselves for too long. They know that making information and seeds available to other seed makers will raise the overall quality of the seeds, improving the benefits for the whole guild. This illustrates perfectly how research works: do something, get some benefit, and make it available. Others will do the same and you will benefit further from your initial effort.

In my opinion, open source software is the perfect way to do science. It is open, verifiable, and if it is wrong, you can correct it easily. I have met open source researchers in both engineering and biology, and all of them are very successful (they meet their paper quotas) while publishing strictly verifiable findings.

The key is that the development of open source software has led them to the forefront of their fields. Mastering the basics, along with having an open platform on which to innovate incrementally, has enabled them to go one step further.

Therefore, publishers like the IEEE should start asking for hard evidence behind the articles they publish, and open source code is hard evidence in many cases where the publication involves a computer algorithm. This would increase the reputation of the publication and would mean that the published facts are the truth and nothing but the truth, raising the level of the whole field of expertise.

Certainly it would make the subscription fee worth the money.

The electricity market as opposed to the dispatch optimization

In this post I am going to discuss how the market has replaced the power plant dispatch, and which mechanisms have been designed so that the electricity market can provide a feasible real-world solution. (Spoiler: there is still some dispatch optimization being done.)

Traditionally, power plants were dispatched so that the overall cost of production was the lowest. This is known as “economic dispatch”, and it is an optimization problem that minimizes the cost of satisfying a certain demand with a combination of power plants, while keeping the grid losses as low as possible and avoiding voltage and loading violations of the electrical grid components. In my opinion this is the best possible way to operate a power system, since the lowest electricity cost is guaranteed by the mathematical method: the consumer pays the minimum possible cost for the given set of available power plants, provided their operational costs have been computed correctly.
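
Stripped of the network constraints mentioned above, the economic dispatch reduces to a small linear program. Here is a minimal sketch with made-up costs and limits, using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical marginal costs (EUR/MWh) and limits (MW) of four plants
cost = np.array([12.0, 28.0, 45.0, 80.0])        # e.g. hydro, nuclear, coal, gas
p_min = np.array([0.0, 100.0, 0.0, 0.0])
p_max = np.array([300.0, 800.0, 500.0, 400.0])
demand = 1200.0                                   # MW to cover in this hour

# Minimise total production cost subject to "generation equals demand" and plant limits
res = linprog(c=cost,
              A_eq=np.ones((1, cost.size)), b_eq=[demand],
              bounds=list(zip(p_min, p_max)),
              method="highs")
print(res.x)   # dispatched MW per plant: the cheapest plants are loaded first
```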

Here lies the problem: given a set of power plants operated at the minimum system cost, why would an electric company invest in more efficient and cleaner technologies? The answer is that it would not invest in cleaner technologies at all unless there is a clear economic benefit, such as a feed-in tariff.

Market mechanisms are introduced to foster the improvement of the existing power plants and to promote cheaper technologies such as wind and, lately, solar. So, what is the market, and how can it be applied to the dispatch of power plants?

[Figure: generation and demand bid curves]

For every hour, every 15 minutes, or whatever interval the day-ahead market considers, every power plant bids an amount of energy that it can generate at its marginal cost (a cost that covers its operation plus a profit). Low-operational-cost technologies such as wind and solar can offer their energy at a very low price, whereas high-operational-cost technologies such as gas turbines must offer their energy at a higher price. In the same auction, the energy retail companies offer to buy amounts of energy at certain prices. Those that “desperately” need a certain amount of energy offer the highest prices (knowing that the final price will likely be lower than their bid), and those companies that have flexibility and can afford not to buy energy in this auction offer the lowest prices.

The generator bids and the retail bids are aggregated into the generation and demand bid curves respectively, and the crossing point of the two curves determines the matched demand and its price.
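
A simplified sketch of that clearing, with hypothetical bid lists: sort generation bids from cheapest to most expensive and demand bids from highest to lowest willingness to pay, then match quantities until the curves cross.

```python
def clear_market(gen_bids, dem_bids):
    """Match (quantity_MWh, price_EUR_per_MWh) bids; return (matched energy, price).

    The clearing price is taken as the marginal accepted generation bid, which is a
    simplification of how real day-ahead markets set the crossing price.
    """
    if not gen_bids or not dem_bids:
        return 0.0, 0.0
    supply = sorted(gen_bids, key=lambda b: b[1])                 # cheapest generation first
    demand = sorted(dem_bids, key=lambda b: b[1], reverse=True)   # highest willingness to pay first

    matched, price = 0.0, 0.0
    gi = di = 0
    g_qty, d_qty = supply[0][0], demand[0][0]
    while gi < len(supply) and di < len(demand):
        if supply[gi][1] > demand[di][1]:      # curves have crossed: no more profitable trades
            break
        traded = min(g_qty, d_qty)
        matched += traded
        price = supply[gi][1]
        g_qty -= traded
        d_qty -= traded
        if g_qty == 0:
            gi += 1
            g_qty = supply[gi][0] if gi < len(supply) else 0.0
        if d_qty == 0:
            di += 1
            d_qty = demand[di][0] if di < len(demand) else 0.0
    return matched, price


# Example: a cheap wind offer and an expensive gas offer against two retailers
print(clear_market([(300, 5.0), (400, 60.0)], [(350, 80.0), (200, 30.0)]))  # (350, 60.0)
```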

Does the market-matched demand have to be the actual system demand? Of course not. It can be anything. In fact, every day there is a more or less significant deviation between the market-matched energy and the actual system demand. Furthermore, the actual demand of the day ahead, for which the market solution is being computed, is unknown, so there is always uncertainty.

I will call the market crossing point the market solution to the system dispatch. In a market with hourly intervals, the market solution is composed of 24 values. Of course, this solution does not minimize anything; in a perfectly competitive market it would, but the electric system cannot be in perfect competition because I, as a consumer, cannot choose where my electrons come from. Because of this, the transmission system operator (TSO) needs to run an optimal power flow (an optimization problem much like the economic dispatch) to check whether the market solution makes any physical sense. Imagine the following:

There is a country with an electricity market. In this country, a group of very talented people develops a technology that can offer infinite power at zero cost. These people could offer all the country’s energy for one euro cent and it would still be profitable. In the market, this technology would push out all possible competitors, and the final market solution would be that all the country’s energy is generated by the infinite power plant at the cost of one euro cent.

This would imply that all the country’s energy comes from a single point, which would cause massive voltage violations and overloads in the existing infrastructure, since the infrastructure is not as cutting-edge as the new power plant.

The TSO has the duty of checking that the market solutions are feasible; moreover, it must ensure that in real time the actual generation matches the demand exactly. That is why it must cover the market solution mismatch with other services.

[Figure: market mechanisms]

The market mechanism only looks at how to satisfy the demand at minimum cost (given the bids, which have little to do with the real cost of the system); this is why the TSO must ensure that the economic solution also satisfies the real-world constraints. Again, the advantage of the market is that it allows more diverse players than the traditional scheme.
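
As a toy version of that feasibility check (a sketch under simplifying assumptions, not what any TSO actually runs), a DC power flow can be solved for the market dispatch and the resulting branch flows compared against their ratings:

```python
import numpy as np


def dc_overloads(n_bus, branches, injection_mw, slack=0):
    """Solve a DC power flow B*theta = P and list the overloaded branches.

    branches: list of (from_bus, to_bus, x_pu, rating_mw).
    injection_mw: net injection per bus (generation minus load); since the DC model
    is linear, feeding MW directly yields flows in MW (a convenient simplification).
    """
    B = np.zeros((n_bus, n_bus))
    for f, t, x, _ in branches:
        b = 1.0 / x
        B[f, f] += b
        B[t, t] += b
        B[f, t] -= b
        B[t, f] -= b

    keep = [i for i in range(n_bus) if i != slack]   # remove the slack bus equation
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], np.asarray(injection_mw, float)[keep])

    overloads = []
    for k, (f, t, x, rating) in enumerate(branches):
        flow = (theta[f] - theta[t]) / x
        if abs(flow) > rating:
            overloads.append((k, flow, rating))
    return overloads


# Everything generated at bus 0 (our "infinite" plant) overloads the weak corridors
print(dc_overloads(3, [(0, 1, 0.1, 400), (1, 2, 0.1, 400), (0, 2, 0.2, 200)],
                   [1000, -600, -400]))
```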

Any particular power plant is remunerated for the amount of energy it managed to sell on the market, plus the ancillary services energy it managed to sell to the TSO. I will discuss how to optimize these revenues for a power plant in another post.

Does the market hypothesis hold?

The efficient market hypothesis as a price optimizer only works if there is a large number of market players. Why?

If there is a limited number of market players, they will collude on prices or artificially put power plants into maintenance in order to profit from the market demand. Therefore, the market prices will not be real prices but artificial ones.

On the other hand, if there are many players offering and demanding energy, it is much more difficult for them to agree on prices in time. Furthermore, with more players, the demand mismatch and the TSO corrections are likely to be smaller, since the market generation solution will be more distributed and probably closer to the load points.

The more players in the game, the more efficient the market is. The opposite is also true.

Final remark

The proper planning of a power system will always produce a better price than any market mechanism, because it minimizes the waste of energy and maximizes the performance of economic investments system-wide. This solution is philosophically closer to socialism, where individuals are subordinated to the system and there is no freedom to act or innovate (our infinite power plant would not be accepted).

The market mechanism allows freedom and diversity, but to work it requires the existence of many actors, each maximizing its own benefit without regard to the others. Still, it will not minimize the energy waste or the energy price.