

Recent Developments Towards Novel High Performance Computing and Communications Solutions for Smart Distribution Network Operation

Gareth A. Taylor, Member, IEEE, David C.H. Wallom, Sebastien Grenard, Member, IEEE, Angel Yunta Huete, and Colin J. Axon

Abstract – Information and data communications technology will be crucial to the operation of future electricity distribution networks. This will be driven mainly by the need to process and analyze increasing volumes of data produced by smart meters from residential and commercial customers, sensors monitoring the condition of network assets, distributed generation, and responsive loads. Complexity is further introduced as these diverse data-streams will be gathered at different rates and analyzed for different purposes, such as near-to-real-time system state estimation and life-cycle condition monitoring analysis. However, the nature of active networks dictates that all relevant information will need to be exploited within the same operational framework. We are developing novel Information and Communications Technology (ICT) and high performance computing tools and techniques to enable near-to-real-time state estimation across large-scale distribution networks whilst concurrently supporting, on the same computational infrastructure, condition monitoring of network assets and advanced network restoration solutions. These platforms are promoting and supporting the emergence of new distribution network management systems, with inherent security and intelligent communications, for smart distribution network operation and management. We propose cost-effective scalable ICT solutions and present an initial investigation of realistic distribution network data traffic and management scenarios involving state estimation. Furthermore, we review the prospects for off-line trials of our proposed solutions in three different countries.

Index Terms – Distribution management systems, High-performance computing, Intelligent condition monitoring, Network operation, Smart grids.

The HiPerDNO project: www.hiperdno.eu
The authors thank the EC Seventh Framework Programme (FP7/2007-2013) under grant agreement number 248135 for its support of the HiPerDNO project.
G. A. Taylor, Brunel Institute of Power Systems, Brunel University, London, UK ([email protected]).
D. C. H. Wallom, Oxford e-Research Centre, University of Oxford, 7 Keble Road, Oxford, UK ([email protected]).
S. Grenard, EDF R&D, Clamart, France ([email protected]).
A. Yunta Huete, Union Fenosa Distribución, Madrid, Spain ([email protected]).
C. J. Axon, Brunel Institute of Power Systems, Brunel University, London, UK ([email protected]).


I. INTRODUCTION

To fully enable flexible distribution network operation [1], highly accurate state estimation of large-scale distribution networks becomes essential. Achieving near-to-real-time state estimation requires algorithms and procedures that are highly scalable. Within existing Distribution Management Systems (DMS), data processing, storage, and communications resources are often significantly limited in, for example, data communications bandwidth. Existing electricity distribution networks have very few monitoring or metering points; thus they support neither the connection of massively distributed active network assets (sensors and instrumentation, responsive load), nor active electricity customers (smart meters, small-scale embedded generation).

Distribution Network Operators (DNOs) across Europe are planning or beginning major asset replacement and renewal programmes. For example, the age profile of UK medium voltage (MV) infrastructure suggests that a significant quantity of assets is now approaching the end of its operational life [1]. The transition from passive to active distribution networks is essential in order to improve the performance and flexibility of network operation. While it is impractical and not economically viable to replace all ageing assets at the same time, it is becoming increasingly important that these assets are monitored closely to ensure security of supply as well as the safety of utility employees and the general public. Off-line condition monitoring, such as scheduled oil sample analysis, has been common practice among utilities. However, on-line monitoring offers far more efficient utilization of assets for a variety of reasons [2].

This paper discusses the characteristics of ICT and high performance computing (HPC) systems which can meet the demands for the control and monitoring of future (active) electricity distribution networks. The challenges are wide ranging: near-to-real-time Distribution System State Estimation (DSSE), intelligent condition monitoring and asset management systems, communications and messaging infrastructure, and HPC platforms, all without unduly raising operational costs.

II. DISTRIBUTION SYSTEM STATE ESTIMATION

The application of state estimation techniques to maintain desirable voltage, power flow control and the security of the system is widespread at transmission level. However, state estimation at distribution network level has not been widely deployed, due to the lack of measurement data or of an accurate measurement model. State estimation in distribution networks is a non-linear optimisation problem that uses a limited number of measurements. The measurements are collected by the Supervisory Control And Data Acquisition (SCADA) system and combined with a network model in order to estimate the electrical state of the network in real-time. These raw transmitted measurements often contain noisy and erroneous data. A good state estimator can overcome the effect of these erroneous measurements and accurately determine the real state of the system.

At present most distribution networks are passive in nature, with limited communications infrastructure or intelligent automation. Most technical system issues are solved at the infrastructure planning stage or through network reinforcement. Active distribution networks can improve or maintain quality of service, reduce costs and increase the capacity of the grid to host Distributed Generation (DG). Furthermore, such networks can effectively defer investments and potentially support higher levels of demand. In addition, smart distribution networks will be able to contribute to improved asset management decisions and make efficient use of sensors and new Automatic Metering Infrastructure (AMI) in terms of measurement data and communications infrastructure.

In the future, DSSE will be regarded as the core of the DMS and will therefore be a prerequisite for smart grid functionality. Existing DMS can be extended to include new functionalities that build on real-time state estimation and controllable energy units such as generators, storage and responsive loads. Moreover, a new generation of DMS functionality will facilitate more accurate asset management decision making in order to defer distribution network capital expenditure as and when appropriate. Fig. 1 presents an overview of the proposed DMS.
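For reference, distribution state estimation of this kind is conventionally posed as a weighted least squares (WLS) problem; the formulation below is the standard textbook one rather than the specific estimator developed in this project, with the weights expressing how much each measurement is trusted:

```latex
% Standard WLS state estimation (illustrative; not this project's
% specific estimator). z_i: measurement i, h_i(x): measurement model,
% x: state vector, \sigma_i: standard deviation of measurement i.
\min_{x} \; J(x) = \sum_{i=1}^{m} \frac{\left( z_i - h_i(x) \right)^2}{\sigma_i^2}
% Solved iteratively by Gauss-Newton steps, with Jacobian
% H = \partial h / \partial x and W = \operatorname{diag}(1/\sigma_i^2):
x^{(k+1)} = x^{(k)} + \left( H^{\top} W H \right)^{-1} H^{\top} W \left( z - h(x^{(k)}) \right)
```

Pseudo measurements derived from load estimates simply enter with larger standard deviations, i.e. lower weight, which is one way the differing levels of trust in data items discussed below can be accommodated.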

Fig. 1. A proposal for novel DMS functionality.

A. Challenges

Distribution networks are typically large-scale and extensive (100,000s of km long), consisting of hundreds of thousands of nodes that can generate high volumes of data. As existing distribution systems are typically poorly monitored, the availability of real-time measurements is limited. Thus, data are expected to be a combination of real-time and pseudo measurements. The various data origins, such as measurements from field sensors and load estimation techniques, lead to a new challenge for DMS. There will be redundancy in the information provided by the data-streams, but the level of trust in each data item will differ depending on the type of the data and the accuracy of the instruments. Therefore, traditional state estimation techniques that have been applied at transmission level cannot be used effectively at the distribution level. Our novel methods are expected to provide high accuracy and low computation time for the volume of system status data from the measurement devices, at minimum implementation cost. In addition, the state estimators should be able to take into account the significant integration of DG as well as smart meters in future scenarios. Fig. 2 represents the overall system requirements of a DSSE tool.

Fig. 2. Distribution System State Estimation for a new generation of Distribution Management System.

III. INTELLIGENT CONDITION MONITORING

A. Off-line and On-line Condition Monitoring

The traditional condition monitoring technique is an off-line sample analysis taken according to a schedule. Electrical equipment faults arise from short circuits, overheating and partial discharge under high stress. In the case of transformers, decomposition products from breakdown of the oil, paper or other insulating materials are transported through the transformer by the coolant oil. The low molecular weight gases dissolved in the oil can be identified by gas chromatography. Other solid degradation products, including furans, cresols and phenols, can be detected by liquid chromatography [2]. For example, by analyzing the concentration of gases in transformer oil, the type, intensity and location of a fault can be identified.

Although many traditional off-line condition monitoring techniques have been applied successfully to detect asset faults, on-line monitoring is becoming increasingly popular for several reasons. Some faults are load-related and cannot be dynamically detected by off-line monitoring techniques. Off-line condition monitoring supports time-based maintenance rather than condition-based maintenance; condition-based maintenance reduces the frequency of equipment inspections as well as planned equipment outages [2]. Finally, on-line monitoring brings more efficient use of assets by allowing additional


network reinforcement capacity through continuous monitoring.

There are a variety of on-line monitoring techniques [3]. On-line dissolved gas monitoring is one of the most common practices among utilities. A laboratory-based dissolved gas analysis (DGA) is typically conducted on a periodic basis, while a conventional unscheduled gas-in-oil analysis is performed in response to threshold alarm signals. Through continuous monitoring, the rate of change of gases dissolved in oil can be recorded, which is a valuable diagnostic for deciding the severity of a developing fault (a minimal sketch of such a check follows Table I). Thus an early warning can be generated, allowing timely corrective actions. The main differences between off-line and on-line condition monitoring are given in Table I, which describes the advantages and disadvantages of each scheme. The IEEE guide [4] provides a basis for diagnosis in the off-line scheme.

TABLE I
THE DIFFERENCES BETWEEN ON- AND OFF-LINE CONDITION MONITORING

                       Off-line                   On-line
Maintenance Schedule   Time-based                 Condition-based
Fault Indication       Threshold alarm            Intelligent alarm handling
Diagnosis Theory       Well established           Ongoing research
Decision               Experts                    Expert asset management system
New Investment         None or not significant    Could be significant
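As a minimal sketch of the rate-of-change diagnostic referred to above: the gas names, alarm limits and readings below are invented for illustration (real limits come from guides such as IEEE C57.104), but the structure of the check is as described.

```python
# Minimal sketch of an on-line DGA rate-of-change check.
# Gas names, thresholds and readings are illustrative assumptions only.

from datetime import datetime, timedelta

def rate_of_change(samples):
    """samples: list of (timestamp, ppm) tuples, oldest first.
    Returns ppm per day between the first and last sample."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    days = (t1 - t0).total_seconds() / 86400.0
    return (v1 - v0) / days if days > 0 else 0.0

# Hypothetical alarm limits in ppm/day for two fault gases.
LIMITS = {"H2": 10.0, "C2H2": 1.0}

def check_gas(gas, samples):
    roc = rate_of_change(samples)
    if roc > LIMITS[gas]:
        return f"ALARM: {gas} rising at {roc:.1f} ppm/day"
    return f"{gas} normal ({roc:.1f} ppm/day)"

now = datetime.now()
history = [(now - timedelta(days=7), 55.0), (now, 140.0)]
print(check_gas("H2", history))  # ALARM: H2 rising at 12.1 ppm/day
```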

For on-line condition monitoring, the correlation between historical monitoring data and ageing of assets requires further study. Diagnostic decisions are often made by engineering experts in the off-line scheme, while a more efficient expert asset management system is applied in the on-line scheme. However, depending on the requirements of different utilities, the initial investment could be significant, which can result in a long payback period. Hence, a detailed cost-benefit analysis is required to ensure such investment is commercially viable. Risk-based economic tools are also needed to prioritise capital expenditure on replacing ageing assets against the risk of asset failure (a toy prioritisation sketch appears below).

B. Asset Management for Distribution Networks

The aim of asset management is to optimise the use of assets through effective maintenance and replacement strategies. Assets in present MV and LV networks are replaced only after malfunctioning. Improving the maintenance and replacement strategies will improve the reliability of supply of distribution networks, minimising Customer Interruptions (CIs) and Customer Minutes Lost (CMLs). In order to clearly identify resource strategies, the actual loading of network equipment needs to be identified, so as to understand the correlation between the life duration of the equipment and its historical loading.
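To make the risk-based prioritisation concrete, a toy sketch that ranks assets by expected avoided failure cost against annualised replacement cost; all names and figures are invented, and a real tool would use condition-dependent failure probabilities:

```python
# Illustrative sketch of risk-based replacement prioritisation:
# rank assets by expected annual cost of failure versus the
# annualised cost of replacing them. All figures are invented.

assets = [
    # (name, probability of failure per year, cost if it fails,
    #  annualised replacement cost)
    ("TX-014", 0.08, 250_000, 12_000),
    ("TX-031", 0.02, 400_000, 15_000),
    ("CB-207", 0.15,  60_000,  4_000),
]

def net_benefit(p_fail, fail_cost, replace_cost):
    """Expected avoided failure cost minus replacement cost."""
    return p_fail * fail_cost - replace_cost

ranked = sorted(assets,
                key=lambda a: net_benefit(a[1], a[2], a[3]),
                reverse=True)
for name, p, cf, cr in ranked:
    print(f"{name}: net benefit {net_benefit(p, cf, cr):,.0f}/yr")
```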

An enhanced DSSE will benefit the assessment of network operating conditions through increased observability. A typical asset management system would predict asset failure probabilities as well as providing an assessment of system reliability [3]. Based on parameters such as condition, failure probability and system reliability, risk-based economic tools can be applied to prioritise capital expenditure on refurbishing or replacing problematic assets against the cost arising from the risk of asset failure. The use of state estimation with an appropriate asset management system would be particularly useful to:
• Characterise the condition of equipment on the distribution system.
• Provide data for predictive maintenance applications.
• Determine the remaining lifetime of equipment based on many operational factors.
A new generation of DMS functionality will facilitate more accurate asset management decision making in order to defer distribution network capital expenditure as and when appropriate.

IV. NOVEL INFORMATION AND COMPUTING TECHNOLOGY

The novel DMS will need to accommodate network operation in a distributed manner. As more of the distribution network becomes active, moving rapidly from the current minority to the future majority, operational procedures and algorithms must be highly scalable in order to achieve near-to-real-time performance. There are four principal operational activities the DMS will perform and that the HPC platform will need to support, namely DSSE, fault restoration, condition monitoring, and general historic data mining. DSSE is likely to require multi-processor capability and will be run at frequent regular intervals, and perhaps continuously. Fault restoration should be infrequent, but will become the highest priority when required. This means that the HPC system must be capable of suspending other tasks to devote sufficient resources to fault restoration, but be able to restart the other tasks at precisely the point they were halted. Condition monitoring represents a large number of serial problems and will also need to be executed at regular intervals. General data mining tasks will be the lowest priority but will have highly varying computational requirements. All of these operations must nevertheless be completely controllable through the DMS in a 'lights out' manner.

A. Cost-Effective HPC Architectures

High Performance Computing is now passing from a specialised, limited-utilisation type of facility for high-end engineering and research to one that is being embraced by a significantly larger number of user communities [5]. Through the increasing availability of knowledge and skills for the development of algorithms and applications that can make use of HPC systems, it is becoming a commodity technology, particularly through the availability of Linux-based Beowulf-type cluster systems [6]. Coupled with the emergence of multi-core technology and cheaper methodologies to interconnect systems within clusters, hitherto unavailable computing power is now within reach of use cases that would previously not have considered it. The emerging near-to-real-time HPC architectures offer great potential for hosting novel DMS applications [7]. Data communications bandwidth is one example of a potentially severe bottleneck in state-of-the-art DMS. The main ICT requirements and challenges presented are:

Scalability: with the large volume of data that distribution networks will generate locally and regionally, local and wide area HPC facilities are required to process real-time information.

Security: the functionality presented by this system will become an essential component of a DNO's operations. Therefore the system must be designed with security of the data, the operations and communications as a primary concern.

Low cost: in comparison with other existing HPC technologies, the platforms will be designed to be significantly lower in cost. In addition, overall costs can be further reduced since large centralised data management facilities may no longer be required.

Reliability: in the context of ICT and HPC, there are different issues to be addressed. First, a reliable message layer is required for data transport with real-time quality of service (QoS). Secondly, reliable hardware is needed to guarantee QoS. Finally, the reliability of algorithms and of the whole interacting system is also important in order to achieve improved network performance.

Interoperability: it is essential for future DMS to be interoperable both internally and externally, and to adopt emerging industry standards for such interoperability, such as the Common Information Model (CIM).

The requirements analysis for implementation of the prototype can be summarised as follows (a generic sketch of the pre-emption policy appears after this list):
• Computations are requested as a service from the HPC Engine (client-server model) by the DMS.
• All applications are executed subject to an appropriate scheduling system (in the prototype: the Maui scheduler on the Torque resource manager). Job priorities in the scheduling system are a matter of individual DNO policy. Note that the prototype allowed for pre-emption (kill-and-requeue) of lower priority jobs should higher priority ones be submitted. This requires checkpointing and warm restarting of any application that is pre-empted.
• All applications running on the HPC Engine need to run in batch mode; no interactive applications are allowed on the HPC Engine. Interactive applications could pose considerable risks to the integrity of the HPC Engine and, indirectly, to the DSS: preventing rogue code from running on the HPC Engine, whether introduced deliberately or accidentally, would present possibly insurmountable difficulties, both to enforce and to monitor.
• Data are requested or posted as a service (client-server model) to the Data Subsystem (DSS) by the DMS or by the HPC Engine.
• Data cannot be transferred directly from the DMS or from elsewhere to the HPC Engine.
• Data in the DSS can only be accessed in a client-server scheme. Data are requested by the application (client) and the request is then serviced, in accordance with specified criteria of access authority, by the DSS.
• The hardware/software complex making up the DSS itself is not visible to applications or to any other agent, except via the DSS system console. Doing otherwise could considerably weaken defences, possibly exposing the whole DSS to accidental or malicious corruption.
• The computational facility is intended to be operated as a 'lights out' facility, such that the internals, other than application configuration, priorities and invocation, are hidden. The only interfaces to these higher level functions will be through the DMS system. This will allow for future outsourcing of such a facility.
• A fast storage device or devices are attached to the HPC Engine and are an integral part of it. Data can be transferred from the DSS to HPC Engine fast storage for any job if so required. Data in fast storage are temporary and will be removed on job termination (they will be retained in all cases of jobs being killed and requeued, in particular checkpointing information).
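A generic sketch of the kill-and-requeue policy described in the requirements list, with checkpointing for warm restart. This illustrates the behaviour only; in the prototype it is delegated to the Maui scheduler and Torque resource manager rather than custom code:

```python
# Generic sketch of kill-and-requeue pre-emption with checkpointing,
# mirroring the policy described above (not actual Maui/Torque config).

import heapq

class Job:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority      # higher = more urgent
        self.checkpoint = None        # opaque saved state

    def save_checkpoint(self, progress):
        self.checkpoint = progress    # warm-restart point

class Scheduler:
    def __init__(self, slots=1):
        self.running = []             # jobs currently on the engine
        self.queue = []               # min-heap keyed by -priority
        self.slots = slots

    def submit(self, job):
        heapq.heappush(self.queue, (-job.priority, id(job), job))
        self._dispatch()

    def _dispatch(self):
        while self.queue and len(self.running) < self.slots:
            _, _, job = heapq.heappop(self.queue)
            self.running.append(job)
        # Pre-empt: if a queued job outranks a running one,
        # checkpoint and requeue the lower-priority job.
        if self.queue:
            top = self.queue[0][2]
            victim = min(self.running, key=lambda j: j.priority)
            if top.priority > victim.priority:
                victim.save_checkpoint("step-42")   # placeholder state
                self.running.remove(victim)
                heapq.heappush(self.queue,
                               (-victim.priority, id(victim), victim))
                self._dispatch()

s = Scheduler(slots=1)
dsse = Job("DSSE", priority=5)
s.submit(dsse)
s.submit(Job("fault-restoration", priority=10))   # pre-empts DSSE
print([j.name for j in s.running])                # ['fault-restoration']
print(dsse.checkpoint)                            # 'step-42'
```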

The issue of real-time data is a serious one. A number of applications will require restricted time windows. Hence, data for all these cases will have to follow the same principles as listed above. Doing otherwise would require security of data and validation of their provenance to be built into the applications, which could then need revision and updating with any modification of the DNO network. One of the principal characteristics of the HPC and ICT technology platforms is to integrate near-to-real-time data and information processing. The result of the requirements and specification analyses was to recommend the use of Beowulf-type clusters, for the following reasons:
1. Cost: being based on commodity components, clusters are cheap to purchase as well as to administer (using only a single system image). In recent years, the cost of high performance interconnects has also dropped considerably.
2. Performance: virtually every leading HPC installation is a cluster, irrespective of detailed technological aspects and mode of use, because in terms of GFlops per euro clusters are unrivalled.
3. Flexibility: clusters can trivially run a host of serial applications as well as highly parallel applications, subject to the policies of the DNOs. The availability of fast, high-bandwidth and low-latency interconnects allows them to run optimally not just data-parallel tasks, but also closely coupled tasks, i.e. those where parallelism (and data exchanges) are interwoven in the fabric of the algorithms themselves.
4. Well-established technology: a huge range of solutions exists from a wide range of system integrators that can produce turn-key systems according to specification. Much software for clusters is open source (for example Rocks, on which the system we will deploy to partners


will be based). Many tools for administering clusters are also available.
5. Upgradeability: clusters offer both horizontal upgradeability (extending the number of nodes) and vertical upgradeability (increasing the power of each node). For example, GPUs are fast becoming an essential component of HPC clusters: while we did not advocate their use within HiPerDNO, because of resource and time constraints, any system proposed must in future be able to employ their computational power; many modern clusters already employ GPUs as numerical co-processors.

The computational platform ensures that the expected level of performance is provided under all network conditions. Fig. 3 provides a high level overview of the HPC platform and the relationships between its components; the user API facilitates interaction with the DMS end user.

Fig. 3. Schematic of the interactions for the HPC platform.

A key aspect of moving HPC technologies into other user communities is the availability of easy to use and understand tools and technologies to support the development of relevant applications. By utilising a framework that has already been developed in other application areas, a new community is able to get a head start and obtain maximum value from the solution. The project has chosen to use the PELICAN framework [8], within which a uniform application interface can be presented to developers, with management of data streaming from and back to the DSS, whilst wrapping the individual applications in pipelines so that they are easily presented to the scheduler, making rules for application priority and the like as easy as possible to define. Pipelines of this type are also able to wrap a wide variety of application languages and formats. A schematic of the PELICAN framework is shown in Fig. 4; an illustrative sketch of the pipeline pattern follows.

Fig. 4. The PELICAN processing framework showing the data driven nature with separate processing pipelines. Courtesy of F. Dulwich, B. Mort, S. Salvini, and C. Williams (Oxford e-Research Centre).
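The pipeline pattern can be illustrated as follows; the class and method names are our own inventions to show the shape of the approach, not PELICAN's actual API:

```python
# Illustrative sketch of the data-driven pipeline pattern used by
# PELICAN-style frameworks: an adapter deserialises incoming data
# chunks and a pipeline of modules processes each one. Names here
# are assumptions for illustration, not PELICAN's real interface.

class DataAdapter:
    """Turns raw chunks from the data subsystem into work units."""
    def deserialise(self, chunk):
        return {"payload": chunk}

class Pipeline:
    def __init__(self, name, modules):
        self.name = name
        self.modules = modules        # ordered processing stages

    def run(self, data):
        for module in self.modules:
            data = module(data)
        return data

def state_estimator(data):            # stand-in processing stage
    data["estimate"] = len(data["payload"])
    return data

adapter = DataAdapter()
dsse_pipeline = Pipeline("DSSE", [state_estimator])

for chunk in ["m1,m2,m3", "m4,m5"]:   # simulated data stream
    result = dsse_pipeline.run(adapter.deserialise(chunk))
    print(dsse_pipeline.name, "->", result["estimate"])
```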


Fig. 5. HPC internal configuration overview.

The full system, as shown in Fig. 5, includes the internal configuration and representation of the PELICAN pipelines for each different type of application, and the control channel, which will make use of the OGF-recommended HPC Basic Profile standard to prevent scheduler, and hence vendor, lock-in. All interactions are between the DSS and the HPC Engine, with simple control messages passing down from the DMS and notifications and results passing back. This allows the HPC system to be kept completely separate, as specified, from the DMS and hence from other functions.

B. Novel High Speed Messaging Layer Configurations

The HPC and messaging layers must be integrated (Fig. 6). It is important to note that the amounts of data generated by MV/LV level devices require scalable and robust middleware to conflate the raw data in order to produce aggregated data-streams. The messaging layer facilitates this functionality by extracting only the relevant and timely information from the data collectors or concentrators and sending this information as messages. Timely delivery of these messages to the utility control centre is monitored, and the QoS of this messaging layer is evaluated at all communication levels. The messaging layer will provide the following functionalities (a minimal conflation sketch follows this list):
• receive massive amounts of data,
• concentrate data to reduce volumes,
• send system-critical information via high speed communication channels,
• low latency and very high message throughput,
• reliability,
• high availability,
• flexible message delivery,
• fast message filtering, and
• dynamic monitoring of ICT performance and congestion control.
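A minimal sketch of the conflation behaviour in the list above: many raw readings per measurement point are collapsed to the most recent value before forwarding, reducing volume while keeping the data timely. Names are illustrative, not taken from the HiPerDNO middleware:

```python
# Minimal sketch of conflation in the messaging layer: many raw
# readings per measurement point are collapsed to the latest value
# before forwarding. Key names and values are invented.

from collections import OrderedDict

class ConflatingQueue:
    def __init__(self):
        self._latest = OrderedDict()   # key -> most recent message

    def publish(self, key, value):
        self._latest.pop(key, None)    # keep insertion order fresh
        self._latest[key] = value

    def drain(self):
        """Deliver one conflated message per key, then clear."""
        batch = list(self._latest.items())
        self._latest.clear()
        return batch

q = ConflatingQueue()
for i in range(1000):                  # 1000 raw readings...
    q.publish("feeder-7/voltage", 10.95 + i * 1e-4)
q.publish("feeder-7/current", 312.0)
print(q.drain())                       # ...become 2 messages
```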

Fig. 6. Overview of the integrated HPC and high speed messaging layers. [Figure labels: meter data input publishes raw AMI data; InfoBridge API; conflation API; application/conflation plug-in; data volume management: flow control and volume reduction.]

We are developing a novel high speed messaging layer; Fig. 7 describes the proposed hierarchical conflation of MV/LV distribution network data. In considering various configurations, we have examined the Bidirectional Energy Management Interface (BEMI) developed by Fraunhofer IWES. BEMI is designed for LV distribution grid connection points and there is potential for interfacing BEMI with a scalable high speed messaging layer.

Fig. 7. High level architecture of the messaging layer.

A simple test-bed installation for evaluating the performance of wireless 2-3G cards and PLC-based communications infrastructure has been created. It is capable of using different cellular technologies and transport protocols. In addition to wireless technology, the test-bed also includes a pair of simple power line communication devices.

V. OFF-LINE FIELD TRIALS

To demonstrate the potential benefits of using HPC to offer enhanced capability by bringing together new functionality, field trials are being designed for Slovenia, Spain, and the UK.

Union Fenosa Distribución (Spain) are focusing on MV network restoration algorithms. After data are supplied from the SCADA/DMS systems, a simplified state estimator is executed for the selected area. A new algorithm is being tested for selecting the best strategies to restore service in the faulted area, with the result returned to the DMS operators. Different scenarios are being considered with various characteristics:
• varying load values,
• critical network points,
• support for substation analysis,
• differentiating rural and urban sites,
• consideration of power constraints,
• ability to undertake sensitivity analyses, and
• investigating local effects.
The systems involved in the trial include the SCADA, DMS, GIS, PC and HPC for performance optimization.


The service restoration algorithm has two main parts: first, a genetic algorithm with associated optimized parameters such as population size, iteration limits and mutation characteristics; secondly, a fast heuristic analysis to check the validity of the large number of alternatives generated. Finally, the operator is shown the optimum switching state required for complete restoration, or at least for restoring the greatest possible amount of power. In the first prototype phase, the complete process is controlled within a Matlab environment, with different calls controlling the flow of information between the algorithms (state estimation, load flow analysis and the genetic computation for the network restoration). A hedged sketch of the two-stage search follows. Fig. 8 shows a simplified description of the interactions between the implementation modules.
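In the sketch below, a genetic expansion/contraction step proposes switch configurations and a fast heuristic filters infeasible ones. The network model, feasibility test and fitness are toy placeholders, not the trial's actual algorithm:

```python
# Hedged sketch of the two-stage restoration search described above:
# genetic proposal of switch configurations plus a fast heuristic
# validity check. All modelling details here are toy placeholders.

import random

N_SWITCHES = 8
random.seed(1)

def random_config():
    return [random.randint(0, 1) for _ in range(N_SWITCHES)]

def mutate(parent, n_children=4):
    """Expansion step: each parent yields several offspring."""
    children = []
    for _ in range(n_children):
        child = parent[:]
        i = random.randrange(N_SWITCHES)
        child[i] ^= 1                  # flip one switch state
        children.append(child)
    return children

def feasible(config):
    """Fast heuristic stand-in, e.g. limit on closed switches."""
    return sum(config) <= 5

def restored_load(config):             # toy fitness: load picked up
    return sum(config)

population = [random_config() for _ in range(6)]
for generation in range(10):
    offspring = [c for p in population for c in mutate(p)]
    candidates = [c for c in population + offspring if feasible(c)]
    # Contraction step: keep only the best few candidates.
    population = sorted(candidates, key=restored_load, reverse=True)[:6]

print("best configuration:", population[0],
      "restores", restored_load(population[0]), "units")
```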

Fig. 8. Overview of the fault restoration trial system. [Block diagram labels: DMS (off-line) network static data; SCADA (off-line) network dynamic data, topology/measurements; Matlab environment with I/O processing, auxiliary MV/LV substation-to-feeder assignment, state estimation and failure definition; HPC platform C++ SRA library; final state; presentation of results.]

Different choices for introducing HPC into this trial have been considered, particularly because of the very large number of possible topologies to analyze and the related, even larger, number of load flows to calculate. Currently, parallelization of the genetic algorithm suitable for deployment in an HPC environment is concentrating on a new expansion/contraction process that mutates each parent to give several offspring instead of just one. Other performance improvement tasks concern parallelization within the load flow algorithm, especially at the matrix inversion phase.

The Slovenian trial, with Elektro Gorenjska (EG), is on a small network. The aim is to improve decision-making processes, and other applications, using the data collected from the mass deployment of sensors in both the MV and LV networks. EG are concentrating on applications such as optimal network reconfiguration, load management, and network development. The basic objectives are improving reliability indices and improving quality of service with increasing integration of distributed generation. To reach these goals, EG are also focusing on improving their AMI system. A wide range of problems needs to be addressed concurrently, including network losses, reactive power in distributed generation environments, voltage quality, demand side management, and asset management.

The trial with UK Power Networks is focusing on condition monitoring and asset management, a problem which many utilities face today. An increasing proportion of MV cables are more than 40 years old. The consequence is that an increasing fault rate is to be expected if these ageing assets are not replaced. Furthermore, this leads to increasing maintenance and operational costs; there is therefore a need to find ways of targeting cable replacement. A number of on-line partial discharge monitoring and mapping techniques are used to assess the condition of cables. New data mining approaches are being developed in HiPerDNO to more reliably predict incipient faults so that cable replacement is carried out before failure, a 'preventive replacement' approach (a toy sketch of such trend screening follows the common-methodology list below). The success of this methodology can be demonstrated by analysing the defective cable sample (or joint) and by checking that the discharge activity disappears once the circuit has been re-energised.

Despite the differences between these geographically spread trials, some common methodology is required:
• a set of criteria by which to judge the deployment of necessarily limited sensors and instrumentation, and
• a set of proto-standards of data acquisition ready for processing using the HPC toolset.
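As a toy illustration of the trend screening mentioned for the UK trial: flag cable sections whose partial discharge activity shows a sustained rising trend. The data and threshold are invented; the project's actual data mining approaches are more sophisticated:

```python
# Toy sketch of incipient-fault screening on partial discharge (PD)
# counts: flag cable sections whose PD activity shows a sustained
# rising trend. The threshold and data are invented for illustration.

def slope(series):
    """Least-squares slope of equally spaced samples."""
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

weekly_pd_counts = {
    "cable-A12": [12, 11, 14, 13, 12, 15],    # flat-ish: healthy
    "cable-B07": [10, 18, 25, 31, 44, 52],    # rising: incipient fault?
}

for cable, counts in weekly_pd_counts.items():
    s = slope(counts)
    status = "REPLACE CANDIDATE" if s > 3.0 else "ok"
    print(f"{cable}: trend {s:+.1f} counts/week -> {status}")
```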

Once field trials are underway, it will be possible to investigate the performance of all of the interacting subsystems, both hardware and software. Data mining and feature extraction methods may enable estimates of the likely optimal data rates and sensor distribution for a number of broad network scenarios. This will form part of the evaluation process for the performance of the integrated system as a whole (including the HPC requirements). As these trials are necessarily limited in scope and duration, enabling others to benefit from this project's experience is vital. Thus, we will scope out a set of recommendations for future similar trials to enable our data and results to be directly compared.

VI. CONCLUSION

The successful operation of future smart distribution networks will require a new generation of DMS functionality based upon secure and scalable high performance ICT infrastructure. This will include near-to-real-time network restoration services, with the aim of exploiting on-line condition monitoring of major assets in the distribution network and then restoring services through the combination of real-time DSSE, fault analysis and related restoration actions. To achieve this, we are developing scalable, low cost and reliable ICT and HPC platforms. Off-line field trials are an important step towards demonstrating the viability of integrating these platforms with new algorithmic approaches to the management of future distribution networks.


VII. ACKNOWLEDGEMENTS


The authors are the Workpackage Leaders and Dissemination Officer, and represent all the individual members of the HiPerDNO consortium who have contributed to the research presented in this paper. The HiPerDNO consortium has eleven partners: Brunel University (UK), EDF SA (France), Elektro Gorenjska (Slovenia), Fraunhofer IWES (Germany), GTD (Spain), IBM (Israel), Indra (Spain), Korona (Slovenia), UK Power Networks (UK), Union Fenosa Distribución (Spain), and the University of Oxford (UK).

VIII. REFERENCES

[1] Ofgem, "Electricity Distribution Price Control Review 2010-2015". [Online]. Available: http://www.ofgem.gov.uk.
[2] B. Pahlavanpour and A. Wilson, "Analysis of transformer oil for transformer condition monitoring," in Proc. 1997 IEEE Colloquium on An Engineering Review of Liquid Insulation, pp. 1-5.
[3] D. Chu and A. Lux, "On-Line Monitoring of Power Transformers and Components: A Review of Key Parameters," in Proc. 1999 Electrical Insulation Conference and Electrical Manufacturing and Coil Winding Conference, pp. 669-675.
[4] IEEE Guide for Loading Mineral-Oil-Immersed Transformers, IEEE Standard C57.91-1995, 1996.
[5] N. Wilkins-Diehr, D. Gannon, G. Klimeck, S. Oster, and S. Pamidighantam, "TeraGrid Science Gateways and Their Impact on Science," Computer, vol. 41, no. 11, pp. 32-41, Nov. 2008.
[6] P. M. Papadopoulos, M. J. Katz, and G. Bruno, "NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters," in Proc. 2001 IEEE International Conference on Cluster Computing, pp. 258-267.
[7] M. R. Irving, G. A. Taylor, C. Huang, and P. R. Hobson, "Scalable monitoring and control of distributed generation using grid computing," in Proc. 42nd International Universities Power Engineering Conference, Brighton, UK, 4-6 Sept. 2007, pp. 847-853.
[8] S. Salvini, F. Dulwich, B. Mort, and C. Williams, "The PELICAN Framework". [Online]. Available: http://www.oerc.ox.ac.uk/research/pelican

IX. BIOGRAPHIES

Gary Taylor is Reader and Co-director, Brunel Institute of Power Systems, Brunel University. He joined Brunel University in May 2000, was a National Grid Post-doctoral Scholar from 2000 to 2003, and was promoted to Reader in May 2010. His current research interests include smart grids, renewable energy systems, reactive power and voltage control, and power systems and network communications. He is coordinating the EU smart grids project HiPerDNO and is currently collaborating on smart grids research projects with major UK organisations such as National Grid, UK Power Networks, the University of Oxford and Converteam Ltd.

David Wallom is Associate Director (Innovation) of the Oxford e-Research Centre, University of Oxford. He is the Technical Director of the UK National Grid Service and current chair of the UK e-Science Engineering Task Force.

Sebastien Grenard is a project manager at EDF R&D. His present projects include novel DMS systems, the connection of distributed energy resources to the distribution network, and future smart grids applications.

Angel Yunta Huete is project manager for power systems operation and smart grid research in Unión Fenosa Distribución (GNF group), working on several projects covering smart grids, DMS applications and the impact of electric vehicles. He qualified as an Industrial Engineer (Electrical) in 1979, starting work in power plant engineering at Empresarios Agrupados. Later he moved to Unión Eléctrica, where he initially worked on overhead lines. He has been responsible in operations for EMS, DMS, network analysis, dispatcher training simulators, demand supervision, and load management systems.

Colin Axon is Senior Research Fellow at the Brunel Institute of Power Systems, Brunel University, which he joined in January 2011. Previously he was in the Department of Engineering Science at the University of Oxford. Colin is an Associate at the Oxford e-Research Centre. His main research interests are in the use of data in LV networks and household power demand modelling.