R&D Project 2 – Wind resource database and visualisation

Wind power is becoming one of the most important power sources in the power grid. At present, China’s accumulated wind power capacity is 188 GW, and its total installed capacity has risen to first in the world. As the penetration rate of wind power increases, wind turbines generate a huge amount of data recording their operational status, and this data needs to be studied using big data technology.

The key technologies of power big data comprise five parts: data acquisition, data storage, data pre-processing, data analysis, and data visualization. Wind energy production data often comes from multiple heterogeneous sources and different types of sensors. A data set may include recorded weather data containing temperature and humidity readings, precipitation records, and the levels and wavelengths of incident solar radiation. It also includes recordings of wind and gust speeds along with their dominant directions: the wind speed is usually defined as the average air velocity over a chosen time frame, whereas the gust speed is the highest speed recorded in that time frame. Additionally, barometric pressure levels from different locations may be recorded to estimate and analyze the development of winds. Because the weather directly influences the power output of a wind park, understanding its influences and trends is important to network and power plant operators.
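
As a hedged illustration of the wind and gust speed definitions above, the following sketch aggregates raw anemometer samples into ten-minute wind and gust speeds with pandas; the column names, 1 Hz sampling rate, and placeholder values are assumptions for illustration, not part of the project data.

import pandas as pd

# Minimal sketch (assumed 1 Hz anemometer samples; column names are illustrative):
# the wind speed is the average over a chosen window, the gust speed the maximum.
samples = pd.DataFrame(
    {
        "timestamp": pd.date_range("2023-01-01", periods=3600, freq="s"),
        "speed_mps": 8.0,  # placeholder readings; real values come from the sensors
    }
).set_index("timestamp")

windows = samples["speed_mps"].resample("10min").agg(["mean", "max"])
windows.columns = ["wind_speed", "gust_speed"]  # average vs. highest recorded speed
print(windows.head())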

Visualization is the computer-aided technique of creating images or animations in order to communicate a message to a viewer. Visualization exploits the remarkable perceptual abilities of the human visual system and the brain’s visual cortex, the part of the brain responsible for processing visual information. Humans can scan, recognize, and recall images in a fraction of a second, and the brain can detect changes or patterns in size, colour, shape, movement, or texture. Visualization is valuable in many different application domains, providing assistance for data analysis and decision-making tasks. Depending on the source and purpose of the data to be visualized, the research field is traditionally subdivided into two areas, Scientific Visualization and Information Visualization, both of which are discussed below.

Scientific Visualization 

Scientific visualization is the research field of generating graphical representations of physical phenomena with the aim of assisting scientific investigations. The goal is to discover things that might not be apparent in numerical form. Scientific visualization involves scientific data with an inherent physical component. Common visualization techniques include direct volume rendering, ray tracing or projection, two- or three-dimensional flow visualization, and many more. Applications are found in every area where large amounts of data with a physical component are created and need to be processed.

Information Visualization 

Information Visualization is the research field of creating images from abstract data that, in strong contrast to data in scientific visualization, has no explicit spatial reference. This type of data has no natural mapping and thus no trivial display space. Temporal or spatial components may occur, but the data exists in an abstract, conceptual data space. Commonly visualized data sets include, for example, stock market data, poll results, network graphs, and social webs. The challenge is how to effectively filter, and then map and render, this kind of data on the computer screen.

Visual Analysis Frameworks 

A visual analysis framework is a software system that integrates various visualization and interaction techniques to support users in performing an effective visual analysis of data. Frameworks are highly modular and allow the user to freely choose and combine these tools and features. The requirements for a visual analysis framework, as well as its key features, are discussed below.

Visual analysis software must solve a variety of technical challenges. For instance, such a software system needs to remain responsive after a user has triggered a computation on the underlying data or during a redraw of a visualization, which makes parallel or multi-threaded computing techniques compulsory. Such systems are increasingly confronted with large amounts of information and should therefore be highly scalable. Ideally, the system is also versatile, because it might need to handle many different kinds of data, data sources, data formats, and tasks. Furthermore, the system needs to be easily extensible, permitting developers to add new user tasks by providing novel task-oriented computations and visualizations. In 1996, Ben Shneiderman made this perfectly clear by stating that any visual analysis framework that aims to be successful as a software package “must provide smooth integration with existing software and support the complete task list: Overview, zoom, filter, details-on-demand, relate, history, and extract”.
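
As a hedged sketch of the responsiveness requirement, the following Python fragment runs a long computation on a worker thread so that a (hypothetical) user-interface loop can keep redrawing and reacting while the result is pending; the function name and timings are illustrative assumptions rather than part of any particular framework.

from concurrent.futures import ThreadPoolExecutor
import time

def heavy_aggregation(n):
    # stand-in for an expensive analysis step on the underlying data
    time.sleep(2)
    return sum(range(n))

executor = ThreadPoolExecutor(max_workers=2)
future = executor.submit(heavy_aggregation, 1_000_000)

while not future.done():
    # in a real framework the event loop would redraw the visualization
    # and process user input here instead of printing
    print("UI still responsive, computation running...")
    time.sleep(0.5)

print("result:", future.result())
executor.shutdown()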

Effects of Wind Speed on Wind Power Output 

High Wind Speed Shutdown (HWSS) of wind farms can cause a significant loss of infeed across geographic regions and must therefore be considered, yet it can be difficult to quantify given the diversity of conditions experienced across wind farms and the diversity of wind turbine types and capacities. Individual turbines may experience different weather conditions on a given site, and even though the same turbines will have the same, or similar, cut-off and re-connect schemes for extremes of wind, the aggregated impact of this across wind farms, let alone on regional, national, or supranational scales, can be challenging to quantify. A curve was fitted to a raw power curve provided by D. Brayshaw, based on the referenced source code and similar to that deployed there, of the following form:

y = \frac{a}{1 + e^{-f(w)(x - w + d)}}

Here a is a normalization factor set to the maximum value of the power curve, (w, f(w)) is the raw power curve represented as a set of x, y co-ordinates to which the curve is fitted, and d is a shift in the x direction.
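
A minimal sketch of such a fit is given below, using made-up raw power-curve points (w, f(w)): a is pinned to the maximum of the raw curve, and the free parameters are a steepness k and a midpoint c that absorbs the shift d. It illustrates the fitting idea only, not the exact code referenced above.

import numpy as np
from scipy.optimize import curve_fit

# Raw power-curve points (wind speed in m/s vs. normalised power); values are made up.
w = np.array([0, 3, 5, 7, 9, 11, 13, 15, 20, 25], dtype=float)
f_w = np.array([0.0, 0.01, 0.10, 0.40, 0.75, 0.95, 1.0, 1.0, 1.0, 1.0])

a = f_w.max()  # normalization factor: maximum of the power curve

def logistic(x, k, c):
    # c plays the role of (w - d) in the text: the midpoint of the fitted curve
    return a / (1.0 + np.exp(-k * (x - c)))

(k_fit, c_fit), _ = curve_fit(logistic, w, f_w, p0=[1.0, 8.0])
print(f"steepness k = {k_fit:.3f}, midpoint c = {c_fit:.3f} m/s")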

The data collected by the meteorological department is used to draw the curve. The data is then analyzed with the help of Monte Carlo methods and other visualization tools. The software is modified to make the data easier to understand, and different methods and improvements, such as the radial diagram, are being added.
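
As a hedged example of the radial diagram mentioned above, the following sketch draws a simple wind-rose style polar chart of mean wind speed per direction sector with matplotlib; the randomly generated directions and Weibull-like speeds stand in for the meteorological records and are purely illustrative.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
directions_deg = rng.uniform(0, 360, 1000)   # recorded wind directions (illustrative)
speeds = rng.weibull(2.0, 1000) * 8.0        # Weibull-like wind speeds in m/s (illustrative)

sectors = np.arange(0, 360, 30)              # twelve 30-degree direction sectors
mean_speed = [
    speeds[(directions_deg >= s) & (directions_deg < s + 30)].mean()
    for s in sectors
]

ax = plt.subplot(projection="polar")
ax.bar(np.radians(sectors + 15), mean_speed, width=np.radians(30))
ax.set_title("Mean wind speed by direction sector (illustrative)")
plt.show()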