The Permanent Gauge workflow
Permanent Gauges are a remarkable source of information, providing both long-term production data and the occasional build-ups that may be described as 'free well tests'. Data are acquired at high frequency and over a long duration. The downside is the sheer number of data points gathered, which can amount to hundreds of millions per sensor, far beyond the processing capability of even today's fastest PC. There are a number of challenges: storing and accessing the raw data, filtering it, transferring it to the relevant analysis module and, finally, sharing both the filtered data and the analyses.
KAPPA-Server is a client-server solution for reservoir surveillance that addresses these issues in a shared environment. It permanently mirrors raw data from any data historian, reduces the number of points with wavelet-based filtering, and stores and shares the filtered data. Filtered data can also be exported to third-party databases.
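The wavelet-based reduction can be illustrated with a minimal single-level Haar transform: split the signal into local averages and details, suppress the small details, and reconstruct. This is a simplified sketch for illustration only, not KAPPA-Server's actual multi-level algorithm; the function name and threshold are assumptions.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet filter: average/detail split,
    zero out small detail coefficients, then reconstruct."""
    n = len(signal) - len(signal) % 2          # work on an even length
    pairs = signal[:n].reshape(-1, 2)
    approx = pairs.mean(axis=1)                # low-frequency content
    detail = (pairs[:, 0] - pairs[:, 1]) / 2   # high-frequency content
    detail[np.abs(detail) < threshold] = 0.0   # suppress small-scale noise
    out = np.empty(n)
    out[0::2] = approx + detail                # exact Haar reconstruction
    out[1::2] = approx - detail
    return out

# Small noise is removed while the genuine pressure step survives:
filtered = haar_denoise(np.array([1.0, 1.1, 5.0, 5.1]), threshold=0.2)
```

After thresholding, consecutive equal values can be stored once, which is where the point-count reduction comes from.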
Derived data can be created and updated by user-controlled mathematical operations on existing data. Boolean alarms can be created and used over a network. KAPPA-Server also stores technical objects and maintains the data with enterprise-wide consistency, avoiding repetitious data handling and speeding the workflow. KAPPA-Server is administered, and partially operated, through a web client.
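The idea of derived channels and Boolean alarms can be sketched as follows. The channel names, the datum correction and the threshold value are hypothetical, chosen only to show the pattern of deriving one series from another and flagging an alarm condition:

```python
import numpy as np

# Hypothetical raw channel: gauge pressure samples, psia.
pressure = np.array([3200.0, 3150.0, 3080.0, 2990.0, 2940.0])

# Derived channel: pressure corrected to a reference datum
# (assumed constant correction for this sketch).
gradient_corr = 45.0  # psi
p_datum = pressure + gradient_corr

# Boolean alarm: True whenever corrected pressure drops below a set point.
alarm_threshold = 3050.0
alarm = p_datum < alarm_threshold
```

In KAPPA-Server such operations are defined once and the derived channel and alarm are then updated automatically as new raw data arrive.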
What Permanent Gauge data provides
Permanent Gauges acquire pressure data at high frequency and over a long duration. A typical data set includes two types of information: each spike is an unscheduled shut-in that may be treated as a 'free' well test for pressure transient analysis (PTA). In addition, the long-term producing pressure response, ignoring these spikes, can be used together with the well production rates to perform rate transient analysis and/or history matching.
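Spotting those shut-in spikes is the first step of the workflow. A crude sketch, assuming pressure builds up sharply when the well is shut in, is to flag any sample-to-sample rise above a threshold (the function name and threshold are illustrative, not a KAPPA algorithm):

```python
import numpy as np

def find_buildups(pressure, rise_threshold):
    """Return indices where pressure rises by more than rise_threshold
    between consecutive samples -- a crude marker of a shut-in start."""
    dp = np.diff(pressure)
    return np.flatnonzero(dp > rise_threshold) + 1

# A producing trend interrupted by one sharp build-up:
starts = find_buildups(
    np.array([3000.0, 2995.0, 2990.0, 3100.0, 3150.0, 2985.0, 2980.0]),
    rise_threshold=50.0,
)
```

In practice detection would also use the rate history and a minimum build-up duration, but the principle is the same: isolate candidate transients automatically rather than by eye.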
The data is there and it is already paid for; it is 'simply' a matter of getting at it and interpreting it. A nice idea, with one not-so-little problem: the available data is vast and growing. A single gauge typically carries 3 to 300 million data points, enough to bring even the fastest of today's PCs to a grinding halt. Yet we need both short-term, high-frequency data for pressure transient analysis and long-term, low-frequency data for rate transient analysis.
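This dual requirement suggests an adaptive reduction: keep every sample close to a detected build-up (for PTA) and only a coarse subsample elsewhere (for RTA). A minimal sketch, with assumed window and stride parameters and not KAPPA's actual filter:

```python
import numpy as np

def adaptive_keep_mask(n_points, buildup_starts, window, stride):
    """Boolean mask: keep every point within `window` samples of a
    detected build-up, and only every `stride`-th point elsewhere."""
    keep = np.zeros(n_points, dtype=bool)
    keep[::stride] = True                         # coarse long-term sampling
    for s in buildup_starts:
        lo, hi = max(0, s - window), min(n_points, s + window)
        keep[lo:hi] = True                        # full resolution near build-ups
    return keep

# 10 samples, one build-up starting at index 5:
mask = adaptive_keep_mask(10, buildup_starts=[5], window=2, stride=4)
```

Applying such a mask to a 300-million-point record leaves a data set small enough to load in an analysis module while preserving the transients that matter.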