The digital oil field (DOF) aims to advance the way the industry manages oil and gas assets by leveraging automation to improve process efficiencies and well performance while reducing cost. To realize this vision, proactive field surveillance and model-based analytics must be performed on an asset scale of hundreds or even thousands of wells. However, the technologies that power traditional integrated modeling approaches fall short of offering the rich two-way communication environment that can enable real-time interaction between engineers and their systems, and with it the automated analytics that can help engineers identify equipment issues or optimization opportunities. A new predictive analytics software platform designed specifically for DOF initiatives introduces model-based proactive field surveillance at scale and allows easy integration with well data and models from multiple channels.

A new platform

Baker Hughes’ FieldPulse platform is the first in the industry to combine data connectivity, well models, asset key performance indicators (KPIs) and tailored workflows into a single tool that can be applied at scale across thousands of wells. The platform can be deployed quickly, used both in the field and in the office and repeated across multiple assets, making it possible to better manage those assets to improve production while reducing capex and opex.

Using model-based analytics and data-driven KPIs rather than traditional threshold-based methods, the new software platform empowers asset teams with insight into the past, present and potential future state of an asset and generates actionable information to help the teams manage complex fields or large well counts more efficiently and effectively. Using a tolerance- and volatility-based approach significantly reduces the need to modify KPIs across large well counts. This reduces the number of false-positive or “stale” KPIs, which often can hinder the adoption of large-scale surveillance solutions.
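The contrast between a static threshold and a tolerance- and volatility-based KPI can be illustrated with a minimal sketch. The function names, the three-standard-deviation tolerance and the pressure values below are illustrative assumptions, not details of the platform itself:

```python
# Illustrative sketch: a static-threshold alert vs. a volatility-based
# alert that adapts to each well's own recent behavior.
import statistics

def static_threshold_alert(value, limit):
    """Traditional approach: fires whenever a fixed limit is crossed,
    regardless of how noisy the well normally is."""
    return value > limit

def volatility_alert(history, value, k=3.0):
    """Flag a value only when it deviates from the recent mean by more
    than k standard deviations, so naturally noisy wells do not
    generate a stream of false positives (assumed k = 3)."""
    mean = statistics.mean(history)
    tolerance = k * statistics.stdev(history)
    return abs(value - mean) > tolerance

# Example: a well whose tubing pressure normally moves around 1,000 psi
history = [1000, 995, 1010, 990, 1005, 1015, 985, 1000]
print(volatility_alert(history, 1012))  # within normal volatility: False
print(volatility_alert(history, 1100))  # genuine deviation: True
```

Because the tolerance is derived from each well's own history, the same KPI definition can be rolled out across thousands of wells without per-well tuning.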

A built-in vendor-neutral nodal analysis engine makes it possible to use existing well models for real-time surveillance without having to incur additional cost for well-modeling licenses. Running in real time and at scale enables critical operational workflows such as virtual metering, artificial lift diagnostics, model deviation, well test validation and automatic well model calibration. Applying the models to perform basic scenario modeling significantly reduces the time required to make informed operational decisions. The platform’s nodal technology also supports multilateral and smart-well completions, enabling workflows such as zonal rate allocations and KPIs such as crossflow detection.
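The core idea behind any nodal analysis engine is to find the operating rate where the reservoir inflow curve and the tubing outflow curve intersect. The sketch below assumes a simple straight-line productivity-index inflow and a hydrostatic-plus-friction outflow; the coefficients are invented for illustration and are not the platform's models:

```python
# Minimal nodal-analysis sketch: solve for the rate where inflow and
# outflow bottomhole pressures are equal. All parameters are assumed.

def inflow_pwf(q, p_res=3000.0, J=2.0):
    """Bottomhole pressure the reservoir delivers at rate q
    (straight-line PI inflow: pwf = p_res - q/J). Units: psi, bbl/d."""
    return p_res - q / J

def outflow_pwf(q, p_wh=250.0, hydrostatic=1200.0, f=4e-5):
    """Bottomhole pressure required to lift rate q up the tubing:
    wellhead pressure + hydrostatic head + a simple friction term."""
    return p_wh + hydrostatic + f * q * q

def operating_point(q_lo=0.0, q_hi=6000.0, tol=1e-3):
    """Bisect on the gap between the two curves; inflow falls with rate
    while outflow rises, so the intersection is unique on this span."""
    while q_hi - q_lo > tol:
        q = 0.5 * (q_lo + q_hi)
        if inflow_pwf(q) > outflow_pwf(q):
            q_lo = q  # reservoir still delivers more pressure than needed
        else:
            q_hi = q
    return 0.5 * (q_lo + q_hi)

print(f"Estimated rate: {operating_point():.0f} bbl/d")
```

A virtual metering workflow repeats this solve continuously against live pressure data; scenario modeling amounts to re-solving with perturbed inputs such as wellhead pressure or choke settings.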

An integrated approach

For more than a decade DOF practitioners have realized the additional value of integrating models and simulations into automated workflows. The challenge is how best to integrate all the variables, including multiple simulators and models, into a cost-effective and robust tool that is reliable and repeatable across multiple assets.

Most DOF projects to date have relied on either a loosely coupled or tightly coupled systems approach. The loosely coupled systems approach involves reusing all of the existing simulators, models and applications available to the operator and using application programming interfaces (APIs) provided by a third-party software vendor to integrate them with an automated workflow engine. The loosely coupled approach requires highly skilled programmers and IT professionals to glue the system together, and the results often are not fault-tolerant or scalable without significant IT support.

The tightly coupled approach somewhat improves the interoperability and overall robustness of the solution and reduces the skills needed to deploy and maintain it. This approach restricts the operator to using only those applications designed to run within the closed “ecosystem,” but it is generally quicker to deploy than a loosely coupled system and can recover when tasks fail. However, the operator may have to translate, rebuild or discard models or engines that are not approved. Scaling with this approach is challenging because of premium software licensing and support prices.

This new platform combines the robustness of tight integration with the flexibility of being able to run models and data sources from multiple vendors. It provides the benefits of a tightly coupled tool but removes some of the API or license restrictions that can limit the ability of DOF solutions to integrate large volumes of models and data into a robust environment.

Successful Middle East pilot test

Field development plans for an offshore operator in the Middle East called for increased use of highly instrumented multizone inflow-control-valve (ICV) completions. To improve production efficiency, the operator adopted an integrated reservoir management process and implemented several data platforms to support it. The operator’s objective was to expand the DOF platform to enable automation of workflows that involve interaction with models, data and scenarios. The ultimate vision was an integrated asset model that would incorporate models and simulations to achieve cohesive reservoir, well and facilities workflows.

A pilot study was undertaken on nine producing wells that included two smart wells with dual-zone ICV completions. Model-based workflows included rate estimation, zonal rate allocation, well test validation and model calibration. Data-driven workflows included automatic well test reporting and production target tracking, including a rate-vs.-allowable-rate KPI. The platform was connected to the operator’s existing data sources. Existing well models were loaded into the platform and repurposed for run-time use.

Applying four different virtual metering techniques enabled real-time model-based rate estimations. Zonal allocations for smart wells also were performed. Well test records were generated from an offshore tower-based multiphase flowmeter test unit. Here, data-driven logic was used to automatically detect whether wells were on test, validate the test and record the results. The test data were automatically compared with the well model, and the asset team was alerted to deviations. The integrated nodal technology was used to recalibrate well models to match the well tests. Oil, gas and water estimates were then used to calculate the KPI measuring production vs. the allowable rates. The predictive analytics software put models online, where they could automatically and continually self-update with more insightful KPIs from real-time data. Issues such as ICV set point optimization, reservoir pressure tracking and zonal production issues could be automatically identified. Crucially, the platform provides real-time virtually metered oil, gas and water rates for each producing zone and for the commingled production stream of each well.

To test the solution performance at scale, the operator cloned the nine wells along with their data and model configurations to create 1,000 wells. The tool was run in a controlled environment using live data. About 17,900 data points were streamed into the platform during the test period. One thousand well models were used, and five different KPIs were computed by the workflow engine. The platform was hosted on a single virtual server. The tool completed an entire run of each automated workflow within 28 minutes. From these data it is reasonable to estimate that the platform could provide model-based and data-driven KPIs for each well on a 30-minute basis. Since this pilot the platform has been stress-tested offline using modern cloud computing power, where performance times have been reduced even further.
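A back-of-envelope check of the reported scale-test numbers shows why a 30-minute cadence is a reasonable estimate:

```python
# Arithmetic check on the scale-test figures reported above
# (1,000 wells, 5 KPIs per well, full run in 28 minutes on one server).
wells, kpis, run_minutes = 1000, 5, 28

evaluations = wells * kpis                       # 5,000 KPI evaluations per run
per_eval_seconds = run_minutes * 60 / evaluations
print(f"{per_eval_seconds:.2f} s per KPI evaluation")  # ~0.34 s
```

At roughly a third of a second per evaluation on a single virtual server, each full pass completes inside the 30-minute window, leaving headroom that the later cloud-based stress tests extended further.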