Maps of oil and gas reservoirs have been made for more than a century as critical steps in planning field development strategy and have also become progressively more accurate as technology — especially 3-D seismic — improved. In the 1990s, geocellular modeling became increasingly popular as a way to quantify and visualize reservoir architecture and rock properties. Once this 3-D tool was coupled with flow simulation models, true reservoir characterization was possible. In fields worldwide, the products of such simulations are used to plan new wells via identification of undrained areas, and history-matched models are run in look-forward mode to generate production forecasts.

Reservoir characterization

Reservoir characterization means different things to different people. Before embarking on a huge project requiring months of team effort and computing time, it is important to define what questions will be answered and to specify deliverables. It is not enough to describe the end result as “maps of permeability anisotropy” or “a fully populated geomodel.” To be successful, the entire team needs to feel ownership of a product defined as “a history-matched flow simulation model incorporating XX wells and YY flow units by year-end” or some similar tightly defined target that has real value to the asset funding the study.

Geocellular models and flow simulation models are built for many purposes, including static applications (volumetrics or well planning) and dynamic applications (production rates, reserve estimates, reservoir management, and enhanced recovery). They are commonly designed to be run in multiple scenarios to evaluate sensitivities and uncertainties, and they can help define the need for additional data to better quantify risk.
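As a minimal illustration of the scenario-and-uncertainty use, the sketch below sweeps hypothetical ranges of the classic volumetric inputs to bracket in-place volumes; every range, distribution, and value here is assumed for illustration, not taken from any particular field.

```python
import numpy as np

# Minimal sketch of a stochastic volumetric sweep (hypothetical ranges).
# OOIP (stock-tank barrels) = 7758 * A (acres) * h (ft) * phi * (1 - Sw) / Bo
rng = np.random.default_rng(seed=42)
n = 10_000  # number of scenario realizations

area_acres = rng.uniform(800.0, 1200.0, n)      # gross rock area
net_pay_ft = rng.uniform(40.0, 80.0, n)         # net pay thickness
porosity   = rng.normal(0.22, 0.03, n).clip(0.05, 0.35)
sw         = rng.normal(0.30, 0.05, n).clip(0.10, 0.60)
bo         = rng.uniform(1.15, 1.35, n)         # formation volume factor

ooip_stb = 7758.0 * area_acres * net_pay_ft * porosity * (1.0 - sw) / bo

p10, p50, p90 = np.percentile(ooip_stb, [90, 50, 10])  # P10 = high case by convention
print(f"OOIP P90/P50/P10 (MMstb): {p90/1e6:.1f} / {p50/1e6:.1f} / {p10/1e6:.1f}")
```

Comparing such percentile spreads across scenarios is one simple way to see which inputs dominate the uncertainty and therefore which additional data would most reduce risk.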

Simulations are designed for specific fluids (e.g., black oil models, oil-gas compositional models, miscible fluid models, surfactant models); for specific recovery methods (e.g., steam or combustion models); or for specific rock properties (e.g., dual-porosity models).

Typically the reservoir characterization workflow is cyclical (Figure 1) and never really ends throughout the life of an asset because new data, new technology, or new analogs kick-start the process repeatedly. Reservoir models have time value and are constructed for various purposes throughout asset lifetimes. Geocellular and flow simulation models assist with:
• Evaluations of acquisition or farm-in opportunities;
• Pre-drill estimation of reserves;
• Pre-commerciality decisions;
• Pre-development decisions;
• Asset management throughout the production period;
• Enhanced oil recovery/waterflood planning, execution, and monitoring;
• Re-development of old fields; and
• Production cessation/abandonment.

Through these progressive steps the models become increasingly data-rich, with a resulting shift over time in distribution of the properties they contain from stochastic to deterministic.

Geocellular models

Geocellular models (Figure 2) typically contain a million or more cells, each populated with values of porosity, permeability, shale content, or many other parameters. Resolution is necessarily a compromise: logs, which sample the reservoir at sub-foot scale, must be upscaled to layers several feet thick, while seismic volumes, whose horizontal and vertical resolution is several times coarser than a model cell, must be downscaled. If the end use for such models is considered during their construction, much time and effort can be saved. For example, if geosteering of horizontal wells (Figure 3) is a prime application of the resulting geomodel, and directional drillers promise positioning accuracy of plus or minus 10 ft (3 m) vertically, there is limited value in building a detailed model with layers as thin as 1 ft (0.3 m).
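To make the log-to-layer upscaling concrete, here is a minimal sketch that averages synthetic half-foot log samples into 4-ft model layers, using an arithmetic mean for porosity and a geometric mean for permeability (one common, but not universal, choice). The log curves and layer thicknesses are invented for the example.

```python
import numpy as np

# Minimal sketch: upscale half-foot log samples into model layers (hypothetical data).
depth = np.arange(5000.0, 5020.0, 0.5)            # measured depth, ft
phi_log = 0.20 + 0.03 * np.sin(depth / 3.0)       # synthetic porosity log
k_log = 10.0 ** (2.0 + 10.0 * (phi_log - 0.20))   # synthetic permeability, mD

layer_tops = np.arange(5000.0, 5021.0, 4.0)       # 4-ft model layers
for top, base in zip(layer_tops[:-1], layer_tops[1:]):
    in_layer = (depth >= top) & (depth < base)
    phi_up = phi_log[in_layer].mean()              # arithmetic mean for porosity
    k_up = np.exp(np.log(k_log[in_layer]).mean())  # geometric mean for permeability
    print(f"{top:.0f}-{base:.0f} ft: phi={phi_up:.3f}, k={k_up:.1f} mD")
```

The choice of averaging rule matters far more for permeability than for porosity, which is one reason the upscaling scheme deserves explicit discussion before model construction begins.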

Adding more attributes in model cells complicates calculations and increases computation time. If the main concerns are porosity distribution (Figure 4), plus permeability and water saturation when the geocellular model is to serve as the template for a flow simulation model, there may be little value in populating either model with extraneous parameters such as bulk density or the abundance of specific minerals.
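A quick back-of-the-envelope sketch makes that cost concrete; the cell count, attribute names, and per-value storage assumed below are hypothetical.

```python
# Back-of-the-envelope sketch: cost of extra per-cell attributes (hypothetical sizes).
n_cells = 10_000_000          # a multimillion-cell geomodel
bytes_per_value = 4           # one 32-bit float per attribute per cell

essential = ["porosity", "permeability", "water_saturation"]
extras = ["bulk_density", "clay_fraction", "quartz_fraction", "calcite_fraction"]

for attrs in (essential, essential + extras):
    gb = n_cells * bytes_per_value * len(attrs) / 1e9
    print(f"{len(attrs)} attributes -> ~{gb:.1f} GB per realization")
```

Multiply that by tens of realizations and the case for carrying only the attributes the end use actually needs becomes obvious.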

Similarly, capturing and displaying structural detail — over a scale ranging from faults mapped from seismic data to fractures that must be accounted for by adjusting properties within individual cells — requires up-front planning.
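One common simplification, sketched below with assumed values, is to carry sub-cell fractures as a permeability adjustment on the cells a mapped fracture corridor crosses, rather than representing each fracture explicitly; dual-porosity formulations (noted earlier) are the fuller treatment.

```python
import numpy as np

# Minimal sketch: account for sub-cell fractures by adjusting cell properties
# (hypothetical multiplier approach on a single 2-D layer).
rng = np.random.default_rng(0)
nx, ny = 50, 50
k_matrix = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=(nx, ny))  # matrix perm, mD

fractured = np.zeros((nx, ny), dtype=bool)
fractured[:, 20:23] = True        # cells crossed by a mapped fracture corridor
k_multiplier = 25.0               # hypothetical enhancement from fracture flow

k_effective = np.where(fractured, k_matrix * k_multiplier, k_matrix)
print(f"mean k: matrix {k_matrix.mean():.0f} mD, "
      f"with fractures {k_effective.mean():.0f} mD")
```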

The dominant structural grain of a region usually dictates the layout of a seismic acquisition program. Similarly, it should factor prominently in grid orientation during geomodel construction. Early attention to fault patterns pays big dividends later in the workflow. Faults are rarely linear and vertical, and representing them as “stair-steps” through orthogonal stacks of cells can be avoided with proper planning and software.
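As a sketch of what aligning the grid with structural grain means in practice, the snippet below lays out cell corners with the i-axis parallel to an assumed dominant fault strike; the strike angle, spacing, and origin coordinates are hypothetical, and real workflows do this inside the gridding software.

```python
import numpy as np

# Minimal sketch: orient model grid axes along the dominant fault strike.
strike_deg = 35.0                      # hypothetical fault strike, degrees east of north
theta = np.radians(90.0 - strike_deg)  # convert to math convention (CCW from east)

# Unit vectors for the i (along-strike) and j (across-strike) grid directions
i_dir = np.array([np.cos(theta), np.sin(theta)])
j_dir = np.array([-np.sin(theta), np.cos(theta)])

# Place cell corners at 100 m spacing in the rotated frame, anchored at an origin
origin = np.array([450_000.0, 3_200_000.0])   # map coordinates, m (hypothetical)
di, dj, ni, nj = 100.0, 100.0, 4, 3
corners = np.array([origin + i * di * i_dir + j * dj * j_dir
                    for j in range(nj) for i in range(ni)])
print(corners.round(1))
```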

Populating model cells away from well control may be simple (null values below an oil-water contact, for example) or complex (seismic inversion for porosity or multi-point geostatistics to distribute properties). Models can be constructed using pixel-based cellular architecture (think of a stack of sugar cubes) or object-based geobodies (visualize fruit pieces suspended in a Jello salad). Geobodies are advantageous when, for example, a known direction and geometry of channels should be introduced in a sandstone reservoir. Their dimensions typically are derived using analogs (modern river channels, outcrops of similar sandstones, or subsurface producing fields in similar geologic settings).
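The sketch below combines both ideas with assumed values: porosity is nulled below an oil-water contact, and a simple straight channel object of chosen width and azimuth is stamped into one layer. The dimensions, depths, and property values are illustrative only; production tools use far more sophisticated conditioning to wells and seismic.

```python
import numpy as np

# Minimal sketch: populate cells away from well control (hypothetical values).
# 1) Null (NaN) porosity below the oil-water contact.
# 2) Stamp a simple object-based channel of given azimuth and width.
nx, ny = 100, 100
dx = dy = 50.0                                   # cell size, m
cell_top_depth = 2480.0 + 0.4 * np.arange(ny)    # structure dipping in j direction, m
owc_depth = 2500.0

background_phi = 0.12
channel_phi = 0.24
channel_half_width = 200.0                       # m
channel_azimuth = np.radians(30.0)               # channel trend from the i axis

x = (np.arange(nx) + 0.5) * dx
y = (np.arange(ny) + 0.5) * dy
xx, yy = np.meshgrid(x, y, indexing="ij")

# Signed distance from a straight channel centerline through the grid center
x0, y0 = x.mean(), y.mean()
dist = -(xx - x0) * np.sin(channel_azimuth) + (yy - y0) * np.cos(channel_azimuth)

phi = np.full((nx, ny), background_phi)
phi[np.abs(dist) <= channel_half_width] = channel_phi
phi[:, cell_top_depth > owc_depth] = np.nan      # below contact: no pay
print(f"channel cells: {(phi == channel_phi).sum()}, "
      f"nulled cells: {np.isnan(phi).sum()}")
```

In a real object-based model the channel width, sinuosity, and orientation would come from the analogs described above rather than from fixed constants.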

Coupling reservoir characterization with basin history analysis can be a useful exercise in placing a field’s depositional, structural, and hydrocarbon charge history in the same context as its production profile, but it requires creative thinking and cross-discipline understanding among team members who visualize time in million-year increments (geologists), days on production (reservoir engineers), or milliseconds (geophysicists).

Reservoir characterization means more than geomodeling. It encompasses all the data and interpretations that define reservoir architecture and performance. Having the right modeling tools and workflows at hand ensures that the money spent acquiring data and the hours spent interpreting are actually adding value.