As exploration moves into areas of increasing geological complexity, reservoir evaluation is often based on the interpretation of a single seismic image. Building a suitable velocity model and then performing prestack depth migration play an important role in creating this image, on which economic evaluations are based. In many cases drilling commitments are planned far in advance, and geologists have a good idea of the geometry and size of a potential reservoir but require accurate interpretation and positioning in the depth domain.

The amount of uncertainty associated with the image is poorly quantified. A typical depth migration velocity model-building project will deliver a final velocity model and its associated image products. Quantitative measures of the reliability of these data are limited. Comparisons can be made with auxiliary data, and volumetric residual moveout and image structural simplicity also may be analyzed. This provides an indication of how well the model has converged to a solution that satisfies the data.

Tomography often is used to derive the model. Its inherent nonlinearity can, however, yield multiple solutions that honor the data with the same convergence criterion. In isolation, such data provide little useful evidence of the reliability of any one model.

A new workflow first establishes the resolution of the tomography and the degree to which the model may be perturbed before the tomography fails to recover that perturbation. These criteria are then used to generate a set of models that all conform equally well to the data. From this model population an additional set of deliverables (mean, variance and standard deviation of the velocity model, and a spatial reliability indicator for the final image) is created. These deliverables help mitigate the risk associated with target positioning and volumetrics.

Concept

The workflow draws upon the principles of Big Data analysis, using repeated and randomized sampling of the model space to estimate the uncertainty of any one model.

Tomographic inversions are performed on each model within the population (of the order of 100 models). These are all constrained by the same observed data. The aim is to recover a perturbation that is applied to each model in the population. Quantitative metrics are used to evaluate how effectively the inversion recovers the perturbation and to refine the usable model population.
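To make the pattern of this loop concrete, the following minimal Python sketch is purely illustrative: the inversion is replaced by a simple stub and the perturbations and acceptance threshold are placeholder values, whereas the actual workflow runs the hyperTomo wavelet shift tomography.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_models, nz, nx = 100, 60, 120                      # population size and grid
    base = 1500.0 + 8.0 * np.arange(nz)[:, None] * np.ones((1, nx))  # v(z) in m/s

    def invert_stub(perturbed_model):
        """Stand-in for a tomographic inversion constrained by the observed
        data; here it simply returns the input with some inversion noise."""
        return perturbed_model + rng.normal(0.0, 5.0, perturbed_model.shape)

    scores = []
    for _ in range(n_models):
        applied = rng.normal(0.0, 30.0, base.shape)      # random perturbation (m/s)
        recovered = invert_stub(base + applied) - base   # recovered model update
        # Quantitative recovery metric: correlation of applied vs. recovered update
        scores.append(np.corrcoef(applied.ravel(), recovered.ravel())[0, 1])

    usable = sum(s > 0.7 for s in scores)                # illustrative acceptance criterion
    print(f"{usable} of {n_models} models pass the recovery criterion")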

Once a large model population has been created and inverted, a migration is performed to generate a collection of imaged stacks and gathers. From these, alongside the existing deliverables, a new set of products is created. A statistical analysis across the total population of inverted models is performed at each grid location to reveal the mean, variance and standard deviation of the local inverted velocity. Volumes are created for all three statistical parameters. Subsequent migrations using the model population enable error envelope analysis at key target interpretations (Figure 1) and a volumetric depth error metric.

FIGURE 1. An error envelope on a target reflector gives an indication of the spatial reliability. The calculated model variance is co-rendered. (Source: PGS)

Uncertainty inversion engine

Most industry-standard velocity model-building tools use some variation of velocity tomography and are constrained by data resulting from an initial prestack depth migration. This new tomographic inversion platform uses beam migration and wavelet shift tomography based on Sherwood et al., 2009.

For the model uncertainty flow, the method is adapted to accommodate a large model population both to invert for and to migrate with. Using the hyperTomo platform and beam migration allows this to be achieved in a relatively short time frame.

The first two steps in the workflow determine the resolution and magnitude of error recoverable by the inversion given a specific dataset.

The resolution at which a model is constrained by the data during a tomographic inversion depends on many variables, including the spatial sampling of the image space, limitations imposed by the acquisition geometry and the subsurface reflectivity. A checkerboard test is used to establish the achievable resolution and to assess the ability of the tomographic inversion to resolve anomalies within the model.

A perturbation is imposed on the migration model using a checkerboard. The individual cells in the checkerboard have a known spatial wavelength. The inversion is then run and the updated model recovered. Analysis is then performed to judge how well the inversion has recovered the model modification. The ability of the inversion to recover the perturbation is a quantitative measure of the resolution provided by the data in constraining the model.
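As a generic illustration (not the PGS implementation), a checkerboard perturbation with a known spatial wavelength can be constructed as follows; the grid spacing, wavelength and amplitude are placeholder values.

    import numpy as np

    def checkerboard_perturbation(nz, nx, dz, dx, wavelength, amplitude):
        """Sinusoidal checkerboard of known spatial wavelength (m) and peak
        amplitude (m/s), to be added to the migration velocity model."""
        z = np.arange(nz) * dz
        x = np.arange(nx) * dx
        k = 2.0 * np.pi / wavelength
        return amplitude * np.outer(np.sin(k * z), np.sin(k * x))

    # Example: +/- 50 m/s cells with a 1,000-m wavelength on a 25-m grid
    applied = checkerboard_perturbation(nz=200, nx=400, dz=25.0, dx=25.0,
                                        wavelength=1000.0, amplitude=50.0)
    perturbed_model = 2000.0 + applied    # apply to a constant background model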

The correlation between the perturbation and the residual from the inversion is then analyzed. This metric quantifies whether the inversion is able to recover the spatial perturbation and establishes the resolution limit of the data. The information is then used to constrain the model population by rejecting those models that are unable to recover the perturbation.
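One simple way to express this metric and the resulting rejection criterion in Python is sketched below; the correlation threshold and the per-wavelength correlation values are hypothetical, not measured results.

    import numpy as np

    def recovery_correlation(applied, recovered):
        """Normalized zero-lag correlation between the applied checkerboard and
        the update recovered by the inversion (1.0 = perfect recovery)."""
        a = applied.ravel() - applied.mean()
        r = recovered.ravel() - recovered.mean()
        return float(np.dot(a, r) / (np.linalg.norm(a) * np.linalg.norm(r)))

    def resolution_limit(correlation_by_wavelength, threshold=0.5):
        """Smallest checkerboard wavelength still recovered above the
        (illustrative) correlation threshold."""
        resolved = [wl for wl, c in correlation_by_wavelength.items() if c >= threshold]
        return min(resolved) if resolved else None

    # Hypothetical correlations obtained from inversions at several wavelengths
    print(resolution_limit({500.0: 0.21, 1000.0: 0.48, 2000.0: 0.83, 4000.0: 0.95}))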

Amplitude analysis

Once the resolution of the model has been determined, a random amplitude series is generated and applied to the model in the form of a checkerboard perturbation. This is undertaken to optimize the creation of the model set by establishing the maximum level of amplitude perturbation the tomography is able to recover.

From this, migrated gathers are generated to determine a quantitative measure of the volumetric model error based upon the moveout error in the common image gathers. A threshold is determined based on the measured moveout of the data from the initial, perturbed and recovered models. This metric indicates the maximum error recoverable by the inversion.
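A minimal way to frame this comparison is to measure the RMS residual moveout of the same event under the initial, perturbed and recovered models; the picked depths below are placeholder values rather than real gather measurements.

    import numpy as np

    def rms_moveout_error(event_depths):
        """RMS deviation of an event's depth across offsets in a common image
        gather; a flat (fully corrected) event returns 0."""
        d = np.asarray(event_depths, dtype=float)
        return float(np.sqrt(np.mean((d - d.mean()) ** 2)))

    # Placeholder picked depths (m) across five offsets at one image location
    initial   = [2000, 2001, 2003, 2006, 2010]   # reference model
    perturbed = [2000, 2006, 2014, 2026, 2040]   # after applying the perturbation
    recovered = [2000, 2002, 2004, 2008, 2013]   # after the inversion

    e0, ep, er = map(rms_moveout_error, (initial, perturbed, recovered))
    print(f"initial {e0:.1f} m, perturbed {ep:.1f} m, recovered {er:.1f} m")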

The resolution and amplitude analyses are carried forward to create the model population. They also may be used during the velocity model-building phase to define the optimal parameterization for a given tomographic inversion.

Uncertainty analysis and metric generation

Following the creation of the model population, the entire model set is inverted and migrated. The products of this process are then used to generate attributes that reflect the statistical reliability of a model.

The mean, variance and standard deviation are computed from the model population for each cell, and a volume is output for each statistic. These cubes may be co-rendered with the residual moveout volumes to visualize the spatial constraint of the model attribute (Figure 2).
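Computing these per-cell statistics across the population is straightforward; the sketch below uses a random synthetic population purely to show the array layout.

    import numpy as np

    # Inverted velocity population stacked on axis 0: (n_models, nz, ny, nx);
    # the values here are synthetic random fields for illustration only.
    rng = np.random.default_rng(seed=1)
    models = 2500.0 + 50.0 * rng.standard_normal((100, 40, 30, 50))

    mean_cube = models.mean(axis=0)   # mean velocity per cell
    var_cube  = models.var(axis=0)    # variance per cell
    std_cube  = models.std(axis=0)    # standard deviation per cell

    # Each cube keeps the model grid dimensions (nz, ny, nx) and can be
    # exported for co-rendering with residual moveout or the migrated image.
    print(mean_cube.shape)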

FIGURE 2. An inline, depth slice and 3-D intersection show the model population variance co-rendered on the seismic data. (Source: PGS)

The model realizations also may be used to indicate the positioning error. This is done in two ways. The first is based on the spatial reliability at a given target event. An error envelope is determined using the image population from the migration of the entire model set. Correlation analysis is used to construct a mean vertical position and error envelope. This is adjusted to account for local dip, giving a 3-D error envelope. The second method uses model integration to create a volumetric 1-D depth error.
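A minimal sketch of the first method is given below, assuming the target event has already been picked in each migrated image of the population (the depths are synthetic) and using a simple cosine adjustment for local dip.

    import numpy as np

    # Picked depth (m) of one target event at a single surface location,
    # one value per migrated image in the model population (synthetic here).
    rng = np.random.default_rng(seed=2)
    depths = 3000.0 + 15.0 * rng.standard_normal(100)

    mean_depth = depths.mean()
    envelope_v = depths.std()                      # 1-sigma vertical envelope

    # Adjust the vertical envelope for local dip to estimate the error
    # measured normal to the reflector (simple cosine assumption).
    dip_deg = 20.0
    envelope_n = envelope_v * np.cos(np.radians(dip_deg))

    print(f"mean depth {mean_depth:.0f} m, +/-{envelope_v:.1f} m vertical, "
          f"+/-{envelope_n:.1f} m normal to a {dip_deg:.0f}-degree reflector")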

All the aforementioned products are deliverables provided as part of the workflow. The deliverables help mitigate risks associated with generating a single model and image in traditional processing projects.

Case study

An example of the integrated use of these metrics is presented in Figure 3. The model population variance cube generated with the new model uncertainty workflow is superimposed with the underlying 3-D seismic image and the error envelope analysis for a given target. The combination of these additional deliverables provides interpreters with important information as to the local reliability of the seismic image from which they are seeking to extract reservoir information.

FIGURE 3. PGS seismic data are shown with co-rendered model uncertainty variance attribute, error envelope analysis for one horizon (left) and illumination distribution on the same surface generated by wavefield extrapolation (right). (Source: PGS)

As shown in Figure 3, additional information about the local illumination strength, for example, can be added to highlight any possible correlations between poor illumination and high model uncertainty.

References available.