If workflows are simply recipes for achieving a desired result, why aren’t we all gourmet chefs? It is a simple analogy, but it captures the complexity of seismic imaging and effective model building. The same could be said of other “workflows” such as symphonic scores: being able to read the notes doesn’t allow a conductor to create a masterpiece.

Geoscientists have been imaging the subsurface for more than a century. One might conclude that the science is quite mature by now and that only incremental improvements can reasonably be expected at this point. If you thought this way, you’d be wrong.

Quantum leaps are still possible

Coil shooting provides superior full-azimuth seismic acquisition with a single vessel. (Images courtesy of WesternGeco)

In fact, tremendous improvements in imaging capabilities have been achieved in the past few years, largely enabled by quantum leaps in computer science. Efficiencies realized by the gains in processing power have made many hitherto unaffordable techniques now viable components of depth imagers’ and interpreters’ toolboxes.

To create seismic images, one must have data to process. Highly efficient and accurate acquisition services allow for full-azimuth marine acquisition, and new techniques like coil shooting allow these data to be acquired faster, with a single vessel, than previously achieved with multiple vessels. The wide range of azimuths sampled using the technique sets the stage for characterization of azimuthal anisotropy, which has been identified as one of the principal impediments to understanding the effect of fracture and stress orientation in the overburden layers and in the reservoir itself.
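
To make the idea of full-azimuth coverage concrete, one simple check is to compute the source-to-receiver azimuth of every trace and bin the results; a full-azimuth geometry populates every sector. The short Python sketch below is purely illustrative, using made-up positions rather than any real coil-shooting geometry.

```python
import numpy as np

# Hypothetical source and receiver positions (x = east, y = north, in metres).
rng = np.random.default_rng(0)
src = rng.uniform(0, 5000, size=(200, 2))
rcv = rng.uniform(0, 5000, size=(200, 2))

# Source-receiver azimuth for each trace, clockwise from north.
dx, dy = (rcv - src).T
azimuth = np.degrees(np.arctan2(dx, dy)) % 360.0

# Bin into 30-degree sectors; a full-azimuth survey populates every sector.
counts, edges = np.histogram(azimuth, bins=np.arange(0, 361, 30))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:3.0f}-{hi:3.0f} deg: {n:3d} traces")
```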

Anisotropy — anathema or ally?

Schematic of a workflow to develop an anisotropic velocity model.

Years ago, geophysics professors would begin their lectures with the statement, “Assume an isotropic, homogeneous formation.” While this might be a good start for the fundamentals of seismic imaging, it didn’t take long for the students to realize that no such medium exists. Seismic anisotropy is the way of nature, and geophysicists must learn to deal with it.

Fortunately, in the seismic imaging world there are ways to deal with it. By using sophisticated processing workflows that honor and exploit the full range of individual acquisition azimuths, it is possible to gain a better understanding of anisotropy. The integration of azimuth-rich seismic data with other non-seismic measurements provides additional insight that allows more tightly constrained models to be built. Data can then be migrated more precisely using high-end algorithms such as reverse time migration (RTM).

Another important and challenging question concerns the axis of symmetry for anisotropy. Vertical transverse isotropy (VTI) and tilted transverse isotropy (TTI) are two commonly adopted modes of anisotropy. Here, the challenge is to identify the most likely TTI velocity model from a plethora of non-unique solutions by eliminating those that are geologically implausible. While model building for VTI is better understood and constrained than for TTI, under actual field conditions the VTI assumption for depth imaging can yield questionable results because the true axis of symmetry may not be vertical. Tectonic forces, metamorphism, and salt intrusions may have folded or twisted the beds or created a major geomechanical upheaval over time. Therefore, in TTI the axis of symmetry is most often assumed to be perpendicular to the bedding planes, in whatever attitude they presently repose. This is also referred to as structurally conformant transverse isotropy.
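
As a minimal sketch of the structurally conformant assumption, the symmetry axis at each image point can be taken as the local bedding normal computed from interpreted dip and dip azimuth. The Python snippet below is illustrative only; the function name and the coordinate convention (x = east, y = north, z = up, azimuth clockwise from north) are assumptions, not any particular vendor’s implementation.

```python
import numpy as np

def tti_symmetry_axis(dip_deg, dip_azimuth_deg):
    """Return the unit symmetry-axis vector (bedding normal) for
    structurally conformant TTI, given bedding dip and dip azimuth.

    dip_deg         : dip of the bedding plane from horizontal (degrees)
    dip_azimuth_deg : azimuth of the dip direction, clockwise from north (degrees)
    """
    dip = np.radians(dip_deg)
    azi = np.radians(dip_azimuth_deg)
    # The bedding normal is tilted from vertical by the dip angle,
    # within the vertical plane containing the dip direction.
    ax = np.sin(dip) * np.sin(azi)   # east component
    ay = np.sin(dip) * np.cos(azi)   # north component
    az = np.cos(dip)                 # vertical component
    return np.array([ax, ay, az])

# Flat bedding (zero dip) recovers the vertical axis of VTI.
print(tti_symmetry_axis(0.0, 0.0))    # [0. 0. 1.]
# A bed dipping 30 degrees toward the east tilts the axis accordingly.
print(tti_symmetry_axis(30.0, 90.0))  # ~[0.5 0. 0.866]
```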

The solution?

Reliable interpretation requires top-quality final images, but final images need more than accurate migration algorithms; they also require accurate velocity models. Building an accurate velocity model can be extremely challenging, yet all the migration techniques in the world are wasted if the velocity field is wrong.

Early imaging specialists gave up on the idea of developing a true velocity model and used gradient-based methods to deliver a single solution, which was, at best, an approximation. However, due to the challenges of imaging in complex environments (for example, subsalt), the early methods produced results that were not even good approximations. As technology advanced, the precision of velocity models improved, but due to the inherent non-uniqueness in the seismic experiment, it must be accepted that there are countless variants of an earth velocity model that fit the seismic data we acquired. So which model should be used?
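
A classic illustration of this non-uniqueness is the trade-off between vertical velocity and anisotropy. For a horizontal VTI layer, short-spread P-wave moveout is governed by the NMO velocity Vnmo = Vp0·sqrt(1 + 2δ) in Thomsen notation, so quite different combinations of vertical velocity Vp0 and δ reproduce the same surface moveout. The sketch below is a toy example; the numbers are hypothetical.

```python
import numpy as np

# Short-spread NMO velocity of a horizontal VTI layer (Thomsen notation):
#   V_nmo = V_p0 * sqrt(1 + 2*delta)
# Different (V_p0, delta) pairs can therefore match the same surface moveout.

def nmo_velocity(vp0, delta):
    return vp0 * np.sqrt(1.0 + 2.0 * delta)

# Three hypothetical models that fit the same surface data:
models = [
    (3000.0, 0.000),   # isotropic, 3000 m/s vertical velocity
    (2887.0, 0.040),   # mildly anisotropic, slower vertical velocity
    (2722.0, 0.107),   # stronger anisotropy, slower still
]

for vp0, delta in models:
    print(f"Vp0 = {vp0:6.0f} m/s, delta = {delta:5.3f} -> "
          f"Vnmo = {nmo_velocity(vp0, delta):6.0f} m/s")

# All three yield Vnmo of about 3000 m/s, yet they place reflectors at
# different depths; well data are needed to break the tie.
```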

The focus has now shifted from attempting to provide a high-quality subsurface image to providing a velocity model, with accompanying depth image, that is as close as possible to the actual subsurface structure. Time-to-depth conversion is not new. Check shots have been taken since the earliest days of exploratory drilling to help tie seismic events to measured depths. Vertical seismic profiles help to extend the time-depth conversion out from the borehole with some success. But the ubiquitous and non-unique nature of anisotropy defied all earlier attempts to tame its effects. Even with well data, geoscientists have been unable to completely constrain the vertical velocity.
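
As a reminder of what a check-shot tie actually does, the sketch below interpolates time-depth pairs to convert a horizon picked in two-way time into depth. The values are invented purely for illustration.

```python
import numpy as np

# Hypothetical check-shot pairs: one-way time (s) vs. measured depth (m).
cs_time  = np.array([0.0, 0.25, 0.50, 0.80, 1.10])
cs_depth = np.array([0.0, 450., 1000., 1800., 2700.])

def twt_to_depth(twt_s):
    """Convert two-way time (s) to depth (m) by linear interpolation
    of the check-shot time-depth curve."""
    owt = np.asarray(twt_s) / 2.0          # two-way -> one-way time
    return np.interp(owt, cs_time, cs_depth)

# A horizon picked at 1.6 s two-way time ties to roughly 1800 m.
print(twt_to_depth(1.6))
print(twt_to_depth([0.5, 1.0, 2.2]))
```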

After a decade of careful analysis, the consensus is that the best approach is to face up to the inevitability of anisotropy and use an alternative approach, one that acknowledges the non-uniqueness of the problem. Several solutions are derived that fit the data equally well, and the most geologically plausible one is chosen. Most of the time, this approach will be correct or very close to it.

Advanced workflows provide solutions you can trust

Often, the key to data analysis is not how much data are acquired, but what is done with the data. Using anisotropic tomography, augmented with available non-seismic information, TTI depth models have proven to yield more realistic descriptions of anisotropy than their isotropic or VTI equivalents. Wide- and full-azimuth data provide additional information that helps reduce ambiguity. With these tools, reliable TTI anisotropic calibration can be performed in deviated wells.

A new imaging workflow includes uncertainty analysis to help quantify and understand the underlying non-uniqueness of current solutions. With uncertainty analysis, model builders and interpreters can detect the non-uniqueness, explore the alternatives, and select the most geologically plausible one. Further analysis of the alternative equivalent models (also known as null-space analysis) can be used to reveal which additional data, if any, are needed to further constrain the model. Imaging uncertainty analysis also turns data into valuable information that supports decision-making and reduces risk.
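
The null-space idea can be made concrete with a toy linearized problem d = Gm: model perturbations lying in the null space of G change the model without changing the predicted data, and sampling along them generates equally well-fitting alternatives. The example below uses a made-up 2×3 sensitivity matrix purely for illustration, not any production tomography code.

```python
import numpy as np

# Toy linearized tomography: data = G @ model, with more unknowns than data,
# so the operator has a null space (model changes the data cannot see).
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])          # hypothetical 2 x 3 sensitivity matrix
m_ref = np.array([2.0, 1.0, 3.0])        # a reference model that fits the data
d_obs = G @ m_ref

# The SVD exposes the null space: right singular vectors beyond the rank.
U, s, Vt = np.linalg.svd(G)
null_vectors = Vt[len(s):]               # here G has full row rank 2, so the
                                         # last row of Vt spans the null space

# Perturbing the model along a null-space direction leaves the data unchanged:
for alpha in (-1.0, 0.5, 2.0):
    m_alt = m_ref + alpha * null_vectors[0]
    print(m_alt, "->", G @ m_alt)        # predicted data equal d_obs each time

# Equivalent models like m_alt all fit the observations; extra information
# (well data, geologic plausibility) is needed to choose among them.
```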

Advanced imaging techniques such as RTM, TTI anisotropy, and uncertainty analysis demand ever greater compute resources. New acquisition methodologies like coil shooting have led to an explosion in data volume that compounds the challenge. However, the compute power now available has made all these approaches viable as part of an overall imaging solution that is both time- and cost-effective. Looking ahead, the need for more accurate velocity models in ever more complex environments will continue to redefine the frontiers of imaging, and new technology will evolve to meet those needs. For example, full waveform inversion will address the demand for more automated, data-driven, high-resolution velocity models. There’s definitely more to come.