The oil and gas industry has a huge brain trust of very bright minds. But sometimes it helps to look outside for answers.
Such is the case with a recent research collaboration between Shell and IBM. Announced in February, this collaboration is intended to extend the life of oil and natural gas fields by finding better ways to integrate geophysical and reservoir engineering field data by applying improved algorithms, analytics, and accelerated simulations. Simply put, the concept of history matching in reservoir simulation will now be reformulated and automated.

For those who thrive on iterating and reiterating potential scenarios until one sort of matches the production history, this might be bad news. For the vast majority of companies that jump through these ridiculous hoops hoping to find the right solution, it’s very good news indeed. But it’s also a tall order.
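To make the trial-and-error concrete: history matching is an inverse problem in which uncertain reservoir parameters are adjusted until the simulator reproduces the observed production. Here is a minimal sketch, with a toy decline curve standing in for a real reservoir simulator and every parameter name and number purely illustrative:

```python
import numpy as np

def simulate_production(perm, poro, t):
    """Toy stand-in for a reservoir simulator: an exponential decline
    whose shape depends on the candidate parameters."""
    return 1000.0 * np.exp(-t * perm / (poro * 100.0))

def misfit(params, t, observed):
    """Sum-of-squares mismatch between simulated and observed rates."""
    perm, poro = params
    return float(np.sum((simulate_production(perm, poro, t) - observed) ** 2))

# Synthetic "production history" to match against.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 50)
observed = simulate_production(5.0, 0.25, t) + rng.normal(0.0, 5.0, t.size)

# The iterate-and-reiterate loop, here as a crude random search over
# candidate (permeability, porosity) pairs.
best, best_err = None, np.inf
for _ in range(2000):
    candidate = (rng.uniform(1.0, 10.0), rng.uniform(0.1, 0.4))
    err = misfit(candidate, t, observed)
    if err < best_err:
        best, best_err = candidate, err

print(f"best match: perm={best[0]:.2f}, poro={best[1]:.2f}, misfit={best_err:.1f}")
```

Real history matching searches far larger parameter spaces with simulations that take hours per run, which is why the manual version consumes months.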

The inception
IBM has devised a unique system of finding collaborative opportunities that it calls “Innovation Discovery.” According to Ulisses Mello, manager of petroleum and energy analytics for IBM, the process involves meeting with C-level executives from various industries to identify their “pain points” and see whether IBM can bring its skills to bear on them, particularly when they align with IBM’s research agenda.

In this case, the alignment was perfect. “We have a very strong push in the direction of business analytics and optimization,” Mello said. “This project fits very well with that. Companies in the digital oil field are collecting more and more data, they’re collecting time-lapse data, and there’s an opportunity to do analytics on the data itself to reduce costs and loss of production.”

The collaboration is intended to integrate the dynamic data from wells with the static time-lapse seismic data using formal inversion processes. “It’s like seismic history-matching,” Mello said. “They can match not only the dynamic data but also the seismic data using a more formal approach to incorporate the latest innovations in inversion.” These innovations include both computational and algorithmic advances, such as model reduction, he added.
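A hedged sketch of what “a more formal approach” could look like: fold both data types into a single weighted objective that an inversion algorithm can minimize. The forward operators below are placeholders for a reservoir simulator and a petro-elastic/seismic modeling chain; all names are assumptions, not IBM’s actual formulation:

```python
import numpy as np

def joint_misfit(model, d_prod, d_seis, g_prod, g_seis, w_prod=1.0, w_seis=1.0):
    """Weighted least-squares objective for one candidate model, combining
    dynamic well data with time-lapse seismic data. g_prod and g_seis are
    the forward operators (any callables here; in practice a reservoir
    simulator and a petro-elastic/seismic model)."""
    r_prod = g_prod(model) - d_prod   # production-data residual
    r_seis = g_seis(model) - d_seis   # seismic-data residual
    return w_prod * float(r_prod @ r_prod) + w_seis * float(r_seis @ r_seis)

# Trivial linear stand-ins for the forward operators:
m = np.array([2.0, 0.5])
print(joint_misfit(m,
                   d_prod=np.array([1.9, 0.6]),
                   d_seis=np.array([2.4]),
                   g_prod=lambda m: m,
                   g_seis=lambda m: np.array([m.sum()])))
```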

This project is expected to add analytics capabilities to a larger solution within IBM called “Integration Framework.” The oil and gas industry is notorious for operating with multiple standards, and IBM wants to develop a layer on top of those standards that allows seamless communication.

“For example, in an Energistics model, a well may have a different naming scheme than industry software uses, but if you have that integration layer on top, it doesn’t matter,” Mello said. For IBM’s purposes, developing this layer will make it possible to add analytical Web services so that data can be manipulated regardless of the database it comes from.
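A minimal sketch of the kind of mapping layer Mello describes, with all names hypothetical: each source naming scheme resolves to one canonical identifier, so an analytics service never needs to know which database a well came from.

```python
from dataclasses import dataclass, field

@dataclass
class WellNameRegistry:
    """Maps (naming scheme, local name) pairs to one canonical well ID."""
    _by_scheme: dict = field(default_factory=dict)

    def register(self, scheme: str, local_name: str, canonical_id: str) -> None:
        self._by_scheme.setdefault(scheme, {})[local_name] = canonical_id

    def canonical(self, scheme: str, local_name: str) -> str:
        return self._by_scheme[scheme][local_name]

registry = WellNameRegistry()
registry.register("energistics", "W-0001", "well:mars:a14")
registry.register("vendor_app", "MARS_A14", "well:mars:a14")

# Two naming schemes, one well:
assert (registry.canonical("energistics", "W-0001")
        == registry.canonical("vendor_app", "MARS_A14"))
```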

The hoped-for result
From Shell’s perspective, the goal is to address uncertainty in reservoir modeling. “We want to identify conceptually different geological model realizations that matter from a flow-relevant perspective,” said Paul van den Hoek, manager of quantitative reservoir management R&D at Shell. “It sounds logical, but it’s something we’ve never addressed before.”

A subsurface seismic view of Shell’s Mars field in the Gulf of Mexico; this image was captured in one of Shell’s 3-D seismic virtual environments. (Images courtesy of Shell)

Instead, he said, these problems are currently addressed through statistical uncertainty, where a parameter is assigned a distribution of values and model realizations are generated based on those distributions. “It’s a bit of a brute-force approach, and it has one other disadvantage,” he said. “You don’t allow yourself to think about conceptually different geological models to get different horizon or fault locations or different facies types.

“We take measurements when we develop reservoirs, when we drill wells, when we shoot seismic, but still that leaves a lot of room for many totally different realizations.”
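The brute-force statistical approach he describes can be pictured as a sampling loop, sketched below with illustrative parameter names and distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_realizations = 1000

# Assign each uncertain parameter a distribution, then draw one
# realization per sample (names and distributions are illustrative).
realizations = [
    {
        "mean_permeability_md": rng.lognormal(mean=5.0, sigma=0.5),
        "porosity": rng.normal(loc=0.22, scale=0.03),
        "oil_water_contact_m": rng.uniform(2450.0, 2520.0),
    }
    for _ in range(n_realizations)
]
print(f"generated {len(realizations)} realizations")
```

Note what never varies inside that loop: the geological concept itself. Horizon and fault locations and facies types stay fixed, which is exactly the limitation van den Hoek points to.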

The current approach can result in hundreds if not thousands of possible realizations, whereas the goal is to quickly turn data into information that results in an action. In other words, permeability in a distant corner of the reservoir probably doesn’t matter in plotting the next well location, but current methods would include that information.

“Shell tells us it takes months to do this model reduction or history-matching because it’s mostly a trial-and-error process,” Mello said. “We want to automate as much as we can to reduce the trial and error.”

Added van den Hoek, “We want to come up with systematic criteria to discriminate between these different model realizations so that we will be able to reduce a possible million (reservoir model realizations) to a manageable number.”
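One plausible way to build such criteria, offered here as an assumption rather than Shell’s actual method, is to cluster realizations by their simulated flow response and keep a single representative per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_realizations, n_timesteps, n_keep = 5000, 24, 10

# Stand-in for simulator output: one production profile per realization.
profiles = rng.normal(size=(n_realizations, n_timesteps))

km = KMeans(n_clusters=n_keep, n_init=10, random_state=1).fit(profiles)

# Representative = the realization closest to each cluster centroid.
representatives = [
    int(np.argmin(np.linalg.norm(profiles - c, axis=1)))
    for c in km.cluster_centers_
]
print(f"reduced {n_realizations} realizations to {len(representatives)}")
```

Two realizations that differ wildly in a flow-irrelevant corner of the reservoir land in the same cluster, so only one of them survives the cut.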

A spectacular image of some of the subsurface data available to Shell. In this picture, the bright colors represent subsalt oil and gas reservoirs.

The inclusion of time-lapse information is critical here, added Hans Potters, program manager for Shell’s reservoir surveillance technology team. “The mission is integration, and this increasingly needs to include time-lapse seismic data,” he said. “We know that geophysicists are adept at producing all sorts of color maps of seismic attributes. But those maps have very little, if any, meaning for the people closer to the valves.

“There’s a lot you can do in a qualitative sense with 4-D seismic, but we need to link that 4-D seismic more closely to the dynamic flow model. You can interpret seismic attributes, but you really need to bring this closer into the quantitative numbers that you need in a simulator.”
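The quantitative link Potters describes usually runs through a petro-elastic model that converts simulator outputs into predicted seismic responses. A deliberately simplified, linearized sketch (the coefficients and numbers are hypothetical, not a real rock-physics calibration):

```python
import numpy as np

def impedance_change(d_sw, d_p, a=0.12, b=-0.004):
    """Toy linearized petro-elastic proxy: fractional acoustic-impedance
    change from a water-saturation change d_sw and a pressure change
    d_p (MPa), per simulator cell. Coefficients are hypothetical."""
    return a * d_sw + b * d_p

# Simulator-predicted changes for a few cells between two surveys.
d_sw = np.array([0.30, 0.05, 0.00])
d_p = np.array([-2.0, -5.0, 1.0])

predicted = impedance_change(d_sw, d_p)
observed = np.array([0.045, 0.026, -0.004])   # from the 4-D seismic cube

print(f"4-D seismic misfit term: {np.sum((predicted - observed) ** 2):.6f}")
```

A misfit term like this is what lets the 4-D data enter the history-matching objective as hard numbers rather than color maps.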

He added that the cultural gap between these disciplines still exists and that a workflow integrating the disparate data seamlessly could actually improve communication between them.

Mello stressed time-lapse as well. “Let’s suppose you have a single well in your whole field,” he said. “You don’t have enough data, and because of that you may have multiple solutions for the same problem. But when you add the spatial and temporal information from the time-lapse, you are able to reduce the uncertainty.”
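A toy version of that argument, entirely synthetic: with one snapshot of well data, a whole family of models fits; add a second, later observation and most of the family is eliminated.

```python
import numpy as np

def rate(q0, decline, t):
    """Illustrative decline curve: initial rate q0, decline factor."""
    return q0 * np.exp(-decline * t)

rng = np.random.default_rng(7)
q0 = rng.uniform(500.0, 1500.0, 100_000)
decline = rng.uniform(0.05, 0.5, 100_000)

true_q0, true_decline = 1000.0, 0.2
obs_year1 = rate(true_q0, true_decline, 1.0)
obs_year5 = rate(true_q0, true_decline, 5.0)

tol = 10.0
fits_one = np.abs(rate(q0, decline, 1.0) - obs_year1) < tol
fits_both = fits_one & (np.abs(rate(q0, decline, 5.0) - obs_year5) < tol)

print(f"models consistent with one snapshot:   {int(fits_one.sum())}")
print(f"models consistent with both snapshots: {int(fits_both.sum())}")
```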

The end result
The collaboration is expected to last two years, with the possibility of a third year being tacked on if necessary. The companies have different goals for the final product: Shell hopes to use it for competitive advantage, while IBM hopes to leverage it to suit the needs of other industries.

An example of the type of modeling image Shell uses to represent a very complex set of data. The collaboration with IBM is expected to bring even greater clarity to reservoir modeling.

“The specific part we’re focusing on is identifying criteria to come up with these flow-relevant geological models,” said van den Hoek. Added Potters, “We’re not in a dream world. We have components here of isolated approaches that will need to be brought together.

“We have unique expertise in Shell in subsurface and also in data assimilation and knowing what’s important, and we have workflows and mathematical solutions for that. So does IBM, and that’s where we hope to get the benefit of both sides by bringing this together and comparing and contrasting and seeing what fits.”

For IBM, the work opens up a host of opportunities for solutions in other industries. For example, Mello said, a water network in a big city could incorporate a model for flow in pipes. “Smart” buildings could benefit from calculating the heat dissipation across multiple buildings. “If you know the thermal conductivity for the walls of every building, you can invert the best thermal conductivity to explain the observations,” he said.
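That building example is a classic linear inverse problem. A minimal sketch under a steady-state assumption (toy physics, hypothetical numbers): heat flux through a wall is q = k·ΔT/thickness, so measured fluxes and temperature differences yield a least-squares estimate of the conductivity k.

```python
import numpy as np

thickness = 0.3                          # wall thickness, m
delta_T = np.array([10.0, 15.0, 12.0])   # indoor-outdoor difference, K
q_obs = np.array([27.0, 39.5, 31.0])     # measured heat flux, W/m^2

# Linear model q = (delta_T / thickness) * k, solved for k.
A = (delta_T / thickness).reshape(-1, 1)
k_best, *_ = np.linalg.lstsq(A, q_obs, rcond=None)
print(f"best-fit thermal conductivity: {k_best[0]:.3f} W/(m·K)")
```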

Overall, he added, the hope is to find an analytical way to incorporate vast amounts of data and find the right answer. “This is part of a class of problems we call ‘the smarter planet,’” he said. “You have a lot of data, and you have models that can be statistical or deterministic. How do you find the best model that fits the data?”