[Editor's note: A version of this story appears in the September 2020 edition of E&P Plus.]  

Oil and gas (O&G) companies continuously strive to optimize overall equipment effectiveness, performance and profitability within a highly volatile and regulated environment. Many of those regulations stem from a growing industry effort to reduce emissions that affect both health and the environment, and initiatives from industry and governmental groups are multiplying. For example, O&G companies operating on the U.K. Continental Shelf have committed to reducing carbon emissions to net zero by 2050. Another example is the decarbonization effort led by the European Commission, which aims to drive the transition toward “a climate-neutral economy” by 2050. This will require the active involvement and investment of industry, technology and governmental sectors for mid- and long-term solutions.

To start getting results today, the key is to take advantage of underutilized data in combination with the process expertise already in place, improving process workflows for better and more efficient emissions control and reduction.

There is an increasing need to exploit the large volumes of data generated by sensors, instruments and assets. Traditional Big Data solutions require complex IT projects and data scientists to build and maintain models. Aside from being costly and time-consuming, this way of working can also create resource bottlenecks in the organization and underutilize the process and asset experts. Turning big industrial data into actionable information might seem like a huge task, but self-service industrial analytics makes it easy for process engineers to optimize processes themselves. Results are delivered fast, directly into the hands of the process experts who can provide meaningful interpretations of the data, allowing them to uncover insights at all levels of production and improve day-to-day decision-making.

Improving performance

Sulfur recovery units (SRUs) are becoming increasingly important, not only due to the rising demand for sulfur in various applications but mainly due to the growing concern and number of regulations around emissions control and climate change. SRUs typically include burners, catalytic stages, often a Superclaus unit and a tail gas incinerator. The Claus process partially burns the H2S, then catalytically converts the remaining H2S and the SO2 combustion product to elemental sulfur and water vapor. One of the most important process key performance indicators is sulfur recovery. Low process efficiency (in this case, lower than 99.2%) results in lower sulfur recovery and unprocessed, unwanted H2S and sulfur dioxide emissions.
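As a minimal illustration (not taken from the article), the recovery KPI can be expressed as the fraction of sulfur entering with the acid-gas feed that leaves the unit as elemental product; the function and the example numbers below are hypothetical.

```python
def recovery_efficiency(sulfur_in_feed_tpd, sulfur_recovered_tpd):
    """Sulfur recovery efficiency in percent.

    sulfur_in_feed_tpd: metric tons/day of sulfur entering with the acid gas
    sulfur_recovered_tpd: metric tons/day of elemental sulfur produced
    """
    if sulfur_in_feed_tpd <= 0:
        raise ValueError("feed must be positive")
    return 100.0 * sulfur_recovered_tpd / sulfur_in_feed_tpd

# A unit recovering 119.2 t/d out of a 120 t/d sulfur feed:
eff = recovery_efficiency(120.0, 119.2)
# eff is about 99.33, just above the 99.2% threshold mentioned in the text
```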

Figure 1
FIGURE 1. This Gantt chart shows low recovery periods longer than 20 minutes, in combination with other operational contextual information to speed up root cause analysis. (Source: TrendMiner)

Analyzing the data

A prerequisite for data analytics is to have the data readily available through a live connection to the historian, so that tags can be visualized automatically in user-friendly trend views. The next step is to explore the data and search for specific process events in the SRU across multiple years of data. With modern self-service industrial analytics software, the process expert can focus on discovering the periods of low sulfur recovery over the last two years.

The process expert will search for and visualize the low-recovery periods, focusing on the behavior of the H2S content. The recovered sulfur depends strongly on the H2S content measured before the Superclaus unit: whenever the online H2S analyzer shows sudden increases, the sulfur recovery rate decreases. In this use case, the search returned 15 periods of low recovery, nine of which showed a similar increase pattern in the H2S content, even when the value was still under the target of 0.8% vol, which was typically managed by a feedback control system to recover the plant’s sulfur at maximum efficiency.
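The kind of value-based search described above can be sketched in a few lines. This is a simplified stand-in, not the vendor's implementation: it scans a regularly sampled recovery series for contiguous runs below the threshold that last at least 20 minutes, with made-up data.

```python
def low_recovery_periods(samples, threshold=99.2, min_minutes=20, step_minutes=1):
    """Return (start_index, end_index) pairs of contiguous runs where
    recovery stays below `threshold` for at least `min_minutes`.

    samples: recovery efficiency values at a fixed `step_minutes` cadence.
    """
    periods, run_start = [], None
    for i, value in enumerate(samples):
        if value < threshold:
            if run_start is None:
                run_start = i          # a low-recovery run begins here
        elif run_start is not None:
            if (i - run_start) * step_minutes >= min_minutes:
                periods.append((run_start, i - 1))
            run_start = None           # run ended, reset
    # close out a run that continues to the end of the data
    if run_start is not None and (len(samples) - run_start) * step_minutes >= min_minutes:
        periods.append((run_start, len(samples) - 1))
    return periods

# 1-minute samples: a 5-minute dip (ignored) and a 25-minute dip (kept)
series = [99.5] * 10 + [99.0] * 5 + [99.5] * 10 + [98.8] * 25 + [99.5] * 5
print(low_recovery_periods(series))  # [(25, 49)]
```

In practice the same search would run against two years of historian data, returning the 15 periods the text mentions.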

The process expert decided to set up a monitor to follow the pattern of sudden increases in the H2S content, independent of the absolute value. Through patented pattern recognition technology, the process expert was able to identify this particular behavior for periods longer than 20 minutes. By saving the search, the H2S content behavior can be monitored in real time. Each time a user-configurable percentage of similarity is matched, an alert is sent by email to the operator, who can take appropriate measures to control the process.
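The pattern recognition technology itself is proprietary, but the idea of matching a shape independent of absolute level can be sketched with a plain Pearson correlation between the latest window of readings and a saved reference pattern. Everything below (the similarity measure, the threshold, the example values) is an assumed stand-in, not the actual algorithm.

```python
from statistics import mean

def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if flat)."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def matches_pattern(window, reference, similarity=0.9):
    """True when the latest window of H2S readings resembles the saved
    'sudden increase' reference shape, independent of absolute level."""
    return pearson(window, reference) >= similarity

reference = [0.40, 0.42, 0.50, 0.62, 0.75]   # saved increase pattern, % vol
live = [0.30, 0.33, 0.41, 0.52, 0.66]        # same shape, lower level
print(matches_pattern(live, reference))      # True
```

Because correlation is insensitive to offset and scale, the monitor fires on the increase pattern even when the absolute H2S value is still under the 0.8% vol target, which is exactly the behavior the text describes.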

Context accelerates analysis

The created monitor, running in the background, can be used to capture specific low-recovery events, which can be combined with operational contextual data from other systems. The Gantt chart view (Figure 1) gives users a quick overview of information from the manufacturing execution system on the operational status of the unit, in combination with maintenance periods, operator manual entries of high skin temperature in the reactor furnace and automatic log entries coming from the monitor of sudden increases in the H2S content.

All this information can be used to create an analytics-driven production cockpit (Figure 2). Within a user-defined time frame (in this case one week) containing the live status of the H2S content as an alert, a quick overview is provided of all relevant time-series data: the sulfur recovery in mt/d (metric tons per day), the total steam production in lb/h and several temperature measurements upstream of the Superclaus unit. Lastly, the operator has access to a counter that shows the history of the H2S content alert over a given period of time.

Figure 2
FIGURE 2. An SRU production cockpit visualizes the operational performance of the unit, including the alert tile showing four  low-sulfur recovery events. (Source: TrendMiner)

In this use case, with the dashboard in place in the control room, an increase in H2S was detected for more than 20 minutes, triggering the H2S status alert. During the shift handover, it was decided to investigate the issue further with the self-service analytics software. With just one click on the alert tile, the engineers move to the time-series data universe to start a root cause analysis with the time frame and tags of interest already loaded.

Since the issue is not immediately clear from the tags around the H2S content analyzer, it is decided to look further upstream. Instead of trial and error, the self-service analytics software can suggest root causes through its recommender engine. In this use case, the recommender engine suggests a strong negative correlation between the operating temperature of the first Claus unit and the H2S content value. An immediate call to action to bring the process and the recovery back on track is to check for fluctuations in the sulfur flow and steam around the first Claus unit and/or increase its inlet process gas temperature.
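One simple way to picture what such a recommender engine does, without claiming this is how the product works, is to rank candidate tags by the strength of their correlation with the target tag. The tag names and series below are invented for illustration.

```python
from statistics import mean

def correlation(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if flat)."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def rank_candidates(target, candidates):
    """Rank candidate tags by |correlation| with the target tag,
    mimicking a correlation-based root cause recommendation."""
    scores = {tag: correlation(series, target) for tag, series in candidates.items()}
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

h2s = [0.40, 0.45, 0.55, 0.70, 0.85]            # rising H2S content, % vol
tags = {
    "claus1_temp": [310, 305, 296, 284, 270],   # falling operating temperature
    "feed_flow":   [100, 101, 99, 100, 101],    # roughly flat
}
ranked = rank_candidates(h2s, tags)
print(ranked[0][0])  # 'claus1_temp' tops the list with a strong negative score
```

The top-ranked tag with a strongly negative score corresponds to the finding in the text: as the operating temperature of the first Claus unit drops, the H2S content climbs.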


Closing the analytics loop

With the low operating temperature of the first Claus unit identified as the root cause of the increase in H2S content, and therefore of the low sulfur recovery during the last shift, it is time to look into the last four H2S increase events from the past week. Looking into the data for those periods lets the expert conclude that there is a more consistent problem with the operating temperature of the first Claus unit. A deeper analysis of the flow measurements and control loop tuning takes place, but no immediate deviation is spotted.

To complement the subsequent process discussions, a quick check with the recommendation engine across all four periods confirms the hypothesis that the root cause could be on the utility (steam) side of the SRU. Compared with the process side of the unit, the utility side is frequently neglected, both in the details of the conceptual design and in normal day-to-day operation.

The process team then focuses on heat loss on the steam side and, after a couple of field checks, finds a problem with one of the steam traps. This resulted in a monitor set on the operating temperature of the first Claus unit, with the recommendation for the operator to look into the steam side and, more particularly, the steam traps around it: manually checking the temperature measurement at the inflow of the trap and at its condensate side. As an extra outcome of the analysis, it is decided to double-check the preventive maintenance program for the steam trap.

The self-service analytics tool has helped the process expert easily visualize and monitor the process, assess the size of the problem, drill down to a series of root causes and finally set up a monitor to prevent the issue from recurring. There was no need for a long multidisciplinary data analytics project; the process engineers could do all the data analytics themselves, including the use of contextual data from other business applications, and easily create a production cockpit to monitor, control and improve the process. In this way, it helps reduce emissions, maintenance costs and production losses.