You may have read in the pages of this magazine articles about new approaches to data processing, primarily through the use of graphics processing units (GPUs). Those who promote this technology claim that GPUs can process more data faster than CPUs, which will eventually render the cluster solution and possibly even the supercomputer obsolete.

But most applications for seismic data processing were not designed to run on GPUs. Porting them as-is might yield some speed increase, but it's peanuts compared to the dizzying speeds gained by seismic data processing software written specifically for this new environment.

That’s the story behind Acceleware, a company that specializes in designing software for GPU processing purposes. Founded in 2004, the company first examined the market of electromagnetics, designing systems for cell phone designers. It next turned its attention to seismic data processing.

“Acceleware develops software that increases the speed of simulation and processing of large datasets,” said Charlee Forbrigger, marketing manager for Acceleware. “How do we do that? Using NVIDIA graphics cards.”

NVIDIA, one of the top two GPU manufacturers in the world, has teamed with Acceleware and in 2007 invested US $3 million into the company, becoming its largest shareholder. The combination of NVIDIA hardware and Acceleware software has led to an “appliance” that plugs into a desktop computer and essentially turns it into a supercomputer (larger units are also available).

Here’s how it works at the desktop level: one or more GPUs are added to the machine. GPUs are 128-way parallel processors, meaning that, as Steven Joachims, vice president of business development for Acceleware, said, “they have 128 brains versus your computer, which may have two brains if it’s a dual-core processor. If we put two GPUs in your high-performance computer, we’ve just added 256 brains of processing power.”
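The data-parallel model behind those extra “brains” can be sketched in plain Python. This is an illustration only, not Acceleware’s software: threads stand in for GPU cores, and `gain_correct` is a made-up per-trace operation.

```python
# Illustrative sketch: the same operation is applied to many seismic
# traces at once, the "many brains, one task" style GPUs excel at.
from concurrent.futures import ThreadPoolExecutor

def gain_correct(trace):
    # Hypothetical per-trace operation; a real flow might run a
    # filtering or migration kernel here instead of a simple scaling.
    return [2 * sample for sample in trace]

def process_batch(traces, workers=4):
    # Each worker applies the identical operation to a different trace.
    # On a GPU, hundreds of cores would do this simultaneously.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(gain_correct, traces))

print(process_batch([[1, 2, 3], [4, 5, 6]]))
# [[2, 4, 6], [8, 10, 12]]
```

The point of the sketch is the shape of the workload, not the speed: because every trace gets the same treatment independently, adding more workers (or GPU cores) scales the throughput without changing the result.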

Not surprisingly, this leads to fairly significant performance gains. For instance, a simple deskside setup like the one described above runs eight times faster than a 4X quad-core computer. But there are other advantages as well. Unwilling to make unsubstantiated claims about total cost of ownership, Joachims settled on two easily measured variables — cost of power and cost of air conditioning to keep the units running smoothly.

“If you and I had processing centers, and you’d built up your system with industry-standard servers and clusters and could run so many traces per hour, and I built a solution that had the same capacity but was using our solution, my shop would burn 75% less power and AC than yours,” he said. Other variables that are harder to measure but would also provide cost savings would include the ability to have a smaller data center, less inventory, less downtime and fewer employees. The deskside solution would also allow the geophysicists to get more done at their desks and not vie for resources with other processors.

Finally, the solution, he said, leads to better accuracy and data quality. The reason is that 128 processors can all be run simultaneously to look at potential scenarios, whereas a single processor can only look at one at a time. “We actually compute 128 alternatives simultaneously, and we don’t choose the result we want until we’re at the end of the computation,” he said. “We can compute 128 possible alternatives, choose the best and throw the other 127 out. And it didn’t cost us anything to do those extra 127.”
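That “compute everything, keep the winner” pattern can be sketched as follows. This is a hedged illustration, not the company’s code: `quality` is a made-up scoring function, and threads again stand in for GPU cores.

```python
# Sketch of evaluating many alternatives at once and selecting the
# best only after all results are in.
from concurrent.futures import ThreadPoolExecutor

def quality(candidate):
    # Hypothetical score; a real processing flow might measure
    # residual error or image focus for each alternative.
    return -abs(candidate - 42)

def best_alternative(candidates):
    candidates = list(candidates)
    # Score every alternative "simultaneously"...
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(quality, candidates))
    # ...then keep the winner and discard the other N-1.
    return candidates[scores.index(max(scores))]

print(best_alternative(range(128)))  # 42
```

On a serial machine, trying all 128 alternatives costs 128 times as much as trying one; on a 128-way parallel device they finish in roughly the time of one, which is the “didn’t cost us anything” point in the quote above.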

Many companies are already using NVIDIA technology for their data processing, which raises the question of what Acceleware brings to the table. Simple, Joachims said — his company is focusing on wringing every extra bit of acceleration out of the hardware, not adding all of the workflow bells and whistles that a standard processing flow entails. “Our designers live and breathe the hardware,” he said. “The application programmers think like geophysicists. Our guys are the mechanics; they’re the drivers. They can get to where they’re going, but they’re not going to change the pistons.”

Acceleware can be used as a stand-alone product or in conjunction with the popular processing programs on the market today, he added.

Next on the company’s list is the medical industry, where it will introduce a Data Acceleration Solution for image reconstruction. Somewhere in the not-too-distant future will be that most compute-intensive of challenges — reservoir simulation. Joachims said that at this point his system is geared to 32-bit processing, while reservoir simulation does better in the 64-bit environment. But NVIDIA is already gearing its hardware to meet this challenge.
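Assuming the 32-bit/64-bit distinction here refers to floating-point precision, the gap is easy to demonstrate: a 32-bit float carries roughly 7 significant decimal digits versus roughly 16 for a 64-bit float, so small terms that survive in double precision can vanish entirely in single precision.

```python
# Demonstrate the precision gap using only the standard library:
# struct lets us round a Python (64-bit) float to 32-bit precision.
import struct

def to_float32(x):
    # Pack to a 4-byte IEEE 754 single, then unpack: the round trip
    # rounds x to the nearest representable 32-bit value.
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 1.0, 1e-8

# In 64-bit arithmetic the tiny term is preserved...
print(a + b == a)  # False
# ...but in 32-bit arithmetic it is rounded away entirely.
print(to_float32(to_float32(a) + to_float32(b)) == to_float32(a))  # True
```

A long-running simulation compounds such rounding at every step, which is one plausible reason reservoir simulation favors the 64-bit environment the article mentions.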

“NVIDIA is investing $400 million a year in research and development, and Acceleware is leveraging that technology to speed up simulation times, processing times and reconstruction times,” said Forbrigger.