Conventional supercomputers are monolithic number-crunchers, capable of processing vast amounts of data quickly. To achieve this, they tend to use several processors working in parallel and sharing a single memory. These need to be incorporated into an expensive, complex architecture, running special software that enables processing tasks to be distributed amongst the processors efficiently. As the market for such machines is relatively limited, it’s not surprising that the individual components used can be expensive.
The key technology, however, is not the array of processors at the heart of a supercomputer. The crucial success factor is the ability to make those processors work together reliably. Modern data communications, messaging and networking technology is significantly faster and more dependable than earlier generations, which means that processors no longer need to be in close physical proximity in order to work together as a supercomputer. Processors can now be clustered together to deliver supercomputer levels of performance.
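The division-of-labour idea behind clustering can be sketched in miniature with Python's standard multiprocessing module. A real cluster would spread the workers across separate machines and coordinate them with a network messaging library (MPI is one common choice), but the principle of splitting one job into chunks and combining the partial results is the same. The worker count, chunk scheme and the sum-of-squares job are illustrative assumptions, not a real cluster workload.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'processor' computes the sum of squares over its own slice."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

def clustered_sum_of_squares(n, workers=4):
    """Split the range [0, n) into chunks and farm them out in parallel."""
    step = n // workers
    # The last chunk absorbs any remainder so the whole range is covered.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(clustered_sum_of_squares(1_000_000))
```

The parallel answer is identical to what a single processor would produce; only the elapsed time changes as workers are added.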
Even better, the processors in a clustering solution do not have to be purpose-designed. They can be the same processors found in an off-the-shelf desktop or laptop computer. Because these are made for the mass market, they tend to be cheaper than processors designed solely for the limited supercomputer market.
The price/performance benefits are enormous: for the same outlay, a clustering solution delivers around ten times the computing power of a conventional supercomputer.
Cluster computing is an affordable way of dedicating supercomputer performance to a task. Another option is to use the spare processing capacity in your existing network to create a virtual supercomputer. This option – called ‘grid computing’ – depends on advanced technology to manage your computing tasks efficiently and reliably.
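The grid idea, where whichever machine happens to be idle picks up the next piece of work, can be sketched as a shared task queue. This toy version uses threads on one machine purely for illustration; in a real grid the "workers" would be spare desktop PCs on the network, and the management software would also handle failures, scheduling priorities and security. The worker count and the task format are assumptions made for the sketch.

```python
import queue
import threading

def run_grid(tasks, workers=3):
    """Toy 'grid': idle workers repeatedly pull tasks from a shared queue.

    Each task is a (function, argument) pair; results are collected in
    whatever order the workers finish them.
    """
    todo = queue.Queue()
    results = []
    lock = threading.Lock()

    for task in tasks:
        todo.put(task)

    def worker():
        while True:
            try:
                func, arg = todo.get_nowait()
            except queue.Empty:
                return  # no work left: this worker goes back to being idle
            outcome = func(arg)
            with lock:
                results.append(outcome)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

For example, `run_grid([(lambda x: x * x, i) for i in range(5)])` returns the five squares, though not necessarily in order, since each result arrives whenever some worker frees up.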