LIBRA (GPU) Cluster



These systems belong to the category of cluster computing, in which the user splits a job into a number of pieces that are executed on a number of processors. The user relies on parallel processing to run his or her applications. The IP address of Libra is .

What is GPU Computing?

The GPU accelerates applications running on the CPU by offloading some of the compute-intensive and time-consuming portions of the code. The rest of the application still runs on the CPU. From a user's perspective, the application simply runs faster, because it uses the massively parallel processing power of the GPU to boost performance. This is known as "heterogeneous" or "hybrid" computing.
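The offloading pattern described above can be sketched in CUDA (the program names, sizes, and launch parameters here are illustrative, not part of the Libra configuration): the CPU prepares the data, ships it to the GPU, launches a kernel for the compute-intensive part, and copies the result back.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: runs on the GPU; each thread handles one element pair.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // illustrative problem size
    size_t bytes = n * sizeof(float);

    // Host (CPU) side: prepare the input data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device (GPU) side: allocate memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive loop to the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back; the rest of the application continues on the CPU.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[100] = %f\n", h_c[100]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Note how the CPU and GPU cooperate: the serial setup and I/O stay on the host, while only the data-parallel loop is moved to the device.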

A CPU typically consists of four to eight cores, while a GPU consists of hundreds of smaller cores. Together, they crunch through the data in the application. This massively parallel architecture is what gives the GPU its high compute performance. A growing number of GPU-accelerated applications provide an easy way to access high-performance computing (HPC).


[Figure: Core comparison between a CPU and a GPU]

Tesla GPUs are designed as computational accelerators, or companion processors, optimized for scientific and technical computing applications. The latest Tesla 20-series GPUs are based on the latest implementation of the CUDA platform, called the "Fermi architecture". Fermi provides key computing features such as 500+ gigaflops of IEEE-standard double-precision floating-point performance in hardware, L1 and L2 caches, ECC memory error protection, local user-managed data caches in the form of shared memory distributed throughout the GPU, coalesced memory accesses, and more.
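The "user-managed data cache" mentioned above is exposed in CUDA as __shared__ memory. A minimal sketch (kernel and tile size are illustrative assumptions, not tied to any particular Tesla model): each thread block stages a tile of input into fast on-chip shared memory, then performs a tree reduction there instead of repeatedly reading slow global memory.

```cuda
#include <cuda_runtime.h>

// Each block of 256 threads sums 256 input elements using shared memory,
// the on-chip user-managed cache that Fermi-class GPUs provide.
__global__ void blockSum(const float *in, float *out) {
    __shared__ float tile[256];           // one tile per block, on-chip
    int tid = threadIdx.x;

    // Stage one element per thread from global memory into the tile.
    tile[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                      // wait until the whole tile is loaded

    // Tree reduction entirely in shared memory (fast on-chip accesses).
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }

    // Thread 0 writes the block's partial sum back to global memory.
    if (tid == 0) out[blockIdx.x] = tile[0];
}
```

Because every element is read from global memory exactly once and all intermediate traffic stays in shared memory, this pattern is the standard way to exploit the caches Fermi introduced.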

Why Are GPUs So Fast?

The GPU was originally specialized for math-intensive, highly parallel computation, so more of its transistors can be devoted to data processing rather than to data caching and flow control.