Result: Cheap and Easy Parallelism for Matlab on Linux Clusters

Title:
Cheap and Easy Parallelism for Matlab on Linux Clusters
Contributors:
The Pennsylvania State University CiteSeerX Archives
Collection:
CiteSeerX
Document Type:
Journal article text
File Description:
application/pdf
Language:
English
Rights:
Metadata may be used without restrictions as long as the oai identifier remains attached to it.
Accession Number:
edsbas.FFBA0B2D
Database:
BASE

Further Information

Matlab is the most popular platform for rapid prototyping and development of scientific and engineering applications. A typical university computing lab will have Matlab installed on a set of networked Linux workstations. With the growing availability of distributed computing networks, many third-party software libraries have been developed to support parallel execution of Matlab programs in such a setting. These libraries typically run on top of a message-passing library, which can lead to a variety of complications and difficulties. One alternative, a distributed-computing toolkit from the makers of Matlab, is prohibitively expensive for many users. As a third alternative, we present PECON, a very small, easy-to-use Matlab class library that simplifies the task of parallelizing existing Matlab programs. PECON exploits Matlab's built-in Java Virtual Machine to pass data structures between a central client and several "compute servers" using sockets, thereby avoiding reliance on lower-level message-passing software or disk I/O. PECON is free, open-source software that runs "out of the box" without any additional installation or modification of system parameters. This arrangement makes it trivial to parallelize and run existing applications in which time is mainly spent on computing results from small amounts of data. We show how using PECON for one such application, a genetic algorithm for evolving cellular automata, leads to a linear reduction in execution time. Finally, we show an application, computing the Mandelbrot set, in which element-wise matrix computations can be performed in parallel, resulting in dramatic speedup.
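The Mandelbrot example works because escape-time computation is embarrassingly parallel: each pixel's iteration count depends only on its own coordinate, so blocks of the image can be farmed out to independent workers, mirroring the client/compute-server split the abstract describes. The following sketch (not PECON's API, and in Python rather than Matlab; `multiprocessing.Pool` stands in for PECON's socket-connected compute servers) illustrates the structure, with one row of the grid per task:

```python
# Illustrative sketch only: Python multiprocessing stands in for PECON's
# socket-based Matlab compute servers. Function and parameter names here
# are hypothetical, not part of PECON.
from multiprocessing import Pool

def mandelbrot_row(args):
    """Escape-time iteration counts for one row of the complex plane."""
    y, width, max_iter = args
    row = []
    for x in range(width):
        # Map pixel (x, y) to a point c in the region [-2, 1] x [-1.5, 1.5].
        c = complex(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / width)
        z = 0j
        n = 0
        while abs(z) <= 2.0 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def mandelbrot(width, max_iter=50, processes=4):
    """Compute a width x width escape-time grid, one row per worker task."""
    with Pool(processes) as pool:
        return pool.map(mandelbrot_row,
                        [(y, width, max_iter) for y in range(width)])

if __name__ == "__main__":
    grid = mandelbrot(64)
    print(len(grid), len(grid[0]))
```

Because the rows share no state, the master only ships small inputs (a coordinate and an iteration budget) and receives small outputs, which is exactly the "small data, heavy compute" regime in which the abstract reports near-linear and dramatic speedups.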