Posted on 2018-10-02, 11:09. Authored by Tim Süß, Nils Döring, André Brinkmann and Lars Nagel.
The internal parallelism of compute resources continues to increase, and graphics processing units (GPUs) and other accelerators have been gaining importance in many domains. Researchers in life science, bioinformatics, and artificial intelligence, for example, use GPUs to accelerate their computations. However, the languages typically used in some of these disciplines often do not benefit from these technical developments because they cannot be executed natively on GPUs; instead, existing programs must be rewritten in other, less dynamic programming languages. At the same time, the gap in programming features between accelerators and common CPUs continues to shrink. Since accelerators are becoming more competitive with regard to general computations, they will not remain mere special-purpose processors in the future. It is a reasonable assumption that future GPU generations can be used in a similar or even the same way as CPUs and that compilers or interpreters will be needed for a wider range of programming languages. We present CuLi, an interactive Lisp interpreter that performs all computations on a CUDA-capable GPU; the host system is needed only for input and output. At the moment, Lisp programs running on CPUs outperform Lisp programs on GPUs, but we present trends indicating that this might change in the future. Our study gives an outlook on the possibility of running Lisp programs and other dynamic programming languages on next-generation accelerators.
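The abstract describes the architecture only at a high level, but a minimal CUDA sketch can illustrate the idea of a GPU-resident interpreter where the host handles only input and output. Everything below is a hypothetical illustration, not CuLi's actual code: the token encoding, the eval and interpret functions, and the restriction to a toy arithmetic S-expression are all assumptions made for brevity.

```cuda
// Hypothetical sketch of a GPU-resident expression evaluator (not CuLi itself):
// the host only transfers an encoded Lisp expression and reads back the result;
// all evaluation happens inside a CUDA kernel.
#include <cstdio>
#include <cuda_runtime.h>

// Tiny prefix encoding: negative values mark operators, others are literals.
enum { OP_ADD = -1, OP_MUL = -2 };

// Recursively evaluate the prefix expression starting at *pos (device side).
__device__ int eval(const int* expr, int* pos) {
    int tok = expr[(*pos)++];
    if (tok == OP_ADD) { int a = eval(expr, pos); int b = eval(expr, pos); return a + b; }
    if (tok == OP_MUL) { int a = eval(expr, pos); int b = eval(expr, pos); return a * b; }
    return tok;  // literal value
}

__global__ void interpret(const int* expr, int* result) {
    int pos = 0;
    *result = eval(expr, &pos);  // single-thread evaluation of one expression
}

int main() {
    int expr[] = { OP_ADD, 1, OP_MUL, 2, 3 };   // encodes (+ 1 (* 2 3))
    int *d_expr, *d_result, result = 0;
    cudaMalloc(&d_expr, sizeof(expr));
    cudaMalloc(&d_result, sizeof(int));
    cudaMemcpy(d_expr, expr, sizeof(expr), cudaMemcpyHostToDevice);
    interpret<<<1, 1>>>(d_expr, d_result);      // evaluation runs on the GPU
    cudaMemcpy(&result, d_result, sizeof(int), cudaMemcpyDeviceToHost);
    printf("(+ 1 (* 2 3)) => %d\n", result);    // host only performs I/O
    cudaFree(d_expr);
    cudaFree(d_result);
    return 0;
}
```

Even this toy version shows the division of labor the abstract describes: the CPU never evaluates anything, it only moves data to and from the device, which is what allows the interpreter itself to live on the accelerator.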
History
School: Science
Department: Computer Science
Published in: IEEE International Conference on Cluster Computing, CLUSTER 2018
Citation: SUSS, T. ... et al., 2018. And now for something completely different: running Lisp on GPUs. Presented at the 2018 IEEE International Conference on Cluster Computing (CLUSTER), Belfast, UK, 10-13 September 2018, pp. 434-444.