Speaker: Prof. Bernard Goossens, University of Perpignan, France
Title: How to Compute on a Manycore Processor

Manycore processors are marking time: after ten years of existence they barely reach a hundred cores, whereas Moore's law suggests we should have more than a thousand. GPUs, by comparison, already offer more than 5000 SP+DP cores. In this talk I will argue that the stagnation mainly comes from needless complexity in the memory system (tens of MB of on-chip memory where a GPU uses only a few MB) and in the interconnect (a NoC or a ring, whereas GPU cores are simply abutted).

I will take inventory of the hardware needed to compute in parallel, insisting on the importance of determinism and the uselessness of memory, and pointing out a favoured communication direction: from the cause to the effect of a causality. I will describe the design of a parallelizing core, built to be combined with itself to form a 3000-core processor. The core design is simple because it embeds almost no memory and is connected only to two neighbours.

On the software side, I will present a new parallel programming model based not on OS-thread parallelization but on hardware parallelization relying on a new "fork" machine instruction added to the ISA. I will present patterns to parallelize the control structures of imperative languages: functions, for and while loops, and reductions. I will show how such patterns can be used to parallelize classical C functions and how the created threads populate the available hardware thread slots in the processor cores.

The hardware does not use memory. Instead, each core uses a set of registers and functional units, which are enough to compute scalars from scalars. The programming model avoids data structures: no arrays, no structures, no pointers, no lists. A parallel computation gets the elements of structured data from parallel inputs and sends the computed elements of structured data to parallel outputs; inside the computation, only scalars are handled.
The proposed parallel programming model is deterministic: the semantics of a parallel execution is given by a referential sequential order. Hence, running the code sequentially or in parallel produces the same result, and testing and debugging a parallel program is as easy as testing and debugging a sequential run. The model has strong connections with the functional programming paradigm, through the composition of side-effect-free functions.