%0 Journal Article
%T How to Safely Integrate Multiple Applications on Embedded Many-Core Systems by Applying the "Correctness by Construction" Principle
%A Robert Hilbrich
%J Advances in Software Engineering
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/354274
%X Software-intensive embedded systems, especially cyber-physical systems, benefit from the additional performance and the small power envelope offered by many-core processors. Nevertheless, the adoption of a massively parallel processor architecture in the embedded domain is still challenging. Integrating multiple, potentially parallel functions on a chip, instead of just a single function, makes the best use of the resources offered. However, this multifunction approach leads to new technical and nontechnical challenges during integration. This is especially the case for a distributed system architecture, which is subject to specific safety considerations. In this paper, it is argued that these challenges cannot be effectively addressed with traditional engineering approaches. Instead, the application of the "correctness by construction" principle is proposed to improve the integration process.

1. Introduction

Multicore processors have put an end to the era of the "free lunch" [1] in terms of computing power being available for applications to use. The "end of endless scalability" [2] of single-core processor performance appears to have been reached. Still, the currently available multicore processors with two, four, or eight execution units ("cores") mark just the beginning of a new era in which parallel computing stops being a niche for scientists and starts becoming mainstream. Multicore processors are only the first step: as the number of cores increases further, multicores become many-cores. The distinction between these two classes of parallel processors is not precisely defined. Multicore processors typically feature up to 32 powerful cores.
Their memory architecture allows the use of a traditional shared-memory programming model without significant performance penalties. Many-core processors, on the other hand, comprise more than 64 rather simple and less powerful cores. With an increasing number of cores, a scalable on-chip interconnect between the cores becomes a necessity. Memory access, especially to off-chip memory, constitutes a bottleneck and becomes very expensive. Therefore, traditional shared-memory architectures and the corresponding programming models suffer from significant performance penalties unless they are specifically optimized. Comparing raw performance figures, the approach of having many less powerful cores outperforms processor architectures with fewer, more powerful cores [2, 3]. In reality, of course, the comparison is not as clear-cut. It largely depends on the software, which has
%U http://www.hindawi.com/journals/ase/2012/354274/