Runtime systems can observe programs as they execute, something static techniques alone cannot do. With this introspection, a runtime can find hot sections of code, or shifting patterns in program execution, that would benefit greatly from optimization. We instrument operating system kernels and virtual machine runtimes with low-overhead components that inspect and optimize code to improve system and application performance and energy efficiency.
While advanced compiler passes can deliver performance or energy gains, they typically rely on profile data gathered at runtime and fed into a second compilation of the application, and the source code usually needs to be annotated. Rather than requiring developers to gather this information and recompile their programs, we focus on modifying the runtime itself to provide this capability without any source code changes. We target systems ranging from many-core workstations and servers to resource-limited mobile devices.
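The profiling step described above can be folded into the runtime itself. As a minimal sketch (the threshold, names, and counting policy here are illustrative, not the group's actual implementation), a runtime can attach a lightweight counter to each function and mark it "hot" once it crosses an invocation threshold, the point at which investing in recompilation or optimization pays off:

```python
import collections

HOT_THRESHOLD = 1000  # invocations before a function is considered hot

call_counts = collections.Counter()
hot_functions = set()

def profile(fn):
    """Wrap fn with a lightweight invocation counter; once the count
    crosses HOT_THRESHOLD, mark fn as hot so the runtime could
    recompile or optimize it."""
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] += 1
        if call_counts[fn.__name__] == HOT_THRESHOLD:
            hot_functions.add(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@profile
def inner_loop(x):
    return x * x

for i in range(1500):
    inner_loop(i)

print(sorted(hot_functions))  # inner_loop crossed the threshold
```

A production runtime would do this counting in the interpreter or JIT rather than with wrappers, so the overhead stays negligible, but the decision structure is the same.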
The CCCP research group's current work on runtime systems includes the following:
- Dynamic offloading of compute-intensive code: As mobile devices shrink in form factor to become more versatile and portable, their computation power is limited by the amount of power that smaller batteries can provide. At the same time, the applications that run on these devices are growing more complex, straining the available processing capability and battery life. In this work, we investigate how to dynamically identify sections of code that would benefit, in both execution time and energy efficiency, from remote execution, and how to transfer all the state necessary to run such computationally intensive code on a more powerful computer.
- Dynamic thread-level parallelism: It is widely accepted that parallel programming is hard. Parallel programming paradigms such as OpenMP, MPI, Nvidia's CUDA, OpenCL, Intel's Ct, and TBB make the programmer's job easier. However, parallel program performance depends on many factors, such as available system resources, program inputs, and hardware configuration, all of which can change while the program runs. In this project we develop solutions that address these challenges, relieving the programmer of some of the burden of efficient parallel programming.
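The offloading decision in the first project above boils down to a cost model: shipping a region's live state over the network pays off only when the transfer cost plus the faster remote execution beats running locally. A minimal sketch of such a model (the function, parameters, and numbers are illustrative assumptions, not the project's actual policy):

```python
def should_offload(local_time_s, remote_speedup, state_bytes,
                   bandwidth_bps, rtt_s):
    """Decide whether a code region is worth offloading.

    Remote execution pays a one-time cost to ship the live program
    state over the network, then runs on a machine remote_speedup
    times faster. All inputs are estimates a runtime profiler
    would supply.
    """
    transfer_time = rtt_s + (state_bytes * 8) / bandwidth_bps
    remote_time = transfer_time + local_time_s / remote_speedup
    return remote_time < local_time_s

# A 2 s region with 1 MB of live state over a 50 Mbit/s link,
# targeting a server 8x faster than the phone:
print(should_offload(2.0, 8.0, 1_000_000, 50e6, 0.05))  # True
```

A real system would extend the same comparison to energy (radio transmission versus local CPU time) and update the bandwidth and latency estimates continuously, since the decision flips as network conditions change.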
Page last modified January 22, 2016.