SciComp@Wayne Seminar Series: A High-Productivity Tool for Parallel Programming
Detroit, MI 48202
SciComp@Wayne Seminar Series
with guest speaker
Dr. Ritu Arora, Texas Advanced Computing Center, University of Texas at Austin
"A High-Productivity Tool for Parallel Programming"
September 17, 2019 @ 3:30 p.m.
Purdy/Kresge Library - Room 110
The Office of the Vice President for Research invites the university community to a SciComp@Wayne seminar on Tuesday, September 17, 2019, at 3:30 p.m. in the Purdy/Kresge Library, room 110. The seminar is free.
The guest speaker for this seminar will be Dr. Ritu Arora, research scientist at the Texas Advanced Computing Center, University of Texas at Austin. Dr. Arora received her Ph.D. in computer and information science from the University of Alabama at Birmingham. She has made significant contributions to developing abstractions for parallelizing legacy applications and to application-level checkpointing. Dr. Arora provides consultancy on automating Big Data workflows on national supercomputing resources and is engaged in health informatics projects. She promotes the use of technology for creating social impact and is active in broadening the participation of individuals from underrepresented groups in HPC and Big Data disciplines. Her areas of interest and expertise are HPC, fault tolerance, generative programming, domain-specific languages, big data management, workflow automation, and health informatics.
Dr. Arora will present, "A High-Productivity Tool for Parallel Programming."
Abstract: Parallel programming typically involves breaking down large and complex computations into smaller pieces and running these pieces simultaneously on multiple cores or processors to reduce the time-to-results. It is critical for efficiently using not only current and future generations of supercomputers but also commodity computers equipped with multi-core and many-core processors. Developing efficient parallel programs for different hardware platforms can be a challenging task. It requires knowledge of parallel programming concepts, hardware architecture, and the syntax of the parallel programming paradigms relevant to the chosen hardware architectures. It may also involve manual code reengineering. Additionally, there are no set rules or guidelines for efficient parallelization, so some trial and error may be involved in the process. Troubleshooting errors in parallel programs can also be difficult, as the error messages are typically not very descriptive. Incorrect parallel programs can crash, deadlock, or suffer from race conditions without providing any helpful information to the developers. These challenges motivated the development of the Interactive Parallelization Tool (IPT).
IPT is a high-productivity tool that can semi-automatically parallelize serial C/C++ programs. It solicits the specifications for parallelization from the users, such as what to parallelize and where. On the basis of these specifications, IPT translates the serial programs into working parallel versions using one of three popular parallel programming paradigms: MPI, OpenMP, and CUDA. Hence, IPT can free users from the burden of learning the syntax of the different parallel programming paradigms, and from any manual reengineering required to parallelize existing serial programs. IPT can be used to parallelize applications from multiple domains, but currently it is mainly being used for educational purposes. For the test cases considered so far, the performance of the parallel versions generated using IPT is within 10% of the performance of the best hand-written parallel versions available to us.
We hope you can join us for this interesting seminar!