Analysis and visualization of the data generated by scientific simulation codes is a key step in enabling science from computation. However, a number of challenges lie along the current hardware and software paths to scientific discovery. These challenges span several different axes, including data size, data complexity, type of visualization, number of nodes in an HPC system, and the increasing amount of parallelism within a node. Further, as computational improvements outpace those of I/O, more data will be discarded and I/O-heavy analysis will suffer. In addition, the limited memory environment, particularly in the context of in situ analysis, which can sidestep some I/O limitations, will require efficiency from both algorithms and infrastructure. We present work that characterizes the performance of visualization techniques and algorithms in a variety of HPC settings. We also present work on a new library, the Extreme Scale Analysis and Visualization Library (EAVL), which has been developed to efficiently utilize the massive parallelism available on current and future compute nodes, and to provide a more descriptive data model for large scientific applications.