Open|SpeedShop now on GitHub
The Open|SpeedShop and Component Based Tool Framework (CBTF) sources are now available on GitHub.
The repositories may be found at these locations:
The Open|SpeedShop release tarballs will move to GitHub in the future; for now they are still hosted on SourceForge.
Open|SpeedShop is a community effort led by the Krell Institute. It builds on a broad set of community infrastructures, most notably Dyninst and MRNet from the University of Wisconsin, libmonitor from Rice University, and PAPI from the University of Tennessee, Knoxville. Open|SpeedShop is an open-source, multi-platform Linux performance tool targeted at performance analysis of applications running on both single-node and large-scale systems, including Intel, AMD, ARM, Intel Phi, PPC, and GPU-based systems as well as Blue Gene and Cray platforms.
Open|SpeedShop is explicitly designed with usability in mind, targeting application developers and computer scientists. Its base functionality includes the following experiments:
- Program Counter Sampling
- Support for Callstack Analysis
- Hardware Performance Counter Sampling, including threshold-based sampling
- MPI Lightweight Profiling and Tracing
- I/O Lightweight Profiling and Tracing
- Floating Point Exception Analysis
- Memory Trace Analysis
- POSIX Thread Trace Analysis
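Each experiment is typically launched through a convenience script that wraps the full measurement pipeline. The following is a minimal sketch, assuming Open|SpeedShop is installed and using `./myapp` as a placeholder for your application binary; the MPI launch line should be whatever launcher your system uses:

```shell
# Program counter sampling (pcsamp experiment) on a serial run.
# "./myapp" is a placeholder application name, not from this article.
osspcsamp "./myapp"

# The same experiment on an MPI job; quote the full launch command.
osspcsamp "mpirun -np 16 ./myapp"

# Lightweight I/O tracing and MPI profiling use analogous scripts.
ossio "./myapp"
ossmpi "mpirun -np 16 ./myapp"
```

Each run writes its results to an experiment database file that can be reloaded later for post-experiment analysis.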
In addition, Open|SpeedShop is designed to be modular and extensible. It supports several levels of plug-ins which allow users to add their own performance experiments.
Open|SpeedShop development is hosted by the Krell Institute. The infrastructure and base components of Open|SpeedShop are released as open source, primarily under the LGPL.
- Comprehensive performance analysis for sequential, multithreaded, and MPI applications
- No need to recompile the user's application
- Supports both first analysis steps and deeper analysis options for performance experts
- Easy-to-use GUI, fully scriptable through a command-line interface and Python
- Supports Linux systems and clusters with Intel and AMD processors
- Extensible through new performance analysis plugins with a consistent look and feel
- In production use on all major cluster platforms at LANL, LLNL, and SNL
- Four user interface options: batch, command-line interface, graphical user interface, and Python scripting API
- Supports multi-platform single system image (SSI) and traditional clusters
- Scales to large numbers of processes, threads, and ranks
- Automatically creates and attaches to both sequential and parallel jobs from within Open|SpeedShop
- View performance data using multiple customizable views
- Save and restore performance experiment data and symbol information for post-experiment analysis
- View performance data for the application's entire lifetime or for smaller time slices
- Compare performance results across processes, threads, or ranks, and between a previous and the current experiment
- GUI wizard facility and context-sensitive help
- Interactive CLI help facility listing the CLI commands, their syntax, and typical usage
- Python scripting API exposing Open|SpeedShop functionality corresponding to the CLI commands
- Option to automatically group like-performing processes, threads, or ranks
- Create traces in OTF (Open Trace Format)
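The save/restore and CLI features above fit together in a short post-experiment session. A sketch, assuming Open|SpeedShop is installed; the database filename below is a placeholder for one produced by an earlier experiment run:

```shell
# Start the interactive command-line interface.
openss -cli

# Inside the CLI session (shown here as comments):
#   exprestore -f myapp-pcsamp.openss   # reload a saved experiment database
#   expview                             # display the default performance view
#   expcompare                          # compare across processes/threads/ranks
#   help expview                        # interactive help for a CLI command
```

The same database file can instead be opened directly in the GUI for a graphical view of the saved results.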
How to Use Open|SpeedShop: HPC-Admin Magazine and Scientific Computing Articles
In this article, we describe how to use Open|SpeedShop through step-by-step examples that illustrate how to find a number of different performance bottlenecks. We also describe the tool's most common usage model (workflow) and present several options for viewing performance data.
Open|SpeedShop at SC16
Members of the Open|SpeedShop team will be at Supercomputing 2016 (SC16). This year we will once again have a booth and will be giving on-demand demonstrations throughout the show at the Open|SpeedShop booth:
To schedule a meeting with Jim or Don, please send email to jeg AT krellinst.org.
Members of our team are presenting the “How to Analyze the Performance of Parallel Codes 101” tutorial on Monday, 11/14/16 (8:30am-5:00pm). The slides are not available at this time, but here are the slides from last year's tutorial: Here is the URL.