High-Performance Computing
High-performance computing enables CCMSI to address problems that require large basis sets, extensive sampling, or high levels of electron correlation. We profile workloads to identify bottlenecks, select parallelization strategies (MPI, OpenMP, GPU acceleration where supported), and tune I/O patterns to maintain throughput on shared resources. Our guidance emphasizes predictable job behavior, considerate queue usage, and clear documentation of resource requirements so that studies remain reproducible across systems.
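As a minimal sketch of the profiling step, the stdlib `cProfile`/`pstats` pair can rank the most expensive calls in a representative small run before scaling up; the workload function below is a hypothetical placeholder, not one of our production codes:

```python
import cProfile
import io
import pstats

def build_overlap_matrix(n):
    # Hypothetical stand-in for an expensive integral-evaluation step.
    return [[1.0 / (1 + abs(i - j)) for j in range(n)] for i in range(n)]

def run_workload():
    # Profile a small but representative problem size.
    return build_overlap_matrix(300)

profiler = cProfile.Profile()
profiler.enable()
run_workload()
profiler.disable()

# Report the top calls ordered by cumulative time to expose bottlenecks.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The same pattern works for MPI ranks by profiling only rank 0, which keeps overhead predictable on shared resources.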
Best Practices
- Prefer scalable algorithms whose accuracy/cost trade-offs are well characterized.
- Use checkpointing and deterministic seeds to facilitate restarts and comparisons.
- Automate pre- and post-processing steps to reduce manual error and preserve provenance.
- Maintain small validation runs for rapid regression testing alongside production calculations.
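The checkpointing and deterministic-seed practice above can be sketched as follows; the checkpoint filename and state layout are illustrative assumptions, not a fixed convention:

```python
import json
import os
import random

CHECKPOINT = "state.chk.json"  # hypothetical checkpoint file name
SEED = 12345                   # fixed seed so restarts and comparisons match

def load_or_init():
    # Resume from a checkpoint if one exists; otherwise start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "accum": 0.0}

def save(state):
    # Write to a temp file and rename, so an interrupted job
    # never leaves a partially written checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_or_init()
rng = random.Random(SEED)
# Replay the RNG to the recorded step so a restart reproduces
# the exact random stream of the original run.
for _ in range(state["step"]):
    rng.random()

for _ in range(100):
    state["accum"] += rng.random()
    state["step"] += 1
    if state["step"] % 25 == 0:
        save(state)
```

Checkpoint frequency is a tunable: frequent saves ease restarts after preemption, while infrequent saves reduce I/O pressure on shared filesystems.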
We share cluster-agnostic templates that expose key parameters (cores, memory, wall time) and provide examples of environment modules and containerized execution where institutional policies permit. The objective is not only to run larger jobs, but to run all jobs more reliably, transparently, and considerately in multi-user environments.
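One way such a template can expose only the key parameters is to render a batch script from a parameterized string; this sketch uses Slurm-style directives and a hypothetical module and command purely for illustration:

```python
from string import Template

# Hypothetical cluster-agnostic batch template. Directive names follow
# Slurm (sbatch); the same pattern applies to other schedulers.
BATCH_TEMPLATE = Template("""\
#!/bin/bash
#SBATCH --ntasks=$cores
#SBATCH --mem=${memory_gb}G
#SBATCH --time=$wall_time
module load $module
srun $command
""")

def render_job(cores, memory_gb, wall_time, module, command):
    # Expose only the parameters users actually need to vary.
    return BATCH_TEMPLATE.substitute(
        cores=cores,
        memory_gb=memory_gb,
        wall_time=wall_time,
        module=module,
        command=command,
    )

# Example: a 16-task job with 64 GB of memory and a 4-hour wall time.
script = render_job(16, 64, "04:00:00", "openmpi/4.1", "my_calc input.inp")
print(script)
```

Keeping resource requests in one rendered script also gives reviewers a single artifact that documents the job's requirements, which supports reproducibility across systems.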