**Convergence and timing issues in large-scale calculations**
- Date: 2018/03/23 12:49
- Name: **xmzhang** <xmzhang@theory.issp.ac.cn>
Dear OpenMX developers and users:
I would like to consult you about some convergence and timing problems in large-scale calculations.

First, for a system of 300 atoms with a 3x3x1 k-point mesh on 48 CPUs, one SCF step takes 45 s. But when the system has 700 atoms with a 1x2x1 k-point mesh on 48 CPUs, the process is killed with the following error:

```
yhrun: error: cn603: task 14: Segmentation fault (core dumped)
yhrun: First task exited 60s ago
yhrun: tasks 11-12,15: running
yhrun: tasks 0-10,13-14,16-23: exited abnormally
yhrun: Terminating job step 627108.0
slurmd[cn602]: *** STEP 627108.0 KILLED AT 2018-03-22T09:39:06 WITH SIGNAL 9 ***
yhrun: Job step aborted: Waiting up to 2 seconds for job step to finish.
slurmd[cn602]: *** STEP 627108.0 KILLED AT 2018-03-22T09:39:06 WITH SIGNAL 9 ***
yhrun: error: cn602: task 11: Killed
```

When I increase to 144 CPUs, one SCF step takes 13440 s (700 atoms, 1x2x1 k-points), and at the same time the convergence is poor. Why does increasing the number of atoms produce such a large difference in computational efficiency?

Second, how should I set the parameters to reduce the calculation time and release virtual memory?

Finally, how should I set the parameters so that large systems converge as quickly as possible?
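For context, these are the kinds of OpenMX input keywords involved in the SCF and k-point settings described above. This is only an illustrative sketch using standard OpenMX keywords; the values shown are placeholders, not my actual input:

```
# k-point mesh for the 300-atom run (1x2x1 for the 700-atom run)
scf.Kgrid                  3 3 1
# diagonalization scheme; O(N) solvers such as DC or Krylov
# may scale better for large systems than Band
scf.EigenvalueSolver       Band
# charge-mixing scheme and parameters affecting SCF convergence
scf.Mixing.Type            rmm-diisk
scf.Init.Mixing.Weight     0.10
scf.Mixing.History         30
scf.Mixing.StartPulay      6
# convergence criterion and iteration limit
scf.criterion              1.0e-6
scf.maxIter                100
```

My question is which of these (or other) parameters should be tuned for systems of this size.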
Thank you!